OpenGL 2.0 in Blender?

I've heard that the new top-of-the-line boards are OpenGL 2.0 compatible?

http://preview.millimeter.com/cg/video_nvidias_new_pipeline/

Are there any plans to integrate it (OGL2) into Blender anytime soon?

this is actually a nice CG topic, so I will move it where it belongs :wink:

I am not sure… I think OpenGL 2.0 won't happen in Blender until the Blender 3.* series is out…

yeah, by then consumer hardware with 2.0 capabilities should be pretty common

of course, even then we probably won't be able to expect that everyone who wants to run Blender will have such hardware (evidence: so many people are running Blender on Intel graphics chips and the like [Trident, old TNT cards, SiS, …] which don't even have OpenGL 1.2 support)

By the time 3.0 comes, there will probably be tons of new computers with it built right in as well.

Maybe a stupid question, but what does OpenGL 2 bring to the table?

As far as I can tell, Blender doesn't use all of the current OpenGL specs…

I believe OGL2 introduces native hardware shaders. Not sure on that. But I do know that once Blender supports hardware shaders, it (like Maya 6) will be able to use the power of the graphics card to do some of the rendering and still have the possibility of being photoreal.

I was reading an old interview with Ton (circa 2003) and he was asked: what would you have done differently? He said something like he would scrap everything and start Blender over with OpenGL 2.0 programming.

I think the idea of a complete (from-scratch) redesign/recode of Blender for the 3.0 series was also floating around, possibly using a node-based architecture to make it more programmer- and plugin-friendly, and of course cleaner, with nothing from the old NaN code left.

OpenGL 2.0 (proposed by 3Dlabs and developed through the OpenGL ARB) is a programmable version of OpenGL. It basically allows developers to program the graphics pipeline in a C-like shading language (GLSL) to customize graphics on the fly.

What this will do is allow vendors to tune their software much, much better. I won't get into the programming details, but for those who understand it, this is a big step. And with CPU power increasing, the sky is the limit, to a point.
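To make that a bit more concrete, here is a rough sketch (not from Blender or any real app, just an illustration; it assumes a working OpenGL 2.0 context is already current) of what the programmable part looks like from C: a tiny GLSL fragment shader is handed to the driver as a string, compiled and linked at run time, and from then on the card runs it for every pixel.

```c
/* Sketch only.  Assumes the OpenGL 2.0 entry points are available (on some
 * platforms you need an extension loader for anything past GL 1.1) and that
 * a GL context is already current. */
#include <GL/gl.h>

/* A trivial GLSL fragment shader: paint every fragment orange. */
static const char *frag_src =
    "void main(void)\n"
    "{\n"
    "    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);\n"
    "}\n";

/* Compile and link the shader with the core OpenGL 2.0 calls.
 * Returns the program object, or 0 on failure. */
static GLuint build_program(void)
{
    GLint ok = GL_FALSE;

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &frag_src, NULL);
    glCompileShader(fs);
    glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    if (!ok)
        return 0;

    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok)
        return 0;

    /* From here on, the card runs our little program for every pixel drawn. */
    glUseProgram(prog);
    return prog;
}
```

That last point is the whole idea: instead of picking from the fixed functions the card offers, you write the per-vertex and per-fragment math yourself.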

CyberSorerer

Does Maya use the video card in the rendering process? Are you sure? Won't this cause different render results on different cards? And what about render farms, are they equipped with video cards?

[quote=“halfgaar”]

Does Maya use the video card in the rendering process? Are you sure? Won't this cause different render results on different cards? And what about render farms, are they equipped with video cards?[/quote]

There are certain things in Maya that can only be done via hardware rendering. One that I used was the sheared particle effect. I’m not sure why it’s hardware only but it is. You have to composite the hardware render with the software render - that was the first tutorial I did. You can actually render whole scenes with multiple hardware passes too. I never used those because I didn’t know how.

The closer that gaming quality gets to post-production, the more hardware rendering’s going to be used. On the newest cards, you can get post-production quality in real-time. As for render farms, they are usually just clusters of normal PCs so they might have graphics cards built in. With a render farm, you don’t really need hardware shading because you have so much processing power. It would be faster on a home computer with a good card though.

If you have a 256MB card and a PC with 256MB of RAM, it's highly likely that the card will render it faster because of the higher data throughput. OK, on older cards with OpenGL 1 it won't look very nice, because programmers are limited to the fixed functions of the card, but with the new stuff programmers can do much more complex effects that allow post-production quality.

Really, they aren't going to be able to call it "hardware rendering" for long, since graphics cards are becoming as programmable as a CPU. The graphics card will end up being more of a multimedia processor than the old-fashioned concept of a simple display device. It started happening way back when Nvidia announced "Hardware T&L" and has recently exploded forward with programmable shaders.

I believe we're moving to a multi-tiered processing environment, with more processors delegated to specific tasks, and all of them being interchangeable. DSPs (Digital Signal Processors) are offloading all sorts of tasks from the main processor.

We're also going to be moving away from x86. I already know of plans for this happening. And it won't be a gradual change, either. Multiple companies are shooting for a single date to change over (hint: think next-gen consoles). Did you know that Windows NT was originally developed on non-x86 hardware, with x86 just one of several targets?

I guess I’m going off on a tangent from the original topic, but so much more is possible with current technology than what’s actually being implemented. And it’s a shame…

What about the issue of differences in picture quality? The ATI 9000 series, for example, has much better shader quality than Nvidia's 5000 series. I'm dreading the day the video drivers and/or chips determine the quality of the renders.

And even though it's off-topic, how concrete are those plans to move away from x86, and whose plans are they? I don't think you can do that. If Intel does it, everyone will only buy AMD. If AMD does it, everybody will stick to Intel. If a move is made from one architecture to the other, all existing software will be useless. I agree that x86 is an ugly architecture, but the architecture is the means, not the end. What I mean is, you may develop a perfect architecture, but one does not buy a computer for that. The current architecture is just a means of running all the existing programs you want to use.

[quote=“halfgaar”]And even though it's off-topic, how concrete are those plans to move away from x86, and whose plans are they? I don't think you can do that. If Intel does it, everyone will only buy AMD. If AMD does it, everybody will stick to Intel. If a move is made from one architecture to the other, all existing software will be useless. I agree that x86 is an ugly architecture, but the architecture is the means, not the end. What I mean is, you may develop a perfect architecture, but one does not buy a computer for that. The current architecture is just a means of running all the existing programs you want to use.[/quote]
hrm, RipSting usually knows what he is talking about

well anyway, the x86 architecture hasn't aged very well over, what, 7 generations, going onto the 8th, where the coming generation [already here for AMD chips, and arriving or already here on a few Intel chips] has 64-bit capabilities yet fully supports 32-bit.

honestly, I don't think now is a bad time for a completely new core design and a new instruction set, but one of the major improvements would essentially do what the GPU does now [act as a vector coprocessor… Apple hardware and the PS2 already have vector processors, and properly used, even at a lower clock, they can outperform both Intel and AMD chips]
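just to illustrate what a vector unit buys you, here's a rough, hypothetical sketch using x86 SSE intrinsics, which do four float operations per instruction (AltiVec on Apple hardware and the PS2's vector units work along the same lines; the function and data are made up):

```c
#include <xmmintrin.h>   /* SSE intrinsics: 4 floats per 128-bit register */

/* Hypothetical example: dst[i] = a[i] * s + b[i], four elements at a time.
 * A vector unit applies the same arithmetic to a whole small vector per
 * instruction instead of one scalar at a time. */
static void scale_add(float *dst, const float *a, const float *b,
                      float s, int n)
{
    __m128 vs = _mm_set1_ps(s);      /* broadcast s into all four lanes */
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(_mm_mul_ps(va, vs), vb));
    }
    for (; i < n; ++i)               /* scalar tail for leftover elements */
        dst[i] = a[i] * s + b[i];
}
```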

… okay, enough of that

very true; of particular note are the areas that should be smooth gradients, where Nvidia cards clearly show banding. this is true even for what should be simple things like a Phong specular highlight on a flat surface…
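for reference, the highlight in question is just a smooth power-of-cosine falloff; here's a hypothetical C sketch of the per-pixel math (the precision comments are my reading of why the banding shows up, not anything from a particular driver):

```c
#include <math.h>

/* Phong specular term for one pixel.  At full float precision the result is
 * a smooth gradient across the surface; if the hardware evaluates it at
 * reduced precision, neighbouring pixels snap to the same value and visible
 * bands appear even on a flat surface. */
static float phong_specular(const float n[3], const float l[3],
                            const float v[3], float shininess)
{
    /* r = reflection of the light direction about the normal: r = 2(n.l)n - l */
    float n_dot_l = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    float r[3] = {
        2.0f * n_dot_l * n[0] - l[0],
        2.0f * n_dot_l * n[1] - l[1],
        2.0f * n_dot_l * n[2] - l[2],
    };
    float r_dot_v = r[0]*v[0] + r[1]*v[1] + r[2]*v[2];
    if (r_dot_v < 0.0f)
        r_dot_v = 0.0f;
    return powf(r_dot_v, shininess);   /* smooth falloff -> the gradient */
}
```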

Am I right in understanding that the computer industry is doing sort of a back-flip to the old days of hobbyist/console-oriented hardware? Something like the Amiga and its group of specialised co-processors working in tandem. Aah yes… back when my computer had bits with interesting names like Alice, and Paula, and Fat Agnus… those were simpler times.
:wink:

Samir

well, Intel is switching to a multi-core design for its next set of processors, so multithreading will become very important in the coming few years.
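a tiny, hypothetical POSIX-threads sketch of what that means in practice: the work only gets faster on a multi-core chip if it can be split into independent pieces like this (the names and numbers are made up):

```c
#include <pthread.h>
#include <stdio.h>

/* Hypothetical example: split a job across two threads, one per core. */
struct job { int start, end; long sum; };

static void *worker(void *arg)
{
    struct job *j = arg;
    for (int i = j->start; i < j->end; ++i)
        j->sum += i;                 /* stand-in for real per-item work */
    return NULL;
}

int main(void)
{
    struct job jobs[2] = { { 0, 500000, 0 }, { 500000, 1000000, 0 } };
    pthread_t threads[2];

    for (int t = 0; t < 2; ++t)
        pthread_create(&threads[t], NULL, worker, &jobs[t]);
    for (int t = 0; t < 2; ++t)
        pthread_join(threads[t], NULL);

    printf("total: %ld\n", jobs[0].sum + jobs[1].sum);
    return 0;
}
```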