I know that OpenGL 2.0 has been bandied about by many, especially with the imminent open-sourcing of Blender. From what I’ve gathered, there’s a lot of cool stuff in it which Blender could really benefit from…
However, I’ve been looking for more info about it as it relates to Blender’s development, and I haven’t been able to find very much. I feel this would be a good place to start a discussion in this forum (for the idiots like me!) about what exactly is in store for us and for Blender with regard to OpenGL 2.0. So, I’ve got a series of questions about it (for everyone’s benefit, as much as for mine!). Any answers would be much appreciated!!
For those of us unacquainted with OpenGL, what is it?
Why is version 2 such a massive leap forward?
A lot has been mentioned about ‘realtime rendering’ through OpenGL - taking a render and breaking it down into OpenGL commands. Why is this so much faster, and how much faster is it than ‘traditional’ CPU rendering?
Will OpenGL 2 offer better speed and quality in the viewports as well, or are the benefits to Blender strictly when rendering?
Will it be easy to code into Blender, or are we going to have to wait a while for it to be coded in?
(last one!) If OpenGL 2 isn’t even out yet, isn’t it a bit early to be ‘leaping on the bandwagon’?
First… I’m just a novice programmer, have never done graphics, and don’t know much about OpenGL, but I can help a little.
Essentially it’s a code library which applications can use to interface with 3D hardware. I think it can be compared to MS DirectX, but OpenGL is open and cross-platform. More can be found at www.opengl.org
Obviously, as 3D hardware advances, the code that uses it must be able to take advantage of those advances. I suppose that would be the biggest advantage of v2, along with increased efficiency and stability.
Because the OpenGL library takes advantage of 3D acceleration and the GPU, transferring more of the work to the video card. That’s my best guess.
Ok, here I need some help… what are viewports? But, to answer the best I can, OpenGL can be used for standard rendering, realtime, and drawing the UI. Because it is designed to do this well, it provides both speed and quality, and because it’s code the Blender group doesn’t have to write, it allows faster implementation on new Blender versions.
It probably won’t exactly be easy… but not hard either, because Blender already uses OpenGL. All that would be needed is to optimise Blender for the newer OpenGL 2 features.
I think it’s being finalized as we speak. But, it’s not really a matter of jumping on the bandwagon. OpenGL 2 is not some experimental thing that may be great, but may fizzle out. To demonstrate this, from the OpenGL web site, look at who runs OpenGL…
As you can see from the list of board representatives, OpenGL is run by THE industry. It’s designed to be the best possible interface between applications such as Blender, and the hardware the industry produces.
Also, another important reason for using OpenGL is its cross-platform vendor-neutral nature. It is OpenGL that allows Blender to function across all the platforms we use it on…
Umm, I’d imagine that OpenGL 2.0 will not speed up the normal editing views, but it may allow them to do more.
Imagine the shaded view with lighting calculated for each frame, the internal procedural textures being drawn, image textures being mapped appropriately, environment maps being calculated, and shadows, bump maps and spec maps… if your card can handle it, OpenGL 2.0 makes it easier. I doubt Blender will get to this point immediately, but the hardware certainly allows it.
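To give a feel for what “lighting calculated for each frame” means, here is a rough sketch in plain Python (not actual OpenGL, and all the names are made up for illustration) of the basic per-pixel diffuse lighting maths a graphics card would evaluate for every visible surface point, every frame:

```python
# Lambert diffuse lighting: a surface's brightness depends on the angle
# between its normal and the direction to the light. A GPU evaluates
# something like this for every fragment it draws.

def normalize(v):
    """Scale a 3-component vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

def diffuse_intensity(surface_normal, light_dir):
    """Lambert term: brightness falls off with the angle to the light."""
    n = normalize(surface_normal)
    l = normalize(light_dir)
    return max(0.0, dot(n, l))

# A surface facing the light head-on is fully lit...
print(diffuse_intensity((0, 0, 1), (0, 0, 1)))  # 1.0
# ...while a surface edge-on to the light receives nothing.
print(diffuse_intensity((0, 0, 1), (1, 0, 0)))  # 0.0
```

Done in software for every pixel this is slow, which is exactly why moving it onto the card matters.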
Except, it seems that Blender’s render (as in F12, the non-editing views) isn’t OpenGL-ish. (I played with it today ;). For example, OpenGL doesn’t seem to want to draw things one pixel at a time and show you that that is what it is doing (maybe if you draw on the visible buffer?). And OpenGL doesn’t seem to like logic such as: "if the pixel on the screen is further away from the lamp than the nearest depth recorded in the lamp’s z-buffer (for this shadow lamp’s shadow-drawing session, the number of which is controlled by ‘samples’ in the lamp buttons), then this pixel is not lit by this lamp." OpenGL would rather you had another texture projected onto the object from the point of the lamp. It also would rather that z-buffer not be compressed (which supposedly Blender’s is).
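The shadow-buffer comparison described above can be sketched in a few lines of plain Python (the function name, the depth values, and the bias term are invented for the example; a real renderer does this per pixel against a full depth map):

```python
# Shadow-buffer test: the lamp first renders the scene from its own
# point of view, keeping only the nearest depth in each direction.
# A pixel is then in shadow if it lies farther from the lamp than
# the depth the lamp's z-buffer recorded for that direction.

def in_shadow(pixel_depth_from_lamp, lamp_zbuffer_depth, bias=0.001):
    """True if something nearer to the lamp blocks this pixel.
    The small bias avoids surfaces shadowing themselves due to
    limited depth precision."""
    return pixel_depth_from_lamp > lamp_zbuffer_depth + bias

# The lamp recorded a blocker at depth 5.0 in some direction:
print(in_shadow(8.0, 5.0))  # True  -> pixel is behind the blocker
print(in_shadow(5.0, 5.0))  # False -> pixel *is* the nearest surface
```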
The render is in software. The user interface isn’t always.
So, I guess OpenGL 2.0 will improve Blender, but probably not the render, nor render times.
Thanx very much guys. This has cleared a lot of it up!! It looks like we’ve got a lot to look forward to with the future versions on the way. I had no idea it was nearly finished - last I heard, it hadn’t even been started yet!!
The only thing I’m still unsure about re OpenGL 2.0 is this new way of rendering scenes in almost realtime. I brought it up here because Ton mentioned in one of his posts that it was the thing he was most looking forward to seeing implemented in Blender, and another user replied, saying that he/she couldn’t wait as well, because render times would drop to 2 or 3 seconds per frame (given a decent GeForce card), even in highly complex scenes. This idea was previewed in a white paper called ‘Multi-pass real time rendering’ at SIGGRAPH, I think it was, but since then I’ve heard very little about it (surprisingly, since you’d expect this to turn the 3D industry on its head!!). I guess we’ll just have to wait and see what happens when OpenGL 2.0 comes out - time will tell…
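For what it’s worth, the multi-pass idea can be sketched very roughly: instead of one slow software pass that computes everything per pixel, the scene is drawn several times on the card (one cheap hardware pass per effect) and the passes are combined per pixel. This toy Python sketch uses invented pass names and numbers purely to show the combination step:

```python
# Toy per-pixel combination of three hypothetical render passes.
# In a real multi-pass renderer each "pass" would be a full image
# drawn by the hardware; here each is just one pixel's value.

def combine(diffuse, shadow, specular):
    """Shadow modulates the diffuse pass; specular is added on top."""
    return diffuse * shadow + specular

# One pixel across three hypothetical passes:
print(combine(diffuse=0.8, shadow=0.5, specular=0.1))  # 0.5
```

Since each hardware pass is fast, even several passes per frame can come out far quicker than a single software render.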
Oh well. Thanx for the quick but good replies, guys. This is what’s missing in Maya, 3DS Max, XSI etc. - they don’t have half the community support Blender does!!