Thinking of ways to push Blender into OpenGL land… need help

In my constant pursuit of getting Blender to integrate with today's tech, I am trying to figure out how we can better attract coders to the Blender source to add OpenGL upgrades.

This post is not meant to rehash the argument about OpenGL versus older computers; I and many others have dealt with such conversations before.

The goal here is to add one more layer onto Blender's view modes instead: a proposed Shader Model view.

Something more has to be done to get it moving faster (yes, I am reading my code books again :smiley: ).

Sheesh, 3D cards are starting to get physics now… http://www.tgdaily.com/2006/03/20/nvidia_sli_forphysics/ (source: 3dtotal.com)

Whatever, looking for ideas.

https://blenderartists.org/forum/viewtopic.php?t=15493

What happened to your coding aspirations?


Been busy… busy as hell :smiley: I picked up my code books again: read the C one, and now I'm reading the Obj-C one, all while drawing, starting another site for film, working, and, oh boy, bills!

Plus I NEVER saw an email from Levon. But I have a new graphics card and drool at what the Xbox 360 can do in realtime now… slow it down to 2 fps and it could replace basic rendering. But the old fight was never about rendering; just getting shaders in the 3D view to reveal what you're tweaking is a major speed-up…

A simple example: running the old Radiosity tool to convert meshes to high-poly geometry with vertex colors is a great case of realtime visualization…

Also, Ton commented along the lines of “what about the real artists who are never seen in galleries but constantly build graphics for cable, news, small TV and such” (not a direct quote).

Faster view, faster work, and the slow tweak phase removed.

Great initiative; Blender can never have too many projects or contributors. Personally I'm way too busy pushing my own software to release to have time for any real contribution, but I could assist in the brainstorming process.

I think that the OGRE engine (http://www.ogre3d.org) could be a good place to start a case study. They have solved the layering of 3D card capabilities pretty well and managed to encapsulate it (spanning many platforms and interfacing to multiple APIs). Their API also provides a number of ways to make capability fallbacks for different situations. I'm not saying you should try to port their codebase, but I think it is a nice resource for ideas regarding this matter. Just a tip.

Personally I would like the improvements to be invested mainly in two fields:

  1. Geometry performance in the 3D view using the capabilities of new-generation cards. Perhaps review the current culling methods and improve them where applicable (I'm not sure about the current state of the realtime code in Blender, so I don't know how big the margins for improvement are, but it is worth a look imo, because there have been advances in this area; a rough culling sketch follows right after this list). Perhaps also look into ways to benefit from vertex programs in some situations.

  2. Preview of materials (either directly in the 3D window or using the preview window currently in CVS). Especially procedural textures and bump/normal maps using GLSL shaders would be interesting, but anything else that needs to be tweaked by rendering might apply (I have, for example, seen shaders doing things like AO and radiosity in real time). It will never be as good as a real render, but probably good enough for quick preview and material tweaking.
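To make the culling point a bit more concrete, here is a minimal sketch of the kind of test viewport culling usually builds on: a bounding-sphere versus frustum-plane check done on the CPU, so an object can be skipped before any GL calls are issued. The struct and function names are generic illustrations, not Blender's actual code.

```c
/* Sketch only: generic sphere-vs-frustum test, not Blender's real culling code. */
typedef struct {
    float x, y, z, w;            /* plane: ax + by + cz + w = 0, normal normalized */
} Plane;

typedef struct {
    float cx, cy, cz, radius;    /* object bounding sphere, same space as the planes */
} BoundSphere;

/* Returns 0 when the sphere lies completely behind one of the six frustum
   planes, so the object can be skipped without issuing any draw calls. */
static int sphere_in_frustum(const Plane planes[6], const BoundSphere *bs)
{
    int i;
    for (i = 0; i < 6; i++) {
        float dist = planes[i].x * bs->cx +
                     planes[i].y * bs->cy +
                     planes[i].z * bs->cz + planes[i].w;
        if (dist < -bs->radius)
            return 0;   /* fully outside this plane: cull */
    }
    return 1;           /* potentially visible: draw it */
}
```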

Using the preview window for HW-rendered stuff is an interesting possibility imo, because then you might not need the latest gfx card and could still reap the benefits. In a normal realtime environment, shaders have to produce visuals at perhaps 30-100 fps, which limits the workload on the card by how many shader ops it can execute per frame. Choosing a non-realtime approach makes it possible to do multiple passes with complex shaders on mid-range cards and still get quick feedback; pressing Shift-P, dragging a box and waiting 2-5 seconds isn't such a big deal. A non-realtime solution for surface shaders also makes the tool more scalable and saves frame time for geometry and/or software workload (when you don't need to preview the result).
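As a rough illustration of the material-preview idea, here is what a tiny GLSL 1.10 shader pair could look like for previewing a procedural checker texture with simple Lambert lighting, written as the C string constants that application code would hand to the GL. Every uniform and varying name here is invented for this sketch; this is not an existing Blender shader.

```c
/* Hypothetical preview shaders, kept as C strings so they can be compiled at
   run time with the GL.  All names are made up for illustration. */
static const char *preview_vs =
    "varying vec3 normal;\n"
    "varying vec3 position;\n"
    "void main(void)\n"
    "{\n"
    "    normal   = gl_NormalMatrix * gl_Normal;\n"
    "    position = vec3(gl_ModelViewMatrix * gl_Vertex);\n"
    "    gl_Position = ftransform();\n"
    "}\n";

static const char *preview_checker_fs =
    "varying vec3 normal;\n"
    "varying vec3 position;\n"
    "uniform vec3 light_dir;       /* normalized, eye space */\n"
    "uniform float checker_scale;  /* size of the squares   */\n"
    "void main(void)\n"
    "{\n"
    "    /* procedural checker: alternate two greys on a 3D grid */\n"
    "    vec3 p = floor(position * checker_scale);\n"
    "    float c = mod(p.x + p.y + p.z, 2.0);\n"
    "    vec3 albedo = mix(vec3(0.2), vec3(0.8), c);\n"
    "    /* simple Lambert term for the preview light */\n"
    "    float diff = max(dot(normalize(normal), light_dir), 0.0);\n"
    "    gl_FragColor = vec4(albedo * diff, 1.0);\n"
    "}\n";
```

A real material preview would of course generate shaders like this per material (or per node setup), but the structure stays the same.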

Just some ideas off the top of my head.

Should be rather easy. What you are going to need first is GLSL shaders for each of the Blender texture/material types. That is going to be the hardest part, because before OpenGL 2.0 branching is not allowed. The rest should come easily.

To work with Blender's code you don't need C++ or Objective-C, only C. C++ is used only in the physics code and some other obscure places.

If you can hack out the GLSL shaders, it shouldn't be too hard to get them implemented in the current Blender code.
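Hooking such shaders up from C is mostly boilerplate. A minimal sketch using the standard OpenGL 2.0 entry points (generic GL code, not Blender's actual drawing code) could look like this:

```c
#include <stdio.h>
#include <GL/glew.h>   /* or however the GL 2.0 entry points are loaded */

/* Compile a vertex/fragment shader pair and link them into a program.
   Returns the program handle, or 0 on failure. */
static GLuint build_program(const char *vs_src, const char *fs_src)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    GLuint prog = glCreateProgram();
    GLint ok;
    char log[1024];

    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);
    glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        glGetShaderInfoLog(vs, sizeof(log), NULL, log);
        fprintf(stderr, "vertex shader: %s\n", log);
        return 0;
    }

    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);
    glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        glGetShaderInfoLog(fs, sizeof(log), NULL, log);
        fprintf(stderr, "fragment shader: %s\n", log);
        return 0;
    }

    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok) {
        glGetProgramInfoLog(prog, sizeof(log), NULL, log);
        fprintf(stderr, "link: %s\n", log);
        return 0;
    }
    return prog;   /* bind with glUseProgram(prog) while drawing the view */
}
```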

Just to add a note: someone once told me that OpenGL 1.5 with shader patches was another option as well. Is that different??

Not sure what you mean by “shader patches”. If you're talking about GL extensions, then yes, that is very much possible.
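In practice the “GL 1.5 plus extensions” route just means asking the driver for the ARB shading-language extensions at run time instead of requiring a 2.0 context. A rough sketch (the naive strstr test is good enough for illustration):

```c
#include <string.h>
#include <GL/gl.h>

/* Sketch: does the driver advertise GLSL through the ARB extensions?
   The actual ARB entry points would then be fetched with
   glXGetProcAddress/wglGetProcAddress or a loader library. */
static int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != NULL;   /* naive substring match */
}

static int glsl_available(void)
{
    return has_extension("GL_ARB_shader_objects") &&
           has_extension("GL_ARB_vertex_shader") &&
           has_extension("GL_ARB_fragment_shader") &&
           has_extension("GL_ARB_shading_language_100");
}
```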

tbc++: what do you mean by “branching”? Connecting parts of different GPU programs, or something else?

More tips:
Overview of which shader models apply to which card generations (NVIDIA only): Here

If you want to try some shader coding “out of the box”, I recommend ATI's RenderMonkey software: Here

When testing shader compatibility on different (lower) shader models, I recommend downloading RivaTuner and, on the compatibility tab in the OpenGL settings, forcing the card down to, let's say, shader model 1.0 to test compliance.

Other good sources of info: [GLSL official spec](http://oss.sgi.com/projects/ogl-sample/registry/ARB/GLSLangSpec.Full.1.10.59.pdf)

GLSL reference card

(Unofficial) collection of GLSL resources and info: Here

Older GPUs (pre GeForce 6) cannot do branching. The best way to explain this is with an example:

Here is a simple for loop:

```
for (x = 0; x < 4; x++) {
    putpixel(x, 100);
}
```

This is allowed on older GPUs. When the program is compiled, it ends up looking like this:

```
putpixel(0, 100);
putpixel(1, 100);
putpixel(2, 100);
putpixel(3, 100);
```

This is called loop unrolling. In other words, older GPUs cannot make decisions (branch) at run time. All for loops must be unrollable, and there can be no if statements. The following piece of code is not allowed on older GPUs (and that also means pre-OpenGL 2.0):

```
void myfunc(int x, int y)
{
    for (x = 0; x < y; x++) {
        putpixel(x, 10);
    }
}
```

The compiler has no clue how to unroll the loop, because it doesn't know what y will be. Branching shaders were a big boon to game programmers when they came out, because they allow for great flexibility.
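To make the contrast concrete, here is a hypothetical GLSL fragment shader whose loop bound is a uniform, i.e. only known at run time, so the compiler cannot unroll it. Shader-model-3 class hardware (GeForce 6 and up) can execute this as real branching; older compilers will reject it or demand a constant bound. The uniform names are invented for this example.

```c
/* Illustration only: a run-time loop bound that cannot be unrolled. */
static const char *dynamic_loop_fs =
    "uniform int sample_count;   /* only known at run time, assumed > 0 */\n"
    "uniform sampler2D tex;\n"
    "void main(void)\n"
    "{\n"
    "    vec4 sum = vec4(0.0);\n"
    "    /* e.g. a simple blur: number of taps decided by the application */\n"
    "    for (int i = 0; i < sample_count; i++) {\n"
    "        /* assumes the vertex stage provides gl_TexCoord[0] */\n"
    "        sum += texture2D(tex, gl_TexCoord[0].st + vec2(float(i) * 0.01));\n"
    "    }\n"
    "    gl_FragColor = sum / float(sample_count);\n"
    "}\n";
```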

On the GeForce side things look good; a GeForce 6200 is rather cheap. However, on the ATI side, branching shaders were not supported until the X series (can someone confirm this?). Personally I try to avoid any contact with ATI cards, so I have no clue about prices on that end.

But hey, I say go for it. More and more people will have the higher-end cards, so your user base will only grow as time goes on.

Ah, you mean THAT kind of branching. Yes, I'm familiar with that, since I have coded quite a lot of low-level stuff in the past; I just wasn't aware it was a limitation of the older versions. I thought it mostly concerned the length of the shader program, instruction sets and such. Well, it doesn't change anything really, since I guess you'll have to write independent shaders for different card generations anyway, and other obstacles already prevent a straight port. That is why I recommended in my first post looking into layering the shader models according to hardware spec, because I cannot see how a “one-size-fits-all” solution would be possible.

That's why I say use GLSL. From what I've heard, it is cross-platform, and as of GL 2.0 it supports all the features of the other approaches (branching etc.).

Integrate Ogre into Blender - that would be the easiest way to get the realtime OpenGL and/or D3D that you desire. Use it as an alternative display engine.

LetterRip

I'm not sure. What I guess is being requested is a way to expand upon what already exists, not an alternative that runs in parallel with the current implementation. The risk is ending up with a solution that is both bloated and hard to maintain: not because either OGRE or Blender is hard to maintain, but because they differ so much in functionality and use (I'm not familiar with the Blender code, but I do know OGRE, and I would guess that they approach issues in fundamentally different ways). You would run into a world of redundancy and/or things that need to be replaced. I would rather recommend collaborating with the game engine programmer, since he has no doubt run into these issues already. I think it is important to handle such an implementation with care if it is to avoid becoming a tool stuck in an experimental branch. Also, to make it easy to extend in the future, efforts should be made to tie it into the existing structure to prevent overlap with current tools.

Personally I would do the following:

  • Make a list of what to improve. What issues should be tackled? Materials? 3D view performance? More?

  • Check the status quo. What is good and what is bad about what we've got?

  • Figure out how to abstract the hardware (GPU) related stuff from the standard implementation, to keep it scalable across machines with different capabilities (a rough sketch follows below). Also consider how third parties could easily extend it in the future.
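For that abstraction point, here is one very rough sketch of a startup-time capability check that picks a preview path; the enum and function are invented names for illustration, not existing Blender symbols.

```c
#include <stdlib.h>
#include <GL/gl.h>

typedef enum {
    DRAWPATH_FIXED,   /* fixed-function only: today's solid/textured modes */
    DRAWPATH_GLSL     /* GLSL preview path (GL 2.0, or GL 1.5 + ARB extensions) */
} DrawPath;

static DrawPath pick_draw_path(void)
{
    /* GL_VERSION starts with "major.minor"; 2 or higher means core GLSL.
       A fuller version would also probe the ARB shader extensions for
       1.5-class drivers, as in the extension check shown earlier. */
    const char *version = (const char *)glGetString(GL_VERSION);
    int major = version ? atoi(version) : 0;

    return (major >= 2) ? DRAWPATH_GLSL : DRAWPATH_FIXED;
}
```

The rest of the drawing code then only ever asks which path is active, which is what keeps it scalable across machines.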

Just my 2 cents.

Heh, I think it would be funny to be able to see true displacement in the view window and then not be able to render it in the internal engine.

I would rather recommend collaborating with the game engine programmer, since he has no doubt run into these issues already.

Erwin plans to abstract the game engine interface so that any engine can be pluggable (Ogre, CrystalSpace, etc.).

From discussions with JesterKing, ‘Ogre would not be hard to integrate’.

Ogre was made into a plugin for 3ds Max. I'm not sure exactly what was done, but it might be worth talking to the guy who did the integration work.

It would need to be pluggable and not a default item, since Ogre is a very heavy install (Ton is pretty adamant about keeping our download and install size small) and requires fairly beefy hardware specifications to be usable (i.e. it is almost unusable on my 3-4 year old laptop).

Of course we'd welcome other approaches as well; I just think that Ogre might be the most realistic pathway to accomplishing things in a reasonable time frame.

LetterRip

I have some months of OGRE experience myself, so I can agree that it could rather easily be made “pluggable”, at least from the OGRE side of it. I've personally used it as a gfx frontend plugin for another application. Also, OGRE is very modular in itself, so one can choose to exclude much of its functionality to reduce size. However, the thing I think might be the issue is how well it will work a year or so into the future. For example, say the noodles project had been started after OGRE had been integrated; that would have required quite some extra work to be able to preview node materials. That is why I'm worried that using it right out of the box might lead to duplicated work later on, if not done right.

The person(s) implementing it would have to be really careful to ensure that OGRE and Blender share a common and easy interface, so it stays easily extendable by people who don't know a thing about OGRE but want their data to be displayed too.

Imagine:
“save as OGRE runtime”
“Save as CrystalSpace runtime”
“Save as BGE runtime”
“save as Unreal3 runtime”
“Save as TorqueGE runtime”

drooooool
Ok I’ll stop dreaming now…

So let me get this straight: you want Blender to use the latest OpenGL 2.0 features that are not currently available to it?

So why don't you create a new branch on the Blender tree that has this kind of support, while keeping the old branch of the interface? That would mean that when it comes to compiling you can choose which interface support you want.

That way you're getting the best of both worlds.

Errr… you know that popup button you press to change the view to Wireframe, Solid, Shaded or Textured? Well, add one more… GLSL… OpenGL… call it whatever…

Just add one more. That method keeps Blender the exact same app for everyone; all it adds is a new view type, which then entails the hard work of converting all of Blender's material nodes and shader system into GL display… What fun, eh?!

Ok progress… Err sorta…

To those who know OpenGL code: what kinds of patterns should I be looking for? Is it the mapping of the system??

I know that all coding styles have a ‘flavor’ pattern, kinda like how different foreign languages have that ‘thing’ that sets them apart…

For me, I would try to focus on Obj-C if at all possible, since I am learning it in Cocoa/OS X land… but whatever is needed.

Errr need examplessssss…

Blender doesn't use Obj-C at all, and patches requiring Obj-C wouldn't be likely to be accepted.

LetterRip