Blender, render with gpu, or die!

lol… someone saw my typo.

But really, hardware rendering can yield something that is close to software rendering, and at very high speeds. The downside is development time and cross-platform compatibility. Maybe a few years from now things will change, but for now there isn’t much point in doing it.

I’d let nVidia do the heavy lifting here for now, because they are more capable and have better resources to do so. Not to mention they have access to the hardware developers to figure out any questions they have, which is something the rest of us don’t have.

I don’t like to raise the same point twice, but I still haven’t seen any empirical evidence for this claim.

More and more people are simply stating that GPU acceleration will give speed improvements over a pure CPU-based renderer, but I still haven’t seen any proof of this. And I restate my original point for clarity: even nVidia failed miserably with Gelato, which is no faster, and often much slower, than PRMan. If they can’t do it, who can?

I still feel that this GPU acceleration of offline rendering is a myth propagated by people who don’t understand the intricacies of photorealistic rendering, especially when compared to realtime rendering.

Happy to be proven wrong though.

Paul

I still feel that this GPU acceleration of offline rendering is a myth propagated by people who don’t understand the intricacies of photorealistic rendering, especially when compared to realtime rendering.

And you don’t seem to understand that there is more to a production than final rendering…

Oh, I fully understand that, and I’m not questioning the request for faster/near-realtime rendering with an acceptable drop in quality. I’m questioning, if you read my post, the constant claims that the GPU is a silver bullet for non-realtime rendering. Many people blindly claim that it is possible to use the GPU to get faster offline rendering; my question to those people is, where’s the proof of that? I’ve provided a valid counterexample to the claim with nVidia/Gelato.

Paul

I don’t understand what the fuss is about. The difference is perfectly obvious to me, because I see it every day. I “stumbled upon” the previously mentioned page (for another product…) precisely because it summed up my perspective so very well.

What I really want is a choice. I want a set of types of “render nodes” that are GPU-accelerated. (It would be wonderful to have a Python node-type, also.) I do not mind having to construct a node-graph.

My output medium is video, my requirements are more moderate, and my most urgent need is speed. Yet it’s not “real time.” I want the GPU to do as much or as little heavy lifting as I wish.

I don’t care that the results may be hardware-specific or that the CPU renderer (node…) might not produce an identical result.

It takes me more than 10 minutes to get a proof print of a complex machinery animation sequence that my cobbled-together GameBlender method can produce in less than one.

It’s in the countless apps that small developers have made themselves, whether for school or for fun. They are almost all Windows-based, but you can find them on opengl.org and via Google. But they are just test apps.

No matter. No one wants to throw their hat into the ring, so BA!

Absolutely… The more render force, the better! ;):eyebrowlift2:

Someone start, then. You might get some support.
It might start out like the GIMP skin of Photoshop: http://linux.slashdot.org/article.pl?sid=05/09/16/155221
And then it might end up the same way, too.

I totally agree
+10000

This is a realtime head, rendered at about 40 FPS on my home computer.

At any rate, this image is getting really close to production quality, and even then, the image you’re looking at was rendered at 1920x1200 and scaled down. I would wager that this image is about 90+% of the quality of an offline render, at only a fraction of the cost.

So there are some benefits to this technology. But I still maintain that Blender’s resources should be left to interface and tool development rather than working on an evolving technology like this.

Oh, and After Effects, Motion, and Combustion all utilize the GPU. And they use 3D; granted, it’s not huge 3D models, but for compositing 3D effects with film footage, the GPU is a huge time saver now.

I’ve seen plenty of GPU test code that claims to do amazing things with GPU-accelerated matrix math, GPU-accelerated compositing ops, etc. But these are not accelerated rendering. I’m still utterly unconvinced that, in the context of a full production renderer, GPGPU-style acceleration can achieve the speedups that many people are expecting.

For one, the bus between the CPU/main memory and the GPU is too slow. It’s all well and good when all the work is done on the GPU, as is the case with realtime/GL-style polygon rendering. However, when the GPU is being used to accelerate smaller parts of the overall rendering process, the transfer time will far outweigh any performance improvement gained by doing the work on a highly parallel, VLIW processor. This is where the next-gen consoles win: the processors themselves are generally not that powerful in their own right, but the bus architecture is orders of magnitude wider and faster than that of mainstream PC architectures.
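To make the bus-overhead point concrete, here is a minimal, hypothetical CUDA timing sketch (not taken from any real renderer): it uploads a buffer, runs a trivial per-element kernel, and downloads the result, timing each stage separately. For a small, isolated piece of work like this, the two transfers typically dominate the kernel time, which is exactly the problem with offloading only fragments of a render pipeline.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial per-element "work" kernel: a stand-in for some small,
// offloaded piece of a render pipeline (e.g. shading a batch of samples).
__global__ void scale(float *data, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= k;
}

int main()
{
    const int n = 1 << 20;                   // ~1M floats (4 MB)
    const size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        host[i] = 1.0f;

    float *dev = NULL;
    cudaMalloc(&dev, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // upload over the bus
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(dev, n, 0.5f);          // the actual GPU work
    cudaEventRecord(t2);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   // download over the bus
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float up = 0, run = 0, down = 0;
    cudaEventElapsedTime(&up,   t0, t1);
    cudaEventElapsedTime(&run,  t1, t2);
    cudaEventElapsedTime(&down, t2, t3);
    printf("upload %.3f ms, kernel %.3f ms, download %.3f ms\n", up, run, down);

    cudaFree(dev);
    free(host);
    return 0;
}
```

The exact numbers depend on the card and the bus, but the shape of the result is the point: the kernel is cheap, the round trip over the bus is not, so only work that stays resident on the GPU sees a real win.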

I understand that using the GPU for its normal job, and accelerating realtime rendering to the view window, or to any window for that matter, is a potential win if that is what you specifically need. However, it is a very niche requirement, and a very different solution to the accelerated non-realtime rendering that others are claiming can bring huge benefits.

What you’re proposing there is an equivalent to Maya’s “Playblast”, which basically runs the animation in the viewport and dumps a frame capture of each frame. Having used this tool in a production environment, I know for a fact that you still don’t get “images to video” at the same realtime framerate that you get when just playing the animation in the viewport. This is because you have to factor in the time to write the images to disk, which is obviously a bottleneck, and then you have to encode the frames into a video stream, which takes the same amount of time whether they come from a realtime source or from a non-realtime renderer. Even taking this into account, the playblast-to-video in Maya is a very useful tool for producing previews of animations, and one we use all the time, so I can fully understand the benefits.

If this is what you’re after, then yes, it’s a case of enhancing the viewport rendering to fully support OpenGL shaders. As I mentioned earlier in this response, it’s quite a niche requirement, and one that isn’t likely to get mainstream development effort, so you may be better off trying to either do it yourself, or find a kind developer willing to do the work.

Paul

pgregory: yeah, I figured as much, that it’s all a hack anyway. I don’t think any developer will implement GL hardware rendering of any sort in Blender for a long time. I sure don’t want to code it.

I do, however, want to relearn C and go further with it, but Ruby has me right now.

CUDA is certainly a powerful engine. Look at Gelato vs. CPU render times.

What I just read above, though, shows a lot of confusion. CUDA is a matrix-math wiz. CPUs are not optimized for matrix math the way nVidia GPUs are. The trick is that most graphics mathematics is, or can/should be, performed as matrix math. Ever heard of matrix translation math? That’d be a matrix calculation. :wink: How about matrix collision calculation?

There’s a theme, no? :wink: Now, with matrix math optimized on GPUs, and the benefit of having hundreds of calculation cores (128+, I believe), your calculation time is going to be cut by ridiculous amounts!

http://forums.nvidia.com/index.php?showtopic=33761
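For illustration only, here is a hypothetical CUDA sketch of the kind of bulk matrix work being described (the kernel name and layout are made up, and nothing here comes from Blender or Gelato): one thread per point, all applying the same 4x4 transform.

```cuda
#include <cuda_runtime.h>

// Hypothetical sketch: transform a large array of points by a single 4x4
// matrix, one CUDA thread per point. This "same small calculation, repeated
// a huge number of times" shape is what spreads naturally across hundreds
// of GPU cores.
__global__ void transformPoints(const float *m,    // 4x4 matrix, row-major
                                const float4 *in,  // input points (w treated as 1)
                                float4 *out,
                                int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count)
        return;

    float4 p = in[i];
    out[i].x = m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3];
    out[i].y = m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7];
    out[i].z = m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11];
    out[i].w = 1.0f;
}

// Host-side launch, assuming d_m, d_in and d_out were already allocated with
// cudaMalloc and filled with cudaMemcpy:
//
//   int threads = 256;
//   int blocks  = (count + threads - 1) / threads;
//   transformPoints<<<blocks, threads>>>(d_m, d_in, d_out, count);
```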

Now, that all sounds good, but… Let’s throw some realism into the mix. :wink:

  1. As mentioned above, there are a lot of things to put into Blender, and few people to put them in. If someone wanted to step in and put CUDA into the mix, I’d love to shake their hand. :wink: But until then, I think we’ll be waiting.

  2. It’s new(-ish). Last I checked, their SDK was like… 1.2, or something like that. That leads into…

  3. It’s not widely cross-platform yet. Once that’s done, the gap between customers and developers will be much less.

  4. It’s nVidia cards only, and of those, only the 8800 and up. Once 8800-class cards trickle down to the lower end of the consumer market, that gap mentioned above will, again, grow smaller.

Lastly, unless ATI goes and adopts the standard (somehow designing their cards to comply), something big is going to happen. If there’s enough reason for people to jump to nVidia for a performance upgrade, ATI will be in big trouble, and that’s going to mean a price war. The only thing that will make that war erupt is software starting to integrate CUDA and boasting of the benefits.

Anyway, the benefits are real, I’ve seen them myself, but there’s much to be done between here and there. :slight_smile:

-Daniel

Guys

Did you check out HyperShot, V-Ray RT, or Alias’ new toy?

They are CPU-based only, and damn fast.

I expect the biggest bang for the buck (for now) will be efficient multi-threading. In a few years we will probably have 8 or more cores to work with. The second-biggest impact would come from identifying rendering bottlenecks and writing alternate assembly/SSE code for them. The two combined could have an enormous impact on rendering.

Once Blender’s internal render engine is separated out (scheduled for 2.5, I believe), there will be some good opportunities here. I got into some SSE code a few years ago, and I think it might be fun to try my hand at it. Just wish I had some free time :rolleyes:

Eventually we might have a standardized way of coding to a hybrid CPU/GPU, but I don’t see that happening quickly, and any effort spent in that direction now will likely be wasted.

hi Spamagnet

I agree here - CPU core counts keep getting higher and the cores faster, and they start to overtake the GPU again.
Tiger and Vista might use the GPU for some effects, as do some graphics applications,
but those effects are also card-specific or, worst case, OS-specific.

There are already very fast raytracing engines which are CPU-based and produce realtime results. I am very curious about your SSE code, simply because I think Blender’s internal raytracer needs a serious facelift.

After all the nice additions we got I think that should be on the hit list for the next releases.

Ha, 8-core CPUs are still nowhere near as fast as a modern graphics card. Intel did a demonstration of calculating physics on an 8-core Nehalem, with 50,000 to 60,000 particles that all had physics, and they rendered it all on the CPU at about 15-25 fps. They challenged nVidia to do the same on a GPU using CUDA. nVidia had 65,000 particles, all with full physics, rendering at nearly 300 fps (and no, that is not a typo, it really was 300 fps). An 8-core processor will never be anywhere near as fast as a graphics card at calculations that involve math; it’s a simple fact. GPUs do math really, really fast, and guess what rendering is: math, lots and lots of math.

The same concept is demonstrated with video games. If you have ever used 3DMark 2006 (I don’t know about the others), one of the tests renders a scene on the CPU, and the CPU alone; it looks like one of the older games from about 2003 or so and runs at about 1-2 fps on a pretty good CPU. The other scenes use some pretty high-end graphics, are all rendered on the GPU, and on a good GPU they run at 30-60 fps. It is obvious that the GPU is faster than the CPU can ever hope to be at rendering, and anyone who claims otherwise has never played a video game.
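For what it’s worth, here is a rough, hypothetical CUDA sketch (nothing to do with the actual Intel or nVidia demo code) of why that particle benchmark suits the GPU so well: every particle is updated independently, so the whole set maps one-thread-per-particle onto the card.

```cuda
#include <cuda_runtime.h>

// Hypothetical per-particle physics step, one thread per particle.
// Every particle is independent, so tens of thousands of them map
// straight onto the GPU's parallel cores.
__global__ void stepParticles(float4 *pos,  // xyz position, w unused
                              float4 *vel,  // xyz velocity, w unused
                              int count,
                              float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count)
        return;

    // Simple gravity plus Euler integration.
    vel[i].y -= 9.81f * dt;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;

    // Bounce off the ground plane, losing some energy.
    if (pos[i].y < 0.0f) {
        pos[i].y = 0.0f;
        vel[i].y = -vel[i].y * 0.5f;
    }
}

// One simulation step for 65,000 particles (device buffers assumed allocated):
//   stepParticles<<<(65000 + 255) / 256, 256>>>(d_pos, d_vel, 65000, 1.0f / 60.0f);
```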

As mentioned in the post above me, the numbers just don’t compare. Take any of the render engines mentioned below my last post, port them to CUDA, and your render times will REALLY burn. :slight_smile:

I’m sure your videogames aren’t fully raytraced…