Paul Debevec - Realtime Raytracing

Hey all,

Not sure whether or not this subject has been discussed before. Anyway, my dad was showing me some realtime rendering with raytracing/texturing in a program that came with his computer. Apparently a man named Paul Debevec developed a way to speed up render times.

My question is, why does rendering have to take so long? There are many games out there that render 3D graphics on the fly, reflections and all, and yet it takes me a minute to render a glass ball on a stand at relatively low settings. Games, though they don’t have oversampling (I think), can render textures, lighting, and reflections on the fly.

I’ve got some links handy, and was wondering whether or not Blender could make use of these kinds of features. Personally, I’m no 1337 programmer and my knowledge of computers is limited to just using them, so enlighten me if you will :slight_smile:

http://www.debevec.org/

An interesting video I found on that site: http://gl.ict.usc.edu/research/DLS/DualLightStage_EGSR05_Submit.avi

It’d be great if Blender3D could implement this sort of stuff to speed up its rendering time. (I’m sure there’s something I don’t know that makes rendering in 3D programs take longer… if so, please tell me.)

It all has to do with accuracy.

Real-time raytracing sacrifices some depth and richness to the color by taking shortcuts.

Blender provides access to renderers of varying complexity: the internal scan-line renderer (which has a sort of sub-raytracer), the raytracer proper, and YafRay, which I believe has options for both raytracing and radiosity (a more advanced global-illumination technique).

A “quick render” à la XSI would be neat in Blender, but I don’t think it’s particularly high on the TODO stack, in part because the preview options work fairly well.

Hope this helps.

It would be nice to have that feature; however, I don’t think that a home PC (well, mine at least) could handle it, especially my BreakRoom project.

I have read topics from people referring to something like that, realtime raytracing, but never looked it up myself.

Well, I’d be willing to sacrifice accuracy to render things like that on the fly. My dad told me it was done with hardware rendering as opposed to software rendering. Anyway, I have no clue what XSI and TODO are =P.

It’s the hardware/software rendering that makes all the difference. Blender’s rendering engine (and YafRay too) uses only the main processor, whilst games make use of your main processor and graphics card as well.

So if Blender had hardware rendering programmed into it, rendering times could be almost as fast as in games. Maybe a bit longer to ensure accuracy…?

You might play a game like Doom 3, which runs in realtime and looks really nice: detailed characters, advanced lighting, etc. These games look that good because everything you see is first rendered on high-end machines using very high-poly models. After (software) rendering all of these objects, specific data is projected onto a low-poly model. Then some tricks are used to make the low-poly model look like the high-poly one. One such feature in Doom 3 is normal mapping, which simulates very high-poly surfaces. The catch is you first need to create a high-poly model and transfer its surface properties to the low-poly model by rendering it.
Another trick is baked-in lighting. For example: use a software raytracer to render a color map of a building with radiosity, then apply that map to a (low-poly) object and give it some ambience value.
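The baked-lighting idea can be sketched in a few lines of Python. This is a toy 1-D example I made up (no real engine stores lightmaps this way): the expensive lighting math runs once, offline, and the game just reuses the stored values.

```python
def bake_lightmap(normals, light_dir):
    """Precompute Lambert (N . L) shading for each surface sample so
    it can be reused as a static texture.  `normals` is a hypothetical
    list of 2-D unit normals; `light_dir` is a normalized 2-D light
    direction."""
    baked = []
    for nx, ny in normals:
        # The same dot product a renderer would evaluate every frame
        # is evaluated once here, offline.
        intensity = max(0.0, nx * light_dir[0] + ny * light_dir[1])
        baked.append(intensity)
    return baked

# Offline: bake once.
normals = [(0.0, 1.0), (0.6, 0.8), (1.0, 0.0)]
lightmap = bake_lightmap(normals, (0.0, 1.0))

# At "runtime" the game just looks the values up -- no lighting math.
print(lightmap)  # [1.0, 0.8, 0.0]
```

Of course this only works for static lights and static geometry, which is exactly why baked lighting breaks down as soon as something in the scene moves.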

It’s quite simple. Game video cards are not able to do any raytracing. They are just too slow… Instead they use a graphics API like OpenGL to display simple figures: triangles, quads and so on. This API doesn’t describe the way a ray of light should refract when it travels through a material. If it did, the graphics card would have to know the whole 3D scene at any given time, not only the part that’s in front of you. That would lead to enormous amounts of memory and computation being eaten by the game…

So games look good, but they only use tricks to recreate what has been rendered before on a very high-end system with a software raytracer.
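To make the "whole scene" point concrete, here is the basic building block of a raytracer, a ray-sphere intersection test, as a rough Python sketch (not any particular renderer's code). A raytracer has to run something like this against every object in the scene for every pixel, which is why it needs the complete scene in memory, unlike a rasterizer that only ever looks at one triangle at a time.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or
    None if the ray misses.  Standard quadratic ray-sphere test."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses this sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Camera ray looking down -z at a unit sphere 5 units away:
print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```

Multiply this by every pixel, every object, and every bounce (reflections, shadows, refraction) and you can see where the render minutes go.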

Softimage XSI is another program like Blender, Maya, Max, etc. It’s very high-end and very expensive.

TODO means “to do”, as in “things yet to be done”. Often paired with ‘list’ as in “to do list”. Sorry about that.

Graphics cards have special hardware that increases speed in the following ways:
[>] The 3D data need only be sent to video memory once, instead of every frame of the animation. Moving data from host memory to video memory is still a fairly slow process (though not nearly as slow as it used to be). What this means is that the GPU (your video card’s CPU) can work on its own data without having to wait for the host CPU to feed it. The only things it needs are a few small pieces of data here and there to instruct it what to do with that data (camera angle, etc.)
[>] Optimized cheats. When you load data to the GPU, you are required to do so in a specific way: typically as an array of faces, each of which is a collection of 3D spatial points and 2D UV-coordinates. The ability to map an image to a face is very highly optimized for speed. It is also very easy to do simple shading tricks (shadows, directional light sources, etc.) which when done right (as the GPU is programmed to do) produces some very impressive results.
[>] Hardwired code. The GPU only performs specific tasks as a slave processor, so it doesn’t need all the overhead the host CPU does to load, decode, and execute multipurpose instructions. While some GPUs have a limited ability to be programmed, it is far from the power and flexibility of a general-purpose CPU.
So, removing all the general purpose overhead and doing a small number of things well and fast, the GPU has a speed advantage when performing graphic bitblt, shading, lighting, etc. operations.
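As a rough illustration of the "optimized cheats" point, here is the kind of per-pixel texture lookup a GPU hardwires, written as a toy Python sketch (nothing like real silicon, just the idea): given a face's UV coordinates, the hardware's job boils down to an array index.

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour texture lookup.  `texture` is a 2-D list of
    texel values; (u, v) are coordinates in [0, 1].  The GPU does
    this (plus filtering) for millions of pixels per frame."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u == 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A 2x2 checkerboard texture:
checker = [[0, 1],
           [1, 0]]
print(sample_texture(checker, 0.75, 0.25))  # 1
```

Because the operation is this simple and this regular, it can be etched into fixed hardware and run absurdly fast, which is exactly the speed advantage described above.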

The thing you should note is that it is all an illusion. The GPU does not raytrace a scene. If it did it’d take about as long as the CPU. The GPU cheats, and does it well.

The point of raytracing is the ‘accurate’ modelling of light in 3D space. The accuracy will of course depend upon the type of raytracer… but it is still far more capable than the GPU’s bag of tricks.
For example, try to model a fishbowl in bright sunlight and all the prismatic effects that come out of it. The GPU just can’t do it. It has to cheat to approximate the effect.
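The "accurate modelling of light" the fishbowl example depends on is mostly Snell's law applied per bounce. Here is a minimal sketch of a 2-D refraction function (illustrative only; real raytracers do this in 3-D with spectral or per-channel indices to get the prismatic colors):

```python
import math

def refract(incident, normal, n1, n2):
    """Refract a unit direction vector through a surface boundary
    (Snell's law).  Returns the refracted direction, or None on
    total internal reflection.  This per-bounce physics is what a
    raytracer models and a rasterizing GPU can only approximate."""
    eta = n1 / n2
    cos_i = -(incident[0] * normal[0] + incident[1] * normal[1])
    sin_t2 = eta * eta * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))

# A ray entering glass (n = 1.5) at 45 degrees bends toward the normal:
d = refract((math.sin(math.radians(45)), -math.cos(math.radians(45))),
            (0.0, 1.0), 1.0, 1.5)
print(math.degrees(math.asin(d[0])))  # about 28.1 degrees
```

Now imagine evaluating this (plus reflection, plus wavelength dependence) for every ray that enters and exits the fishbowl, and it's clear why the GPU fakes it instead.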

So the point is that 3D modellers and 3D game modellers have different goals: the latter want to make the game look pretty; the former want physical accuracy.

I hope this helps clear things.

hmm… it looks like [b]joostbouwer[/b] beat me to the punch…

Hmm…last two posts are quite right. Still, it would be good if the GPU could somehow assist in the raytracing. Cuts down on rendering times.

By the way, a good example I can give of the GPU cheating is Half-Life 2. The water is so realistic it almost seems raytraced, yet even my three-year-old ATI card can do about 20 frames per second. I don’t know how it’s faked, but it isn’t raytraced, that’s for sure.

Another eye candy is the subtle reflections on the weapons and other metallic objects. But those are all pre-rendered environment maps which, as some have surely noticed, are not very accurate.
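The environment-map trick boils down to one cheap formula: instead of tracing a reflected ray through the scene, the GPU computes the mirror direction and uses it to index a pre-rendered image. A minimal sketch of that reflection formula (my own toy code, not any engine's):

```python
def reflect(d, n):
    """R = D - 2(D.N)N: the mirror direction for incoming direction
    `d` at a surface with unit normal `n`.  A GPU uses this vector
    to look up a pre-rendered environment map rather than tracing
    an actual ray -- which is why the reflections can't show nearby
    moving objects accurately."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

# A view ray hitting an upward-facing floor bounces back up:
print(reflect((0.7, -0.7, 0.0), (0.0, 1.0, 0.0)))  # (0.7, 0.7, 0.0)
```

The formula is exact; the cheat is in what the looked-up image contains, which is a snapshot, not the live scene. Hence the inaccuracies people notice.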

How would you guys account for this…Real-Time High Dynamic Range Image-Based Lighting? Everything looks realistic as hell (just an expression - hell isn’t necessarily realistic) but do you think even those are somehow faked?

Of course they’re faked :slight_smile:

For example this one:

As you see, the balls reflect the environment… but you cannot find any reflection of the balls themselves. :wink:

Same goes for the lighting: no shadows on the spheres. It’s all cheating.
BTW, Softimage XSI has no realtime preview or anything like that. It has the render region, which is actually just an option to render a portion of your viewport with mental ray. Not realtime at all.

Wow, that’s very observant of you. In conclusion of all this, realtime raytracing is a contradiction in terms! :smiley:

I don’t agree with that last statement, yu_wang. There’s certainly some raytracing involved in realtime HDRI, but it’s limited to the environment map only.
Real raytracing can involve millions of light rays to compute as they bounce off surfaces, losing intensity or color, refracting or reflecting (blurred or not). It’s so complex that I think there will never be a truly accurate realtime solution.

Joost

Has anyone who naysays this stuff ever read the docs and gone to the exhibits?

The stuff is real: raytracing and radiosity on the graphics card are real and faster. Reflections, lighting, normal maps, clouds, particles, sprites, smoke, all in real time. Some of it might be slow, but it is still faster than waiting two minutes for one frame.

There are hacks, yes, but we already have hacks in normal scanline rendering. Why the heck would they keep making advancements in graphics cards if it was all faked? But the greatest bottleneck I have found in talking to developers is basically one thing: time. Getting a real render engine out of graphics cards that can handle massive scenes is a major workload… but workarounds can be made.

My understanding is: why not make the tool render in passes, a few objects at a time, and later composite them together with depth masks…
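The depth-mask compositing idea can be sketched very simply: render each group of objects to its own pass with a depth value per pixel, then merge passes by keeping whichever sample is closest to the camera. A toy Python version (real compositors work on full image buffers, obviously):

```python
def composite_with_depth(pass_a, pass_b):
    """Merge two render passes per pixel, keeping whichever sample
    is closer to the camera.  Each pass is a list of (color, depth)
    tuples, one per pixel."""
    return [a if a[1] <= b[1] else b
            for a, b in zip(pass_a, pass_b)]

# Two objects rendered in separate passes, then merged by depth:
fg = [("red", 2.0), ("red", 9.0)]
bg = [("blue", 5.0), ("blue", 3.0)]
print(composite_with_depth(fg, bg))  # [('red', 2.0), ('blue', 3.0)]
```

The nice property is that re-rendering one pass never touches the others, which is exactly the workflow being proposed.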

Awww dont matter, you just gotta find someone willing to take the risk and develop it instead of more modeling features… :stuck_out_tongue:

Show me a demo with caustics, GI, and highly detailed models with over 300 MB of textures that will run on my Radeon with 128 MB of memory… get the point why there aren’t any render engines that use graphics cards? :stuck_out_tongue:

Ya missed the part about workarounds… like, err, today’s workarounds that already exist… Like, what pro is going to hit render, try to render every single detail in one pass, and call it a day?? No, my friend, they have to render out each pass: lighting, color, dirt maps and so forth…

The graphics card is just there to be smarter and faster than the CPU for the tasks it can handle faster than CPUs…

I’m familiar with the operation of XSI. I use it every day. I’m interested in what you mean by “interactive”.

What I mean by interactive is that when I modify the scene, the quick-render region immediately updates to reflect the changes. I never said “realtime”, although XSI is pretty quick about it.

Blender, on the other hand, does not auto-update rendered stuff when you make a change. It is not an interactive preview.

youngbatcat: No one here is saying this stuff is impossible. The problem is practicality. Perhaps in a couple of years everyone will have video cards capable of off-loading a lot of render processing. Right now, they can’t. The costs are prohibitive. This is the natural order of computer innovation.

The other thing you are missing is the way things work under the hood. Demonstrations are meant to be impressive; they show the best of what can be done with the new hardware and software. I can’t speak for all the respondents here, but I at least know a little something about the code and structure that goes into raytracing. It is not a trivial process, neither to program nor to execute.

The real-time ray-tracing you are seeing is taking advantage of simplifications and optimizations. This limits it in comparison to software raytracers. Period. The promoters make it seem all like bread and honey, and it is for the goal they are seeking to accomplish (fast, hyper-realistic games), but to generalize it as the cream of the crop in all things demonstrates only a lack of knowledge in the subject.

Sorry. I think my “you’re ignorant” argument trumps yours. Going to exhibits and reading promo-docs is not a replacement for technical understanding. :<

…Look, I’m sorry to be harsh, but fallacious arguments bug me. To be sure, this stuff looks cool. But I won’t be deleting Blender or XSI any time soon. :wink:

Duoas: Sorry didn’t read your post very well. That’s obvious. Cool to see another XSI user on elysiun.

And to everyone else:
“Realtime raytracing” makes use of predefined maps. Lots of them! These all have to be captured from the real world with special devices or software, or computed from very detailed scenes with very complex lighting simulations.
There is no way you can compare realtime with the rendering Mental Ray or PRMan does. Dirt maps, color maps, etc., don’t just fall out of the air! They are pre-rendered and have to be re-rendered every time something in the scene changes. It’s nice to render in passes, but it’s mostly a time-saving measure: if something in, say, the color map goes wrong, only that pass has to be re-rendered, not the whole scene.
For example, a city scene from Spider-Man 2 uses lots of passes: beauty pass, ambient occlusion/dirt-map pass, shadow pass, specular reflection, etc. Every pass had to be distributed over up to 100 CPUs (bucket-render style), all the parts were stitched together by PRMan, and all the passes were later composited. Now it’s completely logical to think that all this can be done by just one GPU…
Or let’s look at the lightstage technology mentioned before, also used in Spidey 2. They decided to use this technique because it’s so much quicker than SSS and radiosity (both very expensive lighting techniques, as you know). So if even a complete renderfarm doesn’t have the power to render advanced raytracing techniques in the desired time frame, what makes you think a GPU can???
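The multi-pass compositing described above can be sketched in one line of math per pixel. This is an illustrative blend I chose for the sketch, not what PRMan itself does; all values are scalar intensities in [0, 1]:

```python
def combine_passes(beauty, ao, shadow, specular):
    """Combine separate render passes into a final pixel value, the
    way a compositor stitches a multi-pass render together.  The
    formula (beauty attenuated by AO and shadow, plus specular,
    clamped) is illustrative only."""
    return min(1.0, beauty * ao * shadow + specular)

# Tweaking one pass (say, the AO) never re-renders the others:
print(round(combine_passes(0.8, 0.9, 1.0, 0.1), 2))  # 0.82
```

This is why passes are mostly a time-saving measure: the expensive renders happen once per pass, and the cheap combination step is all that reruns when something is tweaked.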

Joost

My only other thought is passes… though Blender still does not have automated passes either.

What I’m getting at is this. A common workaround I find useful is to make a copy of the Blender file and the Blender program, and run the same file in that new copy with different render settings for passes. Thing is, Blender does not make full use of the CPU at all, so I can find myself running four Blenders at once to render out passes, and then doing the same on other computers…

Sooooo, the whole GPU thing for me is just about getting full use of the number-crunching that GPUs can do compared to CPUs for things like lighting and such… Also, just very quick pre-renders with normal maps and shadow casting turned on would save lots of time, and with tweaking in Photoshop-like apps you could get a presentable product made much faster than handing over untextured items…

OK, let’s try this one… There is a nice dirtmap-baking script in the Python forums, though it uses Blender’s CPU renderer for its rendering. I have seen GPUs do baking massively faster as a single bake process, after which the baked texture provides the shadows for the scene. That is just one area where Blender could excel…

OK, well, it wouldn’t be a bad thing to use all the computation power your computer has. I find it strange that Blender doesn’t use all your CPU power. What does the task manager say? I can run XSI and ZBrush next to each other without any problem, but even when XSI runs on its own it still uses 100% of my CPUs…
It’s just Windows balancing the workload of the two programs.

Joost

Haha wow, this discussion really took off. Anyway, I take it that currently there’s no way to speed things up, even by sacrificing some accuracy? Sooner or later I plan on rendering some animations, and it’d be nice not to have to build a Beowulf cluster or rent a renderfarm :slight_smile:

(Gee, I sound so dumb compared to all the comments above :-? )