Paul Debevec - Realtime Raytracing

edit: Since this is topic-related, I moved it to the top.

You cannot speed things up, but you can improve your workflow (which I just did in the past five minutes… happy to share the information this quickly!). As has been said before, you can render your animation by layers.
A good tutorial is a post by sundialsvc4 on this website. Another way to speed things up is to use Macouno’s BRayBaker 3 (Blender Ray Baker).
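
If it helps to see the idea in code rather than inside Blender: the win from rendering by layers is that the heavy, static stuff only gets rendered once, and every frame just composites a small foreground layer over it. Here's a minimal sketch in Python; the filenames and frame count are made up, and the foreground frames are assumed to be RGBA PNGs.

```python
# Minimal sketch of compositing a per-frame foreground layer over a
# background that was rendered only once. Filenames and frame count are
# made up; the foreground frames are assumed to be RGBA PNGs.
import cv2
import numpy as np

background = cv2.imread("background.png").astype(np.float32) / 255.0  # rendered once

for frame in range(1, 251):
    fg = cv2.imread(f"actor_{frame:04d}.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0
    alpha = fg[..., 3:4]                                     # foreground coverage
    out = fg[..., :3] * alpha + background * (1.0 - alpha)   # standard "over" operator
    cv2.imwrite(f"final_{frame:04d}.png", (out * 255).astype(np.uint8))
```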


I am a beginner in 3D, but I’m reading as much information as my brain can process and my time allows (which means not as much as I would like). I guess what follows is a little off topic, so most readers can just skip it: no frustration. Sorry for the long post… it’s only for informational purposes. Mostly it’s just a basic explanation of HDRI and photogrammetry, though I really understand neither of them.

Debevec has made a program for creating HDRI (high dynamic range images: images with more contrast than standard RGB images can hold) yourself, with nothing more than a tripod, a camera and a shiny Christmas tree ball (used as a mirrored light probe). As it has been said on another forum/website (credit to the original author, whose name and URL I forgot… :P): if you have a white background (RGB values 255,255,255, or 1,1,1 on Blender’s sliders), take a square in the middle of that background and change its value to 10,10,10, it’s like opening a window in a white wall: sunlight comes through and it looks brighter!

You can find lots of information on Debevec’s website or by googling around. Basically, what you do is take several pictures of a real-life scene from the same point of view with different exposures. I’ll add one last thing, because it took me some time googling around: you can achieve the same or better results using a fish-eye lens (or maybe a wide-angle lens). There’s a website about it, but I lost all my bookmarks. :stuck_out_tongue: The point in 3D is that you can use HDRI images to simulate the background and the lighting.
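
For anyone who wants to try the exposure-merging part in code rather than in Debevec's own tool: OpenCV ships an implementation of his merging method. This is only a minimal sketch; the filenames and exposure times are placeholders, and you'd substitute your own bracketed shots taken from a tripod.

```python
# Minimal sketch of merging bracketed exposures into an HDR image using
# OpenCV's implementation of Debevec's method. Filenames and exposure
# times below are placeholders for illustration.
import cv2
import numpy as np

files = ["exp_1_30s.jpg", "exp_1_4s.jpg", "exp_2s.jpg", "exp_15s.jpg"]
exposure_times = np.array([1 / 30.0, 0.25, 2.0, 15.0], dtype=np.float32)

images = [cv2.imread(f) for f in files]   # same viewpoint, different shutter speeds

# Recover the camera response curve, then merge the exposures into one HDR image.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)

merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)   # float32, values can exceed 1.0

cv2.imwrite("probe.hdr", hdr)   # Radiance .hdr keeps the extra dynamic range
```

That "values can exceed 1.0" bit is exactly the 10,10,10-square-in-a-white-wall idea from above: the merged image stores light intensities beyond what a normal 0–255 image can represent.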

But I guess what your father had was something like “Facade”, which was used as a modelling and texturing tool. The name was, if I’m correct, just the name of the university research project; it is not an actual commercial, available product. This program does photogrammetry: you take pictures and then model 3D meshes of what appears in the pictures, based only on the picture information. Essentially you recover the real distances by analyzing one or several pictures. My knowledge stops here.
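
Just to make the “recover the real distance from the picture” part concrete: this is not what Facade actually does internally, only a toy example of the pinhole geometry that photogrammetry builds on, with made-up numbers.

```python
# Toy illustration of recovering distance from a single photo using the
# pinhole-camera relation Z = f * W / w. Not Facade's actual algorithm;
# the numbers are made up.
def distance_from_known_size(focal_length_px, real_size_m, size_in_image_px):
    """Distance to an object of known real-world size."""
    return focal_length_px * real_size_m / size_in_image_px

# A doorway known to be 2.0 m tall appears 400 px tall in a photo taken
# with a camera whose focal length is roughly 1200 px.
print(distance_from_known_size(1200.0, 2.0, 400.0))   # -> 6.0 metres away
```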

And it does make the whole process much faster. You take pictures, model the scene with photogrammetry, erase the lighting information (there’s a tutorial on 3drender.com, I think) and texture it with the same photos. Then render. It should be fast.
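
The “erase the lighting information” step is normally done by hand in an image editor (which is what the tutorial describes, as far as I remember), but a very crude automated approximation is to divide the photo by a heavily blurred copy of itself so the large-scale shading flattens out. A rough sketch, with a made-up filename:

```python
# Very crude sketch of "erasing the lighting" from a photo before using it
# as a texture: divide the image by a heavily blurred copy of itself so the
# low-frequency shading flattens out. Only an approximation of the idea.
import cv2
import numpy as np

photo = cv2.imread("wall_photo.jpg").astype(np.float32) / 255.0   # placeholder filename
shading = cv2.GaussianBlur(photo, (0, 0), sigmaX=51)              # rough estimate of the lighting
flat = photo / (shading + 1e-4)                                   # divide the lighting back out
flat = np.clip(flat * shading.mean(), 0.0, 1.0)                   # restore overall brightness

cv2.imwrite("wall_texture.png", (flat * 255).astype(np.uint8))
```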

Side note: I think this kind of technique was used in the movie The Matrix, for the rooftop scene where Keanu Reeves is dodging bullets. The whole surroundings were built that way, I think.

Again, I am no pro. This is just some (badly) compiled information I found on the web. You can find the whole thing on Debevec’s page and in other tutorials.

Side note 2: It made me want to learn more about photogrammetry and HDRI. So if anyone has pointers on how to do it, and maybe how to fit it into Blender’s workflow, I’d be glad to hear about it.

Real-time raytracing is not a myth (http://www.openrt.de/), but for most of us it is not a reality either. I have worked in game programming and developed 3D engines. What you see is NOT what you get. Game programmers cheat, not because they want to (it is hard and time consuming), but because they have to if they want to stay “cutting-edge” and provide “realistic” (if such a word can exist in this context) graphics. But things that look realistic don’t necessarily have to be. Programmers of real-time applications are VERY selective about which aspects of reality they want to simulate.

The dilemma with these kinds of questions is that real-time applications only care about the end result looking realistic, not about the method used to create the effect. This in turn makes the real-time solution very sensitive to its source data, since a lot of information has to be pre-processed. Non-interactive solutions aren’t as picky, because they rely on methods that (often) mimic real-world physics, like optics. Because of that, they can work with much more primitive (i.e. non-preprocessed) source data.

Well, yeah… OK, true real-time raytracing is out of the question. I think we all understand that.

That said, there are probably a number of tricks that could be coded into Blender. There has been talk of adding shadow baking and such.

Also, I’m sure more could be done with a vertex-colour-type system.
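
For example, the classic vertex-colour trick is to bake simple lighting into the vertex colours once, offline, so the realtime view only has to interpolate colours. A minimal numpy sketch (not the Blender API; the normals and the light are made up):

```python
# Minimal sketch of "baking" lighting into vertex colors: precompute simple
# Lambertian shading per vertex once, store it as a color, and let the
# realtime view just interpolate those colors instead of lighting anything.
# Plain numpy, not the Blender API; normals and the light are made up.
import numpy as np

normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0],
                    [0.7071, 0.0, 0.7071]])   # unit vertex normals
light_dir = np.array([0.0, 0.0, 1.0])         # direction towards the light
light_color = np.array([1.0, 0.95, 0.9])
ambient = np.array([0.1, 0.1, 0.12])

# N . L clamped to zero, then tinted by the light; this runs once, offline.
diffuse = np.clip(normals @ light_dir, 0.0, 1.0)[:, None] * light_color
vertex_colors = np.clip(ambient + diffuse, 0.0, 1.0)

print(vertex_colors)   # stored per vertex; the viewport just displays them
```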

And some of the calculations done by RenderMan (if it ever becomes a plugin, or gets integrated) could prove to be real time-savers.

All slightly hackish, but hey… there’s a lot to gain.