How come renderers are so different?

Hey, I was looking into ray-tracing and the Blender internal renderer, and I noticed that the rendering equation is the same everywhere.
http://en.wikipedia.org/math/7/9/a/79afc7016dee9593d7adf88915acce97.png

$$L_o(x, \omega) = L_e(x, \omega) + \int_\Omega f_r(x, \omega', \omega)\, L_i(x, \omega')\, (\omega' \cdot n)\, d\omega'$$
If the equation is the same, is the GI equation the same too? And if so, how come a renderer like YafRay can’t render quite as well as, say, mental ray?

Most raytracers use this equation; what differs is how the equation is used. YafRay can render as well as mental ray if you know how to work your materials and lighting. The algorithms used to evaluate the equation, along with the program’s other options, determine the quality. The two programs almost certainly use different algorithms for the fundamental raytracing: how light is scattered, point sampling, and so on. YafRay may also not be as optimized as mental ray. In short, it really depends on how the renderer was coded rather than on the equation. I’m sure somebody else can come along and explain it better than me, since I’m just getting into programming raytracers and OpenGL.

No, you’ve hit it right on the head.

Lo, Le, fr, and Li are functions; how they are implemented (and the equations they use) determines the quality of the output. The equation you’re looking at really just gathers the colors collected by the ‘ray’ emitted by the raytracer (ω is the vector, i.e. the direction the ray is travelling).
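To make that concrete, here’s a tiny sketch of how those terms might look in code. All of the names and the Lambertian material are made up for illustration, and a real renderer would sample many incoming directions and weight them properly instead of using one fixed light:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 scale(const Vec3& v, double s) { return {v.x*s, v.y*s, v.z*s}; }
Vec3 add(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }

// Hypothetical stand-ins for the terms of the rendering equation.

// Le: light the surface itself emits (zero for a non-light surface).
Vec3 Le(const Vec3& /*x*/, const Vec3& /*wo*/) { return {0, 0, 0}; }

// fr: the BRDF. A diffuse (Lambertian) surface reflects equally in all
// directions, so it is just albedo / pi, independent of wi and wo.
Vec3 fr(const Vec3& /*x*/, const Vec3& /*wi*/, const Vec3& /*wo*/) {
    const Vec3 albedo = {0.8, 0.2, 0.2};
    return scale(albedo, 1.0 / M_PI);
}

// Li: light arriving at x from direction wi. Here: one white light,
// no shadow ray, no falloff -- a real tracer would recurse here.
Vec3 Li(const Vec3& /*x*/, const Vec3& /*wi*/) { return {1, 1, 1}; }

// Lo: the left-hand side. For a single light direction the integral
// collapses to one term: Le + fr * Li * (wi . n).
Vec3 Lo(const Vec3& x, const Vec3& n, const Vec3& wi, const Vec3& wo) {
    double cosTheta = dot(wi, n) > 0 ? dot(wi, n) : 0;  // the (w' . n) factor
    Vec3 f = fr(x, wi, wo);
    Vec3 l = Li(x, wi);
    Vec3 reflected = {f.x * l.x * cosTheta, f.y * l.y * cosTheta, f.z * l.z * cosTheta};
    return add(Le(x, wo), reflected);
}

int main() {
    Vec3 x  = {0, 0, 0};            // shading point
    Vec3 n  = {0, 1, 0};            // surface normal
    Vec3 wi = {0, 1, 0};            // direction toward the light
    Vec3 wo = {0, 0.707, 0.707};    // direction back toward the camera
    Vec3 c  = Lo(x, n, wi, wo);
    std::printf("Lo = (%.3f, %.3f, %.3f)\n", c.x, c.y, c.z);
}
```

Swap in a different fr or a smarter Li and you get a different renderer; that’s exactly where the quality differences come from.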

Raytracing works somewhat in reverse of reality. Say you were rendering an image at 400 by 300 pixels. If you open Blender and look at the camera object, you’ll see it has a pyramidal shape: the point is the eye (your eye looking at the monitor) and the rectangle opposite it is the screen (the 400 by 300 grid). One ray is shot from the eye through each ‘pixel’ in that grid out into the scene. When it hits an object (the default cube, for example), it reports that object’s color back to fill in the pixel.

The complexity of raytracers lies in how that color is reported back. If all the renderer does is compute the angle between the camera and the light, it’s a fairly simple raytracer (this is pretty much how the OpenGL 3D View works in Blender). YafRay is smarter: it sends out additional rays, based on angles of incidence and so on, to gather a better idea of the colors in the surrounding areas that influence the final result. That’s an extra ray for each light source, plus rays for incidence, reflection, and refraction, all raised to the power of your ray depth (raytracing stops sending rays after n bounces, when a ray hits a light source, or when a ray has been tested against all possible collisions, i.e. it hits the background). You can also turn on oversampling to send out more than one ray per pixel; it takes longer, but you get a better idea of what the camera is looking at.
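To illustrate that eye-ray loop, here’s a bare-bones C++ sketch: the 400 by 300 grid, one ray per pixel, and one hard-coded sphere standing in for the scene. Nothing here is taken from Blender’s or YafRay’s actual code; it just shows the shape of the idea:

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

// Does this ray hit our one-object "scene" (a sphere at z = -5)?
// A real tracer would test every object, keep the nearest hit, and
// then shade it -- possibly recursively, up to the ray depth.
bool hitScene(const Vec3& origin, const Vec3& dir, Vec3& color) {
    Vec3 center = {0, 0, -5};
    double radius = 1.0;
    Vec3 oc = {origin.x - center.x, origin.y - center.y, origin.z - center.z};
    double a = dir.x*dir.x + dir.y*dir.y + dir.z*dir.z;
    double b = 2.0 * (oc.x*dir.x + oc.y*dir.y + oc.z*dir.z);
    double c = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - radius*radius;
    if (b*b - 4*a*c < 0) return false;   // ray misses the sphere
    color = {1, 0, 0};                   // flat red: no shading at all
    return true;
}

int main() {
    const int width = 400, height = 300;
    std::printf("P3\n%d %d\n255\n", width, height);   // plain PPM image
    Vec3 eye = {0, 0, 0};                // the point of the camera pyramid
    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            // One ray from the eye through pixel (i, j) on the screen plane.
            Vec3 dir = {(i - width / 2.0) / width,
                        (height / 2.0 - j) / width,
                        -1.0};
            Vec3 color = {0.2, 0.2, 0.2};            // background gray
            hitScene(eye, dir, color);
            std::printf("%d %d %d\n", int(color.x * 255),
                        int(color.y * 255), int(color.z * 255));
        }
    }
}
```

Everything interesting in a real raytracer happens inside hitScene: shading, the secondary rays for lights, reflection, and refraction, the ray-depth cutoff, and oversampling (which would just add an inner loop averaging several rays per pixel).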

With all those rays being projected, raytracing can get pretty slooowww.

And don’t get me started on radiosity… :stuck_out_tongue:

Hope this helps.

I understand what you’re saying. Have you (the first reply) programmed before, or are you just new to OpenGL and raytracing?

EDIT: How complicated is it to make a renderer? And where could I get info on how they are coded?

I personally have been programming for a while. Until recently it was mainly Python and/or Assembly, but now I’m stretching my C/C++ wings. I’ve been doing some minor dabbling in OpenGL for about a year now, and I’ve been looking into raytracing for about as long.

The complexity of a renderer is immense, yet it’s surprisingly simple from an overview:

First of all you have to be able to import your models. Then you have to be able to define materials and shaders, which dictate how light interacts and plays with a surface. Then you can get into texturing and lighting. Then you consider GI, reflections, and transparency. Then come all the extras, like anti-aliasing and the final render pass. At each of these stages (and they’re in no particular order of importance; they are all equally important) you have multiple routes you can take in terms of method, speed, and quality. The key is that all the routes lead to essentially the same place, except each one arrives with a little more or less quality than the next.
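As a rough map of those stages, here’s a skeleton in C++. Every name here is hypothetical, and each placeholder hides an entire family of competing algorithms that trade speed against quality:

```cpp
#include <cstdio>

struct Scene {};
struct Image {};

// Empty placeholders for the stages described above.
Scene importModels(const char*) { return {}; }   // load the geometry
void  assignMaterials(Scene&)   {}               // shaders: how light plays with a surface
void  applyTextures(Scene&)     {}               // texturing
void  placeLights(Scene&)       {}               // lighting
Image trace(const Scene&,
            bool gi,                             // global illumination on/off
            bool reflections,
            bool transparency,
            int  samplesPerPixel) {              // anti-aliasing / oversampling
    (void)gi; (void)reflections; (void)transparency; (void)samplesPerPixel;
    return {};
}

int main() {
    Scene s = importModels("scene.obj");
    assignMaterials(s);
    applyTextures(s);
    placeLights(s);
    // Each argument picks one of the "routes": more quality, more time.
    Image img = trace(s, /*gi=*/true, /*reflections=*/true,
                      /*transparency=*/true, /*samplesPerPixel=*/4);
    (void)img;
    std::printf("render finished\n");
}
```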

Renderers like MentalRay can pretty much take any route you want. YafRay is a little more limited in this respect and can’t quite go all the same places.

There are many sources to get you started. A great place to begin is reading some of the threads over at CGTalk’s graphics development forum. PBRT is a really excellent book that I’ve started, and it covers many of these topics. This article from flipcode helps as well. And don’t neglect your Siggraph papers (current and past).