Blender's renderer


I need to find some information about Blender’s renderer, and I came across this email address on the webpage; I hope someone can help me.

I’m a student at the University of Utah trying to learn about Blender, and one of my tasks is to compare Blender’s default renderer with Open Scene Graph. For Blender, I’m only focusing on understanding how rendering is done when you press the Render button in the F10 render control center… basically I’m not trying to understand raytracing or any of the other more complex features. I tried to search through the source code to get an idea of what’s going on, and I only got as far as realizing that a z-buffer algorithm is being used.

I’d appreciate it if somebody can let me know where I can find some type of documentation or just more info about how Blender’s renderer works.

thanks :)

I suggest that you do further searching in
which is a page dedicated to Blender development and coding. elYsiun is mainly an art forum.

in a nutshell: the Blender codebase is difficult to understand, but it is being separated out, cleaned up, and updated all the time.

you can browse the current CVS tree at

I believe where you want to go to have a look at Blender’s render pipeline is:

Blender uses line scanning to render its images: it breaks everything up into one-pixel-wide polygons, depth-sorts them, and then lights and textures them, one line at a time. Of course, Blender really has two ways of rendering things. The first is the default renderer, which is optimised for speed (by rendering everything in layers); the second is the “unified renderer”, which renders everything in one pass. This is slower, but it does a much better job with halos and with lighting/texturing, so you will have to be aware of this when browsing through the code.

The first file at the above address that I would have a look at is vanillaRenderPipe.c, which seems to contain the “outline” of the pipeline. If you want to know how it renders halos, etc., then the file to look at is rendercore.c.

I hope this helps


Thanks for the info! The one thing I wasn’t sure about: you mentioned that the default renderer renders in layers; what exactly does that mean?

Now here’s another thing… the file “vanillaRenderPipe.c” is used for the unified renderer… am I right? Since I don’t want the unified renderer, I started looking around and found the zbufshade() function in rendercore.c. Can I safely assume that zbufshade is used by the default renderer?

I guess what I’m trying to get at is figuring out specifically what makes specular rendering in Blender better than the OpenGL rendering in Open Scene Graph… and that’s why I’m trying to figure out how default rendering works in Blender.

go here: …html
this has some detail on what the developers mean by “rendering in layers”.

I think that you will have to go through the source code of each of the files
in that directory. Unfortunately, even though I am slowly getting acquainted
with the code there (I have a desire to implement volumetrics (without
using planes :) )), I do not know the code well enough to help you.

The following file contains the code that deals with Blender’s pixel shading.
I think what you will notice is that the pipeline renders the pixels separately,
as opposed to calculating the specular values for the three points of a triangle
and then just interpolating over the triangle’s area. However, I am not
100% sure about this, since in editbuttons you have the opportunity of
setting the shading to either flat or smooth (Gouraud?) shading, and that
shows up in the final rendering. This implies that there is some
interpolation going on behind the scenes, though this could just be due to
normal calculations, with specular effects being applied per pixel.

Once again, you will have to go to the source.

best wishes