Computer Graphics / 3D Graphics

Hi all,
I’m a high school student, and in order to graduate we need to create a senior project about whatever topic we choose. I’d really like to make an incredibly, incredibly simple rendering engine for my project (I can imagine the large quantity of code), but I can’t seem to find good books that aren’t aimed at experienced developers and programmers. I have a very limited education in C (hello-world kind of education), so I would like some help finding a website/book/tutorial for this complicated task. What I want to get out of this rendering engine is the ability to import a cube and render it in different colors, and that’s about it, because I’m sure a sophisticated rendering engine like Cycles takes a lot of time to create. Thanks!

There are a bunch of tutorials on C here, if you want to brush up on that a little more.

I’ve never made a renderer, but I saw that Udacity has a free course on 3D computer graphics which might be good.

I recently completed an introductory course in computer graphics. It covers building both an OpenGL viewer for a simple scene and implementing a basic raytracer. The language used for the course is C++, though the course restarts in March. The lectures, which cover the theory, are available online from the university’s website.

As someone who has been in your shoes, I can tell you that without a programming background you’re just going to get frustrated. Even a VERY simple raytracer (no shading, flat colors) is deceptively complex. I would highly suggest working through those first and then moving on from there.

You need around 50-200 lines of code to write the raytracer you’re after; double that for a photon mapper.

I’d change your project parameters a bit.

First, just make a simple raytracer.

Go for rendering primitives: planes (-> cubes) and spheres.
Loading geometry is a lot of additional trouble and isn’t necessary for a cube.

Implement point lights, shadows, one simple shader, mirrors, and transparency.
Writing it should take no more than two or three days if you know a programming language.

Factfinding and getting into the theory might take a while.

Fire a camera ray through each image pixel.
Brute-force check whether the ray intersects any object.
If not, terminate the ray after a set distance as hitting the environment.
If it hits, calculate the point of intersection and the normal at that point.
Check light rays from the intersection point to each point light, against each object. If the light is visible, calculate lighting/shading from the light sources and the normal.
If the light ray hits an object, your intersection point is in shadow, and that makes it a shadow ray.
If the material is mirroring, fire a new “camera ray” from the intersection point based on the normal, and handle it as above.
If the material is transparent, use Beer’s law (yeah… that’s what it’s called) to attenuate the light passing through it.

Store the calculated value for the pixel, then fire the next ray through the next pixel.
With OpenMP you can easily make it multithreaded.
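The steps above can be sketched as a tiny program. This is my own toy sketch, not anyone’s actual renderer: one sphere, one point light, a pinhole camera, Lambert shading with a shadow ray, and an ASCII “image” instead of a file. All names are made up for illustration.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct V { double x, y, z; };
V operator+(V a, V b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
V operator-(V a, V b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V operator*(V a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(V a, V b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V norm(V a) { return a * (1.0 / std::sqrt(dot(a, a))); }

struct Sphere { V c; double r; };
Sphere sph{{0, 0, -3}, 1};   // the single object in the scene
V light{2, 2, 0};            // one point light

// Distance along the ray to the sphere, or -1 on a miss
// (near root only, for simplicity).
double hit(const Sphere& s, V o, V d) {
  V oc = o - s.c;
  double b = dot(oc, d), disc = b * b - dot(oc, oc) + s.r * s.r;
  if (disc < 0) return -1;
  double t = -b - std::sqrt(disc);
  return t > 1e-4 ? t : -1;
}

const int W = 16, H = 16;    // tiny image; a real render would be bigger

// The steps from the post: camera ray -> intersect -> shadow ray -> shade.
double trace(int x, int y) {
  V o{0, 0, 0};                                       // pinhole camera
  V d = norm({(x + 0.5) / W - 0.5, 0.5 - (y + 0.5) / H, -1});
  double t = hit(sph, o, d);
  if (t < 0) return 0.1;                              // hit the environment
  V p = o + d * t;                                    // intersection point
  V n = norm(p - sph.c);                              // surface normal
  V l = norm(light - p);                              // toward the light
  if (hit(sph, p + n * 1e-3, l) > 0) return 0.0;      // shadow ray blocked
  return std::max(0.0, dot(n, l));                    // Lambert shading
}

void render() {                // ASCII stand-in for writing an image file
  for (int y = 0; y < H; ++y) {
    for (int x = 0; x < W; ++x) {
      double v = trace(x, y);
      std::putchar(v > 0.5 ? '#' : v > 0.1 ? '+' : '.');
    }
    std::putchar('\n');
  }
}
```

Mirrors and transparency would recurse from the intersection point as described above; this sketch stops at direct lighting.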

A good place to start is to google for “Jacco Bikker raytracer”.

You need a basic understanding of linear algebra, though: vector operations and a bit of optics.
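The vector toolkit you need is small. A sketch of the core operations (my own toy code, not from any library) that a raytracer leans on constantly:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Dot product: projects one vector onto another; also gives cos(angle)
// between unit vectors, which is the heart of Lambert shading.
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Cross product: a vector perpendicular to both inputs (surface normals,
// camera basis vectors).
Vec3 cross(Vec3 a, Vec3 b) {
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

double len(Vec3 a) { return std::sqrt(dot(a, a)); }
Vec3 normalize(Vec3 a) { double l = len(a); return {a.x / l, a.y / l, a.z / l}; }

// Mirror reflection of direction d about unit normal n: d - 2(d.n)n.
Vec3 reflect(Vec3 d, Vec3 n) {
  double k = 2 * dot(d, n);
  return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
}
```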

Thank you so much! I was worried that I might have to delve deeper into the programming world, but I guess you kind of have to in order to create anything! I’m so glad that you guys recommended all this fantastic stuff! I’m glad I asked so early!

I was also wondering: in C/C++, would compatibility between Windows and Mac require a rewrite of the program? If I wanted to make it cross-platform, would I have to copy and change the code?

Thanks again, it means a lot to me, and I wish you all luck in your blender adventures!

You can use common libraries, and if you use an environment like Qt for programming, the work to get things running between platforms should be relatively painless.

Various libraries are included and/or linked differently between Windows/Linux/OSX.
You usually handle this with compiler conditionals and directives.

For instance, if you need OpenGL and want it to work on Windows and OSX:

#ifdef __APPLE__
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#include <GLUT/glut.h>
#elif defined(_WIN32)
#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#endif

You can see that OSX and Windows have different include directories for OpenGL, and that Windows needs the Win32 API included. This block will do the right thing at compile time.
Linking is another topic :wink:

You could also create a header file for each platform, and set up your build system to include the right one, which in turn includes the right headers. It’s a bit classier than huge abominations of #ifdefs.
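A sketch of that wrapper-header idea (all file names here are made up for illustration; the build system or a single dispatch header picks the per-platform file):

```cpp
// gl_includes.h -- the rest of the code includes only this file;
// it forwards to a hypothetical per-platform header.
#if defined(__APPLE__)
#  include "gl_includes_osx.h"    // OpenGL/gl.h, GLUT/glut.h, ...
#elif defined(_WIN32)
#  include "gl_includes_win.h"    // windows.h, GL/gl.h, ...
#else
#  include "gl_includes_linux.h"  // GL/gl.h, GL/glut.h, ...
#endif
```

Alternatively, skip the dispatch header entirely and have the build system add the right directory to the include path, so `#include "gl_includes.h"` resolves to a different file per platform.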

For window creation and display of graphics on Win/Linux/OSX you have a lot of options:
SDL - kind of a FOSS DirectX; also works with OpenGL. A bit of a behemoth, though.
wxWidgets - an alternative to Qt that doesn’t sit “on top” but is integrated into C++.
Qt - does it all, but it’s a huuuge library, and it isn’t painless at all with MinGW, especially under Windows.

If you need a 3D window for somewhat modern OpenGL:
GLFW (somewhat like GLUT, but a lot more modern: precision timer, cross-platform window creation, event handling, input devices and such)
GLEW (extension wrangler)

One more thing you should look into is recursion. You’ll need to understand it and how to implement it.
And if you really want to do OOP in C++, you should start with a UML diagram for the renderer.
Otherwise you’ll do it like I did: start coding, end up with a C coding style, and then be too lazy to refactor the code into a decent OOP form.

Thanks! I really need to get working on learning C++ then, so that I can fully understand your excellent advice! Thank you, I probably wouldn’t have figured that out if it weren’t for you! I have to apologize for yet another question, but is there anything I could take a peek at to get a better understanding of what a renderer actually is? I have read some books, and they all talk about implementing and optimizing and a whole bunch of other good bits that I can’t really use unless I have the base program, unless I’m wrong? Thanks again!

Here’s a 99-line pathtracer:

#include <math.h>   // smallpt, a Path Tracer by Kevin Beason, 2008
#include <stdlib.h> // Make : g++ -O3 -fopenmp smallpt.cpp -o smallpt
#include <stdio.h>  //        Remove "-fopenmp" for g++ version < 4.2
struct Vec {        // Usage: time ./smallpt 5000 && xv image.ppm
  double x, y, z;                  // position, also color (r,g,b)
  Vec(double x_=0, double y_=0, double z_=0){ x=x_; y=y_; z=z_; }
  Vec operator+(const Vec &b) const { return Vec(x+b.x,y+b.y,z+b.z); }
  Vec operator-(const Vec &b) const { return Vec(x-b.x,y-b.y,z-b.z); }
  Vec operator*(double b) const { return Vec(x*b,y*b,z*b); }
  Vec mult(const Vec &b) const { return Vec(x*b.x,y*b.y,z*b.z); }
  Vec& norm(){ return *this = *this * (1/sqrt(x*x+y*y+z*z)); }
  double dot(const Vec &b) const { return x*b.x+y*b.y+z*b.z; } // cross:
  Vec operator%(Vec&b){return Vec(y*b.z-z*b.y,z*b.x-x*b.z,x*b.y-y*b.x);}
};
struct Ray { Vec o, d; Ray(Vec o_, Vec d_) : o(o_), d(d_) {} };
enum Refl_t { DIFF, SPEC, REFR };  // material types, used in radiance()
struct Sphere {
  double rad;       // radius
  Vec p, e, c;      // position, emission, color
  Refl_t refl;      // reflection type (DIFFuse, SPECular, REFRactive)
  Sphere(double rad_, Vec p_, Vec e_, Vec c_, Refl_t refl_):
    rad(rad_), p(p_), e(e_), c(c_), refl(refl_) {}
  double intersect(const Ray &r) const { // returns distance, 0 if nohit
    Vec op = p-r.o; // Solve t^2*d.d + 2*t*(o-p).d + (o-p).(o-p)-R^2 = 0
    double t, eps=1e-4, b=op.dot(r.d), det=b*b-op.dot(op)+rad*rad;
    if (det<0) return 0; else det=sqrt(det);
    return (t=b-det)>eps ? t : ((t=b+det)>eps ? t : 0);
  }
};
Sphere spheres[] = {//Scene: radius, position, emission, color, material
  Sphere(1e5, Vec( 1e5+1,40.8,81.6), Vec(),Vec(.75,.25,.25),DIFF),//Left
  Sphere(1e5, Vec(-1e5+99,40.8,81.6),Vec(),Vec(.25,.25,.75),DIFF),//Rght
  Sphere(1e5, Vec(50,40.8, 1e5),     Vec(),Vec(.75,.75,.75),DIFF),//Back
  Sphere(1e5, Vec(50,40.8,-1e5+170), Vec(),Vec(),           DIFF),//Frnt
  Sphere(1e5, Vec(50, 1e5, 81.6),    Vec(),Vec(.75,.75,.75),DIFF),//Botm
  Sphere(1e5, Vec(50,-1e5+81.6,81.6),Vec(),Vec(.75,.75,.75),DIFF),//Top
  Sphere(16.5,Vec(27,16.5,47),       Vec(),Vec(1,1,1)*.999, SPEC),//Mirr
  Sphere(16.5,Vec(73,16.5,78),       Vec(),Vec(1,1,1)*.999, REFR),//Glas
  Sphere(600, Vec(50,681.6-.27,81.6),Vec(12,12,12),  Vec(), DIFF) //Lite
};
inline double clamp(double x){ return x<0 ? 0 : x>1 ? 1 : x; }
inline int toInt(double x){ return int(pow(clamp(x),1/2.2)*255+.5); }
inline bool intersect(const Ray &r, double &t, int &id){
  double n=sizeof(spheres)/sizeof(Sphere), d, inf=t=1e20;
  for(int i=int(n);i--;) if((d=spheres[i].intersect(r))&&d<t){t=d;id=i;}
  return t<inf;
}
Vec radiance(const Ray &r, int depth, unsigned short *Xi){
  double t;                               // distance to intersection
  int id=0;                               // id of intersected object
  if (!intersect(r, t, id)) return Vec(); // if miss, return black
  const Sphere &obj = spheres[id];        // the hit object
  Vec x=r.o+r.d*t, n=(x-obj.p).norm(), nl=n.dot(r.d)<0?n:n*-1, f=obj.c;
  double p = f.x>f.y && f.x>f.z ? f.x : f.y>f.z ? f.y : f.z; // max refl
  if (++depth>5) if (erand48(Xi)<p) f=f*(1/p); else return obj.e; //R.R.
  if (obj.refl == DIFF){                  // Ideal DIFFUSE reflection
    double r1=2*M_PI*erand48(Xi), r2=erand48(Xi), r2s=sqrt(r2);
    Vec w=nl, u=((fabs(w.x)>.1?Vec(0,1):Vec(1))%w).norm(), v=w%u;
    Vec d = (u*cos(r1)*r2s + v*sin(r1)*r2s + w*sqrt(1-r2)).norm();
    return obj.e + f.mult(radiance(Ray(x,d),depth,Xi));
  } else if (obj.refl == SPEC)            // Ideal SPECULAR reflection
    return obj.e + f.mult(radiance(Ray(x,r.d-n*2*n.dot(r.d)),depth,Xi));
  Ray reflRay(x, r.d-n*2*n.dot(r.d));     // Ideal dielectric REFRACTION
  bool into = n.dot(nl)>0;                // Ray from outside going in?
  double nc=1, nt=1.5, nnt=into?nc/nt:nt/nc, ddn=r.d.dot(nl), cos2t;
  if ((cos2t=1-nnt*nnt*(1-ddn*ddn))<0)    // Total internal reflection
    return obj.e + f.mult(radiance(reflRay,depth,Xi));
  Vec tdir = (r.d*nnt - n*((into?1:-1)*(ddn*nnt+sqrt(cos2t)))).norm();
  double a=nt-nc, b=nt+nc, R0=a*a/(b*b), c = 1-(into?-ddn:tdir.dot(n));
  double Re=R0+(1-R0)*c*c*c*c*c,Tr=1-Re,P=.25+.5*Re,RP=Re/P,TP=Tr/(1-P);
  return obj.e + f.mult(depth>2 ? (erand48(Xi)<P ?   // Russian roulette
    radiance(reflRay,depth,Xi)*RP:radiance(Ray(x,tdir),depth,Xi)*TP) :
    radiance(reflRay,depth,Xi)*Re+radiance(Ray(x,tdir),depth,Xi)*Tr);
}
int main(int argc, char *argv[]){
  int w=1024, h=768, samps = argc==2 ? atoi(argv[1])/4 : 1; // # samples
  Ray cam(Vec(50,52,295.6), Vec(0,-0.042612,-1).norm()); // cam pos, dir
  Vec cx=Vec(w*.5135/h), cy=(cx%cam.d).norm()*.5135, r, *c=new Vec[w*h];
#pragma omp parallel for schedule(dynamic, 1) private(r)       // OpenMP
  for (int y=0; y<h; y++){                       // Loop over image rows
    fprintf(stderr,"\rRendering (%d spp) %5.2f%%",samps*4,100.*y/(h-1));
    for (unsigned short x=0, Xi[3]={0,0,y*y*y}; x<w; x++)   // Loop cols
      for (int sy=0, i=(h-y-1)*w+x; sy<2; sy++)     // 2x2 subpixel rows
        for (int sx=0; sx<2; sx++, r=Vec()){        // 2x2 subpixel cols
          for (int s=0; s<samps; s++){
            double r1=2*erand48(Xi), dx=r1<1 ? sqrt(r1)-1: 1-sqrt(2-r1);
            double r2=2*erand48(Xi), dy=r2<1 ? sqrt(r2)-1: 1-sqrt(2-r2);
            Vec d = cx*( ( (sx+.5 + dx)/2 + x)/w - .5) +
                    cy*( ( (sy+.5 + dy)/2 + y)/h - .5) + cam.d;
            r = r + radiance(Ray(cam.o+d*140,d.norm()),0,Xi)*(1./samps);
          } // Camera rays are pushed ^^^^^ forward to start in interior
          c[i] = c[i] + Vec(clamp(r.x),clamp(r.y),clamp(r.z))*.25;
        }
  }
  FILE *f = fopen("image.ppm", "w");         // Write image to PPM file.
  fprintf(f, "P3\n%d %d\n%d\n", w, h, 255);
  for (int i=0; i<w*h; i++)
    fprintf(f,"%d %d %d ", toInt(c[i].x), toInt(c[i].y), toInt(c[i].z));
}

smallpt is well known and is also available as a 4K version and as an OpenCL version.

If you’re more after a raytracer than a pathtracer, check Jacco Bikker’s source out:
It has well-documented and explained source code.

For Photon mapping in “realtime” you can go here:

Other nice reads:

I’m an old artist, so it’s amazing to think of how the education & tools have evolved such that teenage coders or artists can use digital tools for 3D and/or rendering now.

Great to see Udacity mentioned in this thread -- they’re the great guys at Stanford who decided to try out an open/FREE online course, in AI, a few years ago & got over 150,000 students!

Now they’re trying to bring the future of higher education to anyone, anywhere -- truly brilliant. It will benefit everyone & raise the bar for knowledge / affordable education & such across the board; a game changer.

Yeah, I recall using CG tools before my career got moving (late 80s), & PCs & Macs were still intimidating because the art skills & coding skills were both hard to get together. Fortunately the apps were just then coming down to the personal level ((hah, TOPAZ at $7,000 was the “affordable cheap” 3D till Autodesk/Yost brought it lower & things boomed from there -- though high-end systems didn’t budge in cost for a decade or more, it seemed)).

GREAT to see the GNU/shared-knowledge attitude here -- I’m getting back into 3D now on my own & was hoping Blender’s tools + community would be both ROBUST… and still open/free, etc. I’ve already noticed, e.g. from this thread & a few others tonight, that this is the case.

Keep up the great progress; I’m now uber-interested in trying out Blender.