Path Tracer for Gradient Domain Path Tracer

Hello everybody,

I am new to this forum and a newbie in the computer graphics field. I need some help with a project that I have been working on for months without big results. What I am trying to implement on my own is a gradient domain path tracer in C++:

https://mediatech.aalto.fi/publications/graphics/GPT/

Before getting to the main algorithm itself, I am trying to build my own path tracer, on top of which I can afterwards implement the GDPT. Due to my lack of knowledge in computer graphics, results so far are not great. What I have tried to follow so far is this implementation, which contains both a simple path tracer and a GDPT built on it:

So I tried to reproduce this code in my framework. I am going to walk you through the problem step by step, but it needs a few prerequisites first. The rest is just the implementation of the rendering equation, which is somehow wrong.

First of all, the following is the definition of a struct (declared and initialized later) containing the scene info needed to cast rays from the camera into the scene:

struct RenderData
{
    vec3 E;  // eye (camera) position
    vec3 p1; // top-left corner of the virtual screen plane
    vec3 dx; // horizontal step per pixel along the screen plane
    vec3 dy; // vertical step per pixel along the screen plane
};

It is initialized in the InitializeScene method:

void InitializeScene(){ // setup virtual screen plane
  vec3 E( 2, 8, -26 ); // eye position
  vec3 V( 0, 0, 1 ); // look-at vector
  float d = 0.5f, ratio = (float)SCRWIDTH / SCRHEIGHT, focal = 18.0f; // cast to avoid integer division
  vec3 p1(E + V * focal + vec3(-d * ratio * focal, d * focal, 0)); // top-left screen corner in world space
  vec3 p2(E + V * focal + vec3(d * ratio * focal, d * focal, 0)); // top-right screen corner
  vec3 p3(E + V * focal + vec3(-d * ratio * focal, -d * focal, 0)); // bottom-left screen corner
  mat4 M = rotate( mat4( 1 ), r, vec3( 0, 1, 0 ) ); // r is the camera yaw angle, defined elsewhere
  p1 = vec3(M * vec4(p1, 1.0f)); // rotate the corner points
  p2 = vec3(M * vec4(p2, 1.0f));
  p3 = vec3(M * vec4(p3, 1.0f));
  renderData.dx = (p2 - p1) / (float)SCRWIDTH;
  renderData.dy = (p3 - p1) / (float)SCRHEIGHT;
  renderData.E = vec3(M * vec4(E, 1.0f));
  renderData.p1 = p1;
}
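With these values, the jittered point on the virtual screen plane for pixel (x, y) is p1 + dx * (x + ξ) + dy * (y + η), with ξ and η random offsets inside the pixel; this is exactly how GeneratePath builds the primary rays below.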

The code above is just to show how I initialize the scene. I also have structs to store information about my paths:

struct PathVert {
   vec3 p;     // hit point on the surface
   vec3 n;     // normal of the surface hit
   vec3 color; // surface color at the hit point (used in Sample below)
};

struct Path {
   PathVert verts[MAXDEPTH]; // MAXDEPTH is 15 for now
   int vertCount;
   int x, y; // which pixel this path refers to
};

Then I start considering the pixels one by one:

for (int y = 0; y < SCRHEIGHT; y++) for (int x = 0; x < SCRWIDTH; x++)
{
   Path path;
   if (GeneratePath(x, y, path)){
      Sample(path);
   }
}
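Not shown above: the color returned by Sample() is accumulated per pixel and averaged over the samples. Roughly like this (the accum buffer and the SPP constant are illustrative names, not my exact code):

vec3 accum[SCRWIDTH * SCRHEIGHT]; // cleared to black before rendering
for (int s = 0; s < SPP; s++) // e.g. SPP = 16
  for (int y = 0; y < SCRHEIGHT; y++) for (int x = 0; x < SCRWIDTH; x++)
  {
     Path path;
     if (GeneratePath(x, y, path))
        accum[x + y * SCRWIDTH] += Sample(path); // invalid paths contribute black
  }
// final pixel color = accum[x + y * SCRWIDTH] / SPP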

The GeneratePath() method traces the path through the scene and records all the vertices it hits. The checkIfRayIntersectSomething(O, D, t) method you will see used is just a helper implemented in my framework, which I omit posting because of its length. I use it to check whether a ray hits something in the scene; if it does, it updates t with the distance to that object. NOTE: the light is not treated as a scene object itself. Hence I also have checkRayLightIntersection(O, D, hitLightPoint), which checks the intersection with the light; if there is one, hitLightPoint is updated with the point hit on the light. The light is a 2D surface.

vec3 lightPos = vec3(5, 15, 2); // hard-coded position of the light

The light, as said, is a surface, specifically a square whose four corners are:

vec3 P1 = vec3(lightPos.x - 20, lightPos.y, lightPos.z + 20);
vec3 P2 = vec3(lightPos.x + 20, lightPos.y, lightPos.z + 20);
vec3 P3 = vec3(lightPos.x + 20, lightPos.y, lightPos.z - 20);
vec3 P4 = vec3(lightPos.x - 20, lightPos.y, lightPos.z - 20);

Quite big, I know, so my first question concerns exactly this aspect: is it correct to have such a big light?
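For scale: this square covers 40 × 40 = 1600 area units, while a 2 × 2 light covers only 4, so at the same emitted radiance the big light puts roughly 1600 / 4 = 400 times as much power into the scene, if I am reasoning correctly.

Since I omit the full intersection code, here is roughly what the light test boils down to for this horizontal square (a simplified sketch, assuming the light lies in the plane y = lightPos.y and faces down, not my exact code):

// Simplified sketch of the light intersection test.
bool checkRayLightIntersection(const vec3 &O, const vec3 &D, vec3 &hitLightPoint)
{
   if (fabs(D.y) < 1e-6f) return false; // ray parallel to the light plane
   float t = (lightPos.y - O.y) / D.y;  // ray/plane intersection distance
   if (t <= 0) return false;            // plane is behind the ray origin
   vec3 p = O + D * t;
   if (fabs(p.x - lightPos.x) > 20 || fabs(p.z - lightPos.z) > 20) return false; // outside the square
   hitLightPoint = p;
   return true;
}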
But let's move on to the main methods. Below you can see the GeneratePath method:

bool GeneratePath(int x, int y, Path &path){
   path.x = x;
   path.y = y;
   path.vertCount = 0;

   vec3 P = renderData.p1 + renderData.dx * ((float)(x) + Rand(1)) + renderData.dy * ((float)(y) + Rand(1)); // jittered point inside the pixel
   vec3 O = renderData.E + vec3(Rand(0.4f) - 0.2f, Rand(0.4f) - 0.2f, Rand(0.4f) - 0.2f);
   vec3 D = normalize(P - O); // direction of the primary ray, from the camera towards the pixel we are considering

   for (int depth = 1; depth <= MAXDEPTH; depth++){
      float t;
      vec3 hitLightPoint;
      PathVert vert;
      if (!checkIfRayIntersectSomething(O, D, t)){
         // we didn't find any object, but we may still have hit the light,
         // which is not represented as a scene object;
         // the depth check avoids rendering the light itself as a white plane
         if (depth > 1 && checkRayLightIntersection(O, D, hitLightPoint)){
            // update the vertex since we realized it's the light
            vert.p = hitLightPoint;
            vert.n = vec3(0, -1, 0); // the light points down
            path.verts[depth - 1] = vert; // vert.color is never read for the final (light) vertex
            path.vertCount++;
            return true; // light hit, path completed
         }
         return false; // nothing hit, path not valid
      }
      // otherwise the ray hit something in the scene
      vert.p = O + D * t; // reach the hit point
      vert.n = methodToFindTheNormal();
      vert.color = CalculateColor(vert.p); // according to the material properties (only diffuse objects so far)
      path.verts[depth - 1] = vert;
      path.vertCount++;

      // since a path terminates when it hits the light, I also have to check whether this ray hits the light,
      // and if it does, whether the light is closer than the object found above;
      // the "depth > 1" check again avoids rendering the light as a white plane

      if (depth > 1 && checkRayLightIntersection(O, D, hitLightPoint)){
         float distFromObj = length(vert.p - O); // distance from the ray origin, not from the world origin
         float distFromLight = length(hitLightPoint - O);
         if (distFromLight < distFromObj){
            // update the vertex since we realized it's the light
            vert.p = hitLightPoint;
            vert.n = vec3(0, -1, 0);
            vert.color = vec3(1, 1, 1); // TODO light color? or light emission?

            path.verts[depth - 1] = vert;
            return true; // light hit, path completed
         }
      }
      if (depth == MAXDEPTH) return false;
      vec3 newDir = BSDFDiffuseReflectionCosineWeighted(vert.n, D); // explained below
      D = newDir;
      O = vert.p;
   }
   return false;
}

The BSDFDiffuseReflectionCosineWeighted() method just calculates the new bounce direction; it is tested and working.
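For reference, a minimal sketch of what a cosine-weighted diffuse sampler of this kind typically looks like (simplified, not necessarily my exact implementation; Rand(1) is assumed to return a uniform value in [0, 1)):

// Sample a direction around normal N with PDF = cosTheta / PI (sketch).
vec3 BSDFDiffuseReflectionCosineWeighted(const vec3 &N, const vec3 &D)
{
   float r1 = 2.0f * PI * Rand(1); // azimuth angle
   float r2 = Rand(1), r2s = sqrtf(r2);
   // build an orthonormal basis (u, v, w) around the normal
   vec3 w = N;
   vec3 u = normalize(cross(fabs(w.x) > 0.1f ? vec3(0, 1, 0) : vec3(1, 0, 0), w));
   vec3 v = cross(w, u);
   // D is unused for a purely diffuse surface; kept for the signature
   return normalize(u * (cosf(r1) * r2s) + v * (sinf(r1) * r2s) + w * sqrtf(1.0f - r2));
}

What remains is the Sample method, which computes the final color of the pixel: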

vec3 Sample(Path &path){

   vec3 throughput(1, 1, 1);

   for (int v = 0; v < path.vertCount - 1; v++) { // treats the last vertex as the light
      const PathVert &currVert = path.verts[v];
      const PathVert &nextVert = path.verts[v + 1];
      vec3 wo = normalize(nextVert.p - currVert.p);
      float cosTheta = fabs(dot(wo, currVert.n));
      if (cosTheta <= 1e-6f) return vec3(0);
      float PDF = cosTheta / PI; // PDF of the cosine-weighted sampling
      // considering only DIFFUSE objects
      throughput = throughput * currVert.color * (cosTheta / PI) / PDF;
   }
   return throughput * vec3(10.0f, 10.0f, 10.0f); // multiplication by the light emission?
}
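To spell out what the loop computes: for a diffuse surface the BRDF is f = albedo / π, and the cosine-weighted PDF is p(ω) = cos θ / π, so each vertex should contribute f · cos θ / p = albedo. My expression currVert.color * (cosTheta / PI) / PDF likewise reduces to currVert.color, so the loop effectively multiplies the surface albedos along the path and then scales by the emission; the two forms agree only if currVert.color is the plain albedo, not already divided by π.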

Result with 16 SPP:

My target reference image is instead:

In my target reference framework the light is much smaller: it has a size of 2x2. But if I make my light the same size, this is what happens (deliberately showing the light):

All the images above use 16 SPP. My aim is to obtain a good image like the target one with a low SPP. There is something wrong with my implementation of the rendering equation, mainly in the Sample method. I would be glad if you could help me understand where the problem is; I suspect that I apply the PDF wrongly, or that a term is missing in the rendering equation, but I have gone through the theory many times already and see no error. Thanks in advance.
