TLRcam - new unbiased renderer
large mesh test.

first cornell box.
first energy loss test.

for the past couple of weeks (months) i’ve been working on my new renderer in java. at its current state it has octree acceleration, first-hit speedup, phong and diffuse materials, and its own scene format. the scene format is similar to the .obj format; i basically just edited the .obj exporter in python to export objects in my format, then hand-edited the scene. i’d really appreciate it if somebody would write a blender exporter so that i could do tests more easily.


after i finish applying to colleges and maybe get in, i might end up making this renderer open source.

sounds fantastic! especially the latest “opensource” part :wink:
hey oodmb, nice work! What does this renderer support now?

Hope you decide to make it GPL, a new renderer is always welcome.

Any reason why you’re deciding not to improve the Internal Renderer within Blender, that way we can use all of Blender’s features with your improvements.

I’m guessing its because Java != C/C++… but I could be wrong…

Sometimes people like to just do projects like this for fun, anyways.

Looks good, I await a GPL release. I’ve been thinking about writing my own much simpler raytracer for a couple of years now, just haven’t had the time/motivation to get into all the nitty-gritty behind the basic concepts yet.

If you’d like to post how your file imports work, I may be able to get around to writing a little exporter too, if you’d like (or somebody else may beat me to it and write it better… :p) I see they are just basic text files with different listings like vCo, vNormals, face connectivity etc., but I’d need things like what order to put them in, if that matters, as well as expected material, camera, etc. settings.

well, basically the scene import format is .obj with a twist. the camera has to come first, then the materials, then the objects. the camera is defined: camera “type” “origin” “forward vector” “up vector” “aperture” “distance (if simple or manual, or point for auto)” “Film Size” i’ll send you specific details.
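Based on that description, a minimal scene header might look something like the following. This is only my guess at the layout from the post above; the keyword names, value order, and the .obj-style geometry lines are illustrative, not the actual spec:

```text
# camera first: type, origin, forward vector, up vector,
# aperture, focus distance (simple/manual/auto), film size
camera pinhole  0.0 1.0 -4.0   0.0 0.0 1.0   0.0 1.0 0.0   0.0  simple 4.0  0.036

# then materials (names and keys are guesses)
material whiteDiffuse diffuse 0.4 0.4 0.4

# then objects, .obj-style vertex and face lists
v -1.0 0.0 -1.0
v  1.0 0.0 -1.0
v  0.0 0.0  1.0
f 1 2 3
```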

 Any reason why you're deciding not to improve the Internal Renderer within Blender, that way we can use all of Blender's features with your improvements.

i have about three or four reasons.
a. i know java much better than i know C
b. i’ve never gotten blender to compile.
c. i know more about unbiased and biased raytracers than i know about biased scanline renderers.
d. i can’t pass off open source programs as my own work when applying to colleges.
oh, and tomorrow happens to be the first anniversary of my first hello world program

Very nice :slight_smile:

The render API hasn’t been finished yet, has it? Also, programmers will generally work on whatever the hell they find interesting enough to dedicate their time to. CD, I really think you should learn to code.

btw, what do people think about the first-intersection speedup algorithm? i haven’t assessed its speedup yet, probably somewhere from 5-15%. it only works with a pinhole camera, and it only really allows antialiasing for textures or front objects; it also requires supersampling to get the desired antialiased effect, which in theory shouldn’t slow it down at all.
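If I understand the pinhole restriction right, the idea is that a pinhole camera shoots the exact same primary ray through a pixel on every sample, so the first intersection can be computed once and reused. Here is a rough sketch of that caching idea; the class and method names are my own invention, not oodmb’s actual code:

```java
// Sketch of a first-hit cache for a pinhole camera: the primary ray per
// pixel never changes between samples, so its first intersection can be
// computed once and reused for every later sample of that pixel.
// All names here are illustrative, not from the real renderer.
public class FirstHitCache {
    // one cached hit distance per pixel; NaN means "not computed yet"
    private final double[] hitDistance;
    private final int width;

    public FirstHitCache(int width, int height) {
        this.width = width;
        this.hitDistance = new double[width * height];
        java.util.Arrays.fill(hitDistance, Double.NaN);
    }

    /** Returns the cached first-hit distance, tracing only on first use. */
    public double firstHit(int x, int y, java.util.function.DoubleSupplier trace) {
        int i = y * width + x;
        if (Double.isNaN(hitDistance[i])) {
            hitDistance[i] = trace.getAsDouble(); // full octree traversal
        }
        return hitDistance[i]; // later samples skip the traversal entirely
    }
}
```

This also hints at why antialiasing is limited: since the primary hit is shared by all samples of a pixel, jitter can only happen after the first bounce, so only textures and directly visible surfaces get smoothed by the supersampling.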

Umm… does he need a reason?

By the way, oodmb, cool project. :slight_smile:

higher resolution mesh. rendered for about 20 hours with the first-ray speedup. i think the slowness can be attributed to two things: 1. it’s java, 2. i seem to have a memory issue where the longer it runs, the more memory it uses. i’m working on a quick fix for that where it saves a resume every so often, so that if you run out of memory you just restart it from the resume.
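For an unbiased renderer the resume idea boils down to periodically dumping the accumulation buffer plus the sample count, since the displayed pixel is just the accumulated sum divided by the number of samples. A simplified sketch of what such a checkpoint could look like (my own take, not the actual TLRcam code):

```java
import java.io.*;

// Sketch of a render checkpoint: the accumulated radiance buffer and the
// sample count are enough to resume a progressive render, because the
// final pixel value is just accumulated / samples. Illustrative only.
public class RenderCheckpoint implements Serializable {
    final double[] accumulated; // running sum of radiance per pixel
    final long samplesPerPixel; // how many samples the sum contains

    RenderCheckpoint(double[] accumulated, long samplesPerPixel) {
        this.accumulated = accumulated;
        this.samplesPerPixel = samplesPerPixel;
    }

    void save(File f) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(this); // write buffer + sample count atomically
        }
    }

    static RenderCheckpoint load(File f) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (RenderCheckpoint) in.readObject();
        }
    }
}
```

Saving this every N minutes would also work around the memory growth: kill the process, restart, and keep accumulating from the saved sums.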

well, i fixed the normal smoothing problem. it turned out to be a UV issue. i also got the cameras working properly, so they can render non-square images (the x and y were being swapped somewhere). i added two types of resumes: runtime pause, and start-from-file resumes. i added a grid texture. i also seem to have fixed the memory leak.
there are some major scene file changes though
(the normals are still at error in this image)

here is the resulting build. it’s virtually bug-free now:

it also includes the compiled .exe

indigo topic post:

+1 to open-sourcing! :smiley: and btw, looking good! :wink:

i would really appreciate it if a couple of people could download this and attempt a long render. apparently there have been issues with the renderer crashing, and i want to know if this is a one-off java error or a coding error.

i got MLT working! precedence doesn’t quite work yet, and i’d be careful when putting two refractive materials inside each other, but other than that, it seems to work. rays are still re-shot in glossy materials; i’m currently working on another method that doesn’t allow rays to be shot below the normal at all. i also took out the code that decides the importance of the threads; i’m not sure how well threads would work with the mlt anyway. i haven’t yet gotten the resume to work with MLT. there is also a new tag in the scene file that specifies use of MLT: sample metro
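For anyone curious what MLT actually does at its core: it mutates the current light path into a proposed path and accepts the proposal with probability min(1, f(new)/f(old)), where f measures the path’s brightness (image contribution). A toy version of just that acceptance rule, with illustrative names (in the real renderer the state would be a full path, not a number):

```java
import java.util.Random;

// Toy Metropolis sampler: mutate the current state, then accept the
// mutation with probability min(1, f(new)/f(old)). In a real MLT renderer
// the state is a complete light path and f is its image contribution;
// this sketch only demonstrates the acceptance rule itself.
public class MetropolisStep {
    /** Acceptance probability for moving from brightness fOld to fNew. */
    public static double acceptance(double fOld, double fNew) {
        if (fOld <= 0.0) return 1.0;      // always leave a zero-brightness state
        return Math.min(1.0, fNew / fOld);
    }

    /** One mutation step: returns the new state (possibly unchanged). */
    public static double step(double x, Random rng, java.util.function.DoubleUnaryOperator f) {
        double y = x + (rng.nextDouble() - 0.5); // small symmetric mutation
        if (rng.nextDouble() < acceptance(f.applyAsDouble(x), f.applyAsDouble(y))) {
            return y; // accepted: move to the mutated path
        }
        return x;     // rejected: keep contributing the current path
    }
}
```

The asymmetry of the rule (brighter proposals always accepted, dimmer ones sometimes) is what lets MLT linger on hard-to-find bright features like caustics.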

here is a new image of some caustics. note that the caustics might not be bright enough, because this render lacked compensation for the probability that a ray ends without hitting the light.

ok, some major fixes in the renderer: caustics now work. somebody pointed out that colors bled too much, so i fixed my direct lighting equation and the way caustics are handled.
here’s a preliminary picture (i’ll post a file and a better picture tomorrow)

what i did to compensate for the brightness of the caustics with respect to the rest of the scene was to multiply the actual diffuse colour by 1/pi, and then by 1/pi again for the direct light calculation. in this image the diffuse colours for the walls were .02 .02 .4 .4 .4 .4 .4 .02 .03 and the emitter was scaled way up
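One of the 1/pi factors is the standard Lambertian normalization: a diffuse BRDF is albedo/pi, so that integrating it against the cosine term over the hemisphere gives back the albedo and no energy is created. A tiny illustration of that term in a direct-lighting evaluation (names are illustrative, not the renderer’s actual code):

```java
// Direct lighting for a Lambertian surface: the BRDF is albedo / pi, so
// reflected radiance = (albedo / pi) * incoming radiance * cos(theta).
// Without the 1/pi, a fully white surface would reflect pi times more
// energy than it receives, which throws caustics and direct light out
// of balance with each other.
public class Lambert {
    public static double directLight(double albedo, double lightRadiance, double cosTheta) {
        if (cosTheta <= 0.0) return 0.0; // light arrives from below the surface
        return (albedo / Math.PI) * lightRadiance * cosTheta;
    }
}
```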

Quite an achievement, oodmb, to have implemented MLT already.
The math required for that sort of thing is completely over my head.
Well, that is true for almost any math really… :wink:
Good luck with your project! :slight_smile: