reflection / refraction stuff

For a long time now, everyone on the boards has been asking for reflections and refractions in Blender, which often means they want raytracing. Well, I was wondering: does it necessarily have to be raytraced? How do games have reflections when they obviously don't raytrace? And doesn't the new Half-Life 2 preview shown at E3 show some sort of refraction in both their water and glass?
Just wondering why we always have to resort to asking for raytracing.

Photorealistic RenderMan went for almost 15 years without any raytracing functionality, and that never stopped people from creating amazing renders.

Go yafray, go.

Games with reflections use two models, one on either side of the reflecting surface, which works until you want a curved object.

And as for refraction in a game: not too hard on a flat surface, but again, not really possible on a curve.

Point these people to yable. It's an incredible script (needs more docs, though) that exports to yafray, which is a neat raytracer.

Games do reflections using basically the same method that Blender uses: cubic environment maps. Although newer games might be able to use DX9 pixel shaders instead of environment maps for more accurate results. As for the refraction in the HL2 tech demos, those were DX9 pixel shaders at work. I don't know exactly how they work, but I'm sure they are not ray-traced in the way CGers think (that would need too much calculation), just an elaborate fake.
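Roughly, a cube-map lookup amounts to something like this (a Python sketch of the addressing math only; the names are made up and real APIs differ in their axis/sign conventions):

```python
def cube_face_and_uv(direction):
    """Given a reflection direction for a pixel, pick which of the six pre-rendered
    cube-map faces it points at, plus a [0,1] UV on that face.  Conventions simplified."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                          # dominant X axis
        face, u, v, m = ("+X", -z, -y, ax) if x > 0 else ("-X", z, -y, ax)
    elif ay >= az:                                     # dominant Y axis
        face, u, v, m = ("+Y", x, z, ay) if y > 0 else ("-Y", x, -z, ay)
    else:                                              # dominant Z axis
        face, u, v, m = ("+Z", x, -y, az) if z > 0 else ("-Z", -x, -y, az)
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)

# e.g. a reflection vector pointing straight up samples the centre of the +Y face
print(cube_face_and_uv((0.0, 1.0, 0.0)))   # ('+Y', 0.5, 0.5)
```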

So no, ray-tracing is not required for fairly accurate reflection and refraction. However, it would require writing some specialized code to do the fake, and if you are going to go that far you might as well just integrate a real ray-tracing system. No matter how elaborate you make your fakes, you still won't be able to get certain effects, like real caustics, without ray-tracing.

The one to look at is the water in Mario Sunshine - simply astonishing. However, after a while you begin to realise it's done by environment maps, at quite a low res, and it only applies to specific objects. As a result Sunshine can look rubbish in stills, yet in motion it's the best looking game around at the mo.

I think (after trying to study the "water reflection" pixel shader code provided by NVidia around the time the game TES3: Morrowind was being made) that it is not raytracing; on the contrary, it is scanlining.

For each pixel displayed on screen, a little program (called a "shader") knows the viewer's eye direction and the normal of the surface the pixel belongs to. From these two it calculates the "reflected viewer" vector and, knowing the 3D position of the "pixel" in the world, works out what colour is seen in that direction - and then more or less draws that colour instead of our pixel (maybe mixing it with the refracted colour too).
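In rough Python terms the idea is something like this (just a sketch of the principle, not the actual NVidia shader, and `scene_colour_along` is a made-up placeholder for the scene lookup):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflected_view(view_dir, normal):
    """The 'reflected viewer' vector for one pixel: R = V - 2(V.N)N."""
    d = 2.0 * dot(view_dir, normal)
    return tuple(v - d * n for v, n in zip(view_dir, normal))

def shade_pixel(point, view_dir, normal, base_colour, scene_colour_along, mix=0.5):
    """scene_colour_along(point, direction) stands in for 'what colour is seen in that
    direction' - in a real shader this would be a texture or environment-map fetch."""
    reflected = scene_colour_along(point, reflected_view(view_dir, normal))
    # draw the reflected colour instead of (or mixed with) the pixel's own colour
    return tuple((1.0 - mix) * b + mix * r for b, r in zip(base_colour, reflected))
```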

The way it is different from ray-tracing (if I am not mistaken) is that the “ray” is cast from the eye of the viewer, not from every light source. So the overhead is simply one more ray to cast for each "reflective" pixel, and one more for each refractive pixel. If every pixel on the screen were like that, it would only mean triple the render time.

The advantage of this method over pre-calculated environment maps is that, the way I see it, the reflection for each point depends on its 3D position. In Blender (and with an envmap in general) the reflection is the same for the whole object, which makes it look a bit unnatural (and no self-reflection is possible, either).

Anton42

The way it is different from ray-tracing (if I am not mistaken) is that the “ray” is cast from the eye of the viewer, not from every light source.

I'm not sure about that: I think most raytracers still "shoot" rays backwards out of the camera rather than out of the light source - but I could be wrong! If you shoot rays from the light source, raytracing takes far too long to be useful, even on a really fast computer. As I understand it, this is why you need algorithms like Monte Carlo and radiosity to calculate light reflected off diffuse surfaces (it's conceptually easier but much slower if the rays come from the light source).

I have a feeling that refraction/reflection in games is still a sort of clever fake… (OK, well, so is raytracing, but you know what I mean!).

It would be good to get someone to post clear definitions of how “scanline” and “raytrace” rendering approaches are different.

It would be good to get someone to post clear definitions of how “scanline” and “raytrace” rendering approaches are different.

I hope this is a clear(ish) definition…

Raytracing involves firing rays from the 'camera' through an imaginary plane which represents the 2D output image - one ray for each pixel of the image. If you're not interested in calculating reflection/refraction, it is sufficient to check which (if any) of the objects in the scene the ray hits. You then fire secondary rays from the closest intersection point to all the lights in the scene to see how much light reaches that point. The total amount of light * the object's colour = the colour of the pixel. The reason that raytracing is a lengthy process is that if a ray hits a reflective/refractive surface, you go through the whole process again, for a predetermined number of steps, to calculate the illumination due to reflected/refracted light.
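In rough Python, that loop looks something like this (a toy sphere-only sketch to show the structure, not anyone's real renderer; it assumes ray directions are normalised, and refraction would be handled the same way as the reflection bounce):

```python
import math

class Sphere:
    def __init__(self, centre, radius, colour, reflectivity=0.0):
        self.centre, self.radius = centre, radius
        self.colour, self.reflectivity = colour, reflectivity

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def mul(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return mul(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(origin, direction, s):
    """Distance along the (normalised) ray to the sphere, or None if it misses."""
    oc = sub(origin, s.centre)
    b = 2.0 * dot(oc, direction)
    disc = b * b - 4.0 * (dot(oc, oc) - s.radius * s.radius)
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None            # small offset avoids self-intersection

def closest_hit(origin, direction, scene):
    """The nearest object the ray hits, as (distance, object), or None."""
    best = None
    for s in scene:
        t = hit_sphere(origin, direction, s)
        if t is not None and (best is None or t < best[0]):
            best = (t, s)
    return best

def trace(origin, direction, scene, lights, depth=0, max_depth=3):
    hit = closest_hit(origin, direction, scene)
    if hit is None:
        return (0.0, 0.0, 0.0)                 # ray escaped: background colour
    t, obj = hit
    point = add(origin, mul(direction, t))
    normal = norm(sub(point, obj.centre))

    # secondary ("shadow") rays: how much light actually reaches the hit point
    colour = (0.0, 0.0, 0.0)
    for light in lights:
        to_light = norm(sub(light, point))
        if closest_hit(point, to_light, scene) is None:
            colour = add(colour, mul(obj.colour, max(0.0, dot(normal, to_light))))

    # the slow part: reflective surfaces spawn another ray, up to a fixed depth
    if depth < max_depth and obj.reflectivity > 0.0:
        refl = sub(direction, mul(normal, 2.0 * dot(direction, normal)))
        bounce = trace(point, refl, scene, lights, depth + 1, max_depth)
        colour = add(colour, mul(bounce, obj.reflectivity))
    return colour
```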

Here is a much better description of raytracing…

http://fuzzyphoton.tripod.com/

I am less familiar with scanline rendering, but as I understand it, it involves converting 3D polygon coordinates into 2D coordinates on a flat plane from the camera's point of view (the image), making sure that the nearest polygon to the camera is the one that is used (depth sorting). Then the edges of each polygon are found and the interior is filled with colour according to the illumination/surface properties of the object. Scanline rendering is quicker because you don't need to keep the whole 3D scene in memory all the time to keep checking for intersections.
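And a correspondingly rough sketch of the scanline side (made-up constants, nothing Blender-specific): project each vertex onto the image plane, then resolve visibility per pixel with a depth buffer instead of firing rays at the scene:

```python
WIDTH, HEIGHT, FOCAL = 640, 480, 500.0        # made-up image size and focal length

def project(vertex):
    """Perspective-project a camera-space point (x, y, z) onto the 2D image plane."""
    x, y, z = vertex
    sx = WIDTH / 2 + FOCAL * x / z             # simple pinhole projection;
    sy = HEIGHT / 2 - FOCAL * y / z            # z is the distance along the view axis
    return sx, sy, z

# one depth value and one colour per pixel
zbuffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
image = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def plot(px, py, depth, colour):
    """Write a shaded pixel only if nothing nearer has been drawn there already -
    this per-pixel depth test is what replaces the ray/scene intersection checks."""
    if 0 <= px < WIDTH and 0 <= py < HEIGHT and depth < zbuffer[py][px]:
        zbuffer[py][px] = depth
        image[py][px] = colour

# (the actual scanline part - walking each polygon's edges and filling the spans
#  between them with interpolated depth and colour - is omitted here)
```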

There is stuff about the scanline process in the educational material here, but it's pretty heavy going…

People talk about ‘fake’ reflection etc, but any computer algorithm is fake. There are just different types of fake!

Thanks, flippyneck.

It looks like you are right, leon :slight_smile:

excellent, I got something right (well close enough anyway!): this calls for a celebration (in other words, another beer!)

And this must mean I've read enough about 3D; now it's time to learn how to actually do it!!!

Well, I was thinking about that the other week. I think that in theory my method is fast and will fit in the fast rendering framework of a scanline renderer.
OK, in a nutshell:
Instead of tracing rays back and working out how they bend in a transparent material (hereafter "lens"), we should consider the lens as some kind of forcefield generator that deforms the objects behind it that are in view - just like a lattice would. Let's call it a refraction lattice for the time being. The deformation of the object behind the lens depends on the camera angle, the lens angle, the camera distance, the normals on the lens, the refraction index and the distance of the object to the lens (maybe I forgot some parameters). With these parameters the 'forcefield' would be calculated.
So it doesn't matter whether you bend the light or bend the object itself. The end result should be the same: a virtual object that is the refracted image of the object behind the lens. Now… I have to figure out the mathematics to make this idea work. If someone out there is interested in helping me out, I'll be grateful.
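In rough Python, the displacement for a single point behind the lens might look something like this (just a sketch of the idea using Snell's law; the fuller parameter set above would still need to be folded in):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def refract(incident, normal, eta):
    """Snell's law: bend a unit 'incident' direction at a surface whose unit normal
    faces the viewer, with eta = n1/n2.  Returns None on total internal reflection."""
    cos_i = -dot(normal, incident)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * i + (eta * cos_i - cos_t) * n for i, n in zip(incident, normal))

def apparent_position(entry_point, view_dir, normal, depth_behind_lens, eta):
    """Where the deformed copy of a point sitting 'depth_behind_lens' along the view
    ray would have to be placed so it lines up with the refracted direction - i.e. the
    offset the 'refraction lattice' would apply."""
    t = refract(view_dir, normal, eta)
    if t is None:
        return None
    return tuple(p + depth_behind_lens * d for p, d in zip(entry_point, t))
```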

I implemented refraction mapping in tuhopuu at one point (for use with environment maps), but it was the first tuhopuu, and the feature never got moved to the later iterations, largely because it wasn't entirely stable.

But I assure you that convincing refraction effects can, in fact, be achieved without raytracing.

cessen: any chance of that coming back? I played with it and it was neat (and I might use it in my current project)
toontje: I've done that exact trick with a lattice constrained to stay parallel to the camera inside a "snow-globe" - it only works on the interior objects, though.

I have a question. Is anyone working on adding fragment/pixel shader programming to the Blender scanline renderer? New shaders without recompiling Blender?

/jannevee

I'm all theory and little action.
How about you make a scene with a refracting body in it. Then you place mirror objects in the scene on the opposite side of the refracting body AND outside the camera view. The refracting body will get an environment map of them on it. But… I don't think that is a watertight solution. It could be done if you could assign a layer to the environment map; in that layer there would be a mirror universe of the scene. I think that would work better. Maybe it could be transparent to the user if it were handled automatically by Blender. An example that could work right now in Blender:
Just make a pool. But instead of placing objects inside the pool (let's say some dolphins), we place the dolphins outside the pool, behind the camera. Put an environment map on the water surface. It would look as if the dolphins are inside the pool, showing the refraction effect. It's a problem, though, if the objects are half submerged in the water. :expressionless:
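For the "mirror universe" layer, the per-vertex maths is just a reflection across the water plane - something like this (a tiny Python sketch of my idea, not Blender script code):

```python
def mirror_across_water(vertex, water_level=0.0):
    """Reflect a vertex across the horizontal water plane z = water_level, so the
    environment map on the surface picks up the mirrored copy of the geometry."""
    x, y, z = vertex
    return (x, y, 2.0 * water_level - z)

# a dolphin vertex 1.5 units above the surface ends up 1.5 units below it
print(mirror_across_water((2.0, 3.0, 1.5)))   # (2.0, 3.0, -1.5)
```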

Check out Half-Life 2.

You want realistic realtime water? That's the shit.

Yes, there is a chance, but not any time soon. I am rather busy with school, spending time with friends and family, and other such important things. I do hope to re-implement it (in a stable fashion) some time in the future, though.

That was, actually, the original intent of my shader system. But I got busy before I finished it. I may finish it some time in the future, but if I do it will probably be in the far future (i.e. two or more years).
However, if someone would like to implement a programmable shader system before then, that would be great. :smiley: