Radiosity questions...

I want to know if there is a difference between using the radiosity tool in Blender (collect meshes, go, replace mesh…) and then rendering, versus rendering with the Radio button enabled.

Because I’ve tried the second option with a very simple scene (a sphere inside a big cube, illuminated by a small plane), and I get very strange results. Each face of the cube has its own color, and every pixel of the face has the same color, as if the cube were emitting light, which is not the case. There is no shadow and no color bleeding between the faces and the sphere.

If I use the radiosity tool before rendering, I get the expected result, with smooth shadows and color bleeding.

I really don’t understand why there is this big difference between the two results.

I’ve tried with much more complex scenes and the two results were nearly the same; the rendering with the Radio button enabled was even better, since it can handle many more textures.

So it’s very odd…

If I subdivide the faces of the cube, the result is much better. It seems that the radiosity algorithm implemented in the render engine does not subdivide the faces during the radiosity calculation. It only gives a single global color to each face. That’s really odd.
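That matches how classic radiosity works: the solver computes one radiosity value per patch, so if a whole cube face is one patch, the whole face gets one flat color, and only subdividing faces into smaller patches produces gradients and soft shadows. Here is a toy sketch (not Blender’s actual code) of that per-patch behaviour, iterating the standard equation B_i = E_i + ρ_i · Σ_j F_ij · B_j:

```python
# Toy per-patch radiosity solver (illustrative only, not Blender's code).
# Each patch ends up with ONE scalar radiosity value -- which is why an
# unsubdivided face renders as a single flat colour.

def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi-style iteration of B_i = E_i + rho_i * sum_j F_ij * B_j."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i]
             + reflectance[i] * sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two patches: patch 0 emits light, patch 1 only reflects it back.
emission = [1.0, 0.0]
reflectance = [0.5, 0.5]
form_factors = [[0.0, 0.2],   # F[i][j]: fraction of patch i's light reaching j
                [0.2, 0.0]]

radiosity = solve_radiosity(emission, reflectance, form_factors)
```

Subdividing a face just means replacing one patch with many smaller ones, each getting its own value, which is what the radiosity tool does internally and what the render-time version apparently skips unless you subdivide the mesh yourself.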

Why doesn’t the render engine use the same algorithm as the one used by the radiosity tool?

Is it possible to force the render engine to subdivide the faces?

Well, the difference is this: the Blender tool applies the radiosity to the vertex colours, so you can use them in the game engine, or even as a modeling tool. Radiosity rendering is for normal use… as simple as that. Here’s the Blender documentation about radiosity:

Probably the most significant difference is that render-time radiosity updates per frame and therefore can be animated; the mesh-editing one can’t.

You can solve this manually by raising the Subsurf level, or by applying a simple Subsurf (just like subdividing manually, but non-destructive).

Why doesn’t the render engine use the same algorithm as the one used by the radiosity tool?

I have a feeling it’s because it might do wacky things to a mesh, eg if it’s being deformed by an armature or something. Could be wrong though!

:oops: I used an old version of the blender documentation, and everything is explained in this one.

I’m sorry for asking such a stupid question. :roll:

But I have another one:

How does the “hemires” value affect the results? It seems that a higher value gives a smoother result but a longer calculation time. Since it is not explained in the documentation, can you explain more?