Using radiosity for complex scenes


I’m working on modelling a house, especially the inside. I want to use radiosity because it looks so great.

I cannot use the radiosity tool because I have too many textures (the radiosity tool cannot handle more than 6 textures when it replaces the mesh). So I have to compute the radiosity during the render. That means I have to subdivide my mesh to get a correct result, and it takes a lot of time (more than an hour) per image.

This seems wasteful, since the radiosity is recalculated for each image even though it doesn’t change (only the camera moves). So is it possible to use the radiosity tool even with lots of textures, or is it possible to keep the radiosity solution calculated during rendering and reuse it for the following images?

Thank you.

when you calculate the radiosity solution during rendering, it is recalculated every frame [there is no way to save it]
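a note on why that recalculation feels so wasteful: a radiosity solution is view-independent — it depends only on geometry, emitters and materials, never on the camera. a minimal sketch of the gathering iteration (plain Python, not Blender code; all names here are made up for illustration):

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi-style gathering: B_i = E_i + rho_i * sum_j F_ij * B_j."""
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(len(b)))
             for i in range(len(b))]
    return b

# Two toy patches: patch 0 emits light, patch 1 only reflects it.
emission = [1.0, 0.0]
reflectance = [0.5, 0.8]
form_factors = [[0.0, 0.3],   # fraction of patch 0's light reaching patch 1
                [0.3, 0.0]]   # and vice versa

solution = solve_radiosity(emission, reflectance, form_factors)
# No camera appears anywhere above: in principle, any number of frames
# could be rendered from this one solution.
```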

when calculating not at render time, the limit on materials is 16 if there is only one object. if you use multiple objects and more than one has the same material, it will be counted more than once [last I checked].

once you have the radiosity solution, however, you can go back and split it apart and reapply the materials. You will probably need to anyway.

a note: if you have materials altering the normal, the change will not be visible with only radiosity lighting [because radiosity doesn’t care about textures, or about any normal other than the face normal], so you will need to have several regular lamps as well

Thanks for all those explanations.

I regularly render my scene to see if the modifications look good, so I can’t compute the radiosity solution each time, separate the different objects, and then reapply the textures. But maybe I can merge all the objects and then use the radiosity solution, since I don’t think I have more than 16 textures.

For the moment I render during the night or at the weekend. It would be great if the next Blender added a button to calculate the radiosity solution only once during a render.

I also use a little light for bump mapping and more precise shadows.

Hi !

I have found many topics and tutorials about Radiosity and Global Illumination.

I have also seen many pictures made using these tools… They look fine, but not natural.

These tools are made to recreate real optical phenomena, but personally, I don’t find them realistic at all.

For animations, there is another problem, which you described in this topic: the rendering time is very long.

I never use Radiosity. In my current project (a museum), I am using at least 35 lamps of various types to put light at the desired level, exactly where I want it.

I also use a very low level of Ambient Occlusion in some cases, and a low Emit value on materials (walls, for example) that don’t receive enough light. Sometimes I use shadow-only lamps to make shadows more visible, or negative lamps to darken areas of the decor.

I’m used to photography, and I treat synthetic pictures the way I would in the actual world, except that you can’t make walls glow or use negative lamps in the actual world!


there is a radiosity baking script (in the python section). it doesn’t seem finished, but might be useful.

it basically uses the viewport radiosity solution (of the highly subdivided mesh), generates an image and UV-maps it onto the original, low-poly mesh. the advantages of such a solution are obvious: only one long “render” to calculate the solution, then super-speedy rendering of the animation using only image maps that replace the lighting (unless you want additional lights)
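the core idea behind such a baking script can be sketched like this (plain Python with made-up names, not the actual script’s code): the solved per-patch lighting is written into a texture once, and every later frame just samples the image instead of re-solving the lighting:

```python
def bake_to_texture(patches, width, height):
    """patches: list of (u, v, brightness) taken from the solved high-poly mesh."""
    texture = [[0.0] * width for _ in range(height)]
    for u, v, brightness in patches:
        # Write each patch's solved brightness into its UV texel.
        x = min(int(u * width), width - 1)
        y = min(int(v * height), height - 1)
        texture[y][x] = brightness
    return texture

def shade(texture, u, v):
    """At render time, lighting becomes a cheap texture lookup."""
    y = min(int(v * len(texture)), len(texture) - 1)
    x = min(int(u * len(texture[0])), len(texture[0]) - 1)
    return texture[y][x]

# Two hypothetical solved patches baked into an 8x8 lightmap.
patches = [(0.1, 0.1, 0.9), (0.6, 0.4, 0.3)]
tex = bake_to_texture(patches, 8, 8)
```

the expensive part (the solution) runs once; `shade` is all that remains per frame.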

go check it out.

I agree with Roubal. I, too, am doing ‘museum’ projects and simply couldn’t afford the render-times of Radiosity even if they were necessary. I find that they are not.

You can do some amazing things. For example:

- If you set a lamp to shine only on certain layers, it shines right through objects on other layers! (Try it.) Take full advantage of the fact that the layer setting is a bitmask, not a single value.
- Composite together things that have each been lit in different ways.
- You can do “light-only” layers and add them to a composite with the Add filter (for brightness) or the Subtract filter (for a darn-good close-to-a-shadow effect).
- Figure out what each element you think about adding to the picture is supposed to be there to do, then try to figure out another, cheaper, faster way to do it. For example, you can get a lot of mileage out of the fact that there’s a big computational difference between “an area that is dim” (as in under-exposed) and “a shadow.” 3D calculations are expensive when a 2D composite operation might accomplish the same result on your (it’s 2D, after all…) final image.

“Whatever works, as long as it’s fast…” Necessity is the mother of invention. If you simply don’t have the computer resources to throw at the problem, nor nights and weekends to wait for results, you improvise… figure out ways to give the computer only very simple problems to solve. And you get very, very similar results. Sometimes in minutes.
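The Add/Subtract trick boils down to cheap per-pixel arithmetic. A toy sketch (plain Python on grayscale pixel lists; the names are illustrative, not Blender’s compositor API):

```python
def composite(base, light, mode):
    """Pixel-wise Add or Subtract of a light-only layer, clamped to [0, 1]."""
    op = (lambda a, b: a + b) if mode == "add" else (lambda a, b: a - b)
    return [min(1.0, max(0.0, op(a, b))) for a, b in zip(base, light)]

base = [0.5, 0.5, 0.9]   # rendered frame (grayscale pixel values)
light = [0.3, 0.0, 0.3]  # "light-only" layer

brightened = composite(base, light, "add")       # roughly [0.8, 0.5, 1.0]
shadowed = composite(base, light, "subtract")    # roughly [0.2, 0.5, 0.6]
```

A full 3D shadow pass and this 2D subtraction can look very similar, but the subtraction costs almost nothing.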

I recently tried out some radiosity stuff, and although it was simple, I got nice results fairly quickly. One of the time-consuming parts was physically subdividing shapes instead of using subsurfs. Subsurfs are much easier. For things like the Cornell Box, I just subsurfed at level 3 and set all the subsurf creases to 1. Then I rendered at 400 iterations and it looked fine. It only took about 30 seconds at 640x480 on a 700 MHz CPU.

Ok, it was probably the simplest scene you could do, but if you optimize enough, you should get fast enough results. I prefer radiosity to AO because IMO it’s faster and it’s easier to avoid artifacts/grain. The results for interior lighting just look so much better.

It seemed to me that it mattered whether you matched the subsurf mesh resolution between models in the scene. If I had two objects at subsurf level 3 and one at level 4, I got more shadow artifacts than if they were all at level 3. I guess that makes sense because of the relative sizes of the emitting and receiving patches. Does anyone know more about that?
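That guess sounds plausible: radiosity resolves lighting per patch, so a coarser mesh averages a shadow edge over bigger patches and the boundary gets blockier. A toy sketch of the effect (plain Python, made-up names), assuming a 1D shadow mask sampled at patch resolution:

```python
def patch_average(shadow, patch_size):
    """Average a 1D shadow mask over patches of the given size."""
    return [sum(shadow[i:i + patch_size]) / patch_size
            for i in range(0, len(shadow), patch_size)]

shadow = [0.0] * 6 + [1.0] * 10    # hard shadow edge at full resolution

fine = patch_average(shadow, 2)    # small patches keep the edge sharp
coarse = patch_average(shadow, 4)  # a big patch straddling the edge gets
                                   # an averaged in-between value
```

When meshes at different subsurf levels meet, fine patches receive these coarse averaged values, which could explain the extra artifacts.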

I used to use multiple lights, but for a dynamic scene they can be a pain. They are fast, though, and if you’re good you can get some nice-looking scenes.

Good images do take a long time. People in the industry are used to waiting ages for good images and often leave their machines overnight or at weekends for rendering. As always, the best things come to those who wait.

subsurf calculation has no dependency on render size; AO and other raytracing things do

also, the simple subdiv subsurf type is for subdividing flat-sided things; at one time blender had that feature with no edge hardness options [probably only a couple of 2.3x releases though]

I use simple-subsurf and it works great.

Python scripts don’t work on my computer (a configuration problem, because I’m not root), but the suggested script seems to be the answer to my problems.

I find images with radiosity “warmer” than with only raytracing. The lighting is softer, and to me it looks more natural (not necessarily more realistic).