[newbie] Generating images based on variations


I need to generate hundreds of images based on variations (color, texture, material, camera position) of the same model. This is for a configurator.
I am a programmer and I have a little experience with Blender.

Can this be done with Blender?


Should be pretty simple, depending on how you wish to generate the variations. Random selection should be easy enough.
What do you mean by configurator? Like a web app where a user chooses a color, cam angle, etc.?

Hi Muffy,

Exactly. What I would like to do is:

  • define colors (around 30)
  • define materials (around 5)
  • define camera angles (around 12)
  • define lights (interior / exterior)
  • build the models

And start a script that will go through all the combinations, render, and generate JPG images.
The web app will just have to get the right image from the database.

If the variation is fixed and not random, it should just be a couple of nested for loops.

(pseudo code)

for MDL in models:
    for LC in lighting_conditions:
        for CA in cam_angles:
            for M in materials:
                for CLR in colors:
                    render(MDL, LC, CA, M, CLR)  # render and save one JPG

30 colors * 5 materials * 12 cam angles * 2 lighting conditions = 3600 renders per model. That's a lot of pictures!
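To make the combination count concrete, here's a minimal Python sketch of the batch driver. The option lists and the filename scheme are placeholders (your real script would swap materials/cameras via bpy and call the renderer where the yield is); the point is that itertools.product walks every combination and that a deterministic filename per combination is exactly what the web app needs to look up the right image later.

```python
from itertools import product

# Hypothetical option lists matching the counts discussed above:
# 30 colors, 5 materials, 12 camera angles, 2 lighting conditions.
colors = [f"color{i:02d}" for i in range(30)]
materials = [f"mat{i}" for i in range(5)]
cam_angles = [f"cam{i:02d}" for i in range(12)]
lighting = ["interior", "exterior"]

def render_jobs(model):
    """Yield one (filename, settings) pair per combination.

    In the real script this is where you'd apply the settings in
    Blender and trigger a render instead of just yielding.
    """
    for lc, ca, m, clr in product(lighting, cam_angles, materials, colors):
        yield f"{model}_{lc}_{ca}_{m}_{clr}.jpg", (lc, ca, m, clr)

jobs = list(render_jobs("chair"))
print(len(jobs))  # 30 * 5 * 12 * 2 = 3600 combinations per model
```

Since the filenames are built from the settings, the web app can reconstruct the exact filename from the user's choices instead of querying a database.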

It may be possible to speed this up by separating the output into layers though, and compositing them together in post or in realtime in the app if possible (is your platform Flash?). For instance, rendering out lighting to its own pass means 2 lighting conditions * 12 cam angles = 24 renders. And then the unlit model with 30 colors * 5 materials * 12 camera angles = 1800. You're already down to 1824 renders.
In this case you multiply the proper lightbuffer with the matching unlit image to get the final output. The heavier the renders, the more time you will save doing it this way.
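The multiply step itself is simple per-pixel arithmetic. A rough Python illustration (assuming both images are flat lists of 0-255 channel values of the same length; a real pipeline would use Blender's compositor or an image library instead):

```python
def composite_multiply(unlit, light, white=255):
    """Multiply an unlit color render by a lighting pass, channel by
    channel. 255 in the light pass leaves the color unchanged; lower
    values darken it proportionally."""
    return [min(white, round(u * l / white)) for u, l in zip(unlit, light)]

# One RGB pixel: full light on R and G, half light on B.
pixel = composite_multiply([200, 100, 50], [255, 255, 128])  # -> [200, 100, 25]
```

This is why the trick works: the expensive lighting calculation is baked into the 24 light passes once, and the cheap multiply recombines them with any of the 1800 unlit renders.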

The above is assuming a single color option per model though. Multiple colors will make things more complex. If you are able to composite images in realtime in your app, I suggest you try to output scalar channels to use as alpha masks for applying local color values. If not, then it will also be faster to color the models this way in post, and it makes it much easier to add more colors on request, rather than rendering out a whole new batch of renders per model.
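The alpha-mask recoloring amounts to a per-channel blend between the base render and a flat tint color, weighted by the mask value. A minimal sketch (names and the 0.0-1.0 mask convention are assumptions, not from any particular library):

```python
def apply_color(base_px, tint, mask):
    """Blend a flat tint color into a base pixel using a scalar mask.
    mask = 1.0 -> region fully takes the tint; mask = 0.0 -> untouched;
    values in between give soft edges on anti-aliased mask borders."""
    return [round(b * (1 - mask) + t * mask) for b, t in zip(base_px, tint)]

# A grey pixel inside the masked region becomes the chosen color.
recolored = apply_color([180, 180, 180], [200, 30, 30], 1.0)  # -> [200, 30, 30]
```

With one mask per colorable region, adding a new color option is just a new tint value in the app, no re-rendering at all.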

Thanks for this quick answer. I don't think I will use Flash, considering the recent iPad issues - it will probably be JavaScript.
Images are quite small (around 700*400 px) and I shouldn’t have to reprocess the images too often (once a year at most).

What I need now is to perfect my Blender skills :wink:

Thanks again for your help.