Motiva SOAP - open source realtime postproduction

Take a look at this realtime postproduction / relighting solution from MOTIVA Infografia

Motiva SOAP is a free, realtime postproduction tool that lets you change the colours of objects, materials and lights, modifying their reflections and ‘color bleeding’ as well.
All processing is done on the GPU, which guarantees maximum speed.

Download site:

Does not look Blender related to me.

Is this an ad?

No, it is still a fairly useful “open source” tool for color tweaking/proofing… it could easily be used alongside Blender.


Thanks for the info…
This looks very very good.

Fort Ash

This is not an ad. It is completely open source, if a bit platform-centric. But I think the development is very promising: lookdev is a very time-intensive task with all the re-rendering. I hope Blender is able to embrace this kind of technology to enable a workflow of rendering to buffers, preserving as much data from the rendering process as possible, then doing lookdev in near realtime in the compositor, and finally passing all the tweaked parameters (shaders, light, depth of field, motion blur) back to the renderer for a finalized beauty pass with improved sampling. See here for another implementation. I truly believe this is an area where a lot of time can be saved, resulting in a better realization of artistic vision while accelerating production.

I hope Blender is able to embrace this kind of technology

This is what the compositor in Blender does. The example in the link is just a GPU-accelerated compositor like Apple’s Motion. For the most control, you just need to be able to output shader variables (in RenderMan these are called AOVs, arbitrary output variables) and you can store whatever data you want in image buffers and reuse them in post to adjust the scene in real-time.
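To make the idea concrete, here is a minimal sketch of the AOV-recombination trick in plain NumPy, outside any particular renderer. The pass names (`diffuse_albedo`, `diffuse_lighting`, `specular`) and the recombination formula are illustrative assumptions, not any renderer’s actual output layout; the point is only that once the components are stored separately, you can retint a surface in post without re-rendering.

```python
import numpy as np

# Hypothetical AOVs stored as float buffers at render time.
# A real renderer would write these to EXR layers; here they are
# tiny constant images just to show the arithmetic.
h, w = 2, 2
diffuse_albedo = np.full((h, w, 3), [0.8, 0.2, 0.2])   # surface colour
diffuse_lighting = np.full((h, w, 3), 0.5)             # colour-free irradiance
specular = np.full((h, w, 3), 0.1)                     # specular contribution

def recombine(albedo, lighting, spec):
    """Rebuild a beauty pass from its stored components."""
    return albedo * lighting + spec

beauty = recombine(diffuse_albedo, diffuse_lighting, specular)

# Post "relighting": swap the albedo to retint the object.
# No re-render needed, because lighting was stored separately.
new_albedo = np.full((h, w, 3), [0.2, 0.2, 0.8])
retinted = recombine(new_albedo, diffuse_lighting, specular)
```

The same multiply-and-add is what a compositor node tree would do with the corresponding render passes.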

GPU-based compositors are very fast for certain things but they can get bogged down more easily. The output pass control should be much more flexible when the Blender shading language comes around.

I fully agree. Once you’ve experienced the power of AOVs, they become central to the rendering pipeline. To me this is not primarily about GPU usage but about the power of saving data and deferring as much computation as possible to a stage where one can influence it in near realtime. At that stage, IMHO, a few shortcuts can and should be taken to put the emphasis on speed. But something I see missing in current solutions outside of studio pipelines is the ability to feed the tweaked parameters back to the renderer to produce optimally sampled, artifact-free images. For example, multiplications outside the renderer are prone to cause artifacts in combination with transparency, as is the well-known “anti-aliased Z pass” problem.
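The “anti-aliased Z pass” problem mentioned above can be shown with a few lines of arithmetic. On a silhouette pixel that half-covers a foreground and a background surface, averaging the two depths produces a depth at which neither surface exists; any depth-driven post effect (fog, in this sketch) evaluated at that averaged depth differs from the correctly filtered result. The depth values and fog density here are made-up numbers chosen only to make the discrepancy visible.

```python
import math

# Two surfaces meeting at a pixel: foreground at depth 2, background at 50.
z_fg, z_bg = 2.0, 50.0
coverage = 0.5  # the pixel straddles the silhouette, half each

# An anti-aliased Z pass averages the depths into one value...
z_aa = coverage * z_fg + (1 - coverage) * z_bg  # 26.0 -- a depth where nothing exists

def fog(z, density=0.05):
    """Toy exponential fog transmittance as a function of depth."""
    return math.exp(-density * z)

# Correct: shade each surface, then filter the *shaded* results.
fog_correct = coverage * fog(z_fg) + (1 - coverage) * fog(z_bg)

# Wrong: shade the filtered depth. This is what compositing against
# an anti-aliased Z pass effectively does on edge pixels.
fog_wrong = fog(z_aa)
```

This is exactly why feeding tweaked parameters back to the renderer for a final pass matters: the renderer can filter shaded samples, while a compositor working from a single averaged depth per pixel cannot.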

For reference, here are the post and the thread that got me started fantasizing about this a year ago. Back then I wouldn’t have thought this possible outside a specialized studio pipeline. But having witnessed the power and usability that Blender’s integrated compositor brings, combined with the pace of its development, I see it as possible.

Can’t wait for the “Blender Shading Language”!