For those who don’t know what G(eometry) Buffers are, they’re 2D images that encode information about a 3D scene and are useful in compositing and image manipulation. ‘G Buffer Extractor’ is a Python script that uses mesh vertex colours to store scene information and render it. So far my script will render a depth buffer, object ID buffer or surface normal buffer as below using Endi’s Rhino…
Render:
Depth image:
Surface normal image:
Object ID image:
Hopefully I can also add illumination and a crude edge buffer as well.
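For anyone curious how a depth buffer can be squeezed into vertex colours at all: the idea is just to remap camera-space depth into a greyscale value per vertex. This is a minimal sketch of that mapping, not the actual script's code; the near/far clamping and the bright-near/dark-far convention are my assumptions.

```python
def depth_to_grey(z, z_near, z_far):
    """Map a camera-space depth value to a 0-255 greyscale level.

    Simple linear remap; the real script may well do this differently.
    """
    # Clamp into the near/far range so out-of-range points saturate.
    z = max(z_near, min(z_far, z))
    t = (z - z_near) / (z_far - z_near)
    # Near objects bright, far objects dark (a common depth-buffer convention).
    return int(round((1.0 - t) * 255))

print(depth_to_grey(1.0, 1.0, 10.0))   # nearest point -> 255
print(depth_to_grey(10.0, 1.0, 10.0))  # farthest point -> 0
```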
The question is, would anyone else find this useful? At the moment the script is pretty rough and ready, but if there is enough demand I will develop it as a proper tool. I don’t know if it can be done, but in an ideal world it would be great to automatically combine the different buffer types and load them as layers into Gimp/Photoshop. Does anyone know if this is possible? http://www.flippyneck.com/wip/g_buffer_extractor_a05.py
I don’t think it is possible to render to a file from the python api
I really wish it was, from that point loading in another app (or back into python) should be reasonably trivial.
Well, anyway, with the information you were able to extract this way you could create some nice toon edges; play with the ATI toon shading demo to see what I mean: http://www.ati.com/developer/demos/r9700.html
[non-photorealistic rendering]
I don’t think it is possible to render to a file from the python api
It isn’t, but I’m going to enquire at .org to see if it’s going to be added at any point. It is possible to set the render directory for output, though, so I think I can hack around the current limitation.
Well, anyway, with the information you were able to extract this way you could create some nice toon edges; play with the ATI toon shading demo to see what I mean: http://www.ati.com/developer/demos/r9700.html
[non-photorealistic rendering]
That’s exactly what I’m hoping to be able to do: write some scripts that will work on the g buffers to generate NPR images. Thanks for the link.
This is much needed. That said, though, I’ve always wanted to see this integrated in the main code. If you’re really vertex painting things, then it kind of destroys the usefulness of this for any project that uses vert painting. A direct implementation would not mess with your vert paint values, but just render in the data space you requested.
harkyman, I agree. I see no reason why this couldn’t be implemented as a special renderer type in Blender itself. I wish I had one iota of an understanding of C so that I could give it a go. My hope is that if I knock together the best script that I can, someone else with the skills will recognise the interest, pick up the baton and code something ‘real’.
It would be possible to read existing vertex colours from a model and restore them after rendering (my script already strips and then restores material settings). This would lengthen the processing time but I’ll try and get it working for those that might need it. A far more fundamental limitation of vertex colours is of course that this will only work with meshes…
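The strip-then-restore idea itself is simple. Here is a rough, Blender-agnostic sketch of the pattern, with the colour list modelled as plain Python lists rather than real mesh data; the helper name and structure are mine, not the script's.

```python
def with_restored(colours, bake):
    """Run `bake` (which overwrites the colour list in place),
    then put the original colours back afterwards.

    `colours` stands in for a mesh's per-vertex colour list;
    each entry is a mutable [r, g, b] list.
    """
    saved = [tuple(c) for c in colours]  # snapshot before baking
    try:
        bake(colours)                    # bake may scribble all over it
    finally:
        colours[:] = [list(c) for c in saved]  # restore originals

# Usage: bake black everywhere, then verify the red vertex came back.
colours = [[255, 0, 0]]
with_restored(colours, lambda cs: [c.__setitem__(slice(None), [0, 0, 0]) for c in cs])
print(colours)  # -> [[255, 0, 0]]
```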
Seeing the considerable interest I shall continue with this. Thanks guys. As they say, ‘watch this space’…
Just to update you, folks. I’ve done a bit more work on this based on a few of your comments. Principally the script now gives the option to either render to an image, or to ‘bake’ the scene information into the vertex colours. The latter can hopefully be used in conjunction with jms’ vertex paint script to export normal maps.
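For the normal-map baking, the encoding is presumably the standard one: each component of a unit normal in [-1, 1] is remapped into the 0-255 colour range. A minimal sketch of that convention (my assumption about what the bake stores, not lifted from the script):

```python
def normal_to_rgb(nx, ny, nz):
    """Encode a unit surface normal as an RGB vertex colour.

    Standard normal-map convention: remap each component
    from [-1, 1] to [0, 255].
    """
    return tuple(int(round((c + 1.0) * 0.5 * 255)) for c in (nx, ny, nz))

# A normal pointing straight at the camera (+Z) gives the familiar
# "normal map blue":
print(normal_to_rgb(0.0, 0.0, 1.0))  # -> (128, 128, 255)
```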
youngbatcat, I haven’t followed all the work that you’ve done on normal mapping, but if anything I have done is useful to you then feel free to use it.
hagen, I looked at the rla/rpf file information, but I can’t get access to enough of the data types encoded in those files to export to them. Also I don’t have access to Combustion etc, unless anyone knows of other (free or OS) software that can view them. The plan is to export to image files that can then be manually or automatically loaded into Gimp/Photoshop layers for making composite masks. By the way, there is a zbuffer rendering plugin for the sequence editor. I’m not sure if it still works though…
Any chance of making it possible to render out an image sequence of the chosen buffer type, for animations?
/nozzy
I’m on it; should’ve mentioned that. The user can now define a project name, set the required buffers and a range of frames and render the frames into a directory dedicated to that project.
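Per-project, per-buffer, per-frame output naming could look something like the sketch below. The directory layout and the `buffer_0001.png` naming scheme are illustrative guesses, not the script's actual convention.

```python
import os

def frame_paths(project, buffers, start, end, root="renders"):
    """Yield one output path per (buffer, frame), grouped into a
    directory dedicated to the project,
    e.g. renders/myproject/depth_0001.png.
    """
    for buf in buffers:
        for frame in range(start, end + 1):
            yield os.path.join(root, project, "%s_%04d.png" % (buf, frame))

for p in frame_paths("test", ["depth", "normal"], 1, 2):
    print(p)
```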