G Buffer Extractor

Hi, again.

flippyneck, I like RLA/RPF specifically because making depth images in After Effects and Combustion is so flexible. I had tried that z-buffer plugin, and the method of giving all objects the same white/black material and controlling the focus point with an empty, but those are not as flexible as RPF (which lets you control the focus point inside the compositing app). So basically, I am looking for that way of interactively controlling the focus point value.

http://www.freepgs.com/hagen3d/rpf/rpf.jpg

http://www.freepgs.com/hagen3d/rpf/2.png

Maybe I am wrong, but I believe RPF/RLA is the standard for the kind of work you are doing; not only 3ds Max offers this export option, but LightWave and Maya do too, and maybe some others. I found some of its history here.

LightWave's extended RPF export.

And some other info from SGI (old info from here; open with WordPad).

This is the FTP where I found them.

This is where I found that FTP.

And the URL where I found that bunch of FTPs.

Wavefront RLA

I don't have a clue about coding (but I hope this helps), and I have only known about this file format for a short time; maybe somebody else has further info.

I know the depth is just one part of the output you are working on (I don't know how to work with the others yet).

Finally, it would be great if this could work within Blender's sequencer, to make it more powerful and avoid the use of commercial packages.

I hope my English is good enough.

Another thought is about your “Object ID”. I was at 3D Festival last year, and one of the talks was given by The Moving Picture Company's head of 3D, about the scenes they had worked on in “Harry Potter 2”.

He mentioned the time saved and the flexibility of the workflow gained by having an “Object ID” pass, where objects can be singled out at the compositing stage and keyed out for effects etc., saving on rendering objects in separate passes in some cases.

My request for the “Object ID” would be being able to choose the colour for each object. I don't know what happens now, but it seems to be random at the moment. It would make life a lot easier when comping the sequences together. :slight_smile:

Hey, you guys are demanding :wink:

Nozzy, it would be possible to assign a particular colour to a particular scene object I guess. If all you need to know is which colour in the ID buffer represents which object, maybe the colour selection could still be random but the script could output a text file which says what is what? This would be quicker than designating everything by hand for a scene with many objects…
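To sketch what that could look like (and this is only a sketch: it assumes the 2.3-era Python API, and setMaterials and colbits are my guess at the right calls), something like this could randomise the ID colours and write the mapping to a text file:

import random
import Blender
from Blender import Object, Material

out = open('id_colours.txt', 'w')
for ob in Object.Get():
    col = [random.random() for i in range(3)]
    mat = Material.New('ID_' + ob.name)  # one flat material per object
    mat.rgbCol = col
    mat.emit = 1.0                       # self-lit, so lighting can't shift the ID colour
    ob.setMaterials([mat])               # override with the ID material
    ob.colbits = 1                       # take the material from the object, not the mesh
    out.write('%s -> R=%.3f G=%.3f B=%.3f\n' % (ob.name, col[0], col[1], col[2]))
out.close()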

hagen, I agree that it would be very useful to export to RLA/RPF. There are a couple of problems that will stop me from being able to do this, though. Firstly, without access to the renderer itself, some of the information that goes into an RLA or RPF file cannot be obtained. Secondly, although I can manage Python, binary file formats are beyond my understanding (for now). Maybe in the future, but not yet…

Well, for the time being, I can see HOW this could be used, but I don't know how to use it.

I would, however, like to say “Damn good work!” and “keep it up”

Have a tall frosty one on me :wink:

dante

Looking at this, I was wondering if someone could make a texture to replicate the normal map output.

Well… kinda
http://www.clubinfo.bdeb.qc.ca/~theeth/Temp/nor.jpg
The colours are on the wrong axis, but still neat I think.

Martin

Theeth, you’ve hit the nail on the head! I tried that approach a while back, but abandoned it because it couldn’t be automated through Python. It just struck me though that the new ‘Library’ module means that you can import predefined materials into a scene through Python. The images below show that other materials can be used to show z-depth, global co-ords and angle to view as well as vertex normals. Even better, doing it this way means that all Blender objects are viable, not just meshes. Brilliant stuff. I think this paves the way for ‘Son of G Buffer Extractor’.
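Roughly, the import step looks like this (a sketch only: 'gbuffer_mats.blend' and the material name 'zdepth' are placeholder names, and I'm assuming the Blender.Library and object-material calls behave as documented):

import Blender
from Blender import Library, Object

Library.Open('//gbuffer_mats.blend')    # open the library .blend file
Library.Load('zdepth', 'Material', 1)   # append the 'zdepth' material and update the scene
Library.Close()

mat = Blender.Material.Get('zdepth')
for ob in Object.GetSelected():
    ob.setMaterials([mat])              # temporarily override with the G-buffer material
    ob.colbits = 1                      # take the material from the object, not the mesh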

depth…

http://www.flippyneck.com/wip/depth.png

global co-ords…

http://www.flippyneck.com/wip/global.png

angle to view vector…
(actually, it’s angle to view plane strictly speaking, but close)

http://www.flippyneck.com/wip/ang.png

surface normal…

http://www.flippyneck.com/wip/normal.png

I'm going to finish and release the original vertex colour script, as it remains useful to those who want to bake vertex normals into their model and then unwrap it using jms' vertex paint script. Sorry to those of you who are waiting; this may take a little longer, but it will be worth the wait…

Aw! It looks like the time has come to try and test this out again!
eeshlo's normal mapping render code!

http://www.blender.org/pipermail/bf-committers/2004-January/005123.html

https://blenderartists.org/forum/viewtopic.php?t=19862&highlight=normal+map

This will be very useful.

Sweet :smiley: looks great.

/nozzy

Very useful for realtime 3D with X3D:
http://vrml.base.com/forums/viewtopic.php?t=478

You got my vote.

Hey flippyneck, sorry to dig up a very old post, but have you done any further development on this script?

The script is great!

If it could also save an alpha mask and work with animations, it would be great for compositing!

flippyneck. Is it possible? =)

You can already save alpha with Blender, without any special scripts, and with the way he has done it (with materials) you can render animations too; it just requires a bit more work.
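For example, something along these lines should do it (a sketch, assuming the 2.3-era render API; PNG is one format that keeps the alpha channel):

import Blender
from Blender import Scene
from Blender.Scene import Render

context = Scene.GetCurrent().getRenderingContext()
context.setImageType(Render.PNG)   # PNG keeps the alpha channel
context.enableRGBAColor()          # save RGBA instead of plain RGB
context.render()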

levon, I know that! But it is not ideal, because the image is rendered with the full RGB as well, and that is slow!

Sorry, I just noticed that this thread was resurrected and that it's had over 7000 views! I'll have a look into completing the second version of the script, as there's obviously some demand.

Very interesting!

By the way, you can already render from Python and save, with a kind of hack. macuono did this for the BGC script.

Stefano

BUMP but also some useful info

This is how you can render an image, or an animation, out of Blender using Python:

import Blender
from Blender import Object, Scene
from Blender.Scene import Render

path = "D:/render/"
name = "rendered"

curscene = Scene.GetCurrent()
curframe = Blender.Get('curframe')

cam = Object.Get("Camera")
curscene.setCurrentCamera(cam)

context = curscene.getRenderingContext()
context.setImageType(Render.TARGA)
# Blender appends the frame number and extension itself,
# so the render path is just the directory plus a filename prefix
context.setRenderPath(path + name)

# if you want to render a single frame, make the start and end frame the same
# if you want them to be the current frame, use the curframe variable
context.startFrame(1)
context.endFrame(25)
context.renderAnim()

First of all, I'm sorry for my terrible English…

flippyneck… that's nice work… I was looking around for a relighting system for Blender, just like Pixar's Lpics:

http://www.vidimce.org/publications/lpics/

Someone on a Brazilian forum told me about your script, and it's pretty close to the preprocessing phase of almost all relighting systems.
This is an excerpt I found on some forum on the net:

"There is a “preprocessing phase” required before LPics. A “deep buffer” is rendered, using PRMan. It takes roughly the same amount of time as a final render (although it can be sped up by lowering sampling rates). At the end, instead of running all of the shaders, the data for the shaders is stored, for each pixel.

LPics just executes the shaders on the data in the deep buffer, and allows the shader parameters to be tweaked. Lights are a type of shader. You can adjust the shader parameters and see results quickly.

The camera position cannot be changed. If a new camera position is required, the expensive preprocessing pass must be run again.

This is not a new idea. The basic technology of the deep buffer was available in a commercial package in the early 1990’s: IPR (interactive photorealistic rendering) from TDI. TDI was absorbed by Wavefront, which was in turn merged with Alias. IPR is still available in Maya.

The innovation of LPics is that it accelerates calculations using the GPU (ie. graphics card), permits lights to be moved, and recomputes low-res shadow maps in “interactive time” (ie. not real time or high quality, but good enough for interactive lighting). It does NOT do ray-traced shadows! It uses shadow maps, which are a “scan line” technique.

LPics is intended as a tool for lighters to light CG scenes. It is well-suited to that task. It is not a game engine, nor a general-purpose animation system.

After lighting, the final high-resolution, high-quality images must be rendered. An expensive renderfarm is still needed."
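To make the deep-buffer idea concrete, here is a tiny toy sketch (mine, not Lpics code; the function names and the simple Lambert shading are just illustrations). The expensive pass caches per-pixel shading inputs, and relighting re-runs only the cheap lighting math on that cache:

def prepass(render_pixel, width, height):
    # Expensive: run once, caching (position, normal, albedo) per pixel.
    return [[render_pixel(x, y) for x in range(width)]
            for y in range(height)]

def relight(deep_buffer, light_pos, light_col):
    # Cheap: re-shade the cached samples with new light settings.
    image = []
    for row in deep_buffer:
        out_row = []
        for (pos, nor, alb) in row:
            # simple Lambert term as a stand-in for the real shaders
            l = [light_pos[i] - pos[i] for i in range(3)]
            length = max(1e-6, sum(c * c for c in l) ** 0.5)
            ndotl = max(0.0, sum(nor[i] * l[i] / length for i in range(3)))
            out_row.append([alb[i] * light_col[i] * ndotl for i in range(3)])
        image.append(out_row)
    return image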

There is a paper about relighting:
http://www.cs.umd.edu/class/spring2005/cmsc828v/papers/p353-gershbein.pdf

And the ACM’s link to lpics and other articles:
http://portal.acm.org/citation.cfm?id=1073204.1073214

That's it… now that I've explained what Lpics is, here is the question: what would have to change in your script to get a relighting system? Is that possible in Blender? Is there someone available to do it?
Anyway, thanks for your script! It's great!