Rich Pixel Format

Does anyone know if there are any plans in the works to implement some form/variation of RPF into Blender? This would be extremely useful for compositing. I hope the developers will pick this up, since they're on their newfound compositing kick.

For those of you wondering what RPF is: it allows image files to be saved with Z (depth) information (I think this is the Y axis in Blender) via image tags and such, so that you can composite 2D and 3D images at any point along the depth plane of another pre-rendered image or movie. Basically, you take two layers of the pre-rendered image or movie and sandwich the image or movie you want to composite in between them. Now whenever you scale the sandwiched movie or image (your "meat" layer) along its depth plane, it will move in front of or behind any object in the two "bread" layers of the sandwich you just built. This is an extremely powerful compositing technique: it lets you combine unrelated image files into a 3D masterpiece, so you can reuse 3D renders without having to re-render the 3D geometry.
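The core idea behind that sandwich trick is just a per-pixel depth compare, which can be sketched in a few lines. This is a minimal illustration of the concept, not RPF's actual encoding; all the array names and values below are made up:

```python
import numpy as np

# Hypothetical 2x2 layers: each has a color value and a per-pixel depth
# (Z-buffer). Values are invented purely for illustration.
bg_color = np.array([[0.2, 0.2], [0.2, 0.2]])    # a "bread" layer
bg_depth = np.array([[10.0, 10.0], [2.0, 2.0]])  # its stored Z-buffer

fg_color = np.array([[0.9, 0.9], [0.9, 0.9]])    # the sandwiched "meat" layer
fg_depth = np.full((2, 2), 5.0)                  # placed at depth 5

# Per-pixel depth compare: whichever layer is closer to the camera wins,
# so the meat layer shows through where its Z is smaller than the bread's.
closer = fg_depth < bg_depth
out = np.where(closer, fg_color, bg_color)
```

With these made-up numbers the top row comes from the foreground (depth 5 beats 10) and the bottom row from the background (depth 2 beats 5), which is exactly the "moves in front of or behind" behavior described above.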

Know what that is? That is a format developed by some kid with zero insight thinking he’s clever. As long as it’s properly matted and rendered separately, EVERYTHING is already like that. The fact that someone wants to make a modifier for an existing container format to allow tagging for what layer something should be on is just ludicrous. There’s already an alpha channel, and unless you’re going to store an individual alpha channel for each “sublayer” then it’s just dumb.

To me, it sounds like RPF = Really Pointless Format.

How is RPF different from or better than OpenEXR, which Blender already supports?

I don’t really know much about OpenEXR. Can it do this? I’ve stayed away from it because I’ve read about the massive file sizes it generates. Is OpenEXR the one that generates separate and accessible passes for specular, color, shading, shadows, depth, etc.? If it can slice between the layers like that, then I’ll quit crying about file size and buy more hard drive space!

Edit: Sorry Heavily Tessellated, I wasn’t trying to offend anyone with a question about an image format.
I just looked it up and it can store Z-buffer info. Now, how do I access/use this information when compositing in Blender?

Update: I figured out how to do it with OpenEXR, thanx z3r0 d! The in-between layer sits in the precise position it was originally placed from the camera, which bisects the other two layers. Is there a way to animate it back and forth along the depth plane, separate from any depth animation that may already exist in the file? I’ll post some images and a blend file when I get back from my first week on this new job (they’re on my other computer), but I’ve gotta go for now. Thanx again, this is awesome and it’s taking Blender to a whole new level for me!
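One way to think about animating the in-between layer separately is to add an animated offset to that layer's stored Z values before the depth compare, leaving the other layers' depth untouched. This is only a conceptual sketch with invented names and values, not Blender's actual node setup:

```python
import numpy as np

# Hypothetical two-pixel layers. The "mid" layer was originally placed
# at depth 6 from the camera; the background has mixed depths.
bg_color = np.array([0.2, 0.2])
bg_depth = np.array([4.0, 8.0])

mid_color = np.array([0.9, 0.9])
mid_depth = np.array([6.0, 6.0])  # original placement from the camera

def composite(offset):
    # Shift only the in-between layer along the depth axis, then do the
    # usual closest-wins depth merge against the background.
    shifted = mid_depth + offset
    return np.where(shifted < bg_depth, mid_color, bg_color)

at_rest = composite(0.0)    # original depth: behind pixel 0, in front of pixel 1
pushed = composite(-3.0)    # offset toward the camera: in front of both pixels
```

Animating `offset` over time would slide the sandwiched layer back and forth through the scene without touching any depth animation baked into the file itself.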

Dude, do not take offense to anything I say when it’s just an opinion. Well, if you were the person who came up with that idea for the file format, you can be mad at me. :smiley: I certainly hope you don’t think I was “yelling” at you; if so, I apologize. I was just stating how pointless it is. Seems like a leftover idea from the days when you could get VC money for any idea the average Joe didn’t know much about. :stuck_out_tongue:

I didn’t come up with the format idea; it has been in use for some time now. I think it’s a 3ds Max format. Actually it’s pretty cool when someone shows you what can be done with it. I used it a little when working with Total Training videos for AE 6.5, and have been trying to find a way to export files from Blender that are capable of this for nearly a year now. z3r0 d clued me in that OpenEXR is capable of this, so I’ll be doing a lot of research into this format when I get home. But now that I’m starting to figure out how to use these nodes a little bit, I won’t have to do quite as much exporting. They seem to have some awesome capabilities, and the vector blur just blows me away.

Errr? I think RPF data has been used in various post-production firms! For me the greatest benefit has been the ability to export camera paths as well as null positions to post-production software (for example, After Effects) for compositing. This is crucial for motion graphics and speeds up the compositing work like a gazillion times.

Anyway, as for me, implementing this kind of feature would REALLY improve the capabilities of Blender. I have to experiment with this OpenEXR format…

After researching this a bit, it seems that RPF is certainly not some format a kid cooked up in his basement, but a derivative of RLA, an older SGI format. It supports a lot of render-pass information and is used by a lot of different applications. There’s some info on it here: but I can’t for the life of me figure out, from the resources I’ve seen on the web, how it stores camera path information (probably one of its more interesting features). Apparently it’s in the 3ds Max SDK, in the file rla.cpp, but I don’t own Max, and as such don’t have access to that file, so I can’t confirm what’s in there.

From some of the things I’ve read in your posts, it seems you may have some contact with the developers. From what I’ve read about OpenEXR since z3r0 d clued me in, it is capable of everything RPF is and more. The docs state that it can include an unlimited number of additional user-defined channels. It looks like the devs may have already had this in mind when they began coding the nodes, but would you mind asking if they intend to include render passes as layers in the OpenEXR files? RPF simply saves these passes as alpha channels and gives you access to them individually. What a time saver when your images have areas that need to be tweaked or color-changed but took several minutes to several hours to render.
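The payoff of storing passes as separate named channels can be sketched quickly: tweak one pass and recomposite, with no re-render of the 3D geometry. The pass names, values, and the simple combine formula below are all hypothetical, just to show the workflow:

```python
import numpy as np

# Hypothetical per-pass channels for a two-pixel strip, stored as named
# float arrays the way a multi-channel file could keep them.
passes = {
    "color":    np.array([0.5, 0.4]),
    "specular": np.array([0.1, 0.3]),
    "shadow":   np.array([0.9, 1.0]),  # 1.0 = fully unshadowed
}

# Tweak a single pass (boost the specular highlights) ...
passes["specular"] = passes["specular"] * 2.0

# ... then recombine the saved passes into a final "beauty" image.
# The combine formula here is an invented simplification.
beauty = (passes["color"] + passes["specular"]) * passes["shadow"]
```

The expensive render happened once; only the cheap recombine step is repeated each time a pass is adjusted, which is exactly the time-saver described above.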