Blender and Shake or Nuke (sorry, long..)

We are running a compositing course next year with either Shake or Nuke. Students have learned 3D modelling in Blender. The compositing professional who will teach the course has never heard of Blender. His concerns are:
“> Blender is not compatible with Shake or Nuke. Let me explain: when you import a 3D image into Shake or Nuke from Maya or Max, it comes with a history and a whole lot of data that you can read and modify in the 2D package, like depth of field, transparencies, etc…
> Ex: I import a matte that I will use for modifying the depth of field of my background; it comes with information that I can modify via a slider in Shake or Nuke. I’ll be able to import an image from Blender, but with no data.
> With Maya there are lots of very useful third-party plug-ins, like:
> Tracking and matchmoving
> Rendering and lighting (with Mental Ray or RenderMan), etc…
> It makes me more credible too. I’ll be able to call my contacts and say: I have trained several very good digital artists this year on Maya and Nuke. I won’t talk about Blender because nobody knows or uses it.”
Can you import this kind of data from Blender to Shake or Nuke?
Please answer, as course planning depends on this.
Thanks Andi

The internal compositor gives access to AO passes, depth maps, vector data, etc., and most of these can be saved to images using the File Output node. Why don’t you just use the internal node compositor?
http://uploader.polorix.net//files/908/images/output.png
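
If you’d rather script that setup than click it together, something like this minimal sketch does it from Python. It uses the current bpy API (this may differ from the Blender version you have), the pass/socket names like 'Depth', 'Vector' and 'AO' vary between versions, and the output folder is just a placeholder:

```python
# Sketch: enable a few render passes and wire them into a File Output node.
# Pass/socket names ('Depth', 'Vector', 'AO') differ across Blender versions.
import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Enable the passes mentioned above.
view_layer.use_pass_z = True
view_layer.use_pass_vector = True
view_layer.use_pass_ambient_occlusion = True

# Build a minimal compositing tree: Render Layers -> File Output.
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new('CompositorNodeRLayers')
out = tree.nodes.new('CompositorNodeOutputFile')
out.base_path = '//passes/'            # placeholder output folder
out.format.file_format = 'OPEN_EXR'    # float EXRs, one file per slot

# The File Output node starts with a single 'Image' slot; add one per pass.
tree.links.new(rl.outputs['Image'], out.inputs['Image'])
for name in ('Depth', 'Vector', 'AO'):
    out.file_slots.new(name)
    tree.links.new(rl.outputs[name], out.inputs[name])
```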

Thanks
The plan is that I teach Blender (have been doing that for a while now) and he teaches compositing (lots of industry cred - worked on Happy Feet, Lord of the Rings, King Kong etc. etc.) and he knows Shake, Nuke, After Effects, Maya etc. We need the software to work together. He’s pushing for Modo or Maya. I’m pushing for Blender as the students already know it and it’s free. I need some info to take to planning meetings. Is Blender getting a higher profile in the VFX industry?
Andi

Use the OpenEXR file format to store z-buffer information; you don’t even need to open the node editor.

From the Blender wiki:

OpenEXR - an open and non-proprietary extended and high dynamic range (HDR) image format, saving both Alpha and Z-depth buffer information.

Enable the Half button to use 16-bit format; otherwise 32-bit floating point precision color depth will be used
Enable the Zbuf button to save the Z-buffer (distance from camera) info
Choose a compression/decompression CODEC (ZIP by default) to save disk space.
Enable the RGBA button to save the Alpha channel.

It does not store all the passes, which is possible using the compositor.

It does store the passes. Just choose ‘Multilayer’ in the output menu and it will save a multilayered EXR file.

Wow. I never knew .exrs or blender could do that :open_mouth:

Andi,

Where are your students likely to get placed? It matters a great deal whether it is going to be large movie studios, TV studios, game studios, arch viz, advertising, or whether they are more likely to do freelance, etc. In actuality, I suspect most are probably going to be placed in arch viz, advertising, or freelance.

"> Blender is not compatible with Shake or Nuke .I explain myself. When you import a 3D image in Shake or Nuke coming from Maya or Max it comes with an history and whole lot of Data that you can read and modify in the 2D package. like depth of field , transparencies etc…
> Ex : I import a matte that I will use for modifying the depth of field of my background, it is coming with informations that I can modify via a slider in Shake or Nuke. I’ll be able to import an image from blender but with no data informations.


As others have noted, those can be exported as passes into OpenEXR. You should test this and make sure that all of the passes and metadata you need are included.
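
One quick way to do that test, assuming you have the third-party OpenEXR Python bindings installed (they are not part of Blender, and the file name below is just a placeholder), is to dump the channel list and header of a rendered frame:

```python
# List the channels and header attributes of an EXR written by Blender,
# to confirm the depth/alpha/extra passes actually made it into the file.
import OpenEXR  # third-party bindings, not shipped with Blender

exr = OpenEXR.InputFile("render_0001.exr")   # placeholder path
header = exr.header()

print("Channels:", sorted(header["channels"].keys()))  # e.g. R, G, B, A, Z, per-layer passes
print("Header keys:", sorted(header.keys()))           # data window, compression, metadata
exr.close()
```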

> With Maya there are lots of very useful third-party plug-ins, like:
> Tracking and matchmoving

Some of the trackers/matchmovers have a Blender import, but the integration likely isn’t as good. There is also an in-development motion tracker, but no one knows yet when that will be finished or what level of integration it will eventually have.

> Rendering and lighting (with Mental Ray or RenderMan), etc…

This is certainly something worth considering; the artist’s options are more limited in this respect.

> It makes me more credible too. I’ll be able to call my contacts and say: I have trained several very good digital artists this year on Maya and Nuke. I won’t talk about Blender because nobody knows or uses it.

Probably true for the US, especially West Coast and movie studios.

LetterRip

Since the students already know Blender, it might not hurt to introduce another app. Recently, Houdini 9 has made itself very accessible with the free watermarked version or the $100 HD version.

At our studio most of the artists are using XSI. I’m using mostly Blender for what little 3d work I do. Shake is used for compositing but I’m now also looking at Nuke (and Houdini for particles and dynamics/sim.)

Learn tools that will help fill the gaps left by your main apps’ weaknesses.

Replying to LetterRip:
Because of the compositor’s contacts, some of the better students might be placed (work experience) in film studios like Fox Studios in Sydney, Australia, where he worked on The Matrix Reloaded, or other closer places like Photon in Queensland. Some are into gaming and others will be freelance. Maybe TV commercials?

That’s the workflow I use: Blender > 3Delight (RenderMan) > Shake. It’s kind of an early-stage workflow and it needs a lot of improvement, but I’ll explain what can be done.

I use a Python script to export from Blender to RenderMan, and my export script also writes a Maya ASCII camera tracking file for Shake. I render multiple passes, e.g. depth passes and masks, using RenderMan AOVs. These are simple TIFF files, and each pass can be floating point if needed, which is more precision than EXR’s default half-float (though the OpenEXR people say 32 bits per channel is overkill). The good thing about TIFF is that it’s well supported in almost any program without plugins.

So I can load these arbitrary passes into Shake very easily and combine masks, depths, etc. The camera tracking data loads into the Multiplane node, and I can add as many 2D planes as I like in post and they track exactly to the 3D data. Shake’s implementation of this has some annoying quirks, like not being able to parent more than two layers and not being able to intersect layers, among other things.

I did a project like this recently where I had to do a motion-graphics-style 3D animation flying over a stylized logo. I built the over-complicated logo in 3D and animated it, rendered it in 3Delight with masks and passes, and imported these into Shake. I then added glow effects to individual parts and used the tracking data to add in movie clips (RenderMan is harder to use movie clips with, as you have to split them into image sequences).

I also had some green-screen footage of about 5 or 6 people jumping around, so I used Shake to key and stylize them like the iPod ads, then duplicated them into about 10 layers to simulate a crowd of around 200 people. Using the tracking data, I added those layers in Shake for a shot where the camera flies through a 3D-rendered equalizer-type city (audio-driven thanks to programmable shading) and then through the crowd just above the heads.

The tracking data I got out of Blender was actually better than the data out of Maya. With Maya you are led to believe that you can just save a Maya ASCII file and open it in Shake, but you can’t, because the .ma file only stores the keyframes. You have to manually bake the camera, and you have to make sure to undo that before saving, otherwise you lose all your keyframes. My Blender script bakes the camera at export time, so it leaves your camera unmodified.
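
Here is a rough sketch of the "bake the camera yourself" idea, written against the current bpy API (the API has changed since the script described above was written). It just steps through the frame range and writes one line of world-space camera data per frame; the file name and column layout are made up for the example, and you would reformat them into whatever .ma or .chan structure your compositor wants:

```python
# Rough sketch: bake the camera by sampling every frame and writing one row
# of world-space data per frame. The column layout here is arbitrary;
# reformat it into the .ma / .chan structure your compositor expects.
import bpy
from math import degrees

scene = bpy.context.scene
cam = scene.camera

with open(bpy.path.abspath("//camera_bake.txt"), "w") as f:  # placeholder name
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)              # evaluates animation, constraints, parenting
        mat = cam.matrix_world
        loc = mat.to_translation()
        rot = mat.to_euler('XYZ')
        fov = degrees(cam.data.angle)       # field of view in degrees
        f.write("%d %.6f %.6f %.6f %.6f %.6f %.6f %.4f\n" % (
            frame, loc.x, loc.y, loc.z,
            degrees(rot.x), degrees(rot.y), degrees(rot.z), fov))
```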

Now for the bad news. My RenderMan script is not at a point where I feel comfortable using it on important things. However, one was released recently here:

http://blenderartists.org/forum/showthread.php?t=108076

So you can experiment with that, and as people have said, Blender’s EXR output may suffice. I can clean up my camera export script and separate it from my main one so you can use it standalone to export a .ma file. The same file should work in After Effects if any of the students use that instead.

Concerning movie tracking, it’s a little different, and I don’t know if there is a way to get data from, say, PFTrack in Shake back into Blender. People have used Icarus for things like this with Blender, and I’m sure a script would be fairly easy to put together. Python scripting is one of the best things about Blender: I think it’s far better than Maya’s MEL, and it should be far easier for students to pick up. The script links aren’t as friendly as in Maya, but they’re just as powerful, and I find Blender’s more stable.
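
As a starting point for such a script, here is a minimal sketch (current bpy API) that reads per-frame camera data from a plain text file and keyframes the scene camera with it. The column layout is invented for the example, so a real PFTrack or Icarus export would need its own parsing:

```python
# Sketch: keyframe the scene camera from per-frame tracker data stored as
# plain text: frame, x, y, z, rx, ry, rz (rotations in degrees).
# The column layout is made up; adapt the parsing to your tracker's export.
import bpy
from math import radians

cam = bpy.context.scene.camera

with open(bpy.path.abspath("//tracked_camera.txt")) as f:  # placeholder name
    for line in f:
        frame, x, y, z, rx, ry, rz = (float(v) for v in line.split()[:7])
        cam.location = (x, y, z)
        cam.rotation_euler = (radians(rx), radians(ry), radians(rz))
        cam.keyframe_insert("location", frame=int(frame))
        cam.keyframe_insert("rotation_euler", frame=int(frame))
```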

I think Blender will fit into your workflow just fine and you can use the .fbx import/export to go between Maya and Blender.

Thanks for sharing. I’m always curious how professionals integrate Blender into their pipeline. Most of the cases I’ve seen seem minor.

What are you using for creating RenderMan shaders? I’m looking into adding an RMan renderer to our pipeline (both XSI and Blender). To start, it should be open source and use nodes in its GUI (resembling the XSI shader tree/Blender material nodes).

That is definitely something to keep an eye on. I recall that there’s also a Blender-to-Mental Ray exporter in case users want to go that direction.

A script like this would be a great addition to Blender’s export capabilities. Every so often a Blender user inquires how they can get the camera data into their compositor.

You can try ShaderMan.

The workflow is capable of professional quality, but I couldn’t call my own work professional. I have managed to start using it on paying jobs, but it has a long way to go. I strongly believe it’s the best type of workflow, though, as it has flexibility at every level: Blender has programmable animation, RenderMan has programmable shading, and Shake has programmable compositing. Between the three packages there’s very little that can’t be done, and it helps when you work beside artists who come up with crazy ideas.

I write them in code. Nodes are a great way to do things, but I actually visualize things more easily coding directly, as I can debug the output at any stage of the shading process. If I need to figure out a node setup, I actually write the code first and then work out how the nodes would create the same effect. I just come from a programming background, so it makes more sense to me that way.

Blender’s node editor drives me a bit crazy, and it has too many limitations just now. Hopefully it will be improved and made accessible from the Python API so that it can be used with external rendering engines and artists can more easily put custom shaders together.

It’s a fairly trivial script too. As Cambo has said before, these things really just require a data file and someone to take the time to recreate the format. It’s only about 50 lines of code or so. Lights and object locations are additions that can be made on top. The trickiest part was figuring out the relationship between the aperture settings and the FOV; if you don’t get it exact, things start sliding around. I think I got the equations right, but it needs testing on different output.
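
For what it’s worth, the relationship is simple trigonometry once the units line up. Here is a small sketch assuming the usual Maya-style convention (film aperture in inches, focal length in millimetres); the function names are just for illustration:

```python
# FOV <-> aperture relationship for a Maya-style camera: film aperture in
# inches, focal length in millimetres, and the FOV follows from trigonometry.
from math import atan, tan, degrees, radians

def fov_from_aperture(aperture_in, focal_mm):
    """Field of view (degrees) for a given film aperture and focal length."""
    return degrees(2.0 * atan((aperture_in * 25.4) / (2.0 * focal_mm)))

def aperture_from_fov(fov_deg, focal_mm):
    """Film aperture (inches) that reproduces a given FOV at a focal length."""
    return 2.0 * focal_mm * tan(radians(fov_deg) / 2.0) / 25.4

# Maya's default horizontal aperture (1.41732 in, i.e. 36 mm) at a 35 mm lens:
print(fov_from_aperture(1.41732, 35.0))   # roughly 54.4 degrees
```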