Pixar GPU Tech Conference talk: Real-Time Graphics for Feature Film Production

Thought you’d find this interesting as well:
http://i.imgur.com/N8qEeNs.jpg
I always find Pixar tech presentations tremendously inspiring and motivating. :slight_smile:

The first thing I noticed is that of course Pixar’s Presto, just like Dreamworks’ Premo, has gotten rid of ugly locators, or floating curves as controllers for animators to pick. Instead the animator sees controls light up as they hover over the mesh itself. This reduces visual clutter and improves clarity and artist-friendly ease of use tremendously.

MODO has this feature called Command Regions, demonstrated just at the beginning of this video:

I’d love to see something like this as a core part of Blender’s rigging tool set as well. Being able to get rid of floating curves and other control handles that obstruct the animators’ view on the actual mesh should be a priority for improving rigging tools everywhere.

To my disappointment, Maya doesn’t have this feature yet (although you can work around it to kind of, sort of get a similar effect. The result is less elegant than MODO’s implementation though.)

Does anybody know if such a feature is being worked on for Blender at the moment?

This is something the Blender animation devs have been working towards for a while. It’s not ready yet, but that is the goal.

Awesome! Can you point me to what it’s called and where I can follow its development, provided any of it is public yet?

Chris ->( Viewport Project )

I said this was the future long long ago

The next thing we need, besides this, is a good, fast viewport renderer (NOT A PREVIEW!)

Next after that is plastic mocap force-feedback exoskeletons, so actors can interact with virtual elements

The same tech can help disabled people walk, or act in VR, or game with unheard-of immersion

Problem is, if it’s an exo you’re going to have to do motion tracking to get rid of it anyway, so why not just give the actor a stand-in which is real, not virtual? As far as helping the disabled walk, it’s not needed as much today. Most places are wheelchair accessible, and wheelchairs are much more stable than walking robots, even without a real person inside them flailing away in a panic because they’re about to fall. The Segway was supposed to be impossible to tip over, even on purpose, on paper, yet many people fell, including President Bush Jr. I think the exo you describe would be much more useful for virtual sculpting than mocap.

As far as the video, I haven’t watched much of it yet, but I did hear him mention render farms and up to 600 hours per frame to render. That is not what I was dreaming of when I read “real-time graphics”, and that’s on rigs that cost thousands. Hopefully I’ll find something later that relates to my finances and not to a multi-billion-dollar corporation, but for now it seems like quantum computing: even if it works, it’s not something average people will have access to.

Early stuff, but it’s gonna hit Blender sometime in the “near” future…

The bigger project is the Custom Manipulator Project -> link

And this is an example of what they would like to achieve, indeed inspired by Presto…

I took a stab at this a while back, made some decent progress, but ran into issues trying to do everything in pure python. I brought this up to some of the devs, and from that conversation the wiggly-widgets branch was born. It’s been a long time coming, but face maps for bones and shape keys (along with a ton of other quality-of-life improvements) are the end goal of that project, and we can expect to see them in master at some point in the future.

If you’re interested in seeing what was possible with pure python and the bpy API, you can check this out.
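
For reference, the core of that pure-Python approach boiled down to something like the sketch below. This is my own rough illustration, not the actual code from that experiment or from wiggly-widgets, and it assumes (purely as a convention for this example) that the mesh’s vertex groups are named after the bones they should pick:

```python
import bpy
from bpy_extras import view3d_utils

# Rough sketch only: select a pose bone by clicking the mesh it deforms.
class RIG_OT_pick_bone_from_mesh(bpy.types.Operator):
    bl_idname = "rig.pick_bone_from_mesh"
    bl_label = "Pick Bone From Mesh"

    def invoke(self, context, event):
        region, rv3d = context.region, context.region_data
        coord = (event.mouse_region_x, event.mouse_region_y)

        # Shoot a ray from the mouse cursor into the scene.
        origin = view3d_utils.region_2d_to_origin_3d(region, rv3d, coord)
        direction = view3d_utils.region_2d_to_vector_3d(region, rv3d, coord)
        # Blender 2.7x signature; 2.8+ also takes a view layer / depsgraph argument.
        hit, location, normal, face_index, obj, matrix = context.scene.ray_cast(
            origin, direction)
        if not hit or obj.type != 'MESH':
            return {'CANCELLED'}

        # Take the strongest vertex-group weight on the hit face and select
        # the pose bone of the same name on the deforming armature.
        mesh = obj.data
        best_name, best_weight = None, 0.0
        for vert_index in mesh.polygons[face_index].vertices:
            for g in mesh.vertices[vert_index].groups:
                if g.weight > best_weight:
                    best_name = obj.vertex_groups[g.group].name
                    best_weight = g.weight
        armature = obj.find_armature()
        if best_name and armature and best_name in armature.pose.bones:
            for pbone in armature.pose.bones:
                pbone.bone.select = (pbone.name == best_name)
            return {'FINISHED'}
        return {'CANCELLED'}

bpy.utils.register_class(RIG_OT_pick_bone_from_mesh)
```

Doing this per mouse-move in pure Python is where it got slow, which is why a native implementation like wiggly-widgets makes sense.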

That talk was kind of interesting but it’s still only viewport stuff/things that help you animate. The rendering itself requires a huge render farm. :stuck_out_tongue:

I’m interested in movie production in game engines like UE4; that way you can render at real-time speed, and usually faster. If you get 60 fps in the game (or whatever you want to call it) and your final output is 24-30 fps, you’re rendering at roughly twice real-time speed or more. If you use a fancy GPU you can probably render even faster.
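
To put rough numbers on that (my own back-of-the-envelope, nothing from the talk):

```python
# Back-of-the-envelope: how much faster than real time you render when the
# engine's frame rate exceeds the delivery frame rate. Numbers are made up.
engine_fps = 60.0    # what the engine manages while "playing" the scene
output_fps = 24.0    # frames of finished animation needed per second (film rate)

speedup = engine_fps / output_fps                  # -> 2.5x real time
seconds_per_film_second = output_fps / engine_fps  # -> 0.4 s of rendering per 1 s of film
print(speedup, seconds_per_film_second)
```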

Since you’ve got an NLE called Sequencer in the game engine (like in Blender! :cool:) you can get realtime feedback like they’re getting in the Pixar talk, but also render in real time as well, which is pretty crazy. In the next engine version (4.12) there will be some new features like corrective twist nodes/bone-driven controllers (think drivers for shape keys in Blender) and morph target normals on the GPU (normals update when the mesh deforms), which will probably help make stuff look even better.

And this is just early stuff; in the future there may be a high-quality render mode that crushes your FPS but gives even better results for actually rendering movies. And the engine is free so anyone can use it. There are drawbacks of course (no Alembic, you’ll have to create effects/stuff in the game engine), but watching a low-res test render running at 200 fps or something just blows my mind. It’s not path tracing but it can look good enough for sure. Maybe not feature-film level yet (well, no one has tried yet) but it’s going in that direction.

While doing rendering in a real-time game engine has its place, the differences between that and renderers like Cycles, RenderMan and Arnold are so great that it’s not even worthy of comparison.

That being said, I would love to know more about Pixar’s Hydra real time renderer that they are OPEN SOURCING at SIGGRAPH! Did anyone happen to catch that during the talk? My jaw dropped on that one…

EDIT: Here we go…

I’d say it very much is worth comparing.
No doubt that we still have quite a way to go before real-time rendering will be at the level of current raytracing/pathtracing production renderers like Arnold, V-Ray, Cycles, etc., but if you look at the progress real-time graphics have made in the last decade or so, they’ve closed the gap quite significantly. The difference between VFX render quality and game graphics used to be a lot bigger.

Now of course, as any sane investor will tell you, “Past Performance Does Not Guarantee Future Results”, but it’s quite impressive nonetheless.

I fully expect that only a few years from now, a large part of even non-interactive CGI content will be rendered solely in real time, leaving traditional path tracers out entirely.

Those probably won’t be your ILM or Pixar productions, but for indie short films, smaller-budget feature films, and certainly things like animated TV series, I’m certain this is the future!

Just like with Netflix/YouTube vs. Blu-ray: there is of course a quality difference between streaming 1080p content at roughly 5,800 kbps on Netflix and watching it at 40+ Mbps off a Blu-ray, but for most people that difference doesn’t outweigh the convenience, efficiency and lower cost of distributing the content that way.

Likewise, if we can create animated TV series and short films with the visual quality of these short films below done in UE4 and Unity, and save tremendous time (=money) and hardware cost on offline rendering, which especially in the fast-paced production schedules of TV makes a big difference, I’m sure many productions will take the deal.

This is why I’m so excited about developments like MODO’s Advanced Viewport and Maya’s Viewport 2.0 (although I find MODO’s to be visually superior):

[video]https://youtu.be/MyzkwwY0sEc?t=49s[/video]

Having a PBR 3D viewport that visually matches the raytrace renderer as well as matching what your assets will look like when taken into Unity or UE4 (via the respective shaders available in MODO) makes a massive difference in workflow and efficiency.

I hope to see Blender get there as well in the near future, in a simple, efficient and artist-friendly manner.

USD! instead of freaking Alembic =)

Indeed. :slight_smile: Until USD is industry standard, we’ll still need Alembic in Blender though.

I watched it somewhat quickly, but USD is a different thing. It’s not an Alembic “competitor”; it would actually go hand in hand with Alembic, IIRC. USD describes entire scenes to be used across different DCCs without trouble.

OpenSubdiv mixed with USD, and Hydra being released as open source as well for scene graph interaction, is AWESOME. How did I miss this?!

Going to have to look at how I can integrate this into UE4. Hydra looks really interesting.

So does Alembic, doesn’t it?

In terms of transferring geometry they’re both essentially point caching formats - with USD having the added benefit of Pixar’s layer-based referencing system built in?

Feel free to correct me if I’m mistaken, my understanding of both formats is superficial at best.

EDIT: Just found this article which clears up some of my misconceptions. Pixar’s USD system: the new super-Alembic?

Also, this: http://graphics.pixar.com/usd/overview.html

Why is USD not Alembic?

At the onset of the project, we considered whether Alembic or either of the two existing scene description systems currently in use at Pixar could serve as the basis for all of our pipeline scene data.

It quickly became clear that referencing operators and the non-destructive editing capabilities they provide are vital to achieving the scalability and incremental-update goals described above. While Alembic provides a good solution for representing flat, baked/animated scene description, because it has no facility for file-referencing or sparse overrides, it cannot be our unified basis for pipeline data.

This does not preclude a future in which Alembic and USD merge into a single entity. Until that time, native Alembic files can most definitely serve as the inputs to the referencing operators in USD - that is, in any graph of referenced files in a USD scene, any leaf node can be an Alembic file.
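
To make the “referencing + sparse overrides” point concrete, here’s roughly what that looks like with the USD Python API. The file names are invented for the example, and referencing an .abc directly assumes USD was built with its Alembic plugin:

```python
from pxr import Usd, UsdGeom, Gf

# Minimal sketch: reference a baked cache into a new stage, then layer a
# sparse, non-destructive override on top of it without touching the source.
stage = Usd.Stage.CreateNew("shot.usda")
char = UsdGeom.Xform.Define(stage, "/World/Character")
char.GetPrim().GetReferences().AddReference("character_anim.abc")  # leaf can be Alembic

# The override (a translate here) lives only in shot.usda; the referenced
# file stays untouched -- that's the "sparse override" the FAQ describes.
UsdGeom.XformCommonAPI(char.GetPrim()).SetTranslate(Gf.Vec3d(0.0, 1.0, 0.0))
stage.GetRootLayer().Save()
```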

That’s a pretty amazing conference video. USD was the most interesting part for me personally. Of course, all the graphical parts are amazing, but workflow wise, I believe that Blender would benefit a lot from USD if used consistently throughout the whole application.

Thought you guys might be interested in this.
USD and Hydra are a powerful package. :slight_smile:

@chris Yeah, I am awaiting that feature myself, and it will be huge for my Blender usage since Unreal is my game engine of choice.

  • Physically-based rendering following Unreal Engine 4’s model

That’s from the Viewport 2.8 project page. So Mike is already working first on updating the code from legacy OpenGL to the newer OpenGL 3.2.
I guess once that is done, more advanced features can be added.

So PBR is on its way. I remember reading it will be node-based, like Cycles shaders, so you could do PBR that matches Unreal, Unity, Frostbite? I guess with time, knowledge and effort, any real-time PBR engine you want.
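
For the curious, “matching Unreal” mostly means evaluating the same specular model UE4 documents publicly: GGX distribution, Schlick-Smith geometry term, Schlick Fresnel. Here’s a toy single-sample version of that math in plain Python, just to show the formulas involved; nothing Blender-specific, and the example inputs are made up:

```python
import math

def d_ggx(n_dot_h, roughness):
    # GGX / Trowbridge-Reitz normal distribution, alpha = roughness^2 (UE4 convention)
    a2 = (roughness * roughness) ** 2
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def g_smith_schlick(n_dot_v, n_dot_l, roughness):
    # Schlick-Smith visibility with UE4's k = (roughness + 1)^2 / 8 for analytic lights
    k = (roughness + 1.0) ** 2 / 8.0
    g1 = lambda x: x / (x * (1.0 - k) + k)
    return g1(n_dot_v) * g1(n_dot_l)

def f_schlick(v_dot_h, f0):
    # Schlick Fresnel approximation
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def specular_brdf(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0=0.04):
    # Cook-Torrance microfacet specular: D * G * F / (4 * NdotL * NdotV)
    return (d_ggx(n_dot_h, roughness)
            * g_smith_schlick(n_dot_v, n_dot_l, roughness)
            * f_schlick(v_dot_h, f0)) / (4.0 * n_dot_l * n_dot_v)

# Example: glossy dielectric, light and view roughly 30 degrees off the normal.
print(specular_brdf(0.87, 0.87, 0.95, 0.9, roughness=0.3))
```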

USD and Hydra are screaming “We are the Future” to the rest of the CG community. This is now an open source development library. As FOSS, Blender needs this to remain relevant. You’re going to see more and more programs start to pick up this technology and integrate it into their packages. If Blender can’t talk to these packages, what’s the point?

Drawing and animating 100 billion polygons in real time is a GREAT LEAP forward for feature film production. So here’s to keeping Blender relevant as the FOSS package of the future. Integrate an amazing FOSS library. Satisfy the need for speed.

MaterialX is another interesting prospect that can be integrated UNDERNEATH USD. As mentioned in the video, ILM and Pixar have been working together a lot lately. Does no one else see the writing on the wall from The Foundry, Disney and ILM? Open-sourcing extremely HQ production-level scene/material definitions + real-time rendering applications…

Blender 3.0 is around 5 years away. We have work to do.