What compositor is used in your production pipeline?
Cryptomatte was not compatible with Fusion when I tested it; it worked only in the Blender compositor.
The Blender compositor's per-frame update speed is not very fast.
I can’t wait to see the movie. I sincerely hope it does very well indeed at the box office!
Congratulations on completing this project!
DCI 2K, or 2048x1080. The movie will be presented in 2.39:1 widescreen, however.
We would not have been able to use 3D motion blur, volumetric rendering throughout, and 3D DOF if it weren’t for Stefan Werner’s work implementing Intel’s Embree and optimizing volumetric rendering, among other things. Without that, render times were too long and too unpredictable when there were large amounts of motion blur.
We used Blender for compositing. Most of the look of the movie was accomplished in-camera, and via the Colour Management tab for gamma/contrast changes. We used the Filmic colour transform as well, and worked in linear colour space to enable us to perform (sometimes drastic) colour grading in post.
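As a rough illustration of why the grading happens in a linear working space (this is a sketch with the standard sRGB transfer functions from IEC 61966-2-1, not the Filmic transform or any of Tangent’s actual pipeline code):

```python
# Illustrative only: standard sRGB encode/decode, per channel in [0, 1].

def srgb_to_linear(c):
    # inverse of the sRGB display encoding (IEC 61966-2-1)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # sRGB display encoding
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A one-stop exposure push is a simple multiply by 2 in linear light;
# applying the same multiply to display-encoded values would give a
# different, physically wrong result.
mid_grey = srgb_to_linear(0.5)
pushed = linear_to_srgb(min(mid_grey * 2.0, 1.0))
```

This is why drastic grades hold up better when applied before the display transform: operations like exposure and white balance are physically meaningful only on linear values.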
Our composites were quite simple and weren’t modified much; they mainly gave the show a consistent look from beginning to end, and added some “grunge” using things like camera distortion, chromatic aberration, etc.
When we needed to modify a specific pass, we extracted the pass from the beauty pass (which was the main pass used in our composites), modified it, then added it back to the beauty pass. Cryptomattes were crucial for this.
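Conceptually, that extract/modify/re-add step works per pixel like this (a minimal sketch with made-up values, assuming linear additive passes and a cryptomatte used as a 0–1 coverage mask; not the actual node setup):

```python
def rebuild_beauty(beauty, layer, modified_layer, matte):
    """Remove a pass's original contribution under the matte, then add the
    modified version back. All lists are per-pixel values in linear space."""
    return [b - p * a + m * a
            for b, p, m, a in zip(beauty, layer, modified_layer, matte)]

# Brighten one extracted pass by 2x, only where the cryptomatte covers it.
beauty   = [1.0, 0.6]
layer    = [0.4, 0.2]            # the pass's original contribution
modified = [v * 2.0 for v in layer]
matte    = [1.0, 0.0]            # cryptomatte coverage per pixel
result = rebuild_beauty(beauty, layer, modified, matte)
# approximately [1.4, 0.6]: only the matted pixel changes
```

The second pixel is untouched because the matte is zero there, which is exactly what makes cryptomattes useful for per-object tweaks inside a single beauty pass.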
The compositor is slow, but it was fine for our purposes. We have plans to add caching to it, and add a mask channel to nodes that are missing it - there are workarounds, but it’d be nice to be able to use all nodes in the same fashion.
Thanks for the info!
Caching and a mask channel sound great.
I really hope you guys come to the Blender Conference again (2018) and share your experience; it was a great talk last time.
That is a known problem and has been reported many times. As I understand it, the latest short films from the Blender Institute did not use motion blur either.
I hope the developers can improve it sometime.
We made huge improvements to Cycles in this area, with the help of people like Brecht. Beyond adding Embree as the raytracing core of Cycles (which can be switched on or off), we made some other minor changes that made 3D blur much more predictable and enabled us to use it on our feature. There are no shots in the movie that do not have 3D blur in them.
All these changes will eventually be checked back in and made available to all. We’ll start with a branch that can be tested by others, once the movie is completely out the door (we’re in post now).
I’ll take a look at Fusion compatibility. My references were the sample files from the Cryptomatte paper; I haven’t spent much time yet verifying it with imports/exports from other programs. The goal is certainly to make this compatible with every other Cryptomatte implementation out there.
Sergey did some work on motion blur for Agent 327: https://cloud.blender.org/blog/cycles-turbocharged-how-we-made-rendering-10x-faster
The motion blur implementation in Embree is yet again several times faster than that, and saves memory too.
The open question for moving this to master is whether to add Embree as an additional dependency (at least initially keeping this a CPU-only improvement), or to reimplement Embree’s algorithm in Cycles’ own BVH so it is available to GPU renders too.
Thanks for the info again! Yes, those situations (volumetrics and motion blur) are basically a no-go at the moment in Cycles, especially the second. I am glad to hear you managed to pull off such big improvements; it’s great! One thing wasn’t clear: was motion blur still a problem in some situations?
Not really, no. It would be easier if I could draw a graph to illustrate, but here’s a simple theoretical example to illustrate what we saw with Stefan’s Embree implementation (plus other improvements):
- Assume that average render times were 1hr on a particular scene with standard Cycles. Heavy motion blur could spike that to 6–10hrs on some frames.
- Using the same example and turning on Embree, we generally saw a 10% decrease in render times overall, with a large decrease in memory usage. Those motion blur spikes were reduced to an extra 0.5–1hr, instead of 6–10hrs.
Very rough example, but illustrative of the improvements that we saw, and it made it possible to use 3D blur, 3D DOF for some really nice rack focus shots, and volumetrics wherever we wanted. There were no restrictions placed on the Directors regarding these features; they simply became part of the artistic language of the show.
Well, the fact that you could use it whenever you wanted is great. I guess some scenes are always problematic with motion blur, even with big production engines. Great work, guys!
Good to know. It’s really a great feature and works very well in Blender.
Hey Jeff, thank you for your precious time out here, and for patiently answering most of the questions from us enthusiasts who would really like to know more about your process and all the technical insights.
I have a couple of questions for you as of now.
You mention improved 3D DoF and rack focus. Can you show us a sample (visual) example of what you achieved, and how it differs from what we would normally get “out of the box” with an official Blender release?
Here is a still from Big Hero 6, and I think this sort of DoF is quite achievable by tweaking the Aperture parameters of the active camera settings, though I also think there is room for improvement in the f-stop control (e.g. it could offer stepped increments, as on a physical camera) and such.
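For reference, the stepped increments on a physical lens follow a √2 progression, each full stop halving the light reaching the sensor. A quick sketch of that series, independent of Blender’s camera API:

```python
import math

def full_stop_series(count, start=1.0):
    # Each full stop multiplies the f-number by sqrt(2), halving the light.
    return [round(start * math.sqrt(2) ** i, 1) for i in range(count)]

stops = full_stop_series(7)
# The familiar marked values on a lens barrel (1, 1.4, 2, 2.8, 4, 5.6, 8, ...)
# are rounded conventions of this geometric series.
```

A stepped f-stop option in the UI could simply snap to this series rather than accept arbitrary floats.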
And my next question is about the team that worked on this show. I believe much of your workforce are animators, especially seasoned character animators. Can you share a little about the composition breakdown of all your team members? And as you mentioned in your BConf talk, many of your surfacing artists, layout artists, riggers, TDs, and production managers/supervisors came from Arc Production; what was their first-hand experience/reaction to doing much of their respective tasks in Blender? What was the initial training like? Share some of the frustrating as well as the breakthrough moments during this phase of the production: building a robust pipeline and, more importantly, building a close-knit team.
I reached out to one of your founders a while back before Filmic made it into default with the hopes that you might try a more appropriate view transform than the then-default sRGB OETF. Looks like that happened…
I’d be interested in hearing about the grading issues you faced, as well as whether a dialogue could be had with the BI to push colour management of the UI along.
Are you planning an HDR release?
Feel free to hit me here or via PM / email.
Kickass work on Next Gen! I’m hugely impressed. Congrats, guys!
I have a question concerning animation. Do you feel you have been getting competitive rig performance in the viewport compared to tools such as Maya? Autodesk has been making strong performance gains through extensive multithreading of rigs, and I was wondering how Blender stacks up. Are rigs multithreaded? How scalable is it when your scene contains multiple CPU-intensive rigs?
Do you have examples of what that implementation would look like? What would be the minimal implementation that could actually achieve real results? Any example files? Defining the problem and solution properly is one of the most important steps of software development and doesn’t necessarily require any programmers.
Exciting times! Can’t wait to watch Next Gen on Netflix!
By the way - What’s your next project?
When I made that comment I was unaware that Stefan Werner and Luca Rood had already added proper support for VDB voxel data and improved the Alembic support.
So the solution is already there and we just need to wait a little longer.
Guys, Stefan Werner already confirmed this at the last conference. See this at 1:44.
Indeed I implemented support for loading OpenVDB caches in Blender, and Stefan added support for a bunch of rendering features required for the production (e.g. volume motion blur). Stefan also implemented OpenVDB rendering directly from Cycles (bypassing the Blender OpenVDB importer).
I joined the team at Tangent at a late stage of production, and OpenVDB rendering was a requirement that had to be fulfilled in a reasonable time frame, so the implementation is relatively rough, as there was no design stage beforehand. Because of this time restriction, the importer was simply implemented as a wrapper around the smoke modifier that reads from the OpenVDB cache. This is not without its quirks, and is highly inefficient.
Stefan’s direct implementation was then done to fulfil the high rendering demands, but he will have to fill in the details regarding the code’s production readiness.
So, unfortunately I will not be submitting my OpenVDB code as a patch to Blender, as it is not fit for the real world, and will be unsustainable. However, the code is of course available, and if you want you can make use of it (https://github.com/tangent-animation/blender278).