So, I’ve been using Blender for some time now, and for as long as I have, I’ve never quite been able to get my head around the compositor and how it works. I was wondering if there are any videos or documents focused on it that might help me understand it better? In all of my searches so far, I’ve yet to find anything that covers it.
I don’t know if any will help you, but I have a bunch of tutorials for the compositor on my YouTube page; link below in my sig.
It’s not my video, but here is a good introduction: https://www.youtube.com/watch?v=RaBL3kbQ6qU&list=PLZI9BDZ5udV1N9HqS9C1wxnFhXV28bbYR&index=19
Ahhh, thank you! These videos look very helpful.
Is it the nodes idea that throws you off? It can take a while to get the hang of, but once you do, it all just clicks and you see why nodes are superior to layer-based compositing. And once that clicks, you can take the concepts to any node compositor (Nuke, Natron, Fusion…).
Well, unless I’ve entirely misunderstood how the compositor works, what’s really been throwing me off is the relationship between different nodes, and how people know which specific nodes to use to get a particular effect.
Also, if I understand it right, when you connect one node to another, the output of the first node fills in the second node’s input; the second node then applies its own operation and passes the result on to the next node, and so on. Is that right, or…?
Well, it’s mainly the relationship between the nodes that throws me off. I don’t understand how people know which nodes to use to get the effect they want for any given scene or whatever they’re rendering.
It’s a lot like mixing colours: getting to know how much of each primary colour you need to make other colours. Try starting with the result and working back to see the ingredients used. There are plenty of specific tutorials available that show how an effect is built.
Someone said nodes are a way for the programmers to avoid having to create an interface, letting the user do the programming instead. So some general idea about programming may be an advantage.
I am learning it myself, but I find that once you start to get the hang of it, it’s powerful, with infinite possibilities. I suggest playing around and seeing what happens.
Your understanding is correct. A node graph is essentially a flow chart of information. You start with an image, pipe that into another node that does something (changes the color, moves the image, scales it, blurs it, masks it…), and then finally write the modified image to disk. It lets you build your image up and see exactly what has been done to it at a glance. A compositing node graph and a material node graph are similar in concept but serve two different purposes. Personally, I find the compositing version much easier.
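For anyone who thinks in code, that flow-chart idea can be sketched outside Blender as plain functions chained together. This is just an illustration of the dataflow, not actual Blender API code, and the “nodes” here are made-up toy operations:

```python
# Each "node" is just a function: image in, image out.
def brighten(pixels, amount):
    """Toy node: raise every pixel value, clamped to 255."""
    return [min(p + amount, 255) for p in pixels]

def invert(pixels):
    """Toy node: invert an 8-bit grayscale image."""
    return [255 - p for p in pixels]

# "Wiring" nodes together is just function composition:
# Render Layers -> Brighten -> Invert -> Composite
image = [0, 64, 128, 255]            # a tiny 4-pixel grayscale "render"
result = invert(brighten(image, 32))
print(result)                        # [223, 159, 95, 0]
```

Each function only sees what the previous one handed it, which is exactly the “fills in the next node’s input” behaviour described above.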
The hardest part of compositing, at least in the beginning, can be figuring out what you have to do to get the result you want. If you know what you want, you are already halfway there. Then you must deconstruct the final result into the parts/elements/effects that make up the image. And once you have the idea deconstructed, the final step is to build it as a series of operations that create all the necessary bits one by one until you reach the final image.
An easy example: what is glow? Glow is light bouncing around inside the lens or in the atmosphere, which lets light from the brightly emitting parts of the image also reach neighbouring areas of the camera sensor. To simulate that, you first spread the light by blurring your original image and then Add it (a Mix node set to Add) over the original image, adding light to the areas that were not so bright before the glow. You can control the size of the glow with the Blur settings and the intensity of the glow by multiplying the blurred image by some value.
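Here is that blur-then-add recipe as a rough numeric sketch: plain Python on a one-dimensional row of brightness values, not Blender API code. The 3-tap box blur and the 0.5 glow strength are arbitrary choices for illustration:

```python
def box_blur(row):
    # 3-tap box blur: average each pixel with its neighbours (edges clamped)
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def glow(row, strength=0.5):
    # Spread the light, then Add it back over the original
    # (the Mix-node-set-to-Add step), scaled by a strength factor.
    spread = box_blur(row)
    return [orig + strength * b for orig, b in zip(row, spread)]

row = [0.0, 0.0, 1.0, 0.0, 0.0]   # one bright pixel in a dark row
print(glow(row))                  # the dark neighbours now pick up some light
```

Notice the bright pixel gets brighter and its dark neighbours are no longer pure black, which is exactly the glow effect described above.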
When dealing with light, there are two basic operations: add and multiply. When you add light (different lights rendered in separate layers, direct and indirect light passes, etc.), you Add. If light is reflected off surfaces, the spectra of the light and the surface combine, so you multiply. White light multiplied with a red object results in a red object. Blue light multiplied with a red object results in a black object, because there is no red light to reflect.
Here’s something to think about …
“The compositor” is strictly concerned with “post-production.”
The inputs to the compositing process (so to speak …) are “multi-layered files of data,” preferably represented by MultiLayer OpenEXR file-sequences.
Each of these input-files is two(!)-dimensional. :eek:
At the end of the day, “the input to the compositing process” consists of one-or-more two(!)-dimensional matrices of information: “because ‘the picture’ is so-many pixels wide, and so-many pixels high.”
Yes, the compositor has access to multiple “layers” of information about each (x,y) pixel, including things such as “Z-Depth,” which describes how far ‘whatever-it-is that provided the source for this pixel’ was from the virtual camera. (But the compositor has no idea what that ‘whatever-it-is’ was, unless there is also an “object-ID” layer, which in any case is just another matrix of numbers.)
The compositing process, therefore, consists of “intelligently combining such layers of 2D information,” and nodes are the means by which you describe to Blender how to do it.
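A concrete way to see “intelligently combining layers of 2D information”: here is a plain-Python sketch (not Blender API code) of a Z-combine, where two renders are merged per pixel by keeping whichever one is closer to the camera. The tiny 4-pixel rows are made-up data; Blender’s actual Z Combine node does this per-pixel comparison in its simplest mode:

```python
# Each render is a pair of same-sized layers: colour and Z-depth.
fg_color = [0.9, 0.9, 0.9, 0.9]
fg_z     = [5.0, 5.0, 1.0, 1.0]   # pixels 2-3 are close to the camera
bg_color = [0.2, 0.2, 0.2, 0.2]
bg_z     = [3.0, 3.0, 3.0, 3.0]

def z_combine(c1, z1, c2, z2):
    # Per pixel, keep the colour with the smaller Z (nearer the camera).
    return [a if za <= zb else b
            for a, za, b, zb in zip(c1, z1, c2, z2)]

print(z_combine(fg_color, fg_z, bg_color, bg_z))  # [0.2, 0.2, 0.9, 0.9]
```

The colour layers alone can’t tell you which render should “win” at each pixel; it’s the extra Z-depth matrices that make the combination intelligent.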
(And if you find yourself thinking, “Hey! I think I must be writing a computer program here!” . . . well . . . “yeah, well sort-of” . . . (koff, koff) . . . “well, it is ‘a digital computer,’ you know” . . . (koff, koff) . . .)
This is a rather late response, but thank you everyone, this was all very helpful~