Cycles (node) help, technical.

Hi guys, this is probably the wrong time to post this, as it's almost Christmas and everyone will be offline (or not, if you are a loser internet addict like me).

I wanted some technical help. I've been an animator (using Maya), and as an animator I was never really required to learn anything about rendering.

If anyone has seen Bartek's compositing tutorials, you'll know that he goes through some nodes and explains what they do and why. This is extremely useful.

If you're like me, you probably get sick of the “click here and click there, I don't know what this is but it works” kind of tutorials.

I was hoping, and I know it's a lot, that perhaps we could go through each node and explain the functions of each, or most of them, for absolute noobs who have no clue about technical terminology and, as a result, find the wiki unhelpful.

I thought it would be a good resource. Anyone with a good knowledge base willing to gradually help out the noobs here?

Perhaps not nodes specifically, but the type of renderer, its behaviour, etc.

Sure, I’ll play along in getting to know the nodes. I’m a sort of semi-noob myself so I definitely could stand to learn more about them.
But there’s a lot of ground to cover, seeing as you use nodes to affect the materials as well.
Does this mean we should skip over the render layers and just discuss the actual functionality of each node in the compositor?
Or should we include the material nodes as well?

Another thing worth mentioning is that the Blender wiki does in fact provide a lot of information for the nodes…

I think this is why most tutorials are problem-based, in that they describe one or two solutions to achieve a predictable result.
I guess my question to you is, where do we start?

Actually, I was more referring to material creation, rather than compositing nodes.

What I liked about Bartek is that he explains the functions of the nodes rather than taking the “plug and play” attitude. With material creation I realize it can be difficult without showing examples, but I wondered if we could do something similar.

Where to start? Well, first we can talk about the main difference between Cycles and BI, the type of render engine, and what is necessary to understand (light paths etc.).

The “input” nodes would be a good start (Fresnel/Texture Coordinate etc.). I'm not sure how we can do it in an organized fashion; perhaps I can edit the first post of this topic as information is received.

Ok… I hear ya…
And I agree that Bartek’s tutorial was one of the most useful tutorials I’ve seen on the subject.

The main difference between BI and Cycles is how each handles light in general. Also, the general direction of Blender development is to discontinue BI in favor of the strengths of Cycles. However, so far there are a lot of things you can do with BI that still haven't been implemented in Cycles: hair (which is on its way), smoke and volume rendering, and maybe a few other things I don't know about.

For a noob I'd say that the node-style editing is by far superior to the in-your-face settings that BI shows you. Since the compositor also uses the node system, I'd recommend everyone start by learning Cycles, as it is the most used system (the node editor can also be activated in BI, for instance)…

The big node difference between BI and Cycles is of course the shader nodes. Those are the best ones to master from the get-go (that group also includes the mix node, which I'd say is the most used node in most shader trees). Also, since lamps use the emission shader too, it makes for a great group of nodes to get the hang of from the start.

(As a side note, the output nodes are obviously essential, but since they are usually there when you start up, they are mostly only touched if you accidentally delete them.)

I’ve done quite a few quick jobs in which I didn’t use anything else but the shader nodes too, so I’d say they give a quick predictable result without having to know too much about the other node types.

Well, often I hear terms like “unbiased” this and “path tracer” that.

And obviously when creating non-basic shaders, like car shaders or masking materials, we get more complex.

And things like the cause of fireflies, correct use of HDRIs… these are things I want to touch upon I guess, as well as the nodes themselves. Hmmm.

Well… I think the technical jargon is maybe a bit much to start with, and I'd also say it requires someone with a lot more insight to explain it properly.

The way I'd explain it (which probably is not the correct way, but in broad generalization) is that Cycles actually renders the bouncing of light. Something which game engines can't really do in real time, and which BI doesn't do well, as it's more a way of cheating and getting around the whole light-bouncing problem…
As an example: if you shine a light down the hall, light will spill into the rooms and light them (albeit very dimly). In the render tab you can set the number of bounces that Cycles calculates (I think the default is 8 bounces)…
This directly affects the calculation times and can be lowered, especially when just doing preview renders.
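To get a feel for why lowering the bounce count is usually safe, here's a quick plain-Python sketch (this is not Cycles' actual code, and the 50% albedo is just an assumed number): each bounce carries only a fraction of the previous one's energy, so the later bounces barely contribute anything.

```python
# Assumption: every surface bounces back 50% of the light that hits it.
albedo = 0.5

# Energy carried by bounce 1 through 8 (the default bounce count):
contributions = [albedo ** bounce for bounce in range(1, 9)]

# Bounce 1 carries 0.5, bounce 2 carries 0.25 ... bounce 8 only ~0.004,
# which is why capping bounces low often looks nearly the same.
total = sum(contributions)
```

So chopping the last few bounces off a scene like that changes the result by well under one percent, while saving a lot of render time.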

Another strength of Cycles is that you can put the viewport in render mode and (depending on your computer power) get instant feedback on the final result, without having to click render and exit the image view, reposition the camera and do another render…
So, while working it’s faster and more responsive.

As far as non-basic shaders go, well… Most shaders still use the basic ones supplied, with a tweak or two added (like the Light Path or Fresnel nodes that you mentioned before).
Take the car shader for instance: it's a mix of several different types of shaders, and the mix of them is where the magic is at.

In part, learning to construct good materials is about picking apart the material you are trying to recreate (thinking about it in layers, like in Photoshop)…

Then you have the properties of the mix node. If the mix value is unconnected, it will just sit at some position between one input and the other… A mix between white and black with a certain mix level would give you some grey value, for instance.
But, and this is where the real usefulness comes in, you can feed something into that mix value. For instance, plugging in an image will give you a masking value instead: some parts white and some parts black, with areas of grey…
Stacking those mix nodes on top of each other gives you a more complex shader.
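To make the mix-value behaviour concrete, here's what the mix is doing underneath, sketched in plain Python (this is the general linear-blend idea, not Blender's actual source):

```python
def mix(a, b, fac):
    # The mix node's factor as a linear blend: 0.0 = all a, 1.0 = all b.
    return a * (1.0 - fac) + b * fac

# Black (0.0) and white (1.0) with a factor of 0.5 gives mid grey:
grey = mix(0.0, 1.0, 0.5)

# Plugging an "image" into the factor masks per pixel instead --
# black pixels pick the first input, white pixels the second:
mask = [0.0, 1.0, 0.5]                     # per-pixel factor values
masked = [mix(0.2, 0.9, m) for m in mask]  # 0.2, 0.9, and a blend
```

Stacking mix nodes is then just nesting this blend inside itself.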

As for the fireflies, it's about having pixels that are brighter than the image type can handle (limitations of RGB)… Basically they're just super blown out, with a value way beyond the usual 0–255…

But doesn't an HDRI file naturally have values beyond 255? Or am I just confused?
It seems with some topics you can't avoid the jargon. Avoiding fireflies is an important part of rendering, and I do seem to get the most fireflies when using HDRI image-based lighting.

The explanation of Cycles' behavior seems logical. So what are light paths? Is it self-explanatory (the path of light) or is it more complex?

Under the Light Paths panel you have bounces, which you have explained, but you also have Transparency, Diffuse, Glossy and Transmission, which all seem to be set to 128 by default. So while this is set at 128, light won't bounce off these specific things for more than 128 bounces? Again, this might sound stupid, but I just want to clear things up. Does increasing these values create a more “realistic” image?

But then you also have the Light Path input node, which says things like “Is Camera Ray” and “Is Glossy Ray”.

this is where I am slightly confused.

Thanks for the feedback so far, and you are right, someone with more insight will be useful for compiling the more technical information. “Path tracer” and “biased/unbiased” seem to be common terms though. I suppose I can google those.

HDRI stands for high dynamic range imaging, so those are .hdr or .exr files: floating-point formats which can store really big or really small numbers, but with limited precision. Normal images are mostly 8 bits per channel only, so limited to 0–255. If you want to capture an environment, you naturally have those really bright parts, and floating point can encode that range much better without losing details. The side effect is that ray tracing is based on random light bounces, so the brighter and smaller the points in an environment image, the more noise they contribute and the longer it takes to clear up. But there are tricks to counteract this, for example Importance Sampling of the background (always enable that with HDR backgrounds!).

Light Paths just tell you what the incoming ray type is, so you can do tricks like letting the camera see something different than any other bounce does, or making secondary reflections blurrier to reduce noise, stuff like that. You lose precision and accuracy that way, but it's important for production use to be able to tweak parameters and bring render times down; cheat where you can to save time :wink: The bounces settings are sort of in the same area: they let you tune for your needs and possibly reduce render times. Say you render an image that's totally lit by direct light; you don't need it to bounce around 8 times, maybe 1 or 2 is enough? You will probably cut your render time in half and it will look nearly the same. It sure is a bit overwhelming to have all those settings available to you, but some can be really useful for advanced users or professionals.

You are right about the HDRI… My explanation was more of a general way of saying that they are ‘out of proportion’ pixel-wise. I haven't really tried to see if the amount of fireflies depends on what output format you choose to render to, but I'll leave it to the experts to explain that one better…

Light paths are a way of taking the light calculation and modifying it… For instance you could use some math to see how far the light has traveled into an object, or separate out the light that is reflected and use that…
One example would be an invisible light bulb. You make a scene with an object as an emitter shining light on a cube (or Suzanne, which is the standard Blender testing object)…
Then you take your light bulb and mix it… One input is the emission shader, the other is the transparent shader…
If you plug the Is Camera Ray output into the mix factor, it will essentially make the bulb invisible to the camera, but still have it shine light on your objects.
The Is Shadow Ray output has been used to create fake caustics.
I haven't seen much of the others, but I'm sure there are some crafty buggers that have come up with some specialized solutions with those…

You are right… In an ideal world the light paths would be infinite, but then the computer would probably need an infinite amount of time to calculate them. So it's one of those trade-offs between getting good render times and good results. Most of the time you probably won't need to amp those values up very far.

I understand the confusion because of their similar names… The light paths in the render tab are the overall global settings and put a restriction on how much bouncing around there can be… Whereas the Light Path node is used as a way to cheat with those light paths…
Here’s one nice and short example:

I guess it depends on your personal understanding of those things… I've found that if I start googling them I quickly come to a point where I feel I need to discuss it with someone to really get it (especially Wikipedia, where things start off simple enough and then quickly turn into something that looks like a math teacher exploded)…

this part was not clear to me.

Whenever I hear about HDRI and EXR, I hear “32-bit”, which holds more colour/brightness information, right?

eviltesla says above that fireflies are caused when certain pixels in an HDR image are too bright for the image format; what image format is this? (Or have I misunderstood?)

Where are the importance sampling options in Cycles?

The rest of your post was pretty clear, making a lot more sense now, especially the Light Path node. The HDR thing is still confusing me a little, but I'm almost there haha.

Hahaha, that is exactly it!!! A math teacher exploding is pretty spot on when it comes to Wikipedia; that's why I'm hoping I can compile the information a little better for noobs!

The rest of your post was very informative; I'll try your light bulb example soon.

Hey gexwing… Nice of you to join in…
I have a question for you regarding the Light Path node… Have you seen any good examples of using the outputs other than
Is Camera Ray and Is Shadow Ray?

I saw BlenderDiplom had one where they faked light absorption with the help of Is Transmission and the backfacing Geometry output…

Just like you, ng-material, I'm curious to find creative uses for this node, as it does seem to be at the heart of most of the awesome shaders I've seen (that aren't the OSL ones)…

Floating-point numbers work like scientific notation, i.e. 1.5e20 (1.5 × 10^20): you have some bits for precision and some for the range of the exponent, so you can potentially store really small and really big numbers together. Maybe look up floating-point numbers if you're interested. The benefit is that you get a precision of a few decimal places, across a huge range of exponents.
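A quick illustration of that trade-off, using plain Python floats as a stand-in for what an .hdr/.exr pixel can store (huge range, limited precision):

```python
# Floating point stores a huge range of magnitudes...
big = 1.5e20      # 1.5 * 10**20, far beyond an 8-bit 0-255 channel
small = 1.5e-20   # really tiny values survive too

# An 8-bit image would have to clamp that bright pixel, losing detail:
clamped = min(big, 255.0)

# ...but precision is limited: next to 1.5e20, adding 1.0 is invisible.
one_is_lost = (big + 1.0) == big
```

That lost `1.0` is the "limited precision" part: you keep a handful of significant digits, not every digit across the whole range.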

The fireflies are just an inherent problem caused by the algorithm: you take random samples and average them, and the more you take, the closer the average gets to the real value. Potentially you'd need infinitely many samples, but once you see no difference any more, why bother, right? It's not that the pixels are too bright to be stored in the HDR; rather, if you have a small bright spot, the chance that exactly that spot gets picked is quite small. But once in a while it does get picked, and because its value is huge, it has a big impact on the average, so you get fireflies. The importance sampling for the environment is in the World tab, under Settings. Try it out with an HDRI which has the sun in it, for example; you'll see that enabling that setting makes a huge difference.
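Here's a tiny plain-Python simulation of that effect (the numbers are made up: a "sun" covering 0.1% of directions at brightness 10000, the rest a dim sky at 0.5). Sampling directions uniformly, most pixels never hit the sun, but the rare pixel that does gets a huge spike in its average, which is exactly a firefly:

```python
import random

def estimate_pixel(n_samples, rng):
    # Naive uniform sampling of the environment, like a path tracer
    # without importance sampling: average n random brightness samples.
    total = 0.0
    for _ in range(n_samples):
        if rng.random() < 0.001:  # ray happens to hit the tiny sun
            total += 10000.0      # huge HDR value
        else:
            total += 0.5          # dim sky
    return total / n_samples

rng = random.Random(1)
# At 100 samples per pixel, most estimates sit near 0.5, but any pixel
# whose rays hit the sun even once jumps by 10000/100 = 100: a firefly.
estimates = [estimate_pixel(100, rng) for _ in range(20)]

# With many more samples the estimate settles near the true average,
# 0.001 * 10000 + 0.999 * 0.5 = 10.4995 -- more samples, fewer fireflies.
settled = estimate_pixel(200_000, random.Random(2))
```

Importance sampling attacks the same problem from the other side: instead of waiting for more samples, it deliberately sends more rays toward the bright spot and weights them down accordingly.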

EDIT: also check out to get some good tips

I used a few of them; there's also info on what they are exactly on the wiki:

Creative uses mostly come when you're fiddling around with stuff, in my opinion. Take the fake caustics mentioned earlier, for example: it's common to make glass transparent for non-camera rays so you don't get shadows with noisy caustics, and then maybe he got the idea to darken the transparent shader depending on the angle the light shines through it -> tada, you've got your idea for an awesome shader setup and you share it :wink: The great thing about Cycles is its flexibility.

Yeah… I just feel the wiki doesn't do the community justice in that sense.
But you are right, the greatest amount of joy comes from happy accidents, though at times examples from others are a great way to spark some ingenuity. That, and a seemingly impossible task that needs to be solved in no time…

With that I bid you good night, I’ll check in tomorrow to see if you’ve got more questions up your sleeve ng-material.
Take care and happy Blending to the both of you.

I'm in the UK, it's late here, so I'm off too. I've got a lot more up my sleeve haha.

Merry Christmas. Thanks for the help so far! Things are getting clearer.

edit: are you talking about “Sample as Lamp”, gexwing?

Next up I have Fresnel. Well, I'm not sure I fully understand the concept of what Fresnel is.

Also, there is a Fresnel input node and a Layer Weight node, and the Layer Weight node also has a Fresnel noodle. I think I'm clear on what IOR is (a number that describes how fast light travels through a medium), and there seem to be lists of accurate indices of refraction out there for various materials/mediums.

I guess the scale of the scene also becomes important here… I was always told “meh, scale doesn't matter”, but for correct light/material calculation I suppose it does?

Fresnel is one of the easier ones to understand…
Basically it takes the viewing angle of the surface into account. Plugging it into a mix factor, you can have an effect increase depending on where the viewer is. For example, a lake looks fairly transparent when you look straight down into it, but becomes more reflective the more grazing your viewing angle gets…
You can use that for car shaders or even skin shaders (giving a peach fuzz)…
The reason there are two is probably because the Layer Weight node is newer. Fresnel is the optics calculation, whereas the Facing noodle just takes the angle of the faces, and that's it… So basically they do the same thing; it's just easier to get the point of it with the Facing noodle…
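If you want to see that angle dependence as numbers, here's Schlick's approximation of Fresnel reflectance in plain Python (the real Fresnel equations are more involved, and I'm not claiming this is what the Cycles node computes internally; the shape of the curve is the point):

```python
import math

def schlick(cos_theta, ior=1.45):
    # Reflectance when looking straight at the surface follows
    # from the IOR...
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    # ...and rises sharply toward 1.0 at grazing angles.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Looking straight down at the lake: only a few percent reflects.
head_on = schlick(math.cos(math.radians(0.0)))   # ~0.03
# Grazing angle: most of the light reflects -- the mirror-like lake.
grazing = schlick(math.cos(math.radians(85.0)))  # ~0.65
```

Feeding a curve like that into a mix factor between a glossy and a refraction shader is essentially the lake example above.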

And you are right about the IOR value… Because light travels at different speeds in different mediums, it bends in odd ways…
The most common values are 1.333 for water, or 1.5–1.9 for some type of glass depending on what it's made of… I guess the whole scale is centered so that air is 1 (which in reality fluctuates for a myriad of reasons)…
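The bending itself is Snell's law, which is simple enough to play with in plain Python (the defaults here assume a ray going from air, IOR 1.0, into water, IOR 1.333):

```python
import math

def refraction_angle(incidence_deg, n1=1.0, n2=1.333):
    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
    sin_t2 = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(sin_t2))

# A ray hitting water at 45 degrees bends toward the normal:
in_water = refraction_angle(45.0)           # ~32 degrees
# A denser glass (IOR 1.9) bends it even more:
in_glass = refraction_angle(45.0, n2=1.9)   # ~22 degrees
```

The higher the IOR, the more the ray bends, which is why those published IOR tables matter for believable glass and liquids.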

Blender isn't physically accurate; for that you would need to install an external renderer like LuxRender, which deals with energy and wavelengths and so on. For instance, you can't just model a prism in Blender and expect it to create rainbow effects.
But if you break the light color into each of its RGB components, you can give each color its own IOR value to simulate it.

I'm sure you can find a lot of threads covering this, as I'm sure the question has been answered better before.

Scale does matter, for many reasons. The most obvious is that if you are going to link objects from one file to another, it just makes sense to have them all to scale. Other reasons would be camera values, for instance, when using depth of field and those sorts of things…

Really? I remember watching a tutorial once, this was with BI I think, where the guy said the scale of the object can affect the IOR of the object. Maybe it was a Maya tutorial, I can't remember.

Hmmm, so what do you mean it isn't physically accurate? Is this the difference between biased and unbiased?

the rest made perfect sense.

The terms biased/unbiased are a bit more complicated to explain; they are terms used in statistics, maybe read some here: . Basically, you can get less noise using a biased ray distribution, but you won't converge to the real value in the limit of unlimited sampling. Unbiased sampling is sure to get to the correct answer, but it may take longer for the noise to clear up, and for animation especially you want an unbiased sampling strategy so you don't get biasing artifacts. The term “physically accurate” describes other things, mainly energy conservation (bounced light can't have more energy than incoming light) and the material system being close to physical materials. For the most part, though, those terms are quite technical and you shouldn't worry about them.