Cycles Development Updates

Goodbye n to the power of 1/2. If anyone needs cube root or beyond, change 1/2 to 1/3, 1/4, etc. Wasn't Fract already possible via modulo 1? More discoverable this way, I guess. Thanks, devs.
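For anyone who'd rather script it, a minimal sketch (assuming an existing node-based material named "Material"; only the exponent matters here):

```python
import bpy

# Set up a Math node as a cube root: Power with exponent 1/3.
# 1/2 gives the new Square Root, 1/4 the fourth root, and so on.
nodes = bpy.data.materials["Material"].node_tree.nodes
root = nodes.new('ShaderNodeMath')
root.operation = 'POWER'
root.inputs[1].default_value = 1.0 / 3.0
```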

It would be nice if the math nodes could handle RGB calculations. Right now, if you build a complex shader, you have to split all the calculations into the three R, G, and B channels. If the math nodes could handle this, the noodles would shrink to a third of their size, and there might be a slight boost in calculation time, not to mention easier bugfixing when building shaders.

That’s really great news - can’t wait to try it out. I love using procedural textures in my materials.

I’ll give it a try once the buildbot builds update (just downloaded the current 2.79 build - and it’s not in there yet)

This is what the mixRGB node is for. The blending modes (multiply, add, subtract, divide, etc…) are just math operations on the three color channels (make sure to set the factor to 1).
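To illustrate the point, here is the multiply blend written out by hand (a rough sketch of what the node computes per channel, not the actual Blender source):

```python
# MixRGB "Multiply": blend each channel toward a * b by the factor.
# With fac = 1.0 this is a plain per-channel multiplication.
def mix_multiply(color1, color2, fac=1.0):
    return [a + fac * (a * b - a) for a, b in zip(color1, color2)]

print(mix_multiply([0.5, 0.2, 1.0], [0.5, 0.5, 0.5]))  # [0.25, 0.1, 0.5]
```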

Kind of yes, but no. Have you tried to build a complex formula with the RGB color nodes, sqrt, etc.?

Your calculations must be in RGB from beginning to end. If any step along the way collapses the RGB into a grayscale value, all your calculations are useless.

I can only speak from my thin-film shader project, where I had to split all calculations into three RGB channels. I can't see how I could have done that with MixRGB in one noodle.

And of course, nice to see all the optimizations.

Edit: and since we're talking about optimizations, I have followed the Filmic gamma thread, and there was talk about ground-truth settings for lighting.
If you add a sun lamp to your scene, the default value is 1, right? OK, what does that mean? Is this the physically correct value for sunlight hitting a material surface on Earth?
I mean, there should be physically correct base settings, clearly readable in the UI.

The same goes for volume absorption: there should be a physically correct value implementation, for example the k value. Right now it's an approximation, which is not bad, but it needs to hold up to the eye. You can get the k value for most commonly used materials from the refractive index, from papers, and whatnot on the net.

It's not rocket science. Water at 520 nm, for example, has a k value of 0.000488 cm⁻¹; this means that per 1 cm of distance traveled, that fraction of the light gets absorbed, simple as that.
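That "absorbed per distance" behavior is just the Beer-Lambert law. A quick sketch with that water value (the distances are only examples):

```python
import math

k_cm = 0.000488      # absorption coefficient of water at 520 nm, 1/cm
k_m = k_cm * 100.0   # converted to 1/m, since 1 cm^-1 = 100 m^-1

# Beer-Lambert: fraction of light remaining after distance d (meters).
def transmittance(k, d):
    return math.exp(-k * d)

print(transmittance(k_m, 1.0))   # ~0.952 after 1 m of water
print(transmittance(k_m, 50.0))  # ~0.087 after 50 m
```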

I think if all the lighting, absorption, scattering, etc. were physically based and correct in all their values, we would get an even bigger push in terms of realism.

The same goes for scale and light. I have read somewhere that if you scale down your scene, the light scale stays in Blender units. I'm not sure about this one, but just saying: the scaling and the physical behavior must be consistent.

The units must be correct.

And this is no bashing, quite the opposite: I really like Blender and how it's growing. I want to help push it to get even better.

Dang, lol, I was going to add an example image to my last reply. I'll post the edit here since I don't want the pic to go to waste.

You can manually override the values in the sliders if you need to.

Naw, I haven't. I'm not a whiz when it comes to node composites (you sound like you're a lot more advanced than me). However, despite that, isn't this more a request for more blend options on the MixRGB node?

Each blend mode is just a math operation on the inputs. The mixRGB node is basically already what you are asking for. It just only supports those standard blend modes atm. I think it would make more sense to expand what operations you have available within that node. It’s at least worth requesting.

Man, I am really close to being able to answer that. I read a response from either Lukas or Brecht concerning the units Cycles uses. I just can't remember what the unit was called.

It is based on a widely used and physically correct unit, but it's a different unit from what a lot of other rendering engines use (it's not Blender units; I think it was related to light falloff within a meter). I'm sorry that I can't name the unit. I just can't remember where I read this or what it was called (I think there was a link to a wiki article on it too).

It's related to Cycles not being a spectral renderer. They can't expose a lot of the values mentioned in research papers. It would require changing how Cycles works from the ground up, plus I heard there is a performance penalty when a renderer is fully spectral.

Edit: Oh, and I forgot. We now have a Wavelength node. Can't you construct the equations you need with math nodes, then feed the result into this node before it's hooked up to a shader?
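Roughly like this, in script form (assuming a node material named "Material"; the Multiply node just stands in for whatever equation you build):

```python
import bpy

tree = bpy.data.materials["Material"].node_tree

calc = tree.nodes.new('ShaderNodeMath')      # placeholder for your equation
calc.operation = 'MULTIPLY'

wl = tree.nodes.new('ShaderNodeWavelength')  # wavelength (nm) -> color
tree.links.new(calc.outputs[0], wl.inputs['Wavelength'])
# wl.outputs['Color'] can then feed a shader's color input.
```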

Yep - if all of the mathematical operators available in the math node (e.g. power, logarithm, etc.) were added to the Mix RGB node, that would massively expand the possibilities.

I also think we should have a more general shader math node, which would allow more operations with shaders too (e.g. mix, add, divide, subtract, etc.). The shader Mix and Add nodes could then be removed.

Devs should have a look here for useful stuff to put into the math node, color mix node, and vector math node. Keep in mind this was way back around the year 2000; although it was a pure recursive raytracer, we had some better options, even if the "language" was node-based as a stack of nodes and as such didn't have graphical nodes as we see them today.

It would be useful if, e.g., the math operator colored the second input red when it isn't used, and for cases like the above it could take any input if you changed the type from Divide (with 1 in the upper slot) to Invert. Maybe buttons to increment/decrement the number of inputs, which could allow an Average operator? Furthermore, the recent 2.79 has two rows; why not separate them more (with headers) by type: arithmetic, comparison, trigonometry, exponentiation, complex numbers :), other useful functions? Modulo could have a new ModuloC(ontinuous) version, which is continuous across zero - see the sketch below.
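By "continuous across zero" I mean something like this (one common way to define it, not how Blender's current Modulo works):

```python
import math

# Wrapped modulo: always lands in [0, y), even for negative x,
# so the sawtooth it produces has no jump at zero.
def modulo_continuous(x, y):
    return ((x % y) + y) % y

print(math.fmod(-0.25, 1.0))          # -0.25 (truncated modulo, jumps at 0)
print(modulo_continuous(-0.25, 1.0))  #  0.75 (continuous sawtooth)
```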

Two of my most used custom math nodegroups are fLerp and fStepFunctions (linear, smoothstep, and smootherstep outputs for three inputs - the latter two being very useful for bump interpolation, where a continuous change in gradient looks far better than a fixed gradient). Sure, the ColorRamp node can give greater control over how the curve is blended, but it can't take any control inputs and as such is impossible to expose in a node group.
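For reference, the three curves such a group would output (a minimal sketch, assuming t is already clamped to 0…1):

```python
def lerp(a, b, t):                  # linear
    return a + t * (b - a)

def smoothstep(t):                  # 3t^2 - 2t^3, zero slope at both ends
    return t * t * (3.0 - 2.0 * t)

def smootherstep(t):                # 6t^5 - 15t^4 + 10t^3, zero slope and curvature
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)
```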

The same could be said for a custom mapping node with input sockets for everything, which could also output a mask slot if any min/max clipping is going on.

A four-input fValueNormalizer is certainly handy for making sense of some of the possibilities the Musgrave generator can output: Input Value, Measured Minimum, Measured Maximum, and a Clip Preview which, if enabled, will color n<0 black, n>1 white, and 0<n<1 grey. It's also a valuable general-purpose debugging tool for your nodes.
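In plain code, what that group boils down to (my own sketch of it, not a built-in):

```python
# Remap a measured range to 0..1; the clip preview is a debugging view
# that flags out-of-range values instead of returning the remapped number.
def normalize(value, measured_min, measured_max, clip_preview=False):
    n = (value - measured_min) / (measured_max - measured_min)
    if clip_preview:
        if n < 0.0:
            return 0.0   # below range -> black
        if n > 1.0:
            return 1.0   # above range -> white
        return 0.5       # in range    -> grey
    return n
```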

And then there is the custom nodegroup builder, which is very limited and could use some love. Allow us to use:
Checkboxes.
Radiobuttons.
Dropdown menus.
File and paths (for textures).
Interval ranges for value sliders.
Change slider behavior (value node vs bump strength node, which I often prefer).
Custom tooltips.
GUI text, aligners, spacers, dividers, and packers.
One line near the header to show (color-coded: green, yellow, red) info about complexity: number of nodes, groups, depth, and so on.

We’ll probably add a configurable noise basis to some texture nodes, but nothing else is planned to be ported directly.

The result of Fract is always positive in the range 0…1, which is not the case for modulo.
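A quick illustration of the difference:

```python
import math

x = -1.3
print(x - math.floor(x))   # Fract:  0.7, always in [0, 1)
print(math.fmod(x, 1.0))   # Modulo: -0.3, keeps the sign of the input
```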

I agree it would be convenient. In general it would be useful to support nodes that dynamically change socket type depending on what is connected.

The unit is 1/m (assuming 1 Blender unit = 1 m); it's physically correct as far as I know.

It’s W/m^2, and as far as I know quite standard among renderers. Which other unit did you see other renderers using?

This is not true; the choice of units exposed is not limited much by not being a spectral renderer. Generally we just made the choice of exposing the same types of units that you find in physically based production renderers like Arnold or PRMan, and less something like Maxwell or LuxRender, but we're open to having options on shader nodes to use different units.

The same goes for volume absorption: there should be a physically correct value implementation.

The unit is 1/m (assuming 1 Blender unit = 1 m); it's physically correct as far as I know.

And this is the thing: how should an artist know that 1 m units are used? So the best thing is to add a unit label to the input value, so every artist knows the absorption value is per 1 m.

Isn't this a bit rough, given that red light in water is absorbed after around 4 m? I have to test this. And now we come back to scaling: what happens if you scale the scene up or down, are the values still valid?

Usually most measurements are taken with 1 cm cuvettes. This means if you want a simple value for k, it's mostly measured per cm. But that's OK; we can multiply the per-cm value by 100 to get the per-meter value.

A physicist once told me: on every measurement, legend, or whatever, you have to state the unit used, otherwise no one knows what unit is meant.

I can't say it enough: labeling the units is so important. And keep it consistent under scaling, if that makes sense.

Edit: if a 1 m unit is used for absorption, what unit is used for density? I guess it's per meter too? But what does a density of 1.0 mean? If the density is fully opaque, is that 100, 10, or 1? What density percentage corresponds to what unit?

Wait, I guess it's k * density?
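Something like this, if my guess is right (purely my assumption of how the Volume Absorption node combines its inputs, not confirmed by the devs):

```python
# Guess: per-channel absorption coefficient (1/m) = (1 - color) * density,
# so brighter color channels survive longer and density scales everything.
def absorption_coefficient(color, density):
    return [(1.0 - c) * density for c in color]

print(absorption_coefficient([0.8, 0.9, 1.0], 2.0))  # [0.4, 0.2, 0.0]
```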

I said this based on a vague memory of a forum post I read somewhere. I thought you or Lukas made the statement, but I guess that might not have been the case.

The claim was that there is another unit other rendering engines use. Both units are physically based, but the difference in units is why we can't enter the same lamp intensity in Cycles that you would use in something like V-Ray. I'm going off vague memory here; it's possible I'm just misremembering what was said.

Oh, sorry if I misunderstood that. I thought the selection of inputs to work with was related to having to deal with color values instead of wavelengths.

In V-Ray and Corona there are many other units.
http://help.chaosgroup.com/vray/help/150R1_old/light_params.htm
https://coronarenderer.freshdesk.com/support/solutions/articles/5000516249-what-light-units-does-corona-use-
Those are sometimes handy when you have real-world data, and because some of them don't change intensity with lamp size.

Woah! Was not expecting that.:grin: I filed that in my “never gonna happen” directory long ago. Awesome! One less over-complicated node group that I need.

Between that and the AO node, what’s next? Bake/cache node? DirectX/OpenGL (-Y toggle) in the normal map node?

Not quite what you're asking for, but there has been a diff to make life a little easier with baking by introducing bake passes.

That BakePasses system looks interesting for sure. So a BakePass can store settings for a specific pass type per object? For example: the optional high-poly source object, an optional cage object, the destination image, samples, pass-specific things, etc.

Quite an improvement. I assume new pass types are going to be added in addition to the PBR ones? Curvature, ID, and similar?


As for nodes, how about a "Bake Pass" node: a node with a single input socket that takes anything, and a text field where you name your custom pass or override an existing pass.

The reason I mention overriding a pass type is all the talk of baking PBR maps. Where does the baker get each pass from? For example, a metallic pass would have to come from the Principled BSDF node, as it's the only one with a metallic input. In most of my projects I don't use the Principled BSDF node at all, I use a custom group, but I would still like to be able to bake out those same passes. I guess I am getting deep into the mud here…

Essentially if somebody links/appends my material and they put it on something and want to bake a metallic map, I should be able to specify what data is “metallic” because the baker has no way of knowing as I did not use a Principled BSDF. This way even though they used my material they would get an expected result rather than just black.


Also on the subject of baking: I would love to see a pure Python API for baking that avoids the operator, the UI state, and the user bake settings. A single function that a script/addon writer can call to bake with their own settings. Pass the bake settings to a bake function as a dict, where you specify the usual stuff: high_mesh, low_mesh, cage_mesh, uv, material (override), camera_settings (for view-dependent inputs), pass_type (normal, curvature, ao, roughness, custom, etc.), samples, and so on - something like the sketch below.
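A hypothetical sketch of what I mean (none of these names exist in bpy today; they just illustrate the dict-of-settings idea):

```python
# Proposed, not real: a single bake call driven entirely by its arguments,
# ignoring the UI state and the user's bake panel settings.
settings = {
    "low_mesh": "Body_low",
    "high_mesh": "Body_high",
    "cage_mesh": "Body_cage",
    "pass_type": "NORMAL",
    "samples": 2,
    "image_size": (2048, 2048),
}
image = bake(settings)  # would return a bpy.types.Image
```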

Yeah, exactly - like a Renderlayer, a Bake pass contains the settings for running that particular pass. When you run the bake operator on a specific object, all bake passes are executed.
The idea is that you can set up e.g. Color, Normal, Roughness and AO passes and then bake or re-bake all of them with one click.

As for the node, that’s an interesting idea! Now that the output node has a usage enum anyways, it’s absolutely possible to add a “Bake” option to the output node.
Edit: This also would solve the issue of having to switch between a shader for baking and one that uses the bake result - just plug e.g. a Diffuse BSDF into the Bake output and the bake result texture into the Render output.

Regarding API: Hm, tricky. I see why it would be great, but one of the goals of the new system is to let render engines define their own options, and adding parameters to the bake call wouldn’t really work. However, at least the bake pass system removes the need to mess with the UI state - you can just add a bake pass, set all the settings, run the operator and then delete the bake pass.

Yeah, that's why I specified just passing a dict for the options; there are too many options to have individual args for the function. Settings not specified in the dict get a reasonable default, and if someone puts a setting in the dict that the render engine does not use or understand, it can just ignore it, or warn, or both. A script writer would have to know the options for a specific engine (preferably via the docs of the bake passes, plus universal API options such as image size, etc.).

Maybe a few different functions for the desired type of output: one returning a new or existing bpy.types.Image, one that writes out to a file in a specific format, and one that just returns the raw pixel data as either a byte or float array (RGB or grayscale).

Does this include only the settings from the current 'Bake' panel, or possibly certain related settings (which afaik live in the scene and thus might be bound to it), mainly the number of samples used? It would be handy to be able to specify something low like 2 SPP for baking a normal map, while ideally leaving the 'global' (scene-wide) sampling settings (as used for F12) untouched.
Or is this going too far for the current design, and maybe rather something to be expected with 2.8's overrides?

greetings, Kologe

I downloaded the latest build, but that version doesn't contain the exponent value for Manhattan. In fact, the only option that has the exponent value exposed is Minkowski.

(Incidentally, Minkowski with an exponent of 1 looks identical to Manhattan, and with an exponent of 30 it looks identical to Chebyshev.) It therefore seems that both Manhattan and Chebyshev are redundant options, since both results can be achieved by using Minkowski with different exponents.
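A quick numeric check of that observation (just illustrative points, not Blender code):

```python
# Minkowski distance with exponent e; e = 1 is Manhattan, and as e grows
# it approaches Chebyshev (the maximum per-axis difference).
def minkowski(a, b, e):
    return sum(abs(x - y) ** e for x, y in zip(a, b)) ** (1.0 / e)

p, q = (0.1, 0.7, 0.3), (0.9, 0.2, 0.5)
print(minkowski(p, q, 1))                      # 1.5, the Manhattan distance
print(minkowski(p, q, 30))                     # ~0.8, nearly Chebyshev
print(max(abs(x - y) for x, y in zip(p, q)))   # 0.8, the Chebyshev distance
```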

Is this still a work in progress?