Cycles Development Updates

What about treating the data like bitmaps/images etc.? Their datablocks could be used to get the min/max for the function.
Something simple like this?

In theory you could feed in every datablock you want: first get the min and max values from the datablock, and then run the function.
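Roughly like this, as a Python sketch of the idea (just plain min/max normalization over whatever values the datablock holds - the names here are my own, not an actual Blender API):

```python
# Rough sketch: min/max normalization of a datablock's values.
# This assumes you can read all the values up front (true for an image
# datablock, but not for a procedural texture evaluated per sample).
def normalize_values(values):
    """Remap a list of floats into the 0-1 range using their own min/max."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # avoid division by zero on flat data
    return [(v - lo) / (hi - lo) for v in values]

# Example with made-up pixel values:
print(normalize_values([0.2, 1.7, -0.4, 3.0]))  # [0.176..., 0.617..., 0.0, 1.0]
```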

But when path tracing, 'every datablock' is millions of paths, bouncing millions of times, shaded by any possible node tree. You'd essentially have to render the entire scene, then do the normalization, and even then it wouldn't work, since the shaders that use that normalization function would themselves need to contribute to the scene.

As far as I understand it, normalizing is functionally impossible in a path tracer's shader tree.

Hm, not sure about this.

I have another idea: a normalize function with input value slots, like in the other math nodes, where you could put in whatever min and max values you want. That way you don't need a datablock and you could normalize it right away with the simple formula.

You only have to get the values with the Node Wrangler preview etc.

Sounds like @Brecht is aware of it. This is the Map Range node he referenced in post #300 (Cycles Development Updates - #303 by brecht):

You can set the max and min, for both the input and the output.
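In the linear case that remap is just a simple formula (a sketch assuming linear interpolation, which as far as I know is what the Map Range node does in its default mode):

```python
def map_range(value, from_min, from_max, to_min, to_max):
    """Linear remap, the same basic idea as the Map Range node."""
    factor = (value - from_min) / (from_max - from_min)
    return to_min + factor * (to_max - to_min)

# e.g. remap a value from a guessed -1..3 range into 0..1:
print(map_range(1.0, -1.0, 3.0, 0.0, 1.0))  # 0.5
```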

Yes, nice. This should work for most cases, if you know the min/max values.

Brecht, I noticed that when I try to mix two SSS shaders (Random Walk with the other one) the end result tends to be too dark, much darker than each of the shaders separately or their average. Is it a bug, or a to-do thing?

Mixing the shaders will work properly if you use Branched Path Tracing instead.

Though Brecht wants to unify the integrators eventually. When that happens, it should hopefully not include that energy-loss issue (which also occurs when two Random Walk shader nodes are mixed together).

Thanks for the tip. Will it still work with 1 sample per branch? While I use BPT quite often, since it converges faster with fewer spp, it tends to slow down a lot as the scene gets very complex. My Seahorse scene won't even start rendering with BPT; it gives a CUDA error right away.

Surely that doesn't matter - at least for value or colour inputs.

All you do is take the maximum input value (x) - perform the calculation y=1/x - then multiply the entire input by y.

You then end up with all input values normalised in the range 0-1.
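In code form that suggestion is roughly this (note that divide-by-max only lands everything in 0-1 if the input has no negative values):

```python
def normalize_by_max(values):
    """Scale values so the largest one becomes 1.0 (divide by the maximum)."""
    peak = max(values)
    if peak == 0:
        return list(values)
    scale = 1.0 / peak  # the 'y = 1/x' step
    return [v * scale for v in values]

print(normalize_by_max([0.5, 2.0, 4.0]))  # [0.125, 0.5, 1.0]
```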

I'm pretty sure it wouldn't be that easy for any type of data that isn't an image.

In an image file, all of the values are fixed and you can get the brightest pixel. Procedural textures, meanwhile, are generated with math and have effectively infinite resolution. Then there is the fact that the data could've been operated on and mixed with other data before it gets to any normalize node.

I'm pretty sure we can count on Brecht to have enough in-depth knowledge to determine the viability of things.

In my own custom normalize I have measured min, measured max, and preview on/off (I don't have an output scaler, but it would probably be nice). When preview is enabled, output < 0 is black, output > 1 is white, and anything in between is gray. Preview is very important when tweaking or even when bugfixing a complex node setup.
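For what it's worth, the preview logic is roughly this (a sketch with my own names, not the actual node code):

```python
def preview_color(value):
    """Debug preview: below 0 -> black, above 1 -> white, in range -> gray."""
    if value < 0.0:
        return (0.0, 0.0, 0.0)
    if value > 1.0:
        return (1.0, 1.0, 1.0)
    return (0.5, 0.5, 0.5)

def normalize_measured(value, measured_min, measured_max, preview=False):
    """Remap using measured min/max; optionally return the preview colour instead."""
    t = (value - measured_min) / (measured_max - measured_min)
    return preview_color(t) if preview else t
```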

Whilst that is true - a procedural texture ultimately outputs either colour data or a fac.

At the point it does this, you know what the output value is and hence can normalise it prior to passing it to another node (or you could pass it through a node that performs the normalisation - like the math or colour ramp nodes).

That would be good as it would make it backwards compatible with the old Voronoi node.

However, another solution would be to remove the apparent clamping that is happening in the colour ramp node (or make it a tick box option like in other nodes).

This might actually be more useful since it's not intuitive that the colour ramp node should clamp the input value. The fact that it appears to would also affect other inputs - some of which might legitimately be outside the 0-1 range.

Until this issue came up, I was always under the impression the colour ramp node was unclamped and simply remapped any input values to the specified colour ramp.
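To illustrate the difference (a sketch, not the actual node code - just the clamped vs. unclamped remap in isolation):

```python
def remap(value, in_min, in_max, clamp=False):
    """Remap value from in_min..in_max to 0..1, optionally clamping the result."""
    t = (value - in_min) / (in_max - in_min)
    return min(max(t, 0.0), 1.0) if clamp else t

print(remap(5.0, 0.0, 2.0))              # 2.5 - unclamped, overshoot is kept
print(remap(5.0, 0.0, 2.0, clamp=True))  # 1.0 - clamped, like the colour ramp input
```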

It would still depend on zoom level/locality of the texture; adjust the scale on a rectangle (which could differ on different geometries) and you could get new values that clip. Using Musgrave in the visible output range, I'm always forced to scale it using a custom normalize node.

Using the Brightness/Contrast node (with contrast set to -0.75) works too.

As for the Color Ramp node no longer being clamped, it needs to be optional (sometimes, you want to ensure an output is clamped for shading and texturing purposes).

Maybe sometimes, but rarely in my experience. Wild tweaking, because you don't know what the settings actually do, can sometimes produce differences in the tens of thousands range.

Also, Musgrave has values < 0 that you can't adjust with Brightness/Contrast.

I'm not sure I understand.

When you pass a noodle from a texture node - you know what all the values are that are being passed down the noodle. Surely it would be trivial at this point to read the min and max values - then normalise these to the range 0-1 - and interpolate all values in between. This is essentially what the color ramp and RGB curve nodes do already (except the colour ramp is clamping the input).

I think it's calculated per "raycast", so you can't really predict what the max value is going to be without a performance hit from precalculating the texture. As Ace already mentioned - if it was that easy, surely Brecht would know it.
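Something like that precalculation for a procedural would amount to brute-force sampling to estimate the range, which is extra work and still only an estimate (the texture function here is a stand-in, not a real Cycles call):

```python
import random

def estimate_range(texture_fn, samples=100_000):
    """Estimate a procedural texture's min/max by sampling it at random points."""
    lo, hi = float("inf"), float("-inf")
    for _ in range(samples):
        p = (random.random(), random.random(), random.random())
        v = texture_fn(p)
        lo, hi = min(lo, v), max(hi, v)
    return lo, hi

# Stand-in for evaluating e.g. a Musgrave texture at a point:
print(estimate_range(lambda p: p[0] * 3.0 - 1.0))  # roughly (-1.0, 2.0)
```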

If this were true - the output of all procedurals would be unpredictable, would it not?

How can we perform predictable mathematical operations on procedural textures if they are subject to change each time you render?

For example - if you use the "greater than" math operator on, say, the Voronoi texture - you would expect the observed black and white blotch pattern to change depending on the lighting, camera angle, object geometry, zoom level etc.

I haven't observed this behaviour (or if it does occur, it's so minor it's unnoticeable).

These two cubes have the same material applied - but at different scales. The patterns are identical as far as I can tell.