Cycles Development Updates

(pixelgrip) #307

Hm, not sure about this.

I have another idea: a normalize function with input value slots, like in the other math nodes, where you could put in the min and max values you want yourself. This way you don't need a data block and you could normalize it right away, with the simple formula.

You would only have to get the values with the Node Wrangler preview, etc.


(SterlingRoth) #308

Sounds like @Brecht is aware of it. This is the Map Range node he referenced in post #300.

You can set the min and max for both the input and the output.
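For anyone who wants the underlying math: a map range is presumably just a linear remap, along the lines of this Python sketch (an illustration only, not the actual Cycles code):

    def map_range(x, from_min, from_max, to_min, to_max):
        """Linearly remap x from [from_min, from_max] to [to_min, to_max]."""
        t = (x - from_min) / (from_max - from_min)  # 0-1 position within the input range
        return to_min + t * (to_max - to_min)

    # e.g. squeeze a value from a 0-2 range down to 0-1:
    map_range(1.3, 0.0, 2.0, 0.0, 1.0)  # -> 0.65

With the output range left at 0-1, this reduces to the plain (x - min) / (max - min) normalization pixelgrip described above.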


(pixelgrip) #309

Yes, nice. This should work for most cases, if you know the min/max values.


(BlackRainbow) #310

Brecht, I noticed that when I try to mix two SSS shaders (Random Walk with the other one), the end result tends to come out too dark, much darker than each of the shaders separately or their average. Is it a bug, or a to-do thing?


(Ace Dragon) #311

Mixing the shaders will work properly if you use Branched Path Tracing instead.

Brecht wants to unify the integrators eventually, though. When that happens, it should hopefully not carry over this energy-loss issue (which also occurs when two Random Walk shader nodes are mixed together).


(BlackRainbow) #312

Thanks for the tip. Will it still work with 1 sample per branch? While I use BPT quite often, since it converges faster with fewer spp, it tends to slow down a lot as the scene gets very complex. My Seahorse scene won't even start rendering with BPT; it gives a CUDA error right away.


(moony) #313

Surely that doesn’t matter - at least for value or colour inputs.

All you do is take the maximum input value (x), perform the calculation y = 1/x, then multiply the entire input by y.

You then end up with all input values normalised in the range 0-1.
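In code terms, that reciprocal-of-the-maximum scaling would be something like this sketch (my own illustration):

    def scale_to_unit(values):
        """Scale values so the maximum input becomes 1.0 (the y = 1/x idea)."""
        y = 1.0 / max(values)           # reciprocal of the maximum input value
        return [v * y for v in values]  # multiply the entire input by y

Note that this only pins the maximum at 1.0; a negative minimum (as Musgrave can produce) would stay negative rather than being lifted to 0.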


(Ace Dragon) #314

I’m pretty sure it wouldn’t be that easy for any type of data that isn’t an image.

In an image file, all of the values are fixed, so you can find the brightest pixel. Procedural textures, meanwhile, are generated from math and have effectively infinite resolution. On top of that, the data could have been operated on and mixed with other data before it ever reaches a normalize node.

I’m pretty sure we can count on Brecht to have enough in-depth knowledge to determine the viability of things.


(CarlG) #315

In my own custom normalize node I have measured min, measured max, and a preview on/off toggle (I don't have an output scaler, but one would probably be nice). When preview is enabled, output < 0 is black, output > 1 is white, and anything in between is gray. Preview is very important when tweaking, or even when bugfixing a complex node setup.
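In pseudo-Python, such a preview mode behaves something like this (a hypothetical sketch, not the actual node setup):

    def preview(value):
        """Diagnostic view: black below 0, white above 1, flat gray in between."""
        if value < 0.0:
            return 0.0  # out of range low -> black
        if value > 1.0:
            return 1.0  # out of range high -> white
        return 0.5      # within 0-1 -> gray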


(moony) #316

Whilst that is true, a procedural texture ultimately outputs either colour data or a fac.

At the point it does this, you know what the output value is and hence can normalise it prior to passing it to another node (or you could pass it through a node that performs the normalisation, like the math or colour ramp nodes).


(moony) #317

That would be good as it would make it backwards compatible with the old Voronoi node.

However, another solution would be to remove the apparent clamping that is happening in the colour ramp node (or make it a tick-box option like in other nodes).

This might actually be more useful, since it's not intuitive that the colour ramp node should clamp the input value. The fact that it appears to do so would also affect other inputs, some of which might legitimately be outside the 0-1 range.

Until this issue came up, I was always under the impression that the colour ramp node was unclamped and simply remapped any input values to the specified colour ramp.


(CarlG) #318

It would still depend on zoom level/locality of the texture; adjust the scale on a rectangle (which could differ on different geometries) and you could get new values that clip. Using Musgrave in the visible output range, I'm always forced to scale it with a custom normalize node.


(Ace Dragon) #319

Using the Brightness/Contrast node (with contrast set to -0.75) works too.

As for the Color Ramp node no longer being clamped, it needs to be optional (sometimes, you want to ensure an output is clamped for shading and texturing purposes).
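For context, as far as I understand the Cycles kernel (treat this as an approximation, not the authoritative code), Brightness/Contrast is a linear transform clamped at zero:

    def brightness_contrast(value, brightness, contrast):
        """Approximate Brightness/Contrast behavior (assumed, not verified against the source)."""
        a = 1.0 + contrast               # contrast = -0.75 gives a 0.25x slope
        b = brightness - contrast * 0.5  # offset keeps mid-gray roughly in place
        return max(a * value + b, 0.0)   # final clamp at 0 discards negative inputs

That final clamp is also why negative values, such as Musgrave's, can't be recovered this way.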


(CarlG) #320

Maybe sometimes, but rarely in my experience. Wild tweaking, because you don’t know what the settings actually do, can sometimes produce differences in the tens of thousands range.


(Simon Storl-Schulke) #321

Also, Musgrave has values < 0 that you can't adjust with Brightness/Contrast.


(moony) #322

I’m not sure I understand.

When you pass a noodle from a texture node, you know what all the values are that are being passed down the noodle. Surely it would be trivial at this point to read the min and max values, normalise these to the range 0-1, and interpolate all values in between. This is essentially what the colour ramp and RGB curve nodes do already (except the colour ramp is clamping the input).
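To make the clamping distinction concrete, a sketch (my own illustration) of the two behaviours:

    def ramp_lookup_clamped(x, vmin, vmax):
        """Colour-ramp-style: the input is clamped before the remap."""
        x = min(max(x, vmin), vmax)
        return (x - vmin) / (vmax - vmin)

    def remap_unclamped(x, vmin, vmax):
        """Pure remap: out-of-range inputs land outside 0-1 instead of saturating."""
        return (x - vmin) / (vmax - vmin)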


(Simon Storl-Schulke) #323

I think it's calculated per "raycast", so you can't really predict what the max value is going to be without a performance hit from precalculating the texture. As Ace already mentioned, if it were that easy, surely Brecht would know it.


(moony) #324

If this were true, the output of all procedurals would be unpredictable, would it not?

How could we perform predictable mathematical operations on procedural textures if they were subject to change each time you render?

For example, if you use the "greater than" math operator on, say, the Voronoi texture, you would expect the observed black-and-white blotch pattern to change depending on the lighting, camera angle, object geometry, zoom level, etc.

I haven't observed this behaviour (or if it does occur, it's so minor that it's unnoticeable).

These two cubes have the same material applied but are at different scales. The patterns are identical as far as I can tell.


(Ace Dragon) #325

I think the better way would be to tweak the actual mathematical algorithms used to create the procedurals rather than trying to implement a normalize node to be placed afterward.

Sure, the algorithms are predictable, but how do you pass that on to every other node further down the tree without making the code far more complex?


(LazyDodo) #326

I got curious about why there would be a difference between 2.79 and master and ran some tests; the behavior is actually identical between the versions.

But it has to be a fair comparison: setting the distance metric to Distance gives identical results (with output values theoretically between 0 and sqrt(2); going over 1.0 is rare, but the math does allow for it). In the screenshot you posted, you used Manhattan distance, which allows values between 0.0 and 2.0, hence it's easier to see the clipping happening.

It's relatively easy to determine the worst-case scenario and normalize the outputs by the looks of it, but the results will be darker than they were in 2.79, so it's ultimately up to @brecht to decide whether breaking backwards compatibility here is worth it.
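A sketch of what that worst-case normalization could look like (metric maxima as stated above; an illustration, not an actual patch):

    import math

    # Theoretical worst-case output per distance metric (see above).
    METRIC_MAX = {
        "distance": math.sqrt(2.0),  # Euclidean
        "manhattan": 2.0,
    }

    def normalize_voronoi(value, metric):
        """Divide by the metric's theoretical maximum so the output stays within 0-1."""
        return value / METRIC_MAX[metric]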
