Randomizing texture mapping "location" for emitter objects with a procedural material

NOTE/UPDATE:
This has been partially solved (see the discussion below). However, my goal was to apply this kind of randomness to Cycles displacement. Based on the discussion below, I think what I’m trying to do is impossible, but I have decided to move that question to a separate thread in order to hear all the possible solutions. Here is that thread:

Original Post:
I am trying to use an emitter to create a field of objects (in this case bumpy rocks) with a procedural texture where each object has a randomized bumpy pattern.
Ideally, I would like to use the “Random” output of the Object Info node to offset the generated texture coordinates via the Mapping node’s location.
That per-object random value would make each emitted object intersect the procedural texture at a different location (so to speak), so all the rocks would look unique.

Unfortunately I can’t seem to figure out how to do this in the node editor.

I do not want to use world space coordinates as that would mean the objects could not move without changing appearance.
I am not a pro at Python or drivers, but I would prefer a driver-based solution if given the option. At this point, however, any adequate solution would do.
Thanks in advance for the help!!!

Well… since the Mapping node doesn’t allow you to plug values into its scale/rotation/translation, you need to transform the coordinate system ‘by hand’…

Changing the location is just a matter of adding a new vector to the original coordinate system. For example, if the new vector has an X component of 10 (assume 0 on Y and Z, for simplicity), adding it to some coordinate system moves everything 10 units in that system’s X direction.
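In case it helps, here is a minimal Python sketch of that idea (the material name “RockMaterial” is just a placeholder, and the material is assumed to already use nodes). It adds the per-instance Random value to the Object texture coordinates before they reach the procedural texture:

```python
import bpy

# "RockMaterial" is a hypothetical material name -- use your own.
mat = bpy.data.materials["RockMaterial"]
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex_coord = nodes.new("ShaderNodeTexCoord")
obj_info = nodes.new("ShaderNodeObjectInfo")

# Vector Math set to Add: offsets the object-space coordinates by the
# per-instance Random value (the float is implicitly spread to a vector).
offset = nodes.new("ShaderNodeVectorMath")
offset.operation = 'ADD'
links.new(tex_coord.outputs["Object"], offset.inputs[0])
links.new(obj_info.outputs["Random"], offset.inputs[1])

# Feed the shifted coordinates into the procedural texture.
noise = nodes.new("ShaderNodeTexNoise")
links.new(offset.outputs["Vector"], noise.inputs["Vector"])
```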


Thanks so much, Secrop, for your answer! I was also told about another approach which is basically quite similar (and surprising to me): apparently you can connect the Random value to the mix factor of a MixRGB node and then, instead of colors, plug in two different Mapping nodes with different rotations or positions (credit to Duarte Farrajota Ramos for this great idea).
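For reference, here is a rough sketch of that Mix-based variant (again with a hypothetical material name; it only blends between two mapping variants, and each Mapping node’s offset/rotation is assumed to be configured separately):

```python
import bpy

mat = bpy.data.materials["RockMaterial"]  # hypothetical material name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex_coord = nodes.new("ShaderNodeTexCoord")
obj_info = nodes.new("ShaderNodeObjectInfo")

# Two Mapping nodes with different offsets/rotations (set them in the UI
# or via their properties).
map_a = nodes.new("ShaderNodeMapping")
map_b = nodes.new("ShaderNodeMapping")

# Vectors pass through the MixRGB color sockets just fine.
mix = nodes.new("ShaderNodeMixRGB")

links.new(tex_coord.outputs["Object"], map_a.inputs["Vector"])
links.new(tex_coord.outputs["Object"], map_b.inputs["Vector"])
links.new(obj_info.outputs["Random"], mix.inputs["Fac"])
links.new(map_a.outputs["Vector"], mix.inputs["Color1"])
links.new(map_b.outputs["Vector"], mix.inputs["Color2"])

noise = nodes.new("ShaderNodeTexNoise")
links.new(mix.outputs["Color"], noise.inputs["Vector"])
```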

However, I have discovered a new problem that means this ultimately does not work for one particular use: displacement. These methods work when I feed the result into a diffuse shader’s color, but I see no variation at all when I plug the result into the displacement output with true displacement enabled.

I’m using Cycles microdisplacement. It’s very curious: I have a procedural height map that includes the random element. When I feed it into a shader’s color input, I can see variation on each emitted object. However, when I plug the same procedural grayscale map into the displacement output, I get true displacement but the bumps are identical on every object.

If this can be solved it would be quite a game changer in my opinion.

Unfortunately that option is not available, and it won’t be for quite some time (probably not even in 2.81).

The problem is that geometry and shaders are stored in different parts of the renderer… While the shader is evaluated whenever it is needed (when a ray hits an object with that shader), the geometry is permanent and is stored in the BVH tree with the displacement already applied, so all instances of the same geometry share the same displacement information.

Hey Secrop, thank you for your reply. That is quite a bummer, as my hope was to use the particle emitter with randomized Cycles microdisplacement across objects. I think that would be pretty amazing. I wonder if there is some kind of workaround or script that could make this option available, but from what you’re saying it looks impossible. I guess one brute-force workaround is to “convert” the particle system into a field of individual objects. After that, Blender does appear to randomize them all individually, but at quite a cost in render speed. I would expect a render-speed hit, of course, but I suppose this workaround is additionally inefficient for other reasons. Anyway, I’m all ears if anyone can think of a better workaround.
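If it helps anyone, here is a minimal sketch of that brute-force conversion, assuming the emitter with the particle system is the selected, active object and its particles render object instances:

```python
import bpy

# Turn every particle instance into a real, independent object.
bpy.ops.object.duplicates_make_real()

# Each resulting object now has its own Object Info > Random value,
# so the microdisplacement can differ per rock -- at the cost of memory
# and a much larger BVH, as discussed below.
```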

In this case, and for the moment, converting all the particles to real objects is the only solution.
Render times won’t change very much…

The problem is building the BVH tree and its final size. Every instance must be stored individually, and with microdisplacement we’re talking about meshes that can hold far more information than the original one… If there’s enough RAM for the scene and the converted particles, then go for it. Animations will suffer a bit, as the BVH tree must be rebuilt, unless everything is static, in which case you can cache the BVH tree.

Perhaps a more efficient way to apply displacement at shader evaluation could be found, but implementing it would still require lots of work and creativity.

Well… since the Mapping node doesn’t allow you to plug values into its scale/rotation/translation,

Someone fixed that and I abuse this rather heavily.

:slight_smile:

It’s not difficult to do the transformations with the nodes that come with Cycles. Translating and scaling are as simple as adding and multiplying, and rotations are a mix of both.

And it would be even easier if we had matrices in SVM nodes. :wink:

Ehh, huh? What would that setup look like? I’m using trig functions for my rotations.

@sirmaxim
Ooooo that’s really cool! Thanks for sharing! :smiley:

I’m not saying that sines and cosines are not involved, but at the operation level one uses only additions and multiplications (that’s how transformation matrices work). And even sines and cosines can be calculated with successive additions and multiplications. Funny math world. :stuck_out_tongue:
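To make that concrete, here is a tiny plain-Python sketch (not nodes, and the 30° angle is just an illustrative value) of a Z-axis rotation written purely as multiplications and additions; each Math node in a Cycles node chain would perform one of these steps:

```python
from math import cos, sin, radians

# Precompute the sine and cosine of the rotation angle.
angle = radians(30.0)
c, s = cos(angle), sin(angle)

def rotate_z(x, y, z):
    # Each output component is a sum of products of the inputs
    # (a subtraction is just adding a negated product).
    return (x * c - y * s,
            x * s + y * c,
            z)

print(rotate_z(1.0, 0.0, 0.0))  # -> (0.866..., 0.5, 0.0)
```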