Procedural with Texture data

Hi everybody!
I have been working on a really cool way of making particles cheap. My approach is to take an image of any size and convert it to a pixelated map. Each pixel then becomes the basis for a procedural texture, which can also modify its local behaviour with the color data (!).

So, an easy example: think of a star-sky map, but instead of a pixelated star, each star will render as a procedural sphere of INFINITE resolution :slight_smile:
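To make the idea concrete, here is a minimal Python sketch (not the actual node setup or OSL shader from this thread; all names are made up) of sampling a tile map where each tile's color drives a procedural element, in this case a simple disc whose radius depends on the tile's brightness:

```python
import math

def make_star_shader(tile_colors, tiles_x, tiles_y):
    """tile_colors: dict (tx, ty) -> (r, g, b), e.g. from a downsampled image.
    Returns a function sampling (u, v) in [0,1)^2 at any resolution."""
    def sample(u, v):
        tx, ty = int(u * tiles_x), int(v * tiles_y)
        # Local coordinates inside the tile, in [0, 1).
        lu, lv = u * tiles_x - tx, v * tiles_y - ty
        r, g, b = tile_colors.get((tx, ty), (0.0, 0.0, 0.0))
        # The tile's brightness drives the star radius (its "behaviour").
        radius = 0.5 * (r + g + b) / 3.0
        d = math.hypot(lu - 0.5, lv - 0.5)
        return (r, g, b) if d < radius else (0.0, 0.0, 0.0)
    return sample

shader = make_star_shader({(0, 0): (1.0, 1.0, 1.0)}, 2, 2)
print(shader(0.25, 0.25))  # centre of the bright tile -> the tile's color
print(shader(0.75, 0.75))  # empty tile -> background
```

Because `sample` is evaluated per coordinate rather than per pixel, the disc has no fixed resolution, which is the point of the technique.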

I have managed to divide the image data and apply the procedural sphere; however, when I try to get some random noise on each tile, it gets messy.

My first approach was to use OSL to write a shader. After some coding I realised that this could be done with nodes instead, which lets it render on the GPU.

This is what I have so far, but as I said, I can't get the noise at each tile's y-position to match that of each sphere.

The first cube has a material with the node version (tiles set to 10x10), with semi-transparent tiles so that you can see how they almost align.
The second version has 100 tiles or so.
The last version is the same approach with OSL (my first attempt).

Why not just blur the image and let a random star pattern get its color from it?

  • My plan is to use each pixelated tile's color value to determine the whole procedural point's (aka star's) behaviour. This could also be used to create some really cool semi-procedural materials that take some data from an image map.

Any ideas, or has anyone seen something similar done?

I’ve an OSL shader that does something similar to this, but it draws bitmaps randomly across the UV area.
My approach is similar to the Worley algorithm (look up the neighbour cells to check whether the elements there occupy the coordinates being sampled), though I had to change the routine to have a depth layer, and a density per cell…
It’s very, very slow, but here are my results:
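For anyone curious, the neighbour-cell lookup can be sketched roughly like this in Python (hypothetical names, and a plain 2D version without the depth layer or per-cell bitmaps; the actual shader is OSL):

```python
import math, random

def cell_points(cx, cy, density=3):
    """Deterministic pseudo-random feature points for one integer cell."""
    rng = random.Random(hash((cx, cy)))
    return [(cx + rng.random(), cy + rng.random()) for _ in range(density)]

def worley(x, y, density=3):
    """Distance to the nearest feature point, scanning the 3x3 neighbour
    cells so elements from adjacent cells are not missed."""
    cx, cy = math.floor(x), math.floor(y)
    best = float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for px, py in cell_points(cx + dx, cy + dy, density):
                best = min(best, math.hypot(x - px, y - py))
    return best

print(worley(2.3, 4.7))
```

The `density` parameter corresponds to the "density per cell" change mentioned above: each cell simply seeds more than one feature point.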

Wow, that’s really awesome. In a way the opposite of what I am trying to achieve. Great idea about the Worley algorithm. Maybe I can use that somehow instead.

Great idea about the Voronoi. I have now successfully created a mosaic node setup that divides an image into arbitrary Voronoi tiles. No OSL.

This will be the basis data of each cell instance. So I managed to get this result:

and with some lines so that you can see that there is no spill between the cells. The source for the image is a texture.

Pretty cool IMHO. Have you seen anyone do this before? If so, is there a file? My current problem is that as soon as I scale or rotate the object, everything gets messed up.

Does anyone know a way to “lock” the normal data so that regardless of how I rotate the object, I get the same color as the middle one (output color is set to emission, so no shadow or other shading should be seen)? I want to use the normal data as a color-input map to my mosaic part above, and if this changes when I rotate the object, well, then the mapped image will move :slight_smile:

Seeing as you are using “generated” coordinates…

use Object coordinates instead.

In OSL, using the voronoi() function, if you divide pa[0] by the scale factor and use it as the vector for the bitmap, you get exactly that effect. And it’s not view dependent, unlike your setup with the bump node.

My initial approach was to use OSL (to solve a bigger quest), but I found the image handling (paths) cumbersome and not very flexible.
Reading image data in OSL is cumbersome because you have to give a path to the image (and it does not accept relative paths on Mac), which makes it inflexible. Feeding an image color as input does not let you sample anywhere on the image except at that one point, so you need to have the sampling logic outside the OSL script (as input to the image).

I can't get the object coordinates to work, unless someone knows how to do that effectively in the example above.

I guess I'll write an OSL shader that produces this Voronoi bump, which can be used as lookup data for an image.

EDIT: I would really prefer not to use OSL, since Blender becomes a lot slower, and other node things do not work when using OSL. EDIT2: just a bug, the other shader started working again. EDIT3: NOOO, I forgot that OSL won't work with GPU rendering, which improves viewport render times A LOT.

Okay - programming an OSL BW -> heightmap shader was not as easy as I thought. Any ideas on how to program an OSL Voronoi BW -> normal-map shader?

I guess one has to sample every point's surroundings to determine its slope, and then color it with an offset in the direction of the slope…
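That sampling idea can be sketched in Python with finite differences (made-up names; the actual shader would be OSL or nodes): sample the height field to the right of and below the point, and turn the two deltas into a normal-map color.

```python
def normal_from_height(height, u, v, step=0.01, strength=1.0):
    """Approximate a normal-map color by finite differences:
    sample the height field right of and below (u, v), take the deltas."""
    h = height(u, v)
    dx = (h - height(u + step, v)) * strength  # slope in x
    dy = (h - height(u, v + step)) * strength  # slope in y
    # Remap slopes from [-1, 1] to the usual [0, 1] normal-map range.
    return ((dx + 1) * 0.5, (dy + 1) * 0.5, 1.0)

# A flat height field yields the neutral normal-map color.
print(normal_from_height(lambda u, v: 0.0, 0.3, 0.3))
```

A smaller `step` gives a sharper estimate but is more sensitive to noise in the height data, which matches the fiddly parameter tuning described later in the thread.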

You can use the BW output of a script in conjunction with the Bump node… But this requires an extra step:

Derivatives (slopes) in the bump node are calculated by shifting the coordinate system a bit. But the coordinates in OSL are fixed; they are given to the shader by value, not by reference, and the coordinate shifting is not reflected in the OSL globals. The solution is to feed the script with coordinates from Cycles nodes instead of using OSL global variables.

For example, if your shader has something like ‘vector loc = P’, you still need to plug in the Position vector from outside the script (i.e. from ‘Geometry::Position’) to get the bump node to work.

OSL has some derivative functions one can use [Dx(), Dy(), etc.], but they are quite complicated to deal with, since they work in camera space and depend on the size of the area being sampled. (And the bump() function is only a decoration; it does not work at all!)

Yay. Almost succeeded. Mine is on the right, the Blender bump map on the left.

As you can see, it looks like I interpret the data linearly, whereas Blender treats height exponentially (my guess).

Going to play around until I solve this.

Remember a few things:
shading is also based on the light sources' ray paths,
so if you rotate the object, it will affect the node setup too.

Otherwise you are in a no-shadow situation, which becomes very uninteresting in a 3D render.

Shadows help get the 3D effect in a render,
same with color gradients, color ramps, etc.

happy cl

Interesting! Are you going to share your efforts?

The reason for this approach is to create a method for making hi-res semi-procedural textures from low-res images. So, for example, using this technique I can let each cell of this Voronoi map hold a copy of an image, with a different size based on the Voronoi cell. Step two is to use another image as the basis for each cell's color, and use that as a property of the cell image.

I have already partly succeeded in creating a star-map this way.

My goal is, so to speak, not to create bump mapping, but rather a smart UV transform that can help me create semi-procedural images from textures.

Sure! I'll put any code on GitHub. So far I haven't solved anything yet, but I will…

If you want to play around and/or come up with some cool ideas, feel free to download my test file.

Conceptually it's working as I wanted, but there's a lot of fiddling with parameters that would ideally be adjusted automatically.

I kind of reverse-engineered the bump map node, but didn't get the same behaviour.

Tried them, but ended up making my own by sampling the Voronoi map right of and below the current position, adding a step of 0.01, and then calculating the delta from that.

    voronoi(UVs, center);
    voronoi(vector(UVs[0] + stp, UVs[1], UVs[2]), stpRight);
    voronoi(vector(UVs[0], UVs[1] + stp, UVs[2]), stpDown);
    xDelta = ((center - stpRight) * str + 1) * 0.5;
    yDelta = ((center - stpDown) * str + 1) * 0.5;
    float ss = max(min(1, Strength / 100.0), 0);
    Vector = vector(xDelta * ss, yDelta * ss, 0.0);

got a few like this one

let me know
it could be modified

happy cl


Problem solved!
The OSL shader I had did not normalize the color vector. With that simple fix, everything works as wished.
I now have a Voronoi UV transformer that changes in-UV to out-UV, so that everything is warped around the Voronoi pattern.
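Roughly, such a transform does something like this Python sketch (hypothetical names and a simplified single-point-per-cell version; the real implementation is the OSL shader mentioned above): remap each input UV to coordinates relative to the nearest Voronoi feature point, so an image sampled with the output repeats once per cell.

```python
import math, random

def feature_point(cx, cy):
    """One deterministic feature point inside integer cell (cx, cy)."""
    rng = random.Random(hash((cx, cy)))
    return (cx + rng.random(), cy + rng.random())

def voronoi_uv(u, v, scale=8.0):
    """Map an input UV to a UV relative to the nearest Voronoi feature
    point, scanning the 3x3 neighbour cells for the closest one."""
    x, y = u * scale, v * scale
    cx, cy = math.floor(x), math.floor(y)
    nearest, best = (cx, cy), float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            px, py = feature_point(cx + dx, cy + dy)
            d = math.hypot(x - px, y - py)
            if d < best:
                nearest, best = (px, py), d
    # Out-UV: offset from the cell's feature point, recentred on (0.5, 0.5).
    return (x - nearest[0] + 0.5, y - nearest[1] + 0.5)

print(voronoi_uv(0.37, 0.52))
```

Feeding the result into an image texture's vector input is what produces one warped copy of the image per Voronoi cell.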

Let's create procedural textures from image textures!

My first one: I call it Potasso

Hi, I ran into the same problem of overlapping things. How did you solve that? Do you simply loop over the same area several times (hence the depth thingy), or do you repeat the node with some offset?
Is this something you've shared somewhere? Would love to see it.