I have been working on a really cool way of making particles cheap. My approach is to take an image of any size and convert it to a pixelated map. Each pixel then becomes the basis for a procedural texture, which can also modify its local behaviour with the color data (!).
A simple example: think of a star-sky map, but instead of a pixelated star, each star renders a procedural sphere of INFINITE resolution.
I have managed to divide the image data and apply the procedural sphere, but when I try to get some random noise on each tile it gets messy.
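To make the idea concrete, here is a minimal CPU sketch in Python of the tiling step. The hash and all names are my own stand-ins, not the actual node setup, and the "sphere" is reduced to a sharp-edged disc: each sample finds its tile, reads that pixel's color, and draws a procedural shape whose size varies per tile.

```python
import math

def hash01(ix, iy):
    """Deterministic per-tile value in [0, 1) (stand-in for a noise node)."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h & 0xFFFF) / 0x10000

def shade(u, v, image, tiles=10):
    """Evaluate one sample: find the tile, read its pixel color,
    and draw a procedural disc (infinite resolution) inside the tile."""
    ix, iy = int(u * tiles), int(v * tiles)        # integer tile index
    fu, fv = u * tiles - ix, v * tiles - iy        # local 0..1 coords in tile
    r, g, b = image[iy % len(image)][ix % len(image[0])]  # pixel drives colour
    radius = 0.2 + 0.3 * hash01(ix, iy)            # per-tile random size
    d = math.hypot(fu - 0.5, fv - 0.5)             # distance from tile centre
    mask = 1.0 if d < radius else 0.0              # crisp at any zoom level
    return (r * mask, g * mask, b * mask)
```

Because the random radius is keyed to the integer tile index rather than the raw UV, every sample inside one tile agrees on the same noise value, which is exactly the alignment the node version needs.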
My first approach was to use OSL to write a shader. After some coding I realised that this could be done with nodes instead, and so making it render on GPU.
This is what I have so far, but as I said - I can't get the noise at each tile's y-position to match that of its sphere.
The first cube has a material with the node version (tiles set to 10x10) - the tiles are also semi-transparent so you can see how they almost align.
The second version has 100 tiles or so.
The last version is the same approach in OSL (my first attempt).
Why not just blur the image and let a random star pattern pick up its color?
My plan is to use each pixelated tile's color value to determine the whole procedural point's (aka star's) behaviour. This could also be used to create some really cool semi-procedural materials that take some data from an image map.
I have an OSL shader that does something similar to this, but it draws bitmaps randomly across the UV area.
My approach is similar to the Worley algorithm (look up the neighbour cells to check if the elements there occupy the coordinate being sampled), though I had to change the routine to have a depth layer and a density per cell…
It’s very, very slow, but here are my results:
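For readers who want to try it, the neighbour-cell routine described above might look roughly like this in Python (the seeding scheme and constants are my own assumptions, not the poster's actual shader):

```python
import math
import random

def cell_points(ix, iy, layer, max_density=3):
    """Deterministic feature points for one cell on one depth layer.
    Seeding a Random with a cell-index hash stands in for a proper hash."""
    rng = random.Random(ix * 73856093 ^ iy * 19349663 ^ layer * 83492791)
    n = rng.randint(1, max_density)                # density per cell
    return [(ix + rng.random(), iy + rng.random()) for _ in range(n)]

def worley(u, v, layers=2):
    """Distance to the nearest feature point, checking the 3x3 neighbour
    cells on every depth layer - the nested loops are why it is slow."""
    ix, iy = int(math.floor(u)), int(math.floor(v))
    best = float("inf")
    for layer in range(layers):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                for px, py in cell_points(ix + dx, iy + dy, layer):
                    best = min(best, math.hypot(u - px, v - py))
    return best
```

Extending classic Worley with a per-cell density and extra depth layers multiplies the number of candidate points per lookup, which matches the "very, very slow" observation.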
Does anyone know a way to “lock” the normal data so that, regardless of how I rotate the object, I get the same color as the middle one (output color is set to emission, so no shadow or other shading should be visible)? I want to use the normal data as a color-input map for my mosaic part above, and if it changes when I rotate the object, then the mapped image will move.
In OSL, using the voronoi() function, if you divide the position by the scale factor and use it as the vector for the bitmap, you get exactly that effect. And it’s not view-dependent the way your setup with the bump node is.
My initial approach was to use OSL (to solve a bigger quest), but then I found the image handling (paths) cumbersome and not very flexible.
Reading image data in OSL was cumbersome because you have to give the path to the image (and it does not accept relative paths on Mac), which makes it inflexible. Feeding an image color input does not let you sample anywhere on the image except at that one point, so you need to keep the sampling logic outside the OSL script (as input to the image).
I can’t get the object coordinates to work, unless someone knows how to do that effectively in the example above.
I guess I’ll write an OSL script that produces this Voronoi bump, which can be used as lookup data for an image.
EDIT: I would really prefer not to use OSL, since Blender becomes a lot slower, and other node things don’t work when using OSL. EDIT 2: just a bug - the other shader started working again. EDIT 3: NOOO, I forgot that OSL won’t work with GPU rendering, which improves viewport render times A LOT.
You can use the BW output of a script in conjunction with the Bump node… but this requires an extra step:
Derivatives (slopes) in the bump node are calculated by slightly shifting the coordinate system. But the coordinates in OSL are fixed; they are given to the shader by value, not by reference, and the coordinate shift is not reflected in the OSL globals. The solution is to feed the script with coordinates from Cycles nodes instead of using the OSL global variables.
For example, if your shader has something like ‘vector loc = P’, you still need to plug in the Position vector from outside the script (i.e. from ‘Geometry::Position’) to get the bump node to work.
OSL has some derivative functions one can use [Dx(), Dy(), etc.], but they are quite complicated to deal with, since they work in camera space and depend on the size of the area being sampled. (And the bump() function is only a decoration - it does not work at all!)
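To illustrate why plugging the Position in from outside matters, here is a small Python analogy of what a bump node does (the function names are illustrative, not Blender API): the slopes only come out right if the height function actually reads the position we pass in, rather than a global that never shifts.

```python
def bump_gradient(height, p, eps=1e-4):
    """Estimate the height slopes the way a bump node does: re-evaluate
    the same height function at slightly shifted positions. A height
    function that ignores 'p' (like an OSL script reading only its
    globals) would return zero slopes no matter the surface."""
    x, y, z = p
    h0 = height((x, y, z))
    dhdx = (height((x + eps, y, z)) - h0) / eps
    dhdy = (height((x, y + eps, z)) - h0) / eps
    return dhdx, dhdy
```

With a height function that responds to the shifted input, the finite differences recover the true slopes; that is the role the externally plugged Position vector plays for the script node.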
The reason for this approach is to create a method for making hi-res semi-procedural textures from low-res images. So, for example, using this technique I can let each cell of this Voronoi map hold a copy of an image at a different size based on the Voronoi cell. Step two is to use another image as the basis for each cell’s color and use that as a property of the cell image.
I have already partly succeeded in creating a star-map this way.
My goal, so to say, is not to create some bump mapping, but rather a smart UV transform that can help me create semi-procedural images from textures.
The OSL I had did not normalize the color vector. With that simple fix, everything works as wished.
I now have a Voronoi UV transformer that changes the in-UV to an out-UV so that everything is warped around the Voronoi pattern.
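A Voronoi UV transform of this kind can be sketched in Python like so (the hash and constants are my own illustrative choices, not the poster's shader): each sampled point is re-expressed relative to the nearest feature point, so a texture plugged in afterwards is stamped once per cell.

```python
import math

def feature_point(ix, iy):
    """Deterministic pseudo-random feature point inside cell (ix, iy)."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (ix + (h & 0xFF) / 256.0, iy + ((h >> 8) & 0xFF) / 256.0)

def voronoi_uv(u, v, scale=10.0):
    """Warp an in-UV to an out-UV: the point gets coordinates relative to
    the nearest Voronoi feature point among the 3x3 neighbour cells."""
    su, sv = u * scale, v * scale
    ix, iy = int(math.floor(su)), int(math.floor(sv))
    best, bu, bv = float("inf"), 0.0, 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            px, py = feature_point(ix + dx, iy + dy)
            d = math.hypot(su - px, sv - py)
            if d < best:
                best, bu, bv = d, su - px, sv - py
    return (bu + 0.5, bv + 0.5)   # out-UV centred on the cell's point
```

Feeding this out-UV into an image texture stamps the image around every feature point, which is the warping effect described above.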
Let’s create procedural textures from image textures!
Hi, I ran into the same problem of overlapping things. How did you solve it? Do you simply loop over the same area several times (hence the depth thingy), or do you repeat the node with some offset?
Is this something you have shared somewhere? I would love to see it.