Projecting an image cylindrically.

Hello, I need to project an image around a cylindrical object, similar to the UV Project modifier but in a cylindrical way. Is that possible in any way? The projected texture also needs to be manipulable in real time, just as with UV Project. I tried using multiple projector objects in the UV Project modifier, but the UVs get distorted.

What I want is to take a cylinder, UV unwrap it, apply a texture, then project the texture along each face's negative Z axis towards another object at the center of the cylinder, thus projecting the cylinder's texture onto the surface of the second object.

Is this effect possible in any way?

Thanks in advance.

Edit: I also thought about taking a cylinder, separating all its faces into separate plane objects, mapping each plane to a piece of the texture I want, then putting them all into separate UV Project modifiers, but that distorts the image when I scale and rotate.

In other words, I want the U coordinate to wrap around the object and the V coordinate to run from the bottom to the top of the object, and then to be able to rotate, move and scale those coordinates.
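In math terms, the mapping I'm describing looks roughly like this (a Python sketch for illustration only, since Blender nodes don't run Python; the function name is mine):

```python
import math

def cylindrical_uv(x, y, z, height=1.0):
    """Cylindrical coordinates: U wraps once around the Z axis,
    V runs from the bottom (z = 0) to the top (z = height)."""
    u = (math.atan2(y, x) / (2.0 * math.pi)) % 1.0  # full turn -> 0..1
    v = z / height                                   # bottom..top -> 0..1
    return u, v
```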

Edit 2: I’ve come up with this rather ingenious node setup that takes the object normals and their Z value, does some math, and produces a gradient that revolves 360 degrees around the object, creating the U value that way. The V is just the same as Z, but scaled down:

(Ignore the Material Output node in the middle; it’s just for previewing.)

The problem with this is that the normals are not always straight, resulting in unwanted stretching of the texture, and it’s really tricky to rotate, move and scale. It also only works if the object is a vertical cylinder; I’ve come up with setups for horizontal cylindrical objects as well, but the tough part is switching setups when the object rotates and changes direction.

Result: the image goes around the character and can be positioned wherever you want, though it’s tricky, and there’s a lot of stretching and clipping. In short, it’s very crappy:

Does anyone know of a better way to do this?

I’m not sure if this is what you’re after, but this one uses the location, orientation and size of an empty to create a cylindrical coordinate system:

In addition, it also creates a 2D distance from the center should you want it (say, for 2D disk mapping, which can then be combined with 2D cylinder mapping to form a compound 2D mapping based on normals, and so on). I’m unable to build a working atan2 out of math nodes, but the Radial gradient works just fine.

Cylinder map = x: angular, y: height, z: center distance. Ignore z for 2D.
Disk map = x: angular, y: center distance, z: height. Ignore z for 2D.
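Written out as plain math, the two conventions look like this (an illustrative Python sketch, not the node group itself; the function names are mine):

```python
import math

def cylinder_map(x, y, z):
    """x: angular (0..1 around the Z axis), y: height, z: center distance."""
    angular = (math.atan2(y, x) / (2.0 * math.pi)) % 1.0
    return angular, z, math.hypot(x, y)

def disk_map(x, y, z):
    """x: angular, y: center distance, z: height."""
    angular = (math.atan2(y, x) / (2.0 * math.pi)) % 1.0
    return angular, math.hypot(x, y), z
```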

From what I understand, you only need the angular gradient node and the height prior to the Combine XYZ node; ignore the rest.

This is a cylinder projection based on an external empty, so there’s no need for any UV unwrap. The red nodes are just scaling nodes; they wouldn’t be needed if the empty were located in the center of the cylinder and the cylinder height were 1.

Sorry if this is completely not what you wanted. Even if it turns out to be impossible, it would be nice to have an image illustrating the effect you want.

Thanks so much for this, man, how can I ever repay you? I’ll test your system and let you know if it works.

I’ve posted an image of a texture being projected onto a character and wrapping around it; that’s the effect I want.

EDIT: Yeah, it works perfectly. I didn’t know there was a Radial option in the Gradient Texture; that’s so handy. It effectively gives the angle around the Z axis, so it can be used to get the angle of each vertex around the center of the object.

Anyway, the image wraps around the object perfectly; no seams or distortion are apparent. However, for me it doesn’t seem to be affected by moving, rotating or scaling the empty; it stays in place, though I can move it with math nodes.

Does anyone know how to make a perfect spherical projection based on an empty?
I tried the following XYZ combine:
z = 1 - gradient (spherical)
y = arccos(empty -z / calculated z) * 0.31415
x = gradient (radial) + 0.75
This I can further improve by cropping the coordinates to create a mask.
When masking a texture, this is then multiplied by the facing side only: 1 - dot(empty texture coord, geometry normal).
That is then multiplied by an inverse-square “light falloff” function to attenuate the “light”.
Obviously, since I can’t cast test rays towards the empty, I can’t check for intersections and produce shadows, but it kind of works as a “light projector”. I think I could even introduce faux blur by distance.
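The masking chain described above, sketched in Python for illustration (the names are mine; the node version works from the empty’s Object texture coordinates rather than explicit positions):

```python
import math

def projector_weight(point, normal, empty_pos):
    """Facing mask times inverse-square falloff for the 'light projector'."""
    # Normalized direction from the surface point towards the empty.
    d = [e - p for e, p in zip(empty_pos, point)]
    dist = math.sqrt(sum(c * c for c in d))
    to_empty = [c / dist for c in d]
    # Facing term: 1 when the surface faces the empty, 0 on the back side.
    facing = max(0.0, sum(n * t for n, t in zip(normal, to_empty)))
    # Inverse-square "light falloff" attenuation.
    return facing / (dist * dist)
```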

But when swapping between the raw texture output and a generated texture coordinate for the object using a spherically projected image texture, there is a slight change. The constants were adjusted to get close to the real spherical texture mapping, but I don’t understand why they are needed, or why they are so “weird”.
Here is the node setup:

Have you tried using only two radial textures? One for X and one for Y; you’ll have to rotate the Y texture 90 degrees, then combine both factors into the X and Y inputs only. No Z axis is needed for UV mapping.

I took a somewhat different approach; is this good enough?

I’m pretty sure the y component should be expressible as a trig function, given 0 = north pole and 1 = south pole.
It feels like an obvious thing staring me in the face, yet I can’t even get a single hemisphere to give exactly the same result.

I’m not able to follow what goes on in your setup, which is also incredibly hard to read.

I’m sorry I can’t get a better resolution; I’ll make segmented pics of my nodes and post them.

Have you checked whether it works with a UV grid texture rather than a brick texture?
Do Generated texture coords -> a spherically projected image map of it, and compare whether the output changes when hooking the image up to the coords you generate with the above method. If no line changes in the slightest, I’d say you have an output exactly equal to the image node’s. Mine “looks like” it generates spherical coordinates, but it’s not an exact replica. Adding 0.75 to x (radial) doesn’t bother me, but I can’t figure out why I have to do what I do to y (arccos).

Probably easier for you just to share the node setup via pasteall. :slight_smile:

Well, I managed to get mine working by changing to:
y = arccos(empty -z / calculated z) * 0.31831 (rather than 0.31415).
Using a Difference color mix node on the generated spherical coordinates and my object-based custom spherical coordinates, the result is completely black (no difference). And although I guess the task is accomplished, I’m still wondering:
Where the heck does the 0.31831 number come from?
It would have been nicer to calculate it somehow rather than tweak small fractions until there is no difference. :slight_smile:
Image of the node setup (only the top half deals with the spherical coordinates):

Yeah, the comparison is not exactly the same; I’ll have to tweak it a little, but it’s very close. I just have to scale the UVs I made so they fit the image correctly. In a sphere projection it seems like the texture repeats itself twice on the V axis, or am I mistaken?

That number must be some fraction involving pi, if it’s a sphere. I’ve got to make some experiments.

But I can see you have a pretty solid knowledge of vectors, arithmetic and trigonometry, unlike myself, who only knows the very basics. Good job with that, man!

You need to bake the texture from the cylindrical projection to the UV map; then it will work with transformations and armatures.


Does anyone know how to make a perfect spherical projection based on an empty?

I posted all the projection types made with nodes a few years ago… I can look for it, or I can build new ones from scratch if you want.

Sure. Although I think I’ve nailed it above (#9), I’m confused about where that weird constant comes from.

In that example I also have pyramid mapping: simply a parallel mapping that gets smaller towards the origin. Z attenuation and backface handling would be done outside this group to create a projector light shader, on a per-material basis and with no way to handle shadows, obviously. Also, I haven’t done a blended box map yet.

So, in my spherical approach I have to rotate by 90° to achieve the same look as the generated texture space for spherical coords. In your mind, does it make more sense to allow different results in order to have cleaner math, or to use additional math to generate equal results? Equal results sound nice, but you either have to do a modulo to get back to the 0-1 range, which doesn’t play well with multiplication (say, scaling to 0-10), or have it span 0.75-1.75 without any modulo, which has a weird starting point but can be multiplied. I know I could expose such a multiplier and still have 0-10, but I’d rather have it clean.
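The two options, sketched in Python (an illustration only; the names are mine):

```python
def rotate_with_wrap(u, offset=0.75):
    """Offset the angular coordinate, then fold it back into 0..1 with a
    modulo; a scale multiplier applied afterwards introduces a seam."""
    return (u + offset) % 1.0

def rotate_no_wrap(u, offset=0.75):
    """Offset without folding: the range becomes 0.75..1.75, which a
    repeating texture reads the same way, and a multiplier can be
    applied to u before the offset without any seam."""
    return u + offset
```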

The major point about this mapping is that you don’t need any UVs, and doing anything to the empty should modify the result as long as the input Texture Coordinate object points to the empty. Obviously, this only works for non-deforming objects. Or, as Secrop mentions, it could be used as an intermediate step to bake the projected pattern back to UVs, in which case you only need the UV map as the texture input (the empty and this complex material are no longer needed). The UV-mapped texture will then deform with the mesh, with all the drawbacks UVs have (stretching, artifacts, etc.).

No, I don’t have any knowledge of those things. Wikipedia has. :smiley:

Ahah, that constant is very well known! Just think of it in another form:

a/pi = a * 0.31831 <=> 1/pi = 0.3183099… :wink:
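In other words, the hand-tweaked multiplier is just 1/pi: arccos returns an angle spanning 0..pi from the north pole to the south pole, and dividing by pi normalizes it to the 0..1 V range. A quick check in Python (the function name is mine):

```python
import math

# The tweaked constant matches 1/pi to five decimal places.
assert abs(0.31831 - 1.0 / math.pi) < 5e-6

def spherical_v(z, radius):
    """Latitude coordinate: 0 at the north pole, 1 at the south pole."""
    return math.acos(z / radius) / math.pi
```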

Lol, got it, thanks. :o

Here’s the screenshot from the old post… but I have a feeling it can be improved. It was made when I was still starting to learn math, so there may be some errors.
Anyway, I think the main problem of this thread has already been answered.

I’m trying to work through your spherical projection node tree, but I don’t really get how it works:

@secrop Could you complete it with more commented frames (I hope I got the first ones right, at least)?

Here’s my current tree in a blend file:
Spherical projection - Clarification.blend (511.0 KB)