# Estimate texture mapping location values for texture mapping node

Good evening

I’ve found out lately that using the Texture Coordinate and Mapping nodes to cast a bitmap onto a surface is in many cases easier than going through the process of UV unwrapping and so on…

Are there any mathematical rules for what values to use in the Mapping node, given a specific object’s dimensions and the bitmap’s dimensions and resolution?

For example: using a cube of dimensions 0.3 m × 0.15 m × 0.003 m and a bitmap of 137×61 pixels at 72 dpi, I have to set the location inside the Mapping node to:

X: 6.44m
Y: -6.2m

and scaling set to 16.

to just have the bitmap appear in the middle of the lower part…

So why do I have to use offsets of over 6 meters for an object that is only 0.3 meters wide?

Any hints or pointers are very welcome

richard

You’re using UV mapping in your node group. You don’t have to unwrap, because your mesh is already unwrapped. Unwrapping is only one way to UV map a mesh, not the only way. (Consider projecting from view, for example.)

UV coordinates are in the 0–1 range. If you project from view (bounds) and then shift 6.44 in X, the lower-right corner of your mesh is going to be at 1 + 6.44 = 7.44, 0.

But note that you’re also scaling your texture, which changes the shifts you need. Because you’re scaling it sixteen-fold in the X axis, you’re only shifting it 6.44/16 of the way across its entire UV space.
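To make that concrete, here is a tiny plain-Python sketch of the arithmetic (not Blender API code) using the values from the question:

```python
# With a Mapping-node scale of 16 in X, a Location of 6.44 only moves
# the texture 6.44 / 16 of the way across one 0-1 UV tile.
scale_x = 16.0
location_x = 6.44

effective_shift = location_x / scale_x  # fraction of one UV tile
print(effective_shift)  # 0.4025 -> a bit less than half a tile
```

So the “over 6 meters” is really just under half of one UV tile once the scale is accounted for.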

I don’t know why 2.8 puts the ‘m’ there. It has nothing to do with meters and is misleading. (If you were using unscaled Object coordinates, it would; but otherwise, no, texture coordinates are not measured in meters.)

What happens if you choose Texture instead of Point? It’s only the reciprocal value (think texture density vs. texture scale), so nothing special about it, but sometimes it’s more logical to think of things in another manner. It also helps to visualize what is going on by chaining three Mapping nodes together, one for each family of transforms. Sometimes you have to chain them in the correct order to get the transform you want.
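As a rough sketch of why the two modes are reciprocal: for a scale-plus-offset transform, Texture mode applies the inverse of what Point mode applies. This plain-Python toy (the transform order is simplified; the numbers are just the question’s values) shows that one mode undoes the other:

```python
def point_mode(u, scale, loc):
    # "Point": transform the coordinate itself;
    # the texture appears 1/scale of its original size.
    return u * scale + loc

def texture_mode(u, scale, loc):
    # "Texture": the inverse transform;
    # the texture appears scale times larger.
    return (u - loc) / scale

# Applying one mode and then the other gets you back where you started.
u = 0.5
round_trip = texture_mode(point_mode(u, 16.0, 6.44), 16.0, 6.44)
print(round_trip)  # 0.5
```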

Also, bitmap size doesn’t matter. Its space is the 0–1 range no matter the pixel dimensions; you then manipulate that space. Note that most texture nodes also have their own built-in mapping controls, but since they are hard to find and not exposed through the same UI, I recommend not using them if you can avoid it.

If you want to scale using real-world sizes, use Object coordinates instead. If you want to use UVs, you could add a loop cut at the halfway point to give yourself a guideline.
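A quick sketch of the real-world-size idea, assuming unscaled Object coordinates where 1 unit = 1 m (the tile size below is just an example value, not from the question): a texture tile normally repeats every metre, so to make one tile span a given physical size you use the reciprocal as the Mapping scale.

```python
# Desired real-world width of one texture tile, in metres (example value).
tile_size_m = 0.15

# With Object coordinates, scale = 1 / size makes one tile span that size.
mapping_scale = 1.0 / tile_size_m
print(round(mapping_scale, 3))  # 6.667
```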