UV map gets bent for no reason

I can’t figure out what’s happening here:

I’m trying to map the floor and walls of a room from a single photo, so I need to UV map the image onto two perpendicular faces. To line things up, I have to move the points in the UV editor into the corners of the room, but when I do, I get a very strange bending of the image along the diagonals of the faces.

Why won’t Blender just stretch the image smoothly according to the distortion of the face in the UV editor?

Would love any help.

I’ve just put up a screenshot that may describe my problem better.

Attachments


I think you’re running into what is called tessellation. Subdividing the planes will solve it.

I tried that. Here is what I get: the same kind of distortion is still going on.
It’s just smaller now.

Plus, doesn’t subdividing make it really hard to move the map around to fit the image? I only need four planes for this project.

This just seems plainly wrong. Why does tessellation happen?
(I tried this in several versions of Blender.)
I really appreciate your help.

Attachments


Part of the problem in the first image you showed is that the unwrapped UV went off the border of the image, but the image itself is not seamless. As for the stretching, I’m not sure. Also, I think it would be easier to texture the wall with a flat texture of a wall; with the perspective image you’re using, the image will get more and more stretched as you go down the hallway.

I’m not worried about the border of the image or seamlessness at this point; I just set up that example.
What I think you’re suggesting is to use Photoshop to stretch the image (say, the floor into a right-angled square) and then use a normal UV map (not “project from view”). I agree it would work, and I may be finding out that that is what I’ll have to do.
But it shouldn’t make the back corners any less stretched than they would be using “project from view”, at least according to the way I think it’s supposed to work in Blender. I’m really trying to understand what’s going on here.

But maybe I’m not seeing things correctly. I thought that’s what “project from view” was supposed to do.

Any explanation of a flaw in my thinking could really help me. Can you imagine the tessellation distortion just not happening?

Then “project from view” could work for the kind of thing I’m doing here.

There is also the UVProject modifier. I don’t know why the image is so distorted, but you are using a very simple mesh compared to more common (low-poly or high-poly) models. I don’t think any method of calculating the UV layout will change the result; it seems to depend on the perspective (distortion) of your image. Stretching the image in Photoshop/GIMP is also one possibility.

Okay, so maybe it’s the UVProject modifier that I need to use.

How can I use that to map 3D geometry to a photo?

I am still wondering how UV “project from view” is supposed to work, though.
It seems it was made for exactly what I’m trying to use it for. But you’re right: even if I start out with squares and then stretch them to fit the photo, I get the same distortion.

Could it be that Blender just has an inferior stretching mechanism or something?

Thanks so much.

I am still wondering how UV “project from view” is supposed to work, though.

This option in the UV unwrapping menu is useful for simple objects without too many bent surfaces, or for mechanical things. All it does is project what you see in the 3D view, without any actual unwrapping of the surface. If you plan to move the points by hand afterwards, every option you get when you hit U in UV Face Select mode works out the same: the UV mapping behaves identically, and the only difference is the starting layout.
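A minimal sketch of what that projection amounts to (plain Python, no bpy; the function name is my own, for illustration): in an orthographic front view, each vertex is simply flattened onto the view plane and the result is normalized into the 0..1 UV square. There is no unwrapping step at all.

```python
def project_from_view_ortho(verts):
    """Orthographic front view: drop Y (the depth axis),
    then normalize X and Z into the 0..1 UV square."""
    xs = [v[0] for v in verts]
    zs = [v[2] for v in verts]
    span_x = max(xs) - min(xs)
    span_z = max(zs) - min(zs)
    return [((v[0] - min(xs)) / span_x,
             (v[2] - min(zs)) / span_z) for v in verts]

# A unit quad facing the viewer lands exactly on the UV corners:
quad = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]
print(project_from_view_ortho(quad))
# -> [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

The point being: the projection only decides where each vertex starts out in the UV editor. How the image stretches between those points is decided later, by interpolation across the face’s triangles.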

It should work to just grab and move the vertices in the UV/Image editor to match your picture borders; that should remove the repeating and the distortion.

I’m convinced this problem exists because “project from view” was meant for more organic things, like faces. I’ve done my tests here, and I think the result is just bad (distorted) unless the map is a perfect right-angled square or rectangle, with the image adjusted to fit it if it’s not. But regardless, that’s an unexpected, ugly distortion. I wanted to map the faces of my 3D mesh to a 2D photo, not the other way around. If not this way, then how?
Why should I have to make the image into perfect squares (in Photoshop)?
I want it to stay as one image. I have a whole project in mind that requires this.

Any other ideas?

Attachments


Okay, in this wiki image you can see it happening. The swirl in the third picture shows it (I’ve heard it called “tessellation”): the image is distorted along the diagonal of the face, for no worthwhile reason. It’s a problem.

This actually happens on every face, for any type of UV mapping that is not square.
We mostly don’t notice it because we can increase the number of faces (or mostly do this on organically shaped models) until it becomes visually unnoticeable. But it is still happening, which is imperfect, and it makes things messy.
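The diagonal bend can be reproduced in a few lines of plain Python (a sketch under my own assumptions, not Blender code). The quad is split into two triangles, and UVs are interpolated linearly (via barycentric weights) inside each triangle. When the four UV corners are dragged into a trapezoid, as when matching a photo, the per-triangle result no longer agrees with the smooth bilinear mapping you’d expect, and the mismatch flips across the diagonal; that is the kink.

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p in triangle (a, b, c)."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    wb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    wc = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return 1.0 - wb - wc, wb, wc

def lerp(a, b, t):
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

# A square face (flattened to 2D), with its UVs dragged into a
# trapezoid to match a perspective photo:
p00, p10, p11, p01 = (0, 0), (1, 0), (1, 1), (0, 1)
uv00, uv10, uv11, uv01 = (0, 0), (1, 0), (0.75, 1), (0.25, 1)

p = (0.7, 0.3)  # a sample point below the p00-p11 diagonal

# What the renderer does: linear interpolation in triangle (p00, p10, p11).
wa, wb, wc = barycentric(p, p00, p10, p11)
tri_u = wa * uv00[0] + wb * uv10[0] + wc * uv11[0]

# What you'd expect: smooth bilinear interpolation over the whole quad.
bi_u = lerp(lerp(uv00, uv10, p[0]), lerp(uv01, uv11, p[0]), p[1])[0]

print(round(tri_u, 3), round(bi_u, 3))  # 0.625 vs 0.67: the visible bend
```

Subdividing the face adds more triangles, so each one’s linear patch tracks the bilinear surface more closely; the kink never fully disappears, it just shrinks below visibility, which matches what you saw after subdividing.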

This makes it very hard for me to do very simple camera projections. I wish I could figure out a way around this.

I want to use Blender, as I’ve always thought it was so cool. But this is a fundamentally broken way to distort images according to the UVs.

Why not stretch the image proportionally, rather than kinking it all at the diagonal?

How does anyone use UV project to do camera mapping, or camera map at all in Blender?

Help.

Attachments


One way you could map it is to select the background image you modeled this from as an image texture for the object. In Map Input, click Sticky. Turn on Shadeless and TexFace. Then go to the Mesh panel and, under Sticky, click Make. This should project the background image onto the object according to how the camera is set up.

Yes! That does help me out a lot. Thank you. I can actually use that to do a lot of what I want to.

It’d be nice to have a bit more real-time control, the way I imagined UVProject or “project from view” would work, but this is something I can actually work with.

Also, I should point out that the more complex the mesh, the better this works; the most extreme case of just one face gives pretty scary results. I’ve put up two examples here.

Thanks Again.

Attachments