As I realized a few minutes ago, Blender always uses triangles for texture mapping, even if you are working with quads.
The attached screenshots show that it does not matter whether you use a quad or two triangles when mapping what would be a deformed quad; the result is exactly the same.
So I was asking myself whether there is a way to map a texture onto a quad as a quad, instead of treating it as two triangles (which is what leads to the weird result shown in the screenshots).
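To show in numbers what I mean, here is a small Python sketch (the corner positions and UVs are made-up example values): the same screen point on a trapezoid gets a different texture coordinate depending on whether the quad is split into triangles (affine/barycentric mapping) or mapped as a true quad (bilinear interpolation).

```python
# Screen-space corners of a trapezoid quad and their texture coordinates.
P0, P1, P2, P3 = (0.0, 0.0), (2.0, 0.0), (1.5, 1.0), (0.5, 1.0)
UV = {P0: (0.0, 0.0), P1: (1.0, 0.0), P2: (1.0, 1.0), P3: (0.0, 1.0)}

def barycentric_uv(p, a, b, c):
    """Affine (triangle) mapping: weights solve p = wa*a + wb*b + wc*c."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    wc = 1.0 - wa - wb
    return tuple(wa * UV[a][i] + wb * UV[b][i] + wc * UV[c][i] for i in range(2))

# Bilinear interpolation sends (u, v) = (0.5, 0.5) exactly to the average of
# the four corners, so the "quad as a quad" answer at the centre is (0.5, 0.5).
center = tuple(sum(p[i] for p in (P0, P1, P2, P3)) / 4.0 for i in range(2))

# With the quad split along the P0-P2 diagonal, the centre falls in (P0, P1, P2).
print(barycentric_uv(center, P0, P1, P2))  # -> (0.625, 0.5), not (0.5, 0.5)
```

That difference of 0.125 in u at a single point is exactly the stretching you can see along the diagonal in the screenshots.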
Never mind what I said, I see your problem now! I thought you were mapping a non-planar object, but I see that this happens even on a planar one! The only suggestion I have is to subdivide the face once, twice, or three times to minimize the effect.
You can see that to get a truly undistorted image mapped onto your non-rectangular quad, you would have to subdivide all the way down to one vertex per pixel!
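If you end up subdividing many faces, it can be scripted; here is a minimal sketch, assuming Blender's Python operator API (exact operator names can vary between versions):

```python
import bpy

# Assumes the mesh you want to fix is the active object.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Each cut shrinks the triangles over which a single affine mapping is
# applied, so the kink along each quad's diagonal becomes less visible.
bpy.ops.mesh.subdivide(number_cuts=2)

bpy.ops.object.mode_set(mode='OBJECT')
```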
Well, back to the GIMP, eh? The best way to do this is to unwrap without stretch, export the UV layout, and correct the image in the GIMP so it fits exactly.
The problem is that what I’m building is a sidewalk for a game, so it would be better to use as few vertices as possible. Subdividing twice or more to improve the detail increases the vertex count a lot… The game itself is able to deform textures correctly on a quad, but Blender does not, and exports it as a triangle mapping.
That would be a good solution, but given that I’m modelling a sidewalk with a different deformation in each part, it would not be practical to correct the texture manually for every deformed quad (see “screenshot3”).
Thank you for your attention. If you come up with any new idea or a way to improve it, please tell me!
I’ve just realized that even the game maps quads as two triangles. So, in conclusion, there is no way to map a quad as a quad; it will always end up as the mapping of two triangles.
If the texture coordinates given at the vertices are linearly interpolated across the screen, the result is affine texture mapping. This is a fast calculation, but there can be a noticeable discontinuity between adjacent triangles when these triangles are at an angle to the plane of the screen.
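To make that definition concrete, here is a tiny Python sketch (the depth values are made up) comparing affine interpolation of a texture coordinate u with the perspective-correct version, which interpolates u/w and 1/w and divides at each pixel:

```python
# One screen-space edge from a near vertex (w = 1) to a far vertex (w = 4).
w0, w1 = 1.0, 4.0   # clip-space depth at the two endpoints
u0, u1 = 0.0, 1.0   # texture coordinate at the two endpoints

def lerp(a, b, t):
    return a + (b - a) * t

for t in (0.0, 0.25, 0.5, 0.75, 1.0):   # t = fraction along the screen edge
    affine = lerp(u0, u1, t)
    # Perspective-correct: interpolate u/w and 1/w, then divide per pixel.
    correct = lerp(u0 / w0, u1 / w1, t) / lerp(1 / w0, 1 / w1, t)
    print(f"t={t:4.2f}  affine u={affine:.3f}  perspective-correct u={correct:.3f}")
```

Halfway along the edge the affine mapping has already reached u = 0.5 while the perspective-correct one is only at u = 0.2, and since each triangle is interpolated independently, the error changes abruptly at the shared edge; that is the discontinuity described above.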
Looks like you have identified the problem exactly. The more acceptable solution doesn’t have to be perspective-correct, which is more complicated to program; it could simply be one in which a linear scaling constant and an offset are applied to the location of each pixel. In other words, it is not desirable for the size of the grid to scale quadratically from large squares at the bottom to small squares at the top. I only mention this in case some developer is wondering which solution users might prefer, and also because the less desirable option is the more CPU-intensive one.
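In case the “linear scaling constant and an offset” idea is unclear, here is a toy Python comparison (made-up depths) of where the texture’s grid lines land along one screen edge under each scheme; the linear mapping spaces them evenly, while the perspective-correct one packs them together toward the top:

```python
# Screen position t (0 = bottom, 1 = top) of each texture grid line v for a
# receding ground quad, under the two candidate mappings.
w_bottom, w_top = 1.0, 4.0   # made-up depth at the bottom and top edges

for v in (0.0, 0.25, 0.5, 0.75, 1.0):    # grid lines in texture space
    t_linear = v  # one scale constant plus an offset: even spacing
    # Perspective-correct: solve lerp(0/w_bottom, 1/w_top, t) divided by
    # lerp(1/w_bottom, 1/w_top, t) = v for t.
    t_persp = v * w_top / (w_bottom + v * (w_top - w_bottom))
    print(f"v={v:4.2f}  linear t={t_linear:.2f}  perspective t={t_persp:.2f}")
```

With these numbers the perspective-correct rows land at t = 0.00, 0.57, 0.80, 0.92, 1.00, i.e. the squares shrink toward the top exactly as described, while the linear version keeps them uniform.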