UV map from 3x3 photographs - reduce?

I’ve got an object that has been assembled from a 3D scanner. The scanner takes a series of scans to produce the final model + UV texture. The problem is that the texture is a stitched-together image constructed from, say, 3x3 photographs. The regions of the texture that are actually used by the model probably take up only 20% of its area.

Is there a neat way to ‘move’ these areas, i.e. consolidate them into a smaller region, so that I can then ‘crop’ the texture and dramatically reduce its size (which is currently 8192x8192!)?

Thanks in advance!

Andy

Do you have that image file? I think you need to reduce it and reapply it via the UV editor.

Here’s a screenshot of the problem I’m trying to solve:


As you can see, much of the texture image is unused, and I’d like to consolidate it to use only areas that are actually useful.

Do I get it right that the object is currently unwrapped and textured, but with several texture maps (images), and you want a single, smaller texture map?

Simply duplicate the object, delete the UV layout and material, create a new material, and unwrap.
Create a clean texture map of the desired size.

Bake the textures from the original object onto the duplicate with its single texture map. Save the texture map! Blender does not save baked or painted textures automatically.

Done.

If your problem is getting the series of images onto the scanned object, use projection painting.

Your search terms are:
“bake texture”
“projection painting”

hth

ps: nice clean scan btw. what scanner are you using?

EDIT: looking at what you posted, your problem is solved with texture baking, and I’d bake the texture at a rather high resolution. As long as the dimensions are powers of two you can simply scale the texture down later; the UVs will be preserved.
So having the texture at 8000x8000 initially and replacing it later with a 512x512 map just changes the quality and file size, not the appearance. The UVs map the same because a UV map runs from 0 to 1 over the image, so if a point is mapped to 0.5 it doesn’t matter whether that is 0.5 of 8000 or of 512. The middle is the middle :smiley:
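That resolution independence can be sketched in a few lines of plain Python (a standalone illustration, not Blender code; `uv_to_pixel` is a made-up name for the example):

```python
def uv_to_pixel(u, v, width, height):
    """Map normalized UV coordinates (0..1) to integer pixel coordinates."""
    # The same UV pair lands on the proportionally identical pixel,
    # regardless of the texture's resolution.
    return int(u * (width - 1)), int(v * (height - 1))

# A vertex mapped to UV (0.5, 0.5) hits the middle of the texture
# whether the map is 8000x8000 or 512x512:
print(uv_to_pixel(0.5, 0.5, 8000, 8000))
print(uv_to_pixel(0.5, 0.5, 512, 512))
```

So downscaling the baked image touches only the pixels; the UV layout itself never needs to change.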

Thanks for taking the time to reply. I hadn’t heard of texture baking - so now I’m going through some of the tutorials to find out more.

The model is properly textured using a single image. (This single image is stitched together by Rapidform during the merging of the meshes.) This final texture image, however, has lots of wasted space and it’s really big (8192x8192).

What I hoped Blender would let me do is select the faces in the UV/Image editor and move them in such a way that the underlying pixels move with the faces. That way I could pack them tightly into a smaller space, then crop and scale this final (power-of-two) image as needed.

I may end up having to write some software to do this; I’ve poked around Blender and I’m not sure it has this capability. I’m also unsure how much of the ‘originality’ of the current texture the baking process will lose.
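For what it’s worth, the ‘move pixels with the faces’ idea boils down to a coordinate shift. A toy Python sketch, assuming an island occupies an axis-aligned rectangle (real islands are arbitrary polygons, which is exactly why baking is usually simpler; `remap_uv` and its parameters are invented for the illustration):

```python
def remap_uv(u, v, src_size, src_origin, dst_size, dst_origin):
    """Re-express a UV coordinate after its island moved from src_origin
    (in pixels, within a src_size texture) to dst_origin (within a
    dst_size texture). Sizes and origins are (width, height) / (x, y)."""
    sw, sh = src_size
    dw, dh = dst_size
    px, py = u * sw, v * sh                    # UV -> pixel in the source map
    px = px - src_origin[0] + dst_origin[0]    # follow the relocated island
    py = py - src_origin[1] + dst_origin[1]
    return px / dw, py / dh                    # pixel -> UV in the new map

# An island sitting at pixel (6, 6) of an 8x8 texture is packed into the
# corner of a 4x4 texture; a vertex at UV (0.875, 0.875) follows it:
print(remap_uv(0.875, 0.875, (8, 8), (6, 6), (4, 4), (0, 0)))
```

Copying the island’s pixel block and applying the same remap to every vertex of the island keeps the mapping intact; the hard part in practice is packing irregular island shapes without overlap, which is the work the bake does for free.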

Thanks again!

BTW, the scanner is a Mephisto Extreme scanner from 4D Dynamics. I’m using Rapidform to merge the meshes and Blender for all other processing and to prepare the models for the web.

Nope, but this is no Blender issue; it is the same in all programs. It is just how UV mapping works, hence the name ‘mapping’.
Basically each vertex has XYZ coordinates. It then gets two additional coordinates, U and V, which point into the texture map.
U and V range from 0 to 1, making them independent of the texture’s size.
UV [1,1] is always the same corner of the image, no matter whether the texture map is 8192x8192 or 256x512.

It will lose nothing at all during baking; it is mathematically correct. I could roll out the theory and algorithms of texture mapping here, but rest assured that, besides minor rounding (which also happens with your base texture), nothing happens apart from some color smoothing, depending on the filtering algorithm.
Just go with baking.
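That ‘color smoothing’ is just texture filtering. A minimal bilinear sample in plain Python shows where the blending comes from (nearest-neighbour filtering would skip it entirely; grayscale and the function name are simplifications for the example):

```python
def bilinear_sample(tex, u, v):
    """Sample a row-major 2D grayscale texture at normalized (u, v),
    blending the four surrounding texels by their fractional weights."""
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [0.0, 1.0]]
# Sampling halfway between a black and a white texel blends them 50/50:
print(bilinear_sample(tex, 0.5, 0.0))
```

When the bake samples exactly on texel centers the values pass through unchanged; only samples that land between texels get blended, which is the slight smoothing mentioned above.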
However, it is strange how your scanner does the texturing; that’s a waste of space, and you lose lots of information even at 8k x 8k.

If you need more texture information it might be better to take photos with a camera and then use Blender’s projection painting to paint them on, but it will not be too accurate.

If you need help with this, or want to see whether the result matches your expectations before learning how to bake it, you can mail me your file (including the texture; be sure to make paths relative and pack the texture into the .blend) at [email protected]. I’ll gladly help; I love 3D scanners, and I guess you’re doing this for educational or commercial purposes. I doubt someone just has a Mephisto Extreme at home to toy with :smiley:

Since you have the object and a number of images, what you need to do is unwrap the object, then do projection painting.

The object is already unwrapped and textured.
What he wants is to make an efficient, smaller-resolution map from the crap the scanner produced :slight_smile:
Just read his post carefully and look at the image he provided, then you’ll understand.
Projection painting is not really an option, I guess, as it is far from accurate, while the 3D scanner stitches the result into an almost accurate texture map.

Another problem, I think, with projection painting is that you need pictures along each global axis,
plus more pictures at specific angles like 45 or 60 degrees.
Then you can do a nice job!

happy 2.5

you need pictures along each global axis and more pictures at specific angles like 45 or 60 degrees, then you can do a nice job!

Right, that is what he has. His UV image is a composite of all the views. The only problem is that there are 10 images with all the background included! By repainting, you paint that information onto a new single UV map.

No, he has various different angles off more than one global axis, and he would have to match the object orientation by hand, as we don’t have a camera-matching tool in Blender; it would additionally create blended seams between the textures. A lot of to-dos, with lots of inaccuracy.
Even worse, the current texture has the same planes of geometry (see the socket of the statuette) mapped to various different images.

Don’t you think it’s faster and more precise to press Ctrl+D, create a new unwrap, and bake the existing texture into a new one? :smiley:

Yeah, I suppose you are right: bake the existing texture onto a new UV map / blank image. I did some texture/projection painting, so I am familiar with how those UV texture layers work, but baking it that way is a little different. Never tried that, but it should work.

Thanks for all the contributions. I haven’t figured it out in Blender just yet. In the meantime, I wrote a program to do what I needed. It’s not production quality, just a tool to get the job done.

Here’s the before:


And here’s the after:


If you’d like to know more about the tool, written in Java, I’m quite happy to chat. I’ve posted more on my blog, andrewhatch (dot.) net, or e-mail me at andrew (at) andrewhatch (dot.) net. Oh, and in answer to an earlier comment: yep, this is a University project I’m working on!

To be blunt, I think you wasted a lot of time writing this tool. The workflow to bake a texture is pretty rigid and can be learned in an hour or two, and it would give you a dense texture map with stitched islands and high usage of the texture space.

Bluntness aside, thanks for that. I’m quite happy that I can produce the models I need. Thanks again for your input. I’ll be sure to plug away at learning Blender.