I’m really new to Blender; this is my first post, and I’d like to ask about a problem I’m having when rendering very low-resolution images. I’ll describe the problem here; if you want to know why I need this, read below.
For the test I’m using a cube and a camera with a special isometric configuration. The cube has a texture in which every face is black, and the unused part of the image is white.
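For anyone trying to reproduce the setup, the usual angles for a true isometric view with an orthographic camera can be computed as below (a sketch of one standard convention; my exact camera configuration may differ slightly):

```python
import math

# One common "true isometric" camera convention: an orthographic camera
# rotated 45 degrees around Z and atan(sqrt(2)) degrees around X.
rot_x = math.degrees(math.atan(math.sqrt(2)))  # roughly 54.736 degrees
rot_z = 45.0
print(rot_x, rot_z)
```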
If I set the render resolution to 73x73 or higher and the background color to white, I get the expected result: the figure of a black isometric block on a white background.
But if I set the resolution to 72x72 or lower, I get the same figure but with white along some of the cube’s edges:
It looks like Blender is sampling pixels from outside the UV map when the resolution is lowered. Is this normal? Am I doing something wrong? I can provide the .blend file if needed. I need the image to be 24x24 pixels. Rendering at a higher resolution and resizing afterwards is a workaround, but it would be much faster to render directly to 24x24 images.
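For the record, the resize workaround looks something like this (a sketch using Pillow, the maintained PIL fork; the choice of NEAREST over LANCZOS is just an option, not a requirement):

```python
from PIL import Image

def downscale_block(img, size=(24, 24)):
    # Render at a resolution that works (73x73 or higher), then shrink to
    # the 24x24 target. NEAREST keeps hard block edges; LANCZOS would
    # antialias them instead.
    return img.convert("RGBA").resize(size, Image.NEAREST)

# stand-in for a rendered frame, just for illustration
rendered = Image.new("RGBA", (73, 73), (0, 0, 0, 255))
small = downscale_block(rendered)
```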
Why do I want this?
I’m helping with the code of Minecraft Overviewer (it renders Minecraft maps with a Google Maps interface), and I’d like to improve the textures used to render the maps. At the moment Overviewer works this way: it renders a fake isometric image of every Minecraft block and then pastes these textures into bigger images, using the game’s world data, so that the result looks like a map.
The blocks are rendered using matrix transformations through PIL (the Python Imaging Library), and they tend to have transparent holes and don’t tile perfectly (which can be a big issue when composing the map).
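To give an idea of the technique, a PIL affine transform of this kind can be sketched as follows (a hypothetical illustration, not Overviewer’s actual code — the function name and the shear coefficients are made up for the example):

```python
from PIL import Image

def fake_top_face(tex):
    """Sketch of faking an isometric top face with a PIL affine transform.

    Image.transform maps each OUTPUT pixel (x, y) back to an input pixel:
        x_in = a*x + b*y + c
        y_in = d*x + e*y + f
    where data = (a, b, c, d, e, f).
    """
    w, h = tex.size
    # widen the texture for the 2:1 isometric aspect ratio
    squashed = tex.resize((w * 2, h))
    # shear it; output pixels that map outside the source come out
    # transparent, which is one source of the "holes" mentioned above
    return squashed.transform((w * 2, h * 2), Image.AFFINE,
                              (1, 0, 0, -0.5, 1, 0))

# stand-in for a 16x16 block texture
tile = Image.new("RGBA", (16, 16), (120, 80, 40, 255))
top = fake_top_face(tile)
```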
I had the idea of rendering these textures with Blender and then using them in Overviewer. I did some tests in Blender, and the blocks it renders are perfect: they have no holes and they tile perfectly. But then I ran into the problem described above.
The images Overviewer uses to compose the map are 24x24-pixel RGBA images.