A Tip For Optimizing Colorless Textures

Do NOT use JPEGs!

Height maps tend to be 16-bit signed integer.
Displacement can be 32-bit floating point (most of the time the values run from 0 to 1, but they can be anything).
Specular maps tend to be 8-bit grayscale, though 16-bit unsigned can be used.

So the best format would be a multi-layer 32-bit floating-point TIFF.
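As a minimal sketch of that idea, here is how two float layers could be written into one multi-page 32-bit float TIFF. Pillow is used as an assumption (OpenImageIO or tifffile would work just as well); the layer contents and sizes are made up.

```python
# Sketch: storing height + displacement layers in one multi-page
# 32-bit float TIFF. Pillow's "F" mode holds one float32 sample
# per pixel; save_all/append_images adds further pages (layers).
import os
import tempfile

import numpy as np
from PIL import Image

height = np.random.rand(64, 64).astype(np.float32)        # stand-in height map
displacement = np.random.rand(64, 64).astype(np.float32)  # stand-in displacement

path = os.path.join(tempfile.mkdtemp(), "layers.tif")
pages = [Image.fromarray(layer, mode="F") for layer in (height, displacement)]
pages[0].save(path, save_all=True, append_images=pages[1:])

# Reading back: seek() selects the page (layer).
with Image.open(path) as im:
    im.seek(1)
    restored = np.asarray(im)
```

Because TIFF here is lossless, the float values survive the round-trip bit-exactly, which is exactly what a JPEG intermediate cannot guarantee.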

Or one could rebuild the Blender source and include HDF5 and VIPS support; these two formats are designed for multi-dimensional data sets (many layers).

(I was thinking of game developers willing to shrink assets as much as possible.)

We use S3 (S3TC) compression for games, e.g. *.dds files.

Won't work this way for gamedevs, unfortunately.

This is a bit off-topic again, but allow me this short note for the sake of correctness.
This is only really important to game development:
Texture compression will always assume that you fed it the uncompressed original. It doesn't recognize other compression formats and then improve on them.

By using JPEGs as your intermediate file format you enforce compression artifacts an additional time, for no benefit. (“Additional” because modern game engines take care of the texture's final form in the background anyway.)

Stick with lossless texture formats like PNG or TGA. :slight_smile:
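The double-lossy penalty described above is easy to demonstrate. The sketch below (using Pillow; the quality setting and image contents are arbitrary) round-trips the same image through PNG once and through JPEG twice, then measures the per-pixel error against the original.

```python
# Demo: a PNG intermediate is bit-exact, a JPEG intermediate
# accumulates error on every re-encode.
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
original = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

def roundtrip(img, fmt, **kw):
    """Encode to `fmt` in memory and decode again."""
    buf = io.BytesIO()
    img.save(buf, fmt, **kw)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

png_copy = roundtrip(original, "PNG")
jpg_copy = roundtrip(roundtrip(original, "JPEG", quality=90), "JPEG", quality=90)

png_err = np.abs(np.asarray(png_copy, int) - np.asarray(original, int)).mean()
jpg_err = np.abs(np.asarray(jpg_copy, int) - np.asarray(original, int)).mean()
```

`png_err` comes out exactly zero, while `jpg_err` does not, even at quality 90, and a real game engine would then compress that already-damaged data a third time.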

Interesting. I have used pyramiding (mipmaps) in images for the last 20 years, I think, as image processing and GIS software needs it for large satellite images (and other image sources), especially when you tile the images together into an even larger mosaic and display it in a web viewer, for example. There are open-source tools out there that can create pyramids (also called overviews) in images (mostly GeoTIFF) if someone would like to try. The GDAL tools are the most popular, and open source as well.

http://www.gdal.org/gdaladdo.html
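The core of what tools like gdaladdo build can be sketched in a few lines: each pyramid level is a 2×2 average of the level below it. This is pure NumPy, with power-of-two sizes assumed for simplicity (real tools handle arbitrary sizes and resampling methods).

```python
# Minimal image pyramid (mipmap/overview) builder: each level
# halves the resolution by averaging 2x2 blocks of the level below.
import numpy as np

def build_pyramid(img):
    levels = [img]
    while min(img.shape) > 1:
        h, w = img.shape[0] // 2, img.shape[1] // 2
        # Reshape into (h, 2, w, 2) blocks and average each 2x2 block.
        img = img[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
        levels.append(img)
    return levels

base = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
pyramid = build_pyramid(base)  # 64x64 down to 1x1
```

A viewer then picks the level closest to the on-screen resolution instead of resampling the full image every frame, which is why huge mosaics stay responsive.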

I would also add that most (all?) renderers will decompress the image before it gets into memory (both CPU and GPU), so compressing textures will not save you any memory and just adds time (for decompression).

Information can be stored in an image,

like this screenshot, for example:

the asteroid Vesta
https://3-t.imgbox.com/ALslo209.jpg

red = latitude ( +90 to -90 )
green = longitude ( 0 to 360 )
blue = radius in km ( 3.2963 to 31.0446 )
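Decoding such an image back to physical values is just a linear remap per channel, using the ranges listed above. A small sketch (the image itself is not reproduced here; `r`, `g`, `b` stand for normalized 0..1 channel values, and the mapping direction follows the listed order, which is an assumption):

```python
# Map a normalized 0..1 channel value back onto its physical range.
def decode(norm, lo, hi):
    return lo + norm * (hi - lo)

r, g, b = 0.5, 0.25, 1.0  # example normalized pixel values

latitude  = decode(r, 90.0, -90.0)      # degrees, +90 at r = 0
longitude = decode(g, 0.0, 360.0)       # degrees
radius_km = decode(b, 3.2963, 31.0446)  # km
```

The same trick works in reverse for encoding: any scalar field can be packed into a channel as long as you record its min/max somewhere.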

Not quite right. They use GPU-friendly compression algorithms (mostly a kind of block compression) and let the hardware decompress them on every access. These algorithms are designed so they can be implemented in hardware without losing much performance. All the compression methods you can store in DDS files are such algorithms.

Only there's still a problem I found for doing certain things with them in Blender. Apparently the node that outputs vertex color is affected by the current color-space setting for the render. That becomes a real pain when you want to use vertex color the same way you'd use an image texture in its “non-color data” mode. I wonder if anyone will come up with a fix for that? (If not already; it's been a while since I tried it.)

My solution the last time I used vertex colors that way was a hack: making up my own corrective curve to remove the color space so I'd get linear values. But it's a bit of a PITA to get just right, and I suspect it would break if somebody changed color spaces and tried using that material in their renders. Still, I'll admit using vertex color linearly is pretty neat for making animated segmented displays with material zones, even if it's a kind of weird kludge for doing so.
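For reference, the “corrective curve” the post describes by hand is essentially the inverse of the standard sRGB transfer function. A sketch of that exact math (this is the textbook sRGB EOTF, not the poster's actual node setup):

```python
# Undo the sRGB encoding on a single 0..1 channel value,
# returning the linear value a "non-color data" texture would give.
def srgb_to_linear(c):
    if c <= 0.04045:              # linear segment near black
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

mid = srgb_to_linear(0.5)  # an sRGB mid-gray is ~0.214 linear
```

A node-based curve can only approximate this piecewise function, which is why getting it “just right” by eye is such a pain.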