Relief map generation

Here is something I have been thinking about for quite some time:
I have a color-coded map (an image) with about 16 different colors identifying different heights (as well as sea and rivers). On top of this base image I want to add more detail to turn it into a real relief map. This should be an automated process (I am thinking of a script). The end result, however, will again be an image seen from directly above (no perspective). There are quite a few questions that should be answered beforehand:

  • Is Blender the right tool for that?
  • Are there other options (tool-wise)?
  • Is there an altogether other approach to get a relief map image?
  • On the premise that it is done with a Python script, how can this be integrated into an application outside of Blender (e.g. run from the console, or from a Java application using Jython)?

Any insights on these issues (or further considerations) are very welcome.


What 16 colors are in that image? Grayscale?

You could turn it into grayscale, if it isn’t already, and use it as the displacement map for a Displace modifier. This could be done with a Python script, but also with a photo manipulation application (e.g. Photoshop’s Gradient Map).

In a photo editor, you could also apply blur filters and the like to remove the hard edges between different colors and turn them into smooth transitions.

Basically I was thinking of green, brown, and blue hues. However, these colors could be mapped to grayscale.

Why not read the image and, as a function of pixel color, create a mesh whose height is determined by that color?

happy bl

That’s my basic thought. However, is such an approach practical to do in Blender, or even using Python? I’m more of a Java guy, so I could imagine creating such a height map in Java and then outputting the mesh to another program or tool for further use.
My main point is that the whole process should be automated.
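The color-to-height lookup itself needs nothing Blender-specific. A minimal sketch in plain Python; the palette values and the function name are made up for illustration:

```python
# Hypothetical subset of the 16 map colors, each mapped to a height value.
PALETTE = {
    (0, 0, 255):   -10.0,  # sea
    (0, 128, 0):     5.0,  # lowland green
    (139, 69, 19):  50.0,  # brown highland
}

def heights_from_pixels(pixels, width):
    """pixels: flat row-major list of (r, g, b) tuples -> 2D grid of heights."""
    rows = len(pixels) // width
    return [[PALETTE.get(pixels[y * width + x], 0.0) for x in range(width)]
            for y in range(rows)]

row = [(0, 0, 255), (0, 128, 0), (139, 69, 19)]
print(heights_from_pixels(row, 3))  # [[-10.0, 5.0, 50.0]]
```

The resulting grid could then be written out (e.g. as a grayscale PNG or an OBJ mesh) for whatever tool does the rendering.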

I found the scripts on this page useful for image pixel manipulation in Blender.

Well, we did some tests a while ago with images and it is not very fast,
and if you need to loop over many colors it will get sluggish.

What size of image are we talking about here, KB or MB?

If you can find a working PIL, it might work faster!

Can you show an image, just to see how it looks?

Also, if you make a mesh with one vert per pixel, an image with megapixels will give a mesh with millions of verts;
you can try to minimize the verts in the loop, but that will be even slower!

happy bl

It’s not that slow if you use it right and have a decent CPU.

import bpy
from mathutils import Matrix
from time import time
import cProfile

# Native-typed array is slower for pixels, casting?
#from array import array

def main():
    realpath_pic = "C:/Users/CoDEmanX/Desktop/heightmap_tmp.png"
    ts = time()
    try:
        pic =
    except RuntimeError:
        raise NameError("Cannot load image %s" % realpath_pic)
    width, ht = pic.size[:]
    me ='piltest')
    picmem1 = pic.pixels[:]  # copy the pixel buffer once; indexing the copy is much faster
    pic_data_len1 = len(picmem1)
    # Note: true integer division (a // b) benchmarks faster than int(a / b)
    pixel_len1 = pic_data_len1 // (width * ht)
    scale = 15
    # 2x mul + 1x add vs. generator call = the former seems 0.02 sec faster
    #def fourths(limit):
    #    yield from range(0, limit, 4)
    #gen = fourths(width * ht * 4)
    #vs = [(i, j, picmem1[gen.__next__()]) for i in range(ht) for j in range(width)]
    vs = [(i, j, picmem1[(j + (i * width)) * 4]) for i in range(ht) for j in range(width)]

    fs = []
    fs_append = fs.append
    for u in range(0, ht - 1):
        for w in range(0, width - 1):
            v1 = u * width + w
            v2 = u * width + w + 1
            v3 = (u + 1) * width + w + 1
            v4 = (u + 1) * width + w
            fs_append((v1, v2, v3, v4))

    # Build the mesh directly (faster than me.from_pydata(vs, [], fs))
    me.vertices.add(width * ht)
    me.vertices.foreach_set("co", [item for sublist in vs for item in sublist])

    nbr_faces = len(fs)
    me.loops.add(nbr_faces * 4)
    me.polygons.add(nbr_faces)
    me.polygons.foreach_set("loop_start", range(0, nbr_faces * 4, 4))
    me.polygons.foreach_set("loop_total", (4,) * nbr_faces)
    me.loops.foreach_set("vertex_index", [item for sublist in fs for item in sublist])

    mat = Matrix()
    mat[2][2] = scale
    me.transform(mat)  # scale the heights

    # Update mesh with new data and link object to scene
    me.update(calc_edges=True)
    ob ='ob1', me)
    ob.location = (0, 0, 0)
    ob.show_name = True

    print('Time in seconds =', time() - ts)"main()")

Did a test on a very small pic and I got 250,000 verts!

How do you decimate that quickly?


Decimate modifier?
Remove doubles?
Limited dissolve?

Or simply scale down the input image in any photo editing software…

Did you test that on a 250K-vert mesh?

My computer is asking me for an aspirin!

I’ll have to modify the loop to deal with that;
it will take longer, but at least less than trying to simplify it with Decimate!


How do you find the RGB values for the pixels?

Is it

        y1=picmem1[(j + (i*width)) * 4 ]
        r1=picmem1[(j + (i*width)) * (4 -1)]
        g1=picmem1[(j + (i*width)) * (4 -2)]
        b1=picmem1[(j + (i*width)) * (4 -3)]

but that does not seem to work.

So how do you get the colors?


p = (i*width + j) * 4
r, g, b, a = picmem1[p:p+4]
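A quick check of that indexing on made-up data (a flat list of RGBA floats, which is the layout Blender’s image.pixels gives you — the channels are consecutive, so you slice, not multiply the index):

```python
width = 2
picmem1 = [1.0, 0.0, 0.0, 1.0,   # pixel (i=0, j=0): red
           0.0, 1.0, 0.0, 1.0]   # pixel (i=0, j=1): green
i, j = 0, 1
p = (i * width + j) * 4
r, g, b, a = picmem1[p:p + 4]
print(r, g, b, a)  # 0.0 1.0 0.0 1.0
```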

OK, got the data for the colors.

Now, this was a PNG file,
and black seems to be 1,1,1 and white 0,0,0.

Is this normal?


Doesn’t sound right… did you enable the invert option?

Or maybe the formula is wrong; try swapping i and j?

Use a grid with the number of vertices equal to the number of pixels.