Photoclinometry! Help with Python to CNC (WIP)

I am using PIL and trying to create a heightfield mesh from a greyscale image. I have been able to read the image and its size to create the mesh points, but I am having difficulty indexing the faces to create them. Here’s the code so far:

import Blender
from Blender import *
from math import *
import PIL
from PIL import Image

#CNC variables
CD = .25    #cutting depth
CS = 125    #cutting speed ipm
PS = 12     #plunge speed ipm


pic = Image.open("c:/piltest.jpg")
me = NMesh.GetRaw()
for j in range (0,pic.size[0]):
    for i in range (0,pic.size[1]):
        v = NMesh.Vert(j,i,0.0)
        me.verts.append(v)
NMesh.PutRaw(me, "piltest", 1)
Blender.Redraw()

My test image, piltest.jpg, is simply a 10 x 12 greyscale image. The above code creates the points, but now I need to know formulaically how to create or define the faces, since the greyscale images will vary in x and y. My Z will be created later by getpixel((x, y)), which returns the 0-255 greyscale value of the pixel. I will normalize it to the cutting depth variable above: getpixel((x, y)) * (CD / 255) = Z.

I guess I’m looking for a way to algorithmically roll through the matrix to create the faces without errors.

Thanks,

Rob

Here’s an attempt I made at facing my data:

import Blender
from Blender import *
from math import *
import PIL
from PIL import Image

#CNC variables
CD = .25    #cutting depth
CS = 125    #cutting speed ipm
PS = 12     #plunge speed ipm


pic = Image.open("c:/piltest.jpg")
me = NMesh.GetRaw()
for j in range (0,pic.size[0]):
    for i in range (0,pic.size[1]):
        v = NMesh.Vert(j,i,0.0)
        me.verts.append(v)
for u in range (0,pic.size[0] - 1):
    f = NMesh.Face()
    f.v.append (me.verts[u])
    f.v.append (me.verts[u+1])
    f.v.append (me.verts[u+1+pic.size[0]])
    f.v.append (me.verts[u+pic.size[0]])
    print u
    print u + 1
    print u + 1 + pic.size[0]
    print u + pic.size[0]
    me.faces.append(f)
NMesh.PutRaw(me, "piltest", 1)
Blender.Redraw()

I believe my error is now in how Blender counts verts. Does it go left to right and start again at the next line, or left to right and then back from right to left at the end of each line? Thanks.

Rob

The order is defined internally by how the verts are stored in the data array; it has nothing to do with their spatial position.

Martin

Thanks, but do you have any info. or leads as to how I can rearrange or determine the order?

Rob

Get the face data, then you’ll know which verts are assigned to a face.
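
For example, something like this (untested) prints the vert indices used by the first few faces, so you can see how they map onto the storage order:

# dump the vert indices of the first few faces you've already created
for f in me.faces[:5]:
    print [v.index for v in f.v]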

Well, it works!

Here’s the code:

import Blender
from Blender import *
from math import *
import PIL
from PIL import Image

#CNC variables
CD = 5    #cutting depth
CS = 125    #cutting speed ipm
PS = 12     #plunge speed ipm

pic = Image.open("c:/Pisces.jpg")
n = pic.size[1]
me = NMesh.GetRaw()

for i in range (0,pic.size[0]):
    for j in range (0,pic.size[1]):
        v = NMesh.Vert(i,j,pic.getpixel((i,j))*CD/255)
        me.verts.append(v)

for u in range (0,pic.size[0] - 1):
    for w in range (0,pic.size[1] - 1):
        f = NMesh.Face()
        # verts were appended row by row (n = pic.size[1] verts per row),
        # so vert (u, w) lives at index u*n + w
        f.v.append (me.verts[u*n+w])
        f.v.append (me.verts[u*n+w+1])
        f.v.append (me.verts[(u+1)*n+w+1])
        f.v.append (me.verts[(u+1)*n+w])
        f.smooth = 1
        me.faces.append(f)

NMesh.PutRaw(me, "piltest", 1)
Blender.Redraw()

It creates a 3D mesh from a greyscale image. I’d like to add an interface and do the conversion to greyscale through PIL. I thought I could get the same result by displacement mapping the image onto a plane, but the exported geometry was only the flat plane without the displacements; still, it helps to visualize the same thing my program accomplishes. Next, I am going to generate the necessary G-code through Python in Blender so I can mill the piece on my CNC router table. I’ll post pics soon. Thanks.
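
Just to sketch what I have in mind for the G-code pass (untested and very simplified; it assumes the verts are still in row-major order, that Z is already the cut depth I want, and uses the CS and PS variables from the script as the feed and plunge rates):

# rough sketch only: raster the heightfield row by row and emit simple G0/G1 moves
n = pic.size[1]                        # verts per row
safe_z = 0.1                           # retract height above the stock (made up)
lines = ["G20"]                        # program in inches
for i in range(0, pic.size[0]):
    row = me.verts[i*n:(i+1)*n]
    if i % 2:
        row.reverse()                  # serpentine passes to cut down on travel
    lines.append("G0 Z%.4f" % safe_z)
    lines.append("G0 X%.4f Y%.4f" % (row[0].co[0], row[0].co[1]))
    lines.append("G1 Z%.4f F%d" % (row[0].co[2], PS))   # plunge at plunge speed
    for v in row[1:]:
        lines.append("G1 X%.4f Y%.4f Z%.4f F%d" % (v.co[0], v.co[1], v.co[2], CS))
# the Z sign/offset will still need adjusting to the stock zero
out = open("c:/piltest.nc", "w")
out.write("\n".join(lines) + "\n")
out.close()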

Robert

Here is a JPG of my son and the resulting screenshot of using the program to transform the image into a 3D mesh:

http://www.thewoodenimage.com/images/greyscale.jpg

I am machining it in wood to see how it looks. The difficulty is working in Gimp to change the greyscale levels to best get the relief desired. For example, most dogs’ noses are black; this would make a deep cut, while the surrounding light fur would be raised or uncut. Threshold and levels seem to be the way to preprocess the image.
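
I might end up doing the levels adjustment in PIL instead of Gimp; here’s a rough sketch of what I mean (untested, with made-up black and white points):

# clamp to hypothetical black/white points, then stretch back out to 0-255
lo, hi = 40, 220
def levels(p):
    p = min(max(p, lo), hi)
    return (p - lo) * 255 / (hi - lo)

pic = pic.convert("L").point(levels)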

Rob

One of the problems I am having is aesthetic. I normalized the greyscale data by multiplying it by CD (.25 in this case) and then dividing by 255 to arrive at an inch value to cut to. It works fine, but islands seem to develop. I think I need a way to normalize the data logarithmically instead of linearly. Does anyone have any input on this?

e.g.

log(pic.getpixel((i, j)))
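
Something like this is the kind of thing I mean (untested; using log(1 + p) so a black pixel still maps to zero):

from math import log

# sketch: logarithmic instead of linear normalization of the 0-255 value
for i in range (0,pic.size[0]):
    for j in range (0,pic.size[1]):
        p = pic.getpixel((i,j))
        z = CD * log(1 + p) / log(256)   # 0 -> 0, 255 -> CD, mid-tones pushed up
        me.verts.append(NMesh.Vert(i,j,z))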

Thanks.

Robert Herman

I’ll post pictures of first cut once it’s done. It’s been running for 2 hours now.

I feel like I’m having a one-way conversation. Do you not love me anymore?

I just found software that does what I’m trying to do in Blender, but it costs $2,400 (on sale for $975)!

http://www.designscomputed.com/vs3d/examples/img_emboss.html

If my script gets going, it will be free, and along with Blender it would do pretty much what the expensive software does.

Looking at his example, he states he didn’t have to do any manual work on it using the “hammer” tool in his software. My images have what appear to be topographical relief steps. PIL is basically grabbing the greyscale value 0-255, and the code is just making verts and faces. Should I be placing splines, NURBS, or whatever on the data instead of verts and faces? Please, somebody help me! Thanks.

Rob

You may find something interesting here:

http://users.commspeed.net/tbabbitt/rbranch_strangness5.htm

Halfway down is the “Shape from Shading” and the 2 “DEM” items just above it: all are basically what you are doing. All the scripts are Python so you should be able to scrounge something from them or get horribly sidetracked (in which case I’ll just say “adios” now and hope you come back sometime).

Above link referenced from http://www.vex.net/parnassus/

%<

Thanks. The images aren’t coming up on the site for that particular program, but it sounds like I’m about to waste the afternoon away. It just might be the ticket. I’m thinking of offering anyone who gives me the best lead to my solution a free 8" x 10" foam or wood carving of a photo of their choice as a reward. Thanks again.

Robert Herman

You have possibly already considered this, but if the problem is the terracing effect of having 256 levels of grey, why not convert your photo into a 16-bit greyscale? This will give you a greatly extended vertical resolution. It may involve a lot of fiddling with photo manipulation tools and file formats, but I think that would be preferable to messing about with the limited Python/NURBS tools.

Terrain editing tools may possibly be the solution; after all, what you are carving is effectively a terrain in wood. http://koti.mbnet.fi/pkl/tg/TerraConv.htm is a tool that converts uncompressed TIFFs to Terragen landscapes, but it also exports 16-bit TIFF and PGM files.
You could try importing a colour photo, then exporting it as a greyscale.
The output is binary, I think, so it might need some Python work to get it into Blender, or a photo manipulation tool that handles 16-bit, but in theory it should create a surface as smooth as the gradients in the original photo.
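
If you did go down that road, PIL can at least read the larger values directly; something like this (untested, and assuming the file really carries 16-bit data, e.g. a 16-bit TIFF or PGM) would slot into the same loop:

from PIL import Image

pic = Image.open("c:/terrain16.tif")   # hypothetical 16-bit greyscale source
pic = pic.convert("I")                 # PIL's 32-bit integer pixel mode
CD = 0.25                              # cutting depth, as in your script
maxval = 65535.0                       # full range of 16-bit data
for i in range(0, pic.size[0]):
    for j in range(0, pic.size[1]):
        z = pic.getpixel((i, j)) * (CD / maxval)
        # ...feed z into NMesh.Vert(i, j, z) as before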

I think that using Blender to output CNC data is a superb idea, as I have been toying with the idea of looking into CNC as a means of realizing some of the shapes created by my Python script. If you have any words of advice on the subject, I would be grateful. For instance, is it practical to build your own CNC rig, or is an off-the-shelf purchase better?

Hope this has been helpful.

Andy

Fligh %: Thanks. Photoclinometry is the key. I’m trying to rework my script right now. I’ll post the results tonight.

serendipiti: Thanks. My program already takes the image, uses PIL to convert it to greyscale, and then reads the 0-255 value at each pixel to assign a height. I don’t need 16-bit, since a cutting depth in Z of .25 / 255 < .001" is already fine enough. My issue was that the heightfield mesh it generates is not a “true” representation of relief. Photoclinometry, using one camera and the sun’s or light source’s zenith angle, is amazingly sufficient for what I am after.

If you can afford it, buy something. If you love to tinker and have the time and little money, you can learn a lot by building any kind of motion control system. I’ve built the eggbot seen on this site: http://www.taomc.com/

Truly inspirational! I myself am into robotics and have built many animatronics and mechatronic devices. Feel free to ask anything about cnc or motion control in general.

Rob

BTW, this is my first Python script, let alone my first for Blender. I am having a brain freeze on this particular piece of code:

for i in range (0,pic.size[0]):
    for j in range (0,pic.size[1]):
        for zh in new_line:
            v = NMesh.Vert(i,j,zh)
            me.verts.append(v)

zh is a sequence object. My aim is to use its value as the Z value, but no matter how I rework this loop, it doesn’t assign the sequential zh value. Loop headaches! Do I need to convert it to an array instead of a sequence? Any thoughts? Should I post this as a new topic? Thanks.

Rob

Maybe put your z height values into a 2D array named zh, with dimensions of pic.size[0] x pic.size[1], then try something like this.


for i in range (0,pic.size[0]):
    for j in range (0,pic.size[1]):
        v = NMesh.Vert(i,j,zh[i][j])
        me.verts.append(v)

I’m not really an expert myself, but your code appears to be a triple loop going over a 2D array, and I’m not sure how the line “for zh in new_line:” works.
Is new_line the sequence and zh the local variable name? It looks like you create a new vert for every element of new_line at every (i, j), so you’d end up with pic.size[0] * pic.size[1] * len(new_line) verts. Are you getting “list index out of range” errors?

Hope I haven’t misinterpreted again. Thanks for the link, lots of potential mini projects there.

andy

Wow, I’m surprised you haven’t heard from a ton of potential end-users about this!

I, for one, am very excited. Please do continue! :)

Thanks, all. I’ve tried numarray, Blender.Mathutils.Matrix, and Numeric; I just can’t seem to find a simple way of making the sequence new_line the Z value for the NMesh.Vert line below:

new_line = elevation_line(line_data, zenith_angle, scale_factor)
#print list(new_line)

for i in range (0,pic.size[0]):
    for j in range (0,pic.size[1]):
        v = NMesh.Vert(i,j,new_line[i]) #Here's where I'm stuck!
        me.verts.append(v)

Any takers? If this were in C or Java I could do it. Python is new ground for me. Me likes it, and me hates it. Thanks.

Robert

I assume that elevation_line returns a single flat list of height values.
It looks like that piece of code will take the first chunk of new_line, up to pic.size[0] values, and repeat it pic.size[1] times.
Make new_line into a list of lists, i.e.


new_line = [ [first row of heights, up to the width of the pic data],
             [second row following immediately on],
             .
             .
             .
             [final row, at the depth of the pic data] ]

That should make the 2d array which can be referenced by

        v = NMesh.Vert(i,j,new_line[i][j])
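
If elevation_line actually hands back one long flat list for the whole image, you could chop it into rows first, something like this (untested; new_line_2d is just a name I made up):

# slice the flat list into pic.size[0] rows of pic.size[1] heights each
new_line_2d = []
for i in range (0,pic.size[0]):
    new_line_2d.append(new_line[i * pic.size[1] : (i + 1) * pic.size[1]])

# then reference it as new_line_2d[i][j] in the vert loop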

Hope this helps.

andy

I don’t know if you are like me and Python is new, or if it’s just new as regards Blender/Python, but I found out that without Numeric, numarray or Blender.Mathutils you are left with Python’s limited array module. I don’t think Python’s array module supports anything other than a one-dimensional array, but I’m not sure. Anyway, I’ve got it working! Yea! BUT it is extremely slow. I know it is due to the update function, since the new_line values are calculated seemingly instantly. Here’s the working snippet:

for i in range (0,pic.size[0]):
    for j in range (0,pic.size[1]):
        v = NMesh.Vert(i/20,j/20,0)
        me.verts.append(v)

for v in me.verts:
    v.co[2] = (new_line[v.index])/100
    me.update()
    print v.index

Anybody know of a better or more efficient method? I’ve tried the following:

v = NMesh.Vert(i/20,j/20,new_line[me.verts.index])

But it didn’t work; it complained about a number or string being required. I thought the float in new_line, indexed by me.verts.index, was the float value? Thanks.
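
The only other thing I can think of is pulling me.update() out of the loop so it only runs once at the end, something like this (haven’t timed it yet):

# set all the Z values first, then rebuild the mesh data a single time
for v in me.verts:
    v.co[2] = new_line[v.index] / 100.0
me.update()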

Rob

I will post the working code once I solve this issue.