Homemade 3D laser scanner - in progress

I have been playing with photoclinometry and Blender (see: https://blenderartists.org/forum/viewtopic.php?t=36528&highlight=photoclinometry ) to create 3D meshes from single-camera photographs with some luck, but now I’ve started making a cheap (< $100) 3D scanner. This will allow me to make more accurate scans of objects to cut on my CNC machines.

I am getting stuck on how to triangulate the 3D point on the object from its location in the captured 2D image from my camera. I capture one image without the laser line and one with it, then use PIL to difference the images, which leaves me with pretty much just the laser line. The angle between the camera and the laser is fixed at approximately 20 degrees, and I will use a stepper motor to rotate the object on a turntable. Using PIL, I can fetch the x, y of the line in the 2D image. Does anyone know how to triangulate this and achieve the x, y, and z of the point on the 3D object? I can then mesh the point cloud and, voila!, have a decent mesh inside Blender.

Sorry about the long post. I am going to work on the mechanics now and will post pics of my progress. I found a Python image-capture module that works well. I will also post plans, software, and circuits for those interested. Polhemus, eat your laser diode out! Thanks.
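The differencing step described above can be sketched roughly like this, assuming Pillow/PIL and 8-bit grayscale captures; the function name and the threshold value are my own choices, not from the original setup:

```python
# Sketch of laser-line extraction via image differencing, assuming PIL/Pillow.
# The threshold and the "brightest pixel per row" heuristic are assumptions;
# a real setup may need smoothing or sub-pixel peak fitting.
from PIL import Image, ImageChops

def extract_laser_line(img_off, img_on, threshold=40):
    """img_off/img_on: grayscale PIL images without/with the laser.
    Returns a list of (x, y) pixel coordinates, one per image row."""
    diff = ImageChops.difference(img_on.convert("L"), img_off.convert("L"))
    w, h = diff.size
    px = diff.load()
    points = []
    for y in range(h):
        # take the brightest above-threshold pixel in each row as the line centre
        best_x, best_v = None, threshold
        for x in range(w):
            if px[x, y] > best_v:
                best_x, best_v = x, px[x, y]
        if best_x is not None:
            points.append((best_x, y))
    return points
```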

Rob

I’m really interested in this kind of experiment.
Keep it up.

I think you will need at least one more camera angle to get a point cloud, because with a single view you can’t tell whether the laser is lighting up a point that is further to the side or further out from the camera.
A cheap way to get a second view of the line without buying another camera would be a mirror reflecting the object from another angle within the field of view. Each image would then carry two sets of information which, combined with the angle of the turntable, should be enough for a 3D model.

No.
If the laser beam is at an angle to the camera and points at the center of rotation, then the beam appears further to the side where the object is more voluminous, and closer to the middle where the surface is nearer the center.

Anyway, I think you can get more precision in concave zones if you use two beams (one on each side of the camera) :wink:

Sorry for my poor English :stuck_out_tongue:

rherman…
Some reference for you… I just happened to find it right after I read your post.

Brandano: you can do this with one camera. I remember at the LOTR exhibition they were using this method to scan in a point cloud for the ogre. It was handheld (so possibly used motion tracking of some sort) but had one camera and looked like a barcode scanner (I think; it might have had one beam, or a bunch of intersecting beams, I really can’t remember).

rherman: I found some code on the web to do this, but couldn’t get it to compile :frowning: They were just using shadows and a pencil, and using Blender to join and display the mesh. This page has the code:
http://sans.chem.umbc.edu/~nicholas/3dscan/
some more links to 3d scanning:
http://pheatt.emporia.edu/projects/3D_Scanning/
http://www.thaumaturgy.net/~etgold/scanner/
http://robotics.cs.columbia.edu/~pblaer/eclipse/

I’m really interested to see what you come up with, especially if I can get it to work on my computer - binaries or Python code for Blender would be awesome! It would be great to make a model in plasticine and be able to scan it in…

Sorry, looks like I had my brain turned off; what Caronte said is quite right. As long as the laser beam points at the center of the turntable, any sideways offset of the laser line is directly proportional to the distance of the lit point from the center. You just need to calibrate this on a known object, correct for any perspective-induced error (if any), and you should have your point cloud.
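A minimal sketch of the linear calibration Brandano describes, assuming one reference scan of an object of known radius; the names and units here are hypothetical:

```python
# One-shot linear calibration: scan a reference object of known radius,
# measure its laser-dot offset in pixels, and fix the mm-per-pixel scale.
# This ignores perspective error, as the post allows for a first pass.
def calibrate(known_radius_mm, measured_offset_px):
    """Return the mm-per-pixel scale from one reference measurement."""
    return known_radius_mm / measured_offset_px

def offset_to_radius(offset_px, scale):
    """Convert a laser-line pixel offset into a radius from the turntable axis."""
    return offset_px * scale
```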

Wow! That’s a really cool idea… I think I’ll also try to build and program one of those…

Bekecs: Cool link, thanks! The cool thing about PIL and differencing an image with the laser on and then off before the next rotation step is that you don’t have to shut off the lights!

Brandano: Yes, the math has been there for a while. You can get a pretty accurate mesh without correcting for lens distortion etc. for 3D-modeling purposes, where the required precision is not that great.

zenoscope: There are many implementations of structured lighting scanning. The beauty of those techniques is that the subject can move and an ICP algorithm can stitch the data together. The fellow who implemented the “Shadow Scanning” code within blender uses the OpenCV library from Intel; however, the source code is borrowed from his listed sources and is not for commercial use.

That is why I am making a laser scanner at first. After I’ve proven the concept for myself, I plan on implementing my own version of a “Structured Lighting” technique so that I don’t have to worry about blinding someone! And they can move around a bit.

I know the basics behind the math, but I can’t seem to nail down the equations and then implement them in Python. The point can be found by projective geometry: the intersection of the vertical laser plane with the ray from the camera’s CCD through the pixel gives you the 3D point. Obtaining this from the 2D image is what I am working on now.
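The ray/plane intersection described here can be sketched in plain Python; the camera position, ray direction, and laser-plane parameters are assumed to already be known from calibration:

```python
# Intersect a camera ray with the (vertical) laser plane.
# Ray:   P(t) = cam_pos + t * ray_dir
# Plane: dot(X - plane_point, plane_normal) = 0
# All inputs are 3-tuples; how you obtain them is a calibration problem.
def ray_plane_intersection(cam_pos, ray_dir, plane_point, plane_normal):
    denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
    if abs(denom) < 1e-12:
        return None  # ray is parallel to the laser plane
    diff = [p - c for p, c in zip(plane_point, cam_pos)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    return tuple(c + t * d for c, d in zip(cam_pos, ray_dir))
```

Running this once per extracted laser pixel yields one 3D point per image row, i.e. one vertical profile per turntable step.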

I will release the code, hardware, software and circuits when it is working. You will be able to scan and mesh the point cloud all from within blender (hopefully).

Desktop systems (Polhemus) cost $25K and up!

Rob

Actually it is rather simple maths :slight_smile:
Well, this is what I have worked out so far:

http://img177.echo.cx/img177/222/triangulate8yh.jpg
The setup has the camera below the laser, and the laser gives you a dot along the vertical middle line of the camera image…

We are looking for the distance E; we know angle a and distance F. We can calculate angle b from how far the laser dot is from the middle of the image (still working on that, though).

Then there is a well-known formula (the law of sines):

sin(x) / y
is the same for every corner of a triangle, where x is the angle at a corner and y is the edge opposite x.

So we get:
sin(b+d) / E = sin(c) / F

Then
d = 180°-90°-a = 90°-a (all the angles in a triangle summed up are 180°)

c = 180°-90°-(b+d) = 90°-(b+d)
c = 90°-(b+(90°-a)) = 90°-b-90°+a = a-b

So the final formula is:

distance E = F * sin(b + (90° - a)) / sin(a - b)

Now I’m still working out how to calculate angle b. Will update when I’ve got it.
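The derivation above, in code, plus one common way to estimate angle b from the dot’s pixel offset, assuming a simple pinhole camera; the field-of-view value would come from the camera’s specs or a calibration shot, and all parameter names here are my own:

```python
import math

def distance_E(a_deg, b_deg, F):
    """Distance E from the triangle derivation above:
    E = F * sin(b + (90 - a)) / sin(a - b), angles in degrees."""
    a, b = math.radians(a_deg), math.radians(b_deg)
    return F * math.sin(b + math.radians(90) - a) / math.sin(a - b)

def angle_b(pixel_offset, image_height, vertical_fov_deg):
    """One way to estimate angle b from the dot's distance (in pixels) from
    the image centre, assuming a pinhole camera (no lens distortion)."""
    f_px = (image_height / 2) / math.tan(math.radians(vertical_fov_deg) / 2)
    return math.degrees(math.atan2(pixel_offset, f_px))
```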

I’m familiar with Pythagoras’s formula. It’s the camera calibration, and locating the center of the CCD and its vector’s intersection with the vertical plane of the laser line, that’s giving me headaches. For instance, is focal length crucial within a reasonable range, i.e. anything that’s not a macro lens or a 125mm telephoto? I will not be accounting for lens distortion, since I am not looking for that kind of accuracy (yet). Here’s a link to a different approach where the laser is rotated: http://www.muellerr.ch/engineering/laserscanner/tutorial/the_principle.html
Vectors are more convenient when using projective geometry, especially when going from the 2D image to 3D space. Thanks for the input.
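For what it’s worth, under the simple pinhole model (no lens distortion, as proposed above), the per-pixel vector from the CCD is easy to construct, and the focal length enters only as a single parameter; the helper names below are hypothetical:

```python
import math

def focal_px(image_width, horizontal_fov_deg):
    """Focal length in pixels for a pinhole model, from the horizontal
    field of view (an assumption; a calibration tool would refine this)."""
    return (image_width / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)

def pixel_ray(x, y, cx, cy, f):
    """Direction of the camera-space ray through pixel (x, y).
    (cx, cy) is the principal point, typically the image centre for an
    uncalibrated camera. Transform to world space, then intersect with
    the laser plane to get the 3D point."""
    return (x - cx, y - cy, f)
```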

Rob

Ahem, I know it’s pretty academic, but:
b = (180° - 90° - c) - d

I think the diagram is wrong and the problem is actually simpler than it looks.
We know the lengths of the sides formed by the middle of the turntable, the camera, and the laser.
Let’s say the side from camera to middle is K, the side from camera to laser is L, and the side from laser to middle is M.

We also know the distance of the point where the laser hits the surface, measured perpendicular to side K. Let’s call that a, and let the unknown distance from that point to the centre of the turntable be b.
Then b = (a/L) * (K^2 + L^2)^0.5
edit: actually (K^2 + L^2)^0.5 = M,
so it becomes b = M*a/L

(This assumes the laser is set up so that the right angle of the triangle is at the camera’s corner.)
There’s your answer.
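Goofster’s similar-triangles relation as code; trivial, but it shows that only three measured setup lengths and the perpendicular offset are needed (names are mine):

```python
# b = M * a / L, from similar triangles, assuming the right angle of the
# camera/laser/turntable triangle sits at the camera's corner.
def radius_from_offset(a, L, M):
    """a: offset of the lit point perpendicular to the camera-centre line,
    L: camera-to-laser distance, M: laser-to-centre distance.
    Returns b, the distance from the lit point to the turntable centre."""
    return M * a / L
```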

Again, the law of triangles IS pretty academic. The issues I am dealing with are the location of z in 3D space and its location in the 2D image. There’s also the issue of camera calibration (see: Blix by Jonathan Merritt in these forums). I am looking for a pragmatic way of dealing with each of the vectors of the pixels of my camera’s CCD chip, plus some filtering on the laser line, since it is about 1/8" wide as projected. See Ch. 6 of this fellow’s thesis http://www.vision.caltech.edu/bouguetj/ICCV98/index.html and you can begin to see where I am heading. His source is for non-commercial use only. I will begin with the laser and then implement some kind of shadow scanning second. The math is projective geometry vs. Euclidean geometry. Thanks for all your input. I’ve been able to use Blender for many things: animation, modeling, visualization, 3D carvings on my CNC machine, etc. It is amazing how handy it is.

Rob

OK, I suspected it would head in that direction.
My reaction was more to the diagram above.
I could spend some more paper on scribblings and cook up a way to calculate the distance d, taking the camera angle into account,
and make a more elaborate calculation for X,Y in the picture and X,Y,Z in world space.
There will be some variables you will have to put in yourself, though,
like the distances and angles between camera, laser and turntable, the lens angle, etc.

I read the information on the site you supplied.
I have one question about your setup.
Will it be static, i.e. fixed positions for subject, camera, and laser, taking one picture per turn, or do you want something like what they built, with a dynamic approach?
The latter is far more complex math-wise, but on the upside far more flexible as well.

The current setup is static, and only the subject rotates. You could rotate the camera and laser fixed to a 180-degree semicircle for scans of seated people. I wish to implement what you’ve read about, but instead of using a ruler or stick, I will stick to the motorized approach, thereby eliminating a lot of the math involved in defining the shadow plane and the spatio-temporal processing. There’s another link I found very interesting where the subject can even be in motion: it uses a projector to cast zebra bars across the subject and an ICP algorithm to stitch the different time captures together into a 3D mesh.
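Once each profile of (radius, height) points has been measured, the turntable angle places it in world space; a minimal sketch, assuming the rotation axis is the world Z axis:

```python
import math

def profile_to_points(profile, turntable_deg):
    """Rotate one scanned profile [(r, z), ...] about the vertical axis by
    the current turntable angle, giving world-space (x, y, z) points.
    Accumulating these over a full revolution yields the point cloud."""
    t = math.radians(turntable_deg)
    return [(r * math.cos(t), r * math.sin(t), z) for r, z in profile]
```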

Rob

Have you already checked this out ?

http://developer.berlios.de/projects/copos/
http://www.fpsols.com/point_cloud.html

Thanks for the links, 3D-Penguin. I hadn’t seen them; however, I already have the mechanics and the capture method, and I am working on interpreting the images to arrive at 3D points/mesh. The first link may have some of the software I need for that, though it’s in French. The second link is commercial ware at $250 USD. I have two related projects involving Blender: one is a toolpath generator and G-code exporter; the second is photoclinometry in Blender. By building the scanner, also within Blender, I hope to be able to scan in an object or bring in a photo and then generate the necessary G-code to run on my CNC machines. I have a 4’ x 8’ x 6" CNC router table good for plastics, wood, and other soft materials, and a 12" x 5" x 4.5" desktop CNC milling machine for ferrous and non-ferrous metals, plastics, and other goods. I will release these as I finish them. Thanks.

Rob

I have another complete 3D-scanning program, also in C++, but commented in a Spanish
thesis. Nothing in English, sorry. Concerning the commercial point-cloud link: no GPL’ed
program solving this problem has been published yet. I never post commercial links
unless I have a good reason to.

Once you publish your solution GPL’ed, it will be the first, at least for the point-cloud
problem. I guess the French-commented program will nevertheless be of some use, because
of the non-trivial algorithms involved.

Cheers and good luck