I just want to show what I'm working on in my spare time.
If you search for “3D scanner” or “structured light” on the web you'll see what I'm trying to achieve. It's a scanner you can use to create 3D point clouds of surfaces.
I tried to do this at the lowest cost possible, and simulating the hardware in 3D is a lot cheaper than even a cheap laser pointer and a camera … not to speak of setting up a real test environment and the calibration.
The simulated ‘hardware’ I made in Blender:
A top-mounted (90° to the ground plane) rotating mirror -> free-floating ‘rotating’ plane mesh
A red (or green) laser (pointer) that points at the mirror from below and is then reflected onto the ground plane -> red/green spotlight that is shaped into a line by a mesh with a small slit
A camera (i.e. some cheap digital camera … best with video mode) -> default Blender camera
A moving cradle that moves the scanned object towards the camera (or away from it) -> empty + IPO curve
This is of course not made with “good looks” in mind but with functionality (e.g. the projected light line needs to behave as it would in a real environment).
Sample output image (one of many):
I'll use the animated sequence (the red/green parts of the laser) to recalculate the object's shape. It's a spare-time project of mine and I haven't even started coding yet, but I think this setup could be interesting for someone else as well. Hence this posting.
PS: I'm posting this in the WIP forum since I can't think of another that is better suited … but I may be wrong.
Yes, I'm still working on this and I have first results … but they are not really render-worthy yet.
There are still problems with resolution and some bugs, but I'm getting somewhere.
For those of you who know how to program (in this case Perl):
The current script can analyse the images produced by the hardware simulation above and convert them into OBJ files.
I just point it at a directory full of these images (you need a lot of them to get high resolution), each with a laser line in the middle, and run it.
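My script is written in Perl, but the per-image step is easy to sketch in any language. Here is a minimal Python illustration of the idea (the function name and threshold are my own, not taken from the actual script): for each image row, find the column where the red laser line is brightest; that column offset is what encodes the surface height at that row.

```python
def laser_line_columns(red_channel, threshold=128):
    """red_channel: list of rows, each a list of red intensities (0-255).
    Returns, per row, the column of the brightest pixel above the
    threshold, or None if the laser is not visible in that row."""
    columns = []
    for row in red_channel:
        best_col, best_val = None, threshold
        for col, val in enumerate(row):
            if val > best_val:
                best_col, best_val = col, val
        columns.append(best_col)
    return columns

# Example: a 3x5 frame where the "laser" lights up one pixel per row;
# in the last row the laser is hidden behind geometry.
frame = [
    [0, 0, 255, 0, 0],
    [0, 0, 0, 250, 0],
    [0, 0, 0, 0, 0],
]
print(laser_line_columns(frame))  # [2, 3, None]
```

The `None` rows are exactly where the hidden-surface spikes come from later.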
If somebody is interested in the script/files just post here or pm me.
This little heightfield is the result of a scan cycle of 50 images covering the whole face you see, at a resolution of 400x300px per image.
The test object was Suzanne, with her face pointing straight up into the sky/laser. I hope PETA hasn't heard of this yet.
The output mesh is roughly twice (or more) as wide as Suzanne's head, so better resolution would still be possible by zooming in with the camera. In the optimal case the camera field is exactly as wide as the scanned object at its widest.
I also changed the test environment to be completely dark so the only light source is the laser (see the lower right image for a sample input frame).
There are of course some problems with areas that are hidden behind other geometry (e.g. the area right above the nose -> spikes), but this was to be expected and can usually be fixed by hand quite easily.
As you can see it's a neat grid mesh (quads) without strange geometry, except for the spikes of course.
Considering the fact that the script is quite simple right now (only basic math is involved), I'm kind of surprised by this result … it's far from the perfect 3D scan, but it works.
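For anyone curious how such a quad grid ends up in an OBJ file, here is a hedged Python sketch (not my actual Perl code, just the same idea): emit vertices row by row, then one quad face per grid cell, remembering that OBJ vertex indices are 1-based.

```python
def heightfield_to_obj(heights):
    """heights: 2D list [row][col] of z values.
    Returns the OBJ text for a regular quad-grid mesh."""
    rows, cols = len(heights), len(heights[0])
    lines = []
    for y in range(rows):
        for x in range(cols):
            lines.append(f"v {x} {y} {heights[y][x]}")
    # OBJ vertex indices start at 1; each grid cell becomes one quad
    for y in range(rows - 1):
        for x in range(cols - 1):
            a = y * cols + x + 1  # lower-left corner of the cell
            lines.append(f"f {a} {a + 1} {a + cols + 1} {a + cols}")
    return "\n".join(lines) + "\n"

print(heightfield_to_obj([[0, 1], [2, 3]]))
```

A 2x2 heightfield gives four `v` lines and a single `f 1 2 4 3` quad; Blender imports such a file directly.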
I really can't wait to make a high-res test case out of it.
I'll try to simulate a real environment as closely as possible (average digicam resolution including JPEG compression, and a higher ‘frame rate’).
I'll also try to reduce the angle between the camera and the laser so the ‘hidden’ surface is minimised … this will lead to a less accurate scan though :-/ But maybe I can find tricks … I've read about dual-beam scanners, and I'm playing with the idea of placing a mirror to scan the backsides. I do not want to increase the number of cameras, because it roughly doubles the data volume (and thus calculation time) and would basically double the hardware costs as well.
PS: As mentioned in the previous post, the source of the script and the test cases in Blender are available by contacting me (it's under the LGPL, I just don't have a place to upload them currently).
Who would make a 3D scanner inside an already-3D program?
I think I've given a lot of reasons in the first two posts.
One of them is money; the second is that I can simulate different scenarios without the need to adjust or change the hardware.
It also works as a perfect test case and can be reproduced by anyone who has Blender (i.e. everyone) … even by people who have not (yet) built the hardware … which still includes me.
Just think of the scene where I rendered the camera as a ‘real-life’ scenario and you are on the right track.
I think it's a nice project because it shows how far you can get with scripting,
but it would be nice if you explained the advantages of such a scanner!
Just imagine you have a complex statue or woodcut or something like that lying around and you want to use it in a scene … but there are more cases where it would help. Just search the forum for “3D scanner” and you'll see that lots of other people are doing similar things.
Besides, the code for this script (Perl) isn't really that complex; it's actually pretty simple compared to some solutions I've seen (they use matrix transformations and other more complex calculations). I'm just using one single sine function in the whole script.
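To give an idea of the kind of math involved, here is a Python sketch of one possible triangulation (my simplified geometry for illustration, not necessarily the exact formula in the script): assume the camera looks straight down and the laser line hits the ground at a known angle from vertical. A point raised by height h then shifts the line sideways in the image, and the height follows from a single trig call. This also shows the accuracy tradeoff mentioned above: a shallower laser angle means a smaller shift per millimetre of height.

```python
import math

def height_from_shift(dx_pixels, laser_angle_deg, mm_per_pixel):
    """A raised surface shifts the laser line sideways by
    dx = h * tan(angle), so h = dx / tan(angle).
    (Hypothetical geometry, for illustration only.)"""
    dx_mm = dx_pixels * mm_per_pixel
    return dx_mm / math.tan(math.radians(laser_angle_deg))

# At 45° the tangent is 1, so a 10-pixel shift at 0.5 mm/pixel -> 5 mm
print(height_from_shift(10, 45, 0.5))
```

Running this per row of every frame, combined with the cradle position for the scan axis, is enough to fill in a heightfield like the one above.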