Tool/Program Needed - 3D from photos

Hello, does anyone know of a free or cheap program that can convert a stereo pair of images into a depth image?

Stereo pairs are pictures of the same thing taken about 2.5-4 inches apart, so that upon viewing you can determine the depth of the objects.

Your brain does this, and I’m certain that there are computer programs that can take this info and then create a Z-buffer type depth image from it…
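For context, the underlying relationship is simple once you have the disparity (how far a feature shifts between the two photos): depth is focal length times baseline divided by disparity. A minimal sketch, with entirely made-up illustrative numbers:

```python
# Depth from stereo disparity: Z = f * B / d
# f = focal length in pixels, B = baseline (camera separation), d = disparity in pixels.
# All values below are illustrative assumptions, not from any real camera.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return depth in the same units as the baseline."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 0.1 m (~4 in) baseline, 20 px disparity
print(depth_from_disparity(20, 800, 0.1))  # ~4.0 metres
```

The hard part, of course, is finding the disparity for every pixel in the first place.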

I found an extremely expensive company selling multiple-camera mounting setups that included some software to create a mesh from the photos, but all I need is depth info from a photo…

Anyone have a link or some good info on this?

Not sure if any of these are what you’re looking for:

Some reading.

Also searching here for “Depth” and “ZBuffer” in general and Python forums will lead you to info on the ZBuffer sequence plugin.



What I mean is taking 2 pictures with a digital camera of a real-life object, and then using a program to compare the 2 images to derive depth information (in the form of a grayscale image).

I know it’s possible, and the last site you gave had a tiny bit of info, but no program or anything.

Anyone know if there is a program for this?

Maybe PhotoModeler Lite?

Old thing and I doubt in general this is a useful method…for now.

But it’s the only free thing that came to mind.


Here’s another download of it if those don’t work.

Wow that is really really cool…

But sadly not what I’m talking about…

I’m not trying to create meshes from reference photos, I’m trying to derive depth info from a stereo pair of photos.

Check out this link:

That’s basically what I want, though without fancy cameras or expensive software…

I didn’t think this was asking a lot, because it seems very simple… you take 2 photos, compare them, and then depending on how far apart some things are, that determines their depth…

It’s what our eyes do everyday…

What I want is to be able to take 2 photos that are about 4 inches apart from one another, put them into a program that will determine which parts are close and far away, and then have the program spit out an image that is the same size as the inputs, only with grayscale depth info.

there are no points or meshes, just depth info…

The end result should look like a z-buffer image of the real life photo.
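The comparison step being described is essentially naive block matching: for each small patch in the left image, search along the same scanline in the right image for the best-matching patch, and the shift that wins is the disparity (larger shift = closer). A crude sketch in plain NumPy, assuming pre-aligned grayscale arrays; real stereo software adds rectification, subpixel refinement, and occlusion handling on top of this:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Naive block matching: for each pixel in the left image, find the
    horizontal shift into the right image that minimises the sum of
    absolute differences (SAD) over a small block. Returns a disparity
    map the same size as the inputs (larger value = closer object)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.float32)
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic check: shift a random texture 4 px in the "right" image,
# which simulates a uniform 4 px disparity across the whole frame.
rng = np.random.default_rng(0)
tex = rng.integers(0, 255, size=(40, 80)).astype(np.uint8)
left = tex
right = np.roll(tex, -4, axis=1)
d = block_match_disparity(left, right, block=5, max_disp=8)
```

Scaling `d` to 0–255 and saving it as an image gives exactly the kind of grayscale z-buffer picture described above, though on real photos this naive version is noisy in flat, textureless areas.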


P.S. Thanks for that PhotoModeler, that looks really cool to use for doing exact-from-life work…

Does it export UVs and images for the models too?

Hi MacGyver

I understand what you are trying to do, but I don’t think you will find any free software to do it. What you are asking may sound very simple, but it is very, very difficult. There is a reason you need expensive cameras and software. It may well be what our eyes do, but we humans have had many thousands of years to evolve that capability.

An easier, though no less expensive, method might be laser scanning. As the name suggests, a laser is used to scan an object/area/whatever at a phenomenally high resolution. Obviously this does not give you any textures, so it all depends what you want the depth information for. If you just want a one-off scan, you could hire the equipment to do it.

What do you want the depth information for anyway? Even if you do manage to find some software that works for you, the depth map will only be from one viewpoint. If you are trying to integrate your 3d models into photos, there are easier ways. Post-processing in Photoshop for one.

Sorry if all this sounds like I’m trying to shoot you down in flames, but I can’t work out why you just want single-viewpoint, depth-only information from photos.

It wouldn’t be single viewpoint but rather the interpolation from 2 stereo photographs…

I have a special slider mount for a camera tripod, so that you can take 2 perfectly spaced stereo photos right after one another…

Anyway, we are working on getting a profile embossing, much like you would find on a coin, for CNC machining.

Obviously you could model each person’s head, but not only would it have to be photo-real, it would also be time-consuming…

I know the depth information in the stereo photos is there and waiting, but I didn’t know how hard it was to extract…

Ideally it would be: photograph the subject, get the depth map, apply it and fix any problems, and then CNC it…

Hopefully less than 5-10 minutes on each photo…

There are some CNC programs that actually do this, like the link I have above, but they are very expensive, and since I have everything but the program to extract depth info, I figured it could be found free somewhere…

Oh well.

It’s called stereo pairs or stereoscopy, but Google doesn’t return much.


This might do the job:


Actually it would be from just one viewpoint. What you originally asked for was a way of taking two pictures and deriving ONE depth map from them. That was what I meant. Sorry if it caused confusion.

I am in the same boat. I tried using just the greyscale image and PIL to create a heightfield map so that I could route cameos in 3D in wood or metal. I have a 4x8 x 6in-depth router table and a tabletop CNC milling machine for metals. Here is my work so far regarding your issue:

Another approach I have is to use the above technique with edge finding. Why are you trying to do this?

Good Luck.
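For anyone following along, the core of the greyscale-to-heightfield idea described above is just mapping pixel brightness to cutting depth. This is a generic sketch of the technique (in NumPy rather than PIL, and not the poster’s actual script); the 0.25 in maximum depth is an arbitrary illustrative value:

```python
import numpy as np

def heightfield_from_grayscale(img, max_depth_in=0.25):
    """Map 8-bit grayscale values to cutting depths for a cameo:
    white (255) = highest relief (shallowest cut), black (0) = full
    depth of cut. `img` is a 2-D uint8 array; returns depths in the
    same units as max_depth_in."""
    g = img.astype(np.float32) / 255.0   # normalise brightness to 0..1
    return (1.0 - g) * max_depth_in      # invert: bright = shallow cut

# Illustrative example with a tiny 2x2 "image"
img = np.array([[0, 255], [128, 64]], dtype=np.uint8)
print(heightfield_from_grayscale(img, max_depth_in=0.25))
```

On a real photograph this is only an approximation, since brightness depends on lighting and surface colour, not just distance, which is why a true depth map from a stereo pair would give a cleaner carving.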


I just found this software which is exactly what I am trying to do in Blender with the script in my previous post.

Better than reinventing the wheel, I guess.


Try this: :smiley: Hope it helps!

Thanks. I remember that software; it just didn’t come up in my short-term memory. However, I’m looking for an automated way of taking a grey-scale image, as my script does, and as this link
does. That is, with little work on the mesh afterwards. If you look at my heightfield compared with vs3d’s, you will see that his comes in much smoother. I am working on a normalization of the data using a sin function. Thanks again.
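The post doesn’t say exactly which curve is meant by “normalization using a sin function”, but one plausible reading is rescaling the heights to 0..1 and then remapping them through a half-sine ease, which smooths the way values approach the extremes. A sketch under that assumption:

```python
import numpy as np

def sine_normalize(h):
    """One possible reading of 'normalization using a sin function':
    rescale heights to the 0..1 range, then remap through a quarter-wave
    sine so values ease more gently into the top of the range.
    Assumes h contains at least two distinct values."""
    h = h.astype(np.float32)
    h = (h - h.min()) / (h.max() - h.min())  # rescale to 0..1
    return np.sin(h * np.pi / 2.0)           # quarter-wave sine ease

vals = np.array([0.0, 0.5, 1.0])
print(sine_normalize(vals))
```

Other easings (a full raised-cosine, for example) would smooth both ends symmetrically; which one matches the intended result would need testing against the actual heightfield.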