Camera Mapping and Spherical Projection

While I’ve been working on my current video project, I’ve been trying to think of quick ways to recreate environments in 3D. I’ve not had time to research this yet, so in the meantime I wanted to ask if anyone has done this before, or knows if it is possible inside Blender.

Essentially, I thought if you had a raw scan of an environment, you could use a 360-degree panoramic image to camera-map all your textures and quickly approximate the details. If you could use an HDRI panorama, even better.

Tonight, I came across a link that shows how it is possible to do this in MARI: http://www.fxguide.com/fxguidetv/fxguidetv-165-scott-metzger-on-mari-and-hdr/

(The video is a little long, but keeps a good pace and doesn’t drag.)

Now, LiDAR scanners are still way too expensive to use, but you could use a Kinect to do this at low cost; I know there are a number of different scanning programs out there for various applications. You could also do this with a photogrammetry program such as Agisoft, but that is much more sensitive to reflective surfaces, whereas I believe that is less of an issue with the Kinect.

So, to sum up: is it at all possible to set up camera-projected textures so that they are projected as a 360-degree panorama?
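(To make the question a bit more concrete: as I understand it, the projection itself just amounts to turning the direction from the panorama’s capture point to each surface point into longitude/latitude texture coordinates. Here’s a minimal plain-Python sketch of that mapping; the function name, the Z-up axis convention, and where the seam falls are my own assumptions rather than anything Blender- or MARI-specific.)

```python
import math

def equirect_uv(surface_point, pano_center):
    """Map a world-space surface point to (u, v) in an equirectangular
    panorama captured at pano_center. Assumes Z is up; u wraps around
    the horizon, v runs from the bottom pole (0) to the top pole (1)."""
    dx = surface_point[0] - pano_center[0]
    dy = surface_point[1] - pano_center[1]
    dz = surface_point[2] - pano_center[2]
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    if r == 0.0:
        return 0.0, 0.5  # degenerate case: point at the capture position
    lon = math.atan2(dy, dx)   # -pi..pi, angle around the vertical axis
    lat = math.asin(dz / r)    # -pi/2..pi/2, angle above/below the horizon
    u = (lon + math.pi) / (2.0 * math.pi)
    v = (lat + math.pi / 2.0) / math.pi
    return u, v

# e.g. a point in front of the capture position and slightly above it
print(equirect_uv((1.0, 0.0, 0.2), (0.0, 0.0, 0.0)))
```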

In principle, yes, you can follow a workflow in Blender similar to the one Scott describes in that video. As a test, I modeled, textured, and rendered the bridge scene shown on the page below using essentially the same methods he describes:

http://www.lidarguys.com/?page_id=109

For that scene I didn’t shoot spherical HDRs and did the camera mapping using individual frames, but the basic process is the same. Of course, that also means I didn’t have the HDRs for lighting, so it is a little bit off.

In practice, Blender doesn’t really scale up to this type of application. The viewport performance makes retopoing meshes of the size you are dealing with difficult, and the limited support for tiled textures or Ptex makes texturing painful. With enough patience you could probably do an individual room like Scott shows in the first video, but I can’t imagine doing an entire warehouse like he shows in the video from this year’s SIGGRAPH.

These types of applications are becoming more and more common and I suspect it won’t be long before scanning is as routine as shooting light probes for a lot of VFX work.

Thanks for replying, jedfrechette! And for the links as well, I hadn’t seen either of those before.

I had suspected Blender might not quite be able to handle such a dense mesh and such high-resolution textures for this technique (or at the least, that it would be very difficult). I would love to have a program such as Mari to aid my work, but $2,000 is well out of my price range.

I’m watching the link to the video you posted right now, really incredible work! It’s slow to load on my computer, but well worth it.

The bridge in the first link looks great, very well done! You say you used individual frames for the camera mapping; do you mean it is possible to camera project more than one image onto a mesh, or did you split up the object into multiple pieces? I had always thought it was only possible to project a single image onto a mesh in Blender, but if that’s not true that would be wonderful news to me.


Don’t forget $5000 for a K6000. :smiley:

I kept the object as a single mesh with 1 material.

What I did was set up a master UV map and texture that would contain the final composite texture. Then I created additional UV maps, one for each frame I wanted to project. Unlike in Scott’s videos, where it looks like he is just eyeballing camera poses and parameters, my camera parameters were determined using a photogrammetric reconstruction, similar to what you would do with Blender’s camera tracking tools. Therefore, I had a camera object for each frame with the correct position, rotation, focal length, etc., and each of these cameras was used as the projector object for a UV Project modifier. Applying the modifier gave me real UV maps I could use as the source for Texture Paint mode’s clone brush. The clone brush and “Apply Camera Image” were then used to transfer the textures from the source frame texture maps to the master texture map.
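In case a concrete example helps, here’s a rough bpy sketch of that per-frame UV Project setup. The mesh name “Scan”, the “Frame_” camera naming, and the current (2.8+) API calls are my assumptions, so treat it as a starting point rather than the exact setup I used:

```python
import bpy

# One UV map + UV Project modifier per solved camera, applied in place.
# Assumes a mesh object named "Scan" and cameras named "Frame_01",
# "Frame_02", ... already posed by the photogrammetric reconstruction.
obj = bpy.data.objects["Scan"]
bpy.context.view_layer.objects.active = obj

cameras = [o for o in bpy.data.objects
           if o.type == 'CAMERA' and o.name.startswith("Frame_")]

for cam in cameras:
    # Extra UV map for this frame, alongside the master UV map.
    # Note: Blender limits how many UV maps a mesh can have, so a large
    # number of frames may need to be handled in batches.
    uv_layer = obj.data.uv_layers.new(name="proj_" + cam.name)

    # UV Project modifier writing the camera projection into that layer.
    mod = obj.modifiers.new(name="proj_" + cam.name, type='UV_PROJECT')
    mod.uv_layer = uv_layer.name
    mod.projectors[0].object = cam
    # (aspect_x/aspect_y should match the source frame's aspect ratio;
    #  left at the defaults here)

    # Applying the modifier bakes the projection into real UV coordinates.
    bpy.ops.object.modifier_apply(modifier=mod.name)
```

From there the clone brush workflow is as described above, with each “proj_*” layer as the clone source and the master UV map as the active layer.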

There may be more recent tutorials, but I think Sebastian König did one for CMIVFX that used this method for texturing a mammoth. If I remember correctly, the tutorial dates from the 2.49 days, but the basic process hasn’t changed since then.

Thanks for the detailed explanation! I will look into this for my own project soon, so it’s much appreciated.

And you’re right, I didn’t even think about the hardware requirements of Mari. To be honest, my “workstation” is a refurbished Gateway quad-core I bought in 2008. It’s got 3 GB of RAM and a small CRT monitor from 2000. Literally today, I just got a new GPU to render my scenes in Cycles, so at least things are improving slightly!