Research paper: Integrating 3D into photos

Found this interesting paper (to be presented at SIGGRAPH Asia 2011) on how to integrate 3D objects into photos:

The examples are rendered using LuxRender. I personally would love to see the perspective helper inside Blender :slight_smile:

very cool :slight_smile:

Daaaaaamn… Very cool indeed.

That's amazing! I never thought something like that was possible. It almost looks too simple to be true!

Well, many will say that's yesterday's stuff, and some might not even be interested, but personally I am rather excited about computer vision and image processing.
This is actually the most astonishing development I've seen in this sector recently.
The key point is that you only need one photograph and no scene information, which takes it out of yesterday's stuff and places it firmly in tomorrow's stuff :slight_smile:

You set up the perspective in the photo and mark the light sources, as shown in the video, and the software's algorithms calculate a 3D scene and lighting model with various optimizations. It even works with light shafts, and with both interior and exterior images.
You can render objects in the scene, still or animated, and even use object boundaries for physics simulations.
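The perspective setup mentioned above is essentially standard single-view calibration: once the user marks two vanishing points of orthogonal scene directions (say, along the floor edges), a pinhole camera's focal length falls out directly from the orthogonality constraint. Here is a minimal sketch in Python; the function name and the assumption of a known principal point are mine for illustration, not something taken from the paper:

```python
import math

def focal_from_vanishing_points(v1, v2, principal_point=(0.0, 0.0)):
    """Estimate the focal length (in pixels) from two vanishing points
    of orthogonal scene directions under a pinhole camera model.

    For orthogonal directions, (v1 - p) . (v2 - p) + f^2 = 0,
    where p is the principal point, so f = sqrt(-(v1 - p) . (v2 - p)).
    """
    cx, cy = principal_point
    dot = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    if dot >= 0:
        # Happens when both points lie on the same side of the principal
        # point: inconsistent with orthogonal scene directions.
        raise ValueError("vanishing points not consistent with a pinhole camera")
    return math.sqrt(-dot)

# Example: two vanishing points whose offsets from the principal point
# have dot product -1,000,000 imply a focal length of 1000 px.
f = focal_from_vanishing_points((800.0, 0.0), (-1250.0, 0.0))
print(round(f, 3))
```

A perspective-matching addon for Blender would presumably do something like this under the hood, then set the camera's focal length and orientation to match.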

I think that is great stuff and this guy will get a few job offers in the near future :wink:


A good idea for GSOC 2012 on Blender, isn’t it? Or even before, if any volunteer comes forward…

I would sell my soul for this. Yes, I would sell my soul to the devil if Blender had this in the next six months.

I would love to see this “absorbed” into Blender. It would certainly be a powerful adjunct either way.

What do you think?

4th separate thread merged with the same link. Please search before starting a new thread.

The image-analysis volumetric lighting is very cool for cinema too, but my favorite tech was the perspective matching. That one tool alone is worth it, perhaps as an addon?

Will be interesting as an alternative, or addition to camera projections :slight_smile:

As I said before, I'm also very interested in the perspective matching tool. Other apps have this already though, so it is the "least innovative" aspect of this paper. Would be very cool to have as an addon!

I wonder if any of the image analysis stuff from Tomato can be co-opted for a tool like this, where the user defines the space and planes while Blender interpolates the lens and placement.

It looks awesome!