Making 3-D Models Look Like They Are "In" A Photo

I don't know if this is the right place for this question, so please move it if it needs a better home.

Returning to Blender after a few years off, and ecstatic to discover the Images as Planes add-on, I have been doing a lot more work with photos in Blender.

But one discovery usually leads to many new questions.

Once imported, photos look very realistic, and it’s kind of hard to make 3-D models look like they are actually part of the photographs. Things don’t look seamless or natural; they just look all jumbled together.

A lot could go wrong to make things look off, so the answer may depend upon what exactly is wrong, which could be anything. But maybe there are some guidelines on how to make virtually created 3-D objects look like they are part of a 2-D picture?

The only things I can think of are to make your models realistic-looking, keep them in proportion to their place in the picture, and keep them somehow in perspective. But still, issues like the light source, color balance, or even RT/VT camera tracking (in an animation) can create a lot of riddles I have no idea how to address.

Are there any general guidelines or dialogue on this topic out there?

Thanks!

Dawg

Use transparent “shadow catcher” planes for objects where you need to make sure the shadows are correct in the scene.

For example, if you add a person into an outdoor photo, you need to add a shadow-catcher plane in line with the ground plane in the scene, so that when you render the picture, the shadow overlays the photo in the right spot.
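Here is a minimal bpy sketch of that setup, assuming a recent Blender (3.x) with Cycles; the shadow-catcher flag has moved between versions, so older builds expose it as obj.cycles.is_shadow_catcher instead:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
# Transparent film, so only the model and its shadow survive
# compositing over the photo.
scene.render.film_transparent = True

# A ground plane aligned with the ground in the photo.
bpy.ops.mesh.primitive_plane_add(size=10, location=(0, 0, 0))
ground = bpy.context.active_object
ground.name = "ShadowCatcher"

# Shadow catcher: invisible in the render, but shadows cast onto it
# are kept in the alpha channel. (Blender 3.x property name; adjust
# for your version.)
ground.is_shadow_catcher = True
```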

I’m attempting a bit of this at the moment, but I’m not being very successful, so I don’t feel I can actually offer advice. Except maybe: try to match the lighting in your render with any obvious lights in the photo. I just set up my photo as a background image in Camera view rather than using empties or image planes, and add it to the final render using compositing nodes, or else render with a transparent background and composite in Photoshop (or similar).
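For the node route, here is a rough sketch of that Alpha Over composite, assuming a render made with transparent film and a placeholder file path for the photo:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# The backplate photo (placeholder path).
photo = tree.nodes.new('CompositorNodeImage')
photo.image = bpy.data.images.load('/path/to/backplate.jpg')

# The 3D render, made with a transparent background.
rlayers = tree.nodes.new('CompositorNodeRLayers')

# Alpha Over: photo underneath (input 1), render on top (input 2).
over = tree.nodes.new('CompositorNodeAlphaOver')
tree.links.new(photo.outputs['Image'], over.inputs[1])
tree.links.new(rlayers.outputs['Image'], over.inputs[2])

comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(over.outputs['Image'], comp.inputs['Image'])
```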

Besides the lighting, which can be managed a number of ways, one immutable aspect of compositing “live” imagery with CGI is getting a solid match between the perspectives of the two elements being composited. Perspective is a function of both camera look angle and field of view, and field of view is determined by camera lens focal length, so choosing the proper Lens setting in Blender, plus placing the camera such that its viewing axis is close to that of the camera used to make the “live” imagery, is very important. Just sticking a CGI image over a live image without some consideration of these points will almost always result in a perspective mismatch that viewers will recognize, even if they can’t define it. The composite will look faked.
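To put a number on the field-of-view half of that: horizontal FOV = 2·atan(sensor width / (2 × focal length)), so matching the photo means matching both values, not just the lens. A small sketch, with placeholder EXIF values (35 mm lens on a full-frame 36 mm sensor):

```python
import bpy
import math

cam = bpy.context.scene.camera.data

# Placeholder values; read these from your photo's EXIF data.
focal_length_mm = 35.0
sensor_width_mm = 36.0  # full frame; crop-sensor cameras differ

cam.lens = focal_length_mm
cam.sensor_width = sensor_width_mm

# Sanity check: the horizontal field of view this implies.
fov = 2 * math.atan(sensor_width_mm / (2 * focal_length_mm))
print(f"Horizontal FOV: {math.degrees(fov):.1f} degrees")
```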

I have a simple tutorial on my blog that should help you get started.

The Color Balance node is a very powerful tool for matching the color cast of a photo or video. Look for something in the image that should be white, then use an eyedropper tool to see what the RGB values are, and use that info to set the color cast.
And the fact that there are three color wheels (for shadows, midtones, and highlights) lets you give the shadows a bluish cast while the parts of the object in direct sunlight get a warm yellow cast. You can leave the light source in your scene plain white and use the Color Balance node to make your adjustments. This saves you the hassle of re-rendering each time you adjust the color of the light source to see the results. In the node editor, when you make an adjustment, it shows up immediately in the backdrop preview.

The three vertical slider bars adjust the intensity of shadows, midtones, and highlights.
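If you prefer to set it up by script, here is a sketch of adding that node in Lift/Gamma/Gain mode; the color values are placeholders for whatever your eyedropper reading suggests, and the node still needs wiring between your Render Layers and Composite nodes:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

balance = tree.nodes.new('CompositorNodeColorBalance')
balance.correction_method = 'LIFT_GAMMA_GAIN'

# Placeholder values: shadows pushed toward blue (lift),
# highlights toward warm yellow (gain).
balance.lift = (0.95, 0.97, 1.05)
balance.gamma = (1.0, 1.0, 1.0)
balance.gain = (1.05, 1.02, 0.95)
```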

http://i1089.photobucket.com/albums/i347/Steven_A_S/tabletop.jpg

Steve S

Wow! Thank you Steve and hjmediastudios. :slight_smile: I had come back to revisit this idea and leave some notes for posterity, but I didn't expect to find these Christmas presents.

Three further thoughts I had were: 1.) maybe use some extra photos of the environment to model in proportion to it; 2.) use the rule of thirds in the design process, if applicable; and 3.) something I found but can't explain in a greenscreening study, which might somehow be helpful.

1.) Using extra photos of the environment might work like it does when we model a human: we get a front view and a side view, and then use those to design the model. Thus, it might be helpful to use multiple pictures of the environment in which the object is to be set, just to get a better sense of perspective, proportion, and context.
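A sketch of one way to do that in Blender 2.8+, loading several environment photos as semi-transparent camera background images (the paths are placeholders):

```python
import bpy

cam = bpy.context.scene.camera.data
cam.show_background_images = True

# Placeholder paths: several photos of the same environment.
for path in ['/path/to/env_front.jpg', '/path/to/env_side.jpg']:
    bg = cam.background_images.new()
    bg.image = bpy.data.images.load(path)
    bg.alpha = 0.5  # half-transparent, so overlapping references stay readable
```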

2.) Using the “rule of thirds”. “Rule of thirds” is actually a bit of a misnomer here, because it’s not necessarily based upon the rule, and you could create as many layers as needed. The point is that a background, like a forest or sky, could be designed (for the first “area”) using an RT (real) picture on a plane; then (in your second “area”) your main 3D objects can be set into place in the midground; and (in the third area of the rule) everything could be “camouflaged” with more virtual trees in the foreground. The idea would be to mix the RT set in with the virtual set to make them less distinguishable from one another.
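A sketch of that layering with the Images as Planes add-on (operator names as in Blender 2.8x–3.x; the file name and path are placeholders): photo plane in the back, 3D subjects in the middle, virtual foreground in front.

```python
import bpy

# Requires the bundled "Import Images as Planes" add-on.
bpy.ops.preferences.addon_enable(module='io_import_images_as_planes')

# Background photo plane (placeholder file), pushed away from the camera.
bpy.ops.import_image.to_plane(files=[{'name': 'forest.jpg'}],
                              directory='/path/to/')
background = bpy.context.active_object
background.location.y = 20.0  # deepest layer

# ...then place the main 3D objects around y = 10 (midground),
# and virtual trees near y = 0 to "camouflage" the seam in front.
```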

3.) The greenscreening idea is only a hint. I don't understand it, or even whether it would really work, but it has to do with calibration marks in the environment. See the attached image. Somehow they got the background in sync with the calibration points on the greenscreen.

Attachments: [image of a greenscreen with calibration marks]


Have you seen the Tomato branch of Blender?

The rule of thirds most often refers to a division of the image plane, not into quadrants, but into a tic-tac-toe grid:

__|__|__
__|__|__
  |  |

where you place subjects at the key intersections and guide the eye around in a curve.

Perhaps what you are referring to is depth composition: placing scenery and characters in space, regressing from the camera, starting with the foreground, then the midground, and ending with the background. You can cheat 3D with parallax, that is, sliding the background slower than the mid- or foreground. This can all be achieved with compositing, but since Blender does full 3D, it’s easier to do it in the 3D view and map images to planes.
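A quick sketch of cheating that parallax with keyframes, assuming three image planes with hypothetical names; each layer slides a different distance over the same frame range, so the background appears to move slower:

```python
import bpy

# Hypothetical object names; the farther the layer, the less it slides.
slide_per_layer = {'Foreground': 4.0, 'Midground': 2.0, 'Background': 0.5}

for name, slide in slide_per_layer.items():
    plane = bpy.data.objects[name]
    plane.keyframe_insert(data_path='location', frame=1)
    plane.location.x += slide
    plane.keyframe_insert(data_path='location', frame=100)
```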

Yes, I meant depth. Just flip the diagram 90 degrees around the X axis; that is what I was trying to refer to.

Will check out the Tomato video.

Thank you for the input. :slight_smile:

Dawg