working around background compression in footage

My lens is a 50mm prime… I have been very loosely following the transformer VFX tutorial on Blender Cookie and have managed to do quite well so far, except that there is a lot of compression of the background in my video, so the scale of the scene isn't captured very accurately. To better explain what I mean: the distance from the camera to the location of the bot is approx. 20–25 meters, give or take. Adding a film distortion node to the robot's node tree helps some, but not totally, and scaling the robot down makes it look too small in "car mode"… are there other ways to deal with that kind of thing? I will be buying a Rokinon 35mm Cine Lens sometime next year (it suffers less from background compression), but I'm stuck with the 50mm until then… and yes, the image has other issues; those will be dealt with during the animation stage.

As an aside: the… shadows? I shot the video in overcast conditions to avoid natural shadows appearing in the footage. I created the shadows based on what I observed on the day: most objects and people weren't casting very noticeable shadows save for under vehicles, so I gave the bot a very soft shadow. Am I right or wrong? Are there good docs out there that discuss rules for casting shadows? I could use some advice; most of the people I asked couldn't agree, some said heavier, some said soft and light is good.

http://i922.photobucket.com/albums/ad68/mcbeth316/tran_v87a_zps2199eeb7.png

Try using some Gaussian blurs / soften filters… also matching up the blacks / highlights more will help as well!
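Not your exact setup, but roughly what that could look like if wired up with Python in the compositor (these are Blender's built-in compositor nodes; the blur size and color values are placeholders to adjust by eye, and the movie clip index is an assumption about how your footage is loaded):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
nodes, links = scene.node_tree.nodes, scene.node_tree.links
nodes.clear()

clip_node = nodes.new('CompositorNodeMovieClip')
clip_node.clip = bpy.data.movieclips[0]      # assumes the footage is loaded as a movie clip
render = nodes.new('CompositorNodeRLayers')  # the rendered robot layer

# Slight Gaussian blur so the CG doesn't look sharper than the plate.
blur = nodes.new('CompositorNodeBlur')
blur.filter_type = 'GAUSS'
blur.size_x = blur.size_y = 2                # placeholder; tweak by eye

# Lift/gain to pull the CG blacks and highlights toward the footage.
balance = nodes.new('CompositorNodeColorBalance')
balance.lift = (1.02, 1.02, 1.02)
balance.gain = (0.95, 0.95, 0.95)

# Composite the adjusted robot over the footage.
over = nodes.new('CompositorNodeAlphaOver')
comp = nodes.new('CompositorNodeComposite')

links.new(render.outputs['Image'], blur.inputs['Image'])
links.new(blur.outputs['Image'], balance.inputs['Image'])
links.new(clip_node.outputs['Image'], over.inputs[1])
links.new(balance.outputs['Image'], over.inputs[2])
links.new(over.outputs['Image'], comp.inputs['Image'])
```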

Also needs reflected light from the ground plane.

Are you talking about the perspective distortion caused by the distance between the lens and the background?

Yes… if I understand you correctly.

If that is it, then you just need to tweak your camera position and field of view to match the conditions of the photo.
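Just as a rough illustration (not tied to your actual scene; the object name and numbers below are made up), matching a single photo by hand would look something like this in Python:

```python
import bpy
import math

cam = bpy.data.objects['Camera']     # hypothetical name; use whatever your scene camera is called
cam.location = (0.0, -22.0, 1.6)     # rough guess: ~20-25 m back from the subject, at eye height
cam.rotation_euler = (math.radians(90.0), 0.0, 0.0)

# Either set the lens in mm...
cam.data.lens = 50.0
# ...or set the field of view directly and let Blender derive the focal length:
# cam.data.angle = math.radians(40.0)
```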

With all those straight lines, it looks like you might even be able to use BLAM.

The camera is tracked and solved; would changing the field of view mess with the solve? The image is a single frame from a tracked video sequence. As for BLAM, I've never used it. I already extended the ground plane to match some structures in the video for shadow catching.

Hm, it’s a video? Hm…

Can you post what you got so far for the whole video? (doesn’t need to be the actual robot, just a gray cube in the same spot might already help)

Very, veeery early test; there has been some post-processing added since.

Is that pretty much just tripod motion?

Yes, I was lowering the crank shaft from a high point.

Ah, so there was some parallax motion, just very small?

Yes, there was.

Are you sure the tracking went well with so little parallax? Perhaps you should go back there and shoot a few additional shots (with exactly the same zoom setting on the camera) from a different position and add some frames from that into the sequence, just for tracking, to get a better estimation of the parameters of the camera and the scene.

If the positions of the tracked points in 3D don't fit the real geometry, things have a big chance of going wrong.

Yes, I did actually plan to re-shoot the video from a slightly different angle to hopefully achieve a better-looking sense of depth, and I will also take shots from more angles to combine. This track, though, was fine I think… the issues seem to mostly revolve around decisions made while shooting.

If the tracked points are in the right position in 3D, then I'm not sure I understand what issues you're having and what you mean by "background compression"…

Sometimes you can get a solve with the wrong focal length. Are you sure you have the correct sensor size? A cropped sensor like APS-C has roughly a 1.6× crop factor relative to full-frame 35mm, so your 50mm may behave like an 80mm lens.
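To put some numbers on that (the 22.3 mm sensor width below is just an assumption for a Canon-style APS-C body; check your camera's actual specs), here's a quick way to see the difference and to correct the setting on the tracked clip:

```python
import math
import bpy

def horizontal_fov(focal_length_mm, sensor_width_mm):
    """Horizontal field of view in degrees for a given lens/sensor pair."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

print(horizontal_fov(50.0, 36.0))   # ~39.6 deg if Blender assumes a full-frame sensor
print(horizontal_fov(50.0, 22.3))   # ~25.1 deg on a Canon-style APS-C sensor (1.6x crop)

# If the sensor width was wrong during tracking, fix it on the clip and re-solve:
clip = bpy.data.movieclips[0]               # assumes the footage is loaded as a movie clip
clip.tracking.camera.focal_length = 50.0    # mm
clip.tracking.camera.sensor_width = 22.3    # mm, assumption; use your camera's real value
```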

I mean the sense of distance and depth is lost, making the bot's scaling seem off: what looks like a short distance is in reality quite long… in reality, even at that size, the robot would take more than 4 steps to get from A to B.

Look at the video again: that tree on the left is actually several meters in front of the bot when it starts walking, yet it always seems to be directly overhead… as you suggested before, perspective distortion.

Next time you go in that place, remember to take a measuring tape and measure things like the distance between the poles, the height of those holes in the poles, the height of the benches and the width of their seating surface, the dimensions of the tiles on the ground etc; then recreate those things in blender (doesn’t need to be the exact shape; cubes and cylinders should work), then see if after tracking you can fit those things in place to confirm you really got the tracking right.
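A minimal sketch of that idea (all dimensions below are placeholders; replace them with what the tape measure actually says):

```python
import bpy

# Hypothetical measured dimensions in meters -- replace with the real tape-measure values.
POLE_HEIGHT = 3.0
POLE_SPACING = 4.0
BENCH_HEIGHT = 0.45
BENCH_SEAT_WIDTH = 0.40
TILE_SIZE = 0.30

# A row of simple stand-in poles at the measured spacing.
for i in range(4):
    bpy.ops.mesh.primitive_cylinder_add(
        radius=0.05,
        depth=POLE_HEIGHT,
        location=(i * POLE_SPACING, 0.0, POLE_HEIGHT / 2),
    )

# A bench stand-in: a box with the measured seat height and width.
bpy.ops.mesh.primitive_cube_add(size=1.0, location=(2.0, 3.0, BENCH_HEIGHT / 2))
bench = bpy.context.object
bench.scale = (1.5, BENCH_SEAT_WIDTH, BENCH_HEIGHT)

# One ground tile as a scale reference on the tracked ground plane.
bpy.ops.mesh.primitive_plane_add(size=TILE_SIZE, location=(0.0, 2.0, 0.0))
```

If those stand-ins line up with the tracked footage through the solved camera, the track (and the lens/sensor settings) are probably right; if they don't, that's where to look first.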

I’ll do that thanks