Matching the dimensions of a 3D virtual room to a still photo of a real room


What I would like to do is position and composite 3D-rendered objects accurately within a still image of a room (or, similarly, a piece of video shot with a stationary camera), but I am not quite sure of the best approach in Blender.

I have tried taking measurements of the real-life room, plus measurements to the centre of the lens of the real-life stills camera within the room, and matching them up with a virtual set of vertices (to act as virtual walls) and a camera in Blender. However, I can never quite get them to line up - the perspective always seems wrong. I suspect there may be a better approach - please could I have some advice?
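For what it's worth, one common cause of this kind of perspective mismatch is that matching the camera's *position* is not enough: the virtual camera's focal length and sensor width together determine its angle of view, and both must match the real camera. A minimal sketch of the standard pinhole relation (plain Python; the function name is mine) shows how much the field of view shifts if the sensor width is wrong, even with the correct focal length:

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    """Horizontal angle of view from the pinhole relation:
    fov = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 35 mm lens on a full-frame sensor (36 mm wide) sees roughly 54 degrees:
full_frame = horizontal_fov_deg(35, 36.0)

# The same 35 mm lens on an APS-C sensor (~23.6 mm wide) sees noticeably less,
# so entering only "35 mm" in Blender while the sensor width is left at its
# default will not reproduce the photo's perspective:
aps_c = horizontal_fov_deg(35, 23.6)
```

So before adjusting geometry, it may be worth checking that the Blender camera's sensor size setting matches the real camera's sensor (or, equivalently, converting the real lens to its 35 mm-equivalent focal length).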

I am aware of camera-tracking software such as the Voodoo camera tracker, but that obviously only works with a moving real-life camera.

Many thanks,
