I’ve been trying to learn camera tracking and have followed a tutorial or two. I’ve tried with two cameras now: a cheap Go-HD camera and a fancier Sony HVR-Z7U (I couldn’t find the lens/sensor specs for the Go-HD).
I track a bunch of points, trying to pick ones on multiple planes and at different distances to show parallax, and get a solve error of around 0.65 (not perfect, but not terrible). When I apply the solution, the grid and objects move with the camera very nicely, but I can’t seem to get the grid to align, even after setting an origin, floor, and scale.
I’ve tried a couple of clips with both cameras and had the same problem: the grid tracks nicely in 2D but isn’t oriented as I’d expect. Am I getting fake solutions, or am I missing a step? I had the focal length all the way down at 4.4mm, if I’m reading it right, and the sensor is a 1/3" CMOS, so 8.47mm. That seems very different from the camera presets, but I think that’s because it’s a video camera rather than a still camera that records video? Any tips would be appreciated, thanks!
So, I’ve tried a bunch of different point sets (started from scratch a few times) and still haven’t had much luck. I was wondering if anyone had some sample (known-good) footage I could try, just to see if I’m doing something radically wrong. Since I’ve got objects of known geometry (the poster on the table), is it possible to help the solver along by providing additional information?
I have uploaded the video here if anyone wants to see the kind of motion in the clip. There’s a bit of side-to-side motion, which I thought would be enough to establish the tracking points. It looked similar to some of the footage I saw in other examples, but I may be way off base.
The camera aligns correctly based on the data available. I might be terribly wrong here, but looking at the footage and the point cloud generated, it’s obvious you don’t have enough parallax in the shot for the program to create a correct reconstruction. It’s not a total loss, though, as the camera movement seems stable enough. What you can do is align it manually by selecting the camera and rotating it around the 3D cursor (don’t worry, the 3D trackers will follow). Having a cube or a plane as a reference might help.
Looking at the point cloud in more detail, it’s definitely a fake solution, since the points on the poster don’t make a nice rectangle. It would be neat if there were some way for the solver to take known geometry into account. I’ll give the manual alignment a try, just for the sake of learning how to salvage a clip. Is there any camera mode that makes it easier to zoom and rotate the camera around a point (sort of like fly mode)? I’ve been viewing through the camera in one pane while moving it around in the other, which seems tedious but doable.
In the future, if I want that kind of mostly-stable-with-a-little-handheld-motion shot, is there any way to solve it? I was thinking about filming a couple of seconds of extra footage before the desired shot to establish tracking, then just trimming the final composite down to the part I want. Is there a type of motion that is best for establishing tracking, or will it vary from scene to scene?
You would actually do better with a stable, no-motion shot than with a stable shot plus a little motion. You can add fake motion later.
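As a hedged aside on the fake-motion route: one common way to do it in Blender is a Noise modifier on the camera’s location F-Curves. A minimal sketch, assuming a static camera object named “Camera” that you animate yourself (a solved camera driven by a constraint would need its motion baked to keyframes first):

```python
import bpy

# Sketch only: fake a little hand-held wobble on a locked-off shot.
cam = bpy.data.objects["Camera"]                      # assumed object name
cam.keyframe_insert(data_path="location", frame=1)    # ensure F-Curves exist

for fcurve in cam.animation_data.action.fcurves:
    noise = fcurve.modifiers.new(type='NOISE')
    noise.strength = 0.02   # amplitude in Blender units; keep it subtle
    noise.scale = 25.0      # noise period in frames; larger = smoother
```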
For tracking to be successful, the shot has to have clear parallax: “an apparent change in the position of an object resulting from a change in position of the observer.” The larger the change, the easier it is to track, and the easier it is for the algorithm to figure out what is ‘far’, what is ‘near’, and the 3D relationship between them.
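To put rough numbers on that (a toy pinhole-camera calculation, all values made up): for a sideways camera move of b, a point at a given depth shifts on the sensor by roughly f * b / depth, so near points slide much further than far ones.

```python
# Toy pinhole numbers showing why parallax separates near from far.
f = 4.4        # focal length, mm
b = 100.0      # sideways camera move, mm
for depth in (500.0, 5000.0):      # a near point (0.5 m) and a far one (5 m)
    print(depth, f * b / depth)    # -> 0.88 mm vs 0.088 mm of image shift
```

That 10:1 difference in image shift is exactly the signal the solver needs; with no camera translation, every point shifts the same and depth is unrecoverable.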
Here’s a good but long-winded tutorial on YouTube that explains this better:
It deals with SynthEyes, but the ideas are the same; only the interface is different.
Planar tracking works somewhere along those lines, and might get implemented in Blender some time after GSoC.
As I wrote earlier, the 3D cursor is your best bet. There’s an icon at the bottom of the viewport where you can change which pivot point to use; simply switch it to “3D Cursor”.
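If you’d rather flip it from the Python console, a hedged one-liner (in current Blender, 2.8+, the pivot lives on the scene’s tool settings; in the 2.6x builds this thread used, it was the per-viewport space_data.pivot_point instead):

```python
import bpy

# Make R (rotate) orbit the selection around the 3D cursor.
bpy.context.scene.tool_settings.transform_pivot_point = 'CURSOR'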
That would be a good way to do it. A few seconds of camera translation is enough for Blender to create a correct solve; after that you can make pretty much any movement you want. Until a tripod/hand-held algorithm is implemented, that’s the way to go!
A small animation on the following page shows a textbook example of what you should aim for:
Of course, it doesn’t need to look exactly like that. But in order to get a proper solve, the points in the background and the foreground need to move relative to each other. That is how Blender and many other programs calculate camera movement, assuming you give them proper input data, which in this case means sufficient movement.
Thanks for all the tips! I tried the rotate-around-cursor approach while viewing through the camera, and the result wasn’t too terrible: https://vimeo.com/42095704. Not great, but I’ve done worse things.
I’ve tried faking handheld motion before, but haven’t hit on something that doesn’t look exactly like what it is unless there are lots of distractions in the shot, since it misses out on the subtle parallax.
This seems like one of those garbage-in/garbage-out scenarios, so I’m going to try to get hold of the camera again and make sure to get some good side-to-side motion in there.
I looked into this a bit today and managed to do a bit better. First of all, a 1/3" sensor has a sensor width of 4.8mm, not 8.47 (1/3" is the diagonal measurement, not the width). Then I put the keyframes at 131 and 184, since those seem to be where the largest change in parallax is. That got me down to an error of 0.3045. Finally, I solved again with Refine set to Focal Length, K1, K2 and ended up with an error of 0.2198 (focal length 4.531, k1 = -0.211, k2 = 0.327). This looked much better, but the four markers for the poster were still a bit diamond-shaped, so just for kicks and giggles I set Refine back to nothing and the focal length back to 4.4 and solved again: error 0.2082 and a nice rectangular set of markers for the poster:
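For anyone who wants to script the same fixes, a hedged sketch from the Python console (property names vary a bit between Blender versions, and the clip name here is assumed):

```python
import bpy

clip = bpy.data.movieclips["shot.mov"]    # assumed clip name
clip.tracking.camera.sensor_width = 4.8   # mm: 1/3"-type chip width, not 8.47
clip.tracking.camera.focal_length = 4.4   # mm: wide end of the zoom

# Put the solve keyframes where the parallax change is largest; in recent
# builds they live on the active tracking object.
clip.tracking.objects.active.keyframe_a = 131
clip.tracking.objects.active.keyframe_b = 184
```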
BTW: you could get the same results by just leaving the sensor width at 35 and setting the focal length to 32. (According to the specs here, the camera comes with this lens: f = 4.4 - 52.8 mm; when converted to a 35 mm still camera, 32.0 - 384.) The point is that what matters is the ratio between the two, not the exact measurements.
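You can sanity-check that claim with a simple pinhole field-of-view calculation; both pairings describe (almost exactly) the same camera:

```python
from math import atan, degrees

def hfov(focal_mm, sensor_width_mm):
    """Horizontal field of view of a simple pinhole camera, in degrees."""
    return degrees(2 * atan(sensor_width_mm / (2 * focal_mm)))

print(hfov(4.4, 4.8))    # ~57.2 degrees: actual lens + 1/3"-type chip
print(hfov(32.0, 35.0))  # ~57.3 degrees: the 35 mm-equivalent numbers
```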
paprmh, that’s awesome! Is there any difference between running “Camera Motion” after a “Clear Solution” versus on top of a previous solution? I’m pretty sure there isn’t, but it sometimes seems to come out differently?
I filmed another test segment and it tracked much more easily (https://vimeo.com/42312218). Now that I know it’s possible, I’ll focus on making my tracks pixel-perfect. That, and figuring out how to get the table background object to catch the shadows from the cube stack without casting its own onto the carpet.
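(For anyone landing here later: a hedged sketch of the shadow-catcher part using today’s Cycles API in Blender 3.x; the renderer from this thread’s era handled it differently, and “Table” is just an assumed object name.)

```python
import bpy

table = bpy.data.objects["Table"]  # assumed object name
table.is_shadow_catcher = True     # receives CG shadows, renders transparent
table.visible_shadow = False       # casts none of its own onto the carpet
```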