Initial Camera Tracking Commit by Sergey Sharybin

AWESOME!!

And here’s the tech talk about it all :smiley:
http://wiki.blender.org/index.php/User:Nazg-gul/GSoC-2011#Week_4:_13th-19th_June

sooooo sweeeeet. omg, when GSoC is over, I hope Ton decides to get all the branches that are “done” into Blender trunk asap!

Sergey is amazing. You can already do fun stuff with Blender’s tracker!

This is looking impressive so far. Sergey has some serious skills. :slight_smile:


https://youtu.be/04s4TQ6zHrk

@Aermartin: getting the branches integrated fast is what the Salad branch is for :slight_smile: it’s one branch that combines all the other GSoC projects.

@Sneg, he sure has some skills, getting stuff functional so fast. But don’t forget the Libmv guys! They did most of the hard work :wink:

Looking at Sergey’s notes:
“Convert track to location fcurves for object in 3d world. 1 pixel on footage is equal to 1 Blender unit.
NOTE: Added for testing only, could be removed any time – real parenting to markers/bundles are only under design now.”

I really hope he doesn’t take this out. This is so useful.

The workflow in tomato is still a little awkward, mainly because of scale issues.
Here’s the process:
1. Track a point by Ctrl-clicking on a clearly visible feature (a really large search area helps).
2. Go to the 3D Viewport, top view, and add the movie as a background image.
3. If it is HD, set the movie scale to 96 and set the x and y offsets to 96 as well (1920 / 2 / 10 = 96).
4. Create one empty, call it “parent” or something like that, and clear its location with Alt+G so that it sits at the lower-left corner of the footage in the viewport.
5. Create another empty. While it is selected, go to the Movie Clip Editor, select one marker, and in the “test” panel in the tool shelf press Convert.
6. Back in the 3D Viewport, shift-select the “parent” empty and press Ctrl+P to parent. Then select only the parent empty, press S, then type 0.1 to scale it down.
7. If you want to render that, what I did was set the camera to orthographic and place it manually so that it fits the marker.

I’m sure this workflow can be heavily improved :wink: (there’s a rough script version of these steps below).
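For reference, here are the same steps as a rough bpy sketch. This is untested against the tomato branch, and “Track.Empty” is just a placeholder for whatever name the Convert operator actually gives the generated empty:

```python
# Rough sketch of the manual steps above in bpy (2.5x API).
# Assumes the "Convert" test operator has already created an empty
# carrying the track's fcurves; "Track.Empty" is a placeholder name.
import bpy

# Step 4: the "parent" empty at the lower-left corner of the footage.
bpy.ops.object.add(type='EMPTY')
parent = bpy.context.object
parent.name = "parent"
parent.location = (0.0, 0.0, 0.0)  # same effect as Alt+G

# Step 5: the empty created by Convert (1 px of footage == 1 unit).
track_empty = bpy.data.objects["Track.Empty"]

# Step 6: parent it (Ctrl+P) and scale the parent down (S, 0.1).
track_empty.parent = parent
parent.scale = (0.1, 0.1, 0.1)
```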

@Aljo, just re-read the message then…
“real parenting to markers/bundles are only under design now.”
Indicating the current implementation is just a test, to be replaced by a better version.

@blaize:
I saw that.
The point is that generating fcurves for an empty lets you retarget the motion and have full control over the curves.
I’ve been playing with this a bit lately: http://moviemation.de/facial-motion-capture-software-en.php
It works really well. When I saw where camera tracking in Blender was going, I realized that this could be accomplished without ever having to leave Blender. Very cool. But to get full control over the motion, you need to be able to generate fcurves that can be used as drivers for facial bones or shape keys. And once the fcurves are generated, there is no need to keep the tracking points and the video that was used to create them.
I don’t see the point of removing something that is already there and that has the potential to be so useful.
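To make that concrete, here’s a minimal sketch of wiring one of those generated fcurve empties up as a shape key driver. The object and shape key names (“Face”, “JawOpen”, “Track.Empty”) are made up, and the remap expression is just an example:

```python
# Minimal sketch: drive a shape key from a track-converted empty's
# X location. "Face", "JawOpen" and "Track.Empty" are assumed names.
import bpy

face = bpy.data.objects["Face"]
key = face.data.shape_keys.key_blocks["JawOpen"]

fcu = key.driver_add("value")  # adds a driver fcurve on the value
drv = fcu.driver
drv.type = 'SCRIPTED'

var = drv.variables.new()
var.name = "locx"
var.type = 'TRANSFORMS'
var.targets[0].id = bpy.data.objects["Track.Empty"]
var.targets[0].transform_type = 'LOC_X'

# Remap the marker motion into the 0..1 shape key range.
drv.expression = "locx / 100.0"
```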

I think this feature should be made a node in the compositor. Having it in the 3D View is weird indeed.

Something like this: [mockup image missing]

2D tracking in 3D view doesn’t make sense.

I think this AE-style tracking, i.e. 2D, is good! Way better than nothing!
I don’t know what’s best because I don’t do tracking work at all, but I’ve heard that boujou is the best. boujou, like Voodoo, makes a massive point cloud; you don’t have to select “spots” as in AE and the tomato branch.

Later it would be sweet if a point cloud were generated from the first frame and tracked, with points turning red when they go bad, and you could have a circle-select tracker brush to clean up / remove bad tracks, but also just paint the parts of the footage that are important to track.

That said, I’m really stoked about this :slight_smile:

@rozmiarek, I think phonybone is getting the go signal to remake the node code, so in the future we can create nodes more easily. Last time I checked the node code and tried to do something, you had a set number of different node data types to choose from.

phonybone will make it way more dynamic: not only node types, but socket I/O etc.

Probably lots and lots of stuff will then be moved into a node tree instead.

You don’t need a cloud for 2D tracking; markers like the current ones would be really great. But why, oh why, the 3D View? Compositing nodes would really benefit from it (needless to say, parenting to hooks would no longer be necessary).

@Aermartin, Libmv supports auto-selecting tracking points, resulting in a point cloud. I think the reason Sergey hasn’t integrated this yet is that it takes a lot longer to process a whole cloud instead of just a few points.
But the Libmv GSoC student will be working on 3D tracking next week, so I’m sure we can expect to see that in Blender very soon too :slight_smile:
Exciting times!

@blaize, is that so? I was reading the tomato branch page. I guess getting these 2D trackers in first is a nice step to take, along with ironing out the Blender GUI: what to put where.

Can hardly wait! As the libmv devs march on, stuff will get into Blender faster and faster, thanks to Sergey and the tomato branch, and to Google for paying for this feast of 3D/motion-graphics candy!

One good thing with 3D tracking is that each and every point has a relation to the others. Don’t they? Otherwise how would it know how to form the 3D cloud and get depth?

Also, then it would be easier to keep trackers from just dying because they move out of the picture; they could still use the recent relevant data to extrapolate linearly beyond the edge of the footage.
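Something in this direction, maybe — a tiny numpy sketch of linearly extrapolating a marker from its last few tracked positions (just the idea, not anything libmv actually does):

```python
# Tiny sketch: extrapolate a marker's path linearly from its last
# few tracked positions once it leaves the frame. Pure illustration,
# not libmv's actual behaviour.
import numpy as np

def extrapolate(frames, positions, target_frame):
    """Least-squares linear fit of (x, y) over frame number."""
    t = np.asarray(frames, dtype=float)
    p = np.asarray(positions, dtype=float)        # shape (n, 2)
    A = np.vstack([t, np.ones_like(t)]).T         # columns [t, 1]
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)  # slope & offset
    return coef[0] * target_frame + coef[1]

# Marker drifting right and slightly up before leaving the frame:
frames = [10, 11, 12, 13]
pos = [(1900.0, 540.0), (1910.0, 542.0), (1920.0, 544.0), (1930.0, 546.0)]
print(extrapolate(frames, pos, 15))  # -> [1950. 550.]
```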

Man, it will be so cool to see :smiley: I think Blender soon needs a VFX workspace, or to make almost everything node-based.

The points are indeed in relation to each other. When the camera moves, it figures out the 3D depth using lens information and perspective; that’s why you need camera movement to get 3D (or two pictures from two locations).
I’m really looking forward to seeing this in Blender, because 3D tracking DOES need the 3D View :wink:
Currently the tracking algorithm is based on KLT, not the most advanced one out there, but it’s stable and pretty easy to work with, and it can be improved in the future (libmv will probably include a GPU version, resulting in huge speed improvements).
And the good thing is that if in the future you want to use a different method, it’s hardly any work to get it working in Blender.
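For anyone curious how depth falls out of two views: the core of it is triangulation. Here’s a minimal numpy sketch of the textbook linear (DLT) method — not libmv’s actual code, just the principle:

```python
# Minimal sketch of linear (DLT) triangulation: given one point seen
# from two camera positions, recover its 3D location. Textbook
# method, not libmv's implementation.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> 3D

# Two unit-focal cameras, the second shifted 1 unit along X:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.5, 0.2, 4.0])                 # ground-truth point
x1 = (X / X[2])[:2]                           # projection in camera 1
x2 = ((X + [-1, 0, 0]) / X[2])[:2]            # projection in camera 2
print(triangulate(P1, P2, x1, x2))            # ~ [0.5 0.2 4.]
```

With zero camera movement the two rays coincide and the depth is undetermined, which is exactly why static shots can’t be solved in 3D.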

There’s an inherent danger, though, in gobbling up libs like libmv for camera tracking, in case they ever die out in the future. Best-case scenario, a general API for the movie clip editor view and camera tracking gets developed in parallel.

In the future, which lib is used for camera tracking could be a setting in the properties.
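Purely to illustrate the idea (nothing like this exists in Blender or libmv; all names are invented), a swappable tracker backend could look roughly like:

```python
# Invented illustration of a pluggable tracking backend; not a real
# Blender or libmv API.
from abc import ABC, abstractmethod

class TrackerBackend(ABC):
    """Minimal interface the Movie Clip Editor would talk to."""

    @abstractmethod
    def track_marker(self, frame_pixels, marker_pos, search_size):
        """Return the marker's new (x, y) in the next frame."""

class LibmvKLTBackend(TrackerBackend):
    def track_marker(self, frame_pixels, marker_pos, search_size):
        # ...call into libmv's KLT tracker here...
        return marker_pos

# A properties setting could then select the backend by name:
BACKENDS = {"libmv": LibmvKLTBackend}
tracker = BACKENDS["libmv"]()
```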

but great times ahead indeed.

Guys, are you really serious with the After Effects discussion? I consider it not sane to start such a discussion after the first commit.
Instead of such senseless ranting, you could take your time and cooperate with the GSoC student on the future workflow design.
It’s totally logical that the tracker data will be accessible from the compositor, but saying there’s no use for tracker data in the 3D View really shows a lack of imagination that things could work differently than in the Adobe Amateur suite.

+1 haha, Adobe Amateur suite! Are you “Flame”-ing them?
Anyway, what is wrong with having the tracking products available anywhere/everywhere throughout Blender? I like the idea of interacting with a proper point cloud, or just as a 2D vector in comp nodes.

In the 3D View you could remap vision and stabilise a camera path. Even make a new camera path!