Need blenderheads with camera tracking experience to give input on UI design/integration

Hi all,

the next Blender open movie project is live action, and thus camera tracking (matchmoving) will be a key component. The current plan is to integrate ‘libmv’.

We need some blenderheads with matchmoving/camera tracking experience to help design the UI: how it should be done, and where and how it should be integrated.

Some folks we are aware of whose feedback we would value are Sebastian Konig, Colin Levy, and francoisgfx.

I’m sure there are a number of other individuals skilled with matchmoving who might be able to give thoughtful input.

Ton’s words are:

<quote>[The goal is] to define a very solid ground layer design, but not try to copy industry standard tools in a few weeks of coding time. Simple, stupid, but working sufficiently :slight_smile: to start with</quote>

Thus we are definitely aiming for a ‘less is better’ approach.

If you are interested, you can post here both your experience and ideas. There might be a mailing list set up for discussion if deemed necessary, and probably a wikipage.


We should integrate the camera estimation into libmv itself, and use the system libmv for the sake of packageability.

For those interested, here is some info about the 2.4x integration that gives an idea of some of the features that were exposed.

I’m interested, though I might be more useful as a tester than as a designer. I am a frequent user of Syntheyes + Blender. Are we interested solely in reverse engineering the camera, or do we need to do more in-depth operations like object tracking, or even motion capture (which requires multiple camera angles)? That’d be ripping if Blender could do even basic motion tracking.

PFHoe from pixelfarm is probably the best example of a “simple” 3D tracker, by the way. (And frustratingly so, as you’ll get stuck with a problematic solution, and be out of options…)

As far as project Mango goes, there won’t be any motion capture AFAIK, just pure matchmoving. This will be solely integrating visual FX and 3D elements into a live action context.

I have an After Effects/Mocha/Boujou background and I could give some ideas or feedback on the tracker UI.

LetterRip, I guess what could help Blender as a whole, together with this system, is some kind of 2D widget system for the image/sequencer view.
Imagine you create a tracker widget (just a point), a rectangle, and later maybe also beziers etc. These can then be connected code-wise: e.g. when you select a compositing node, like Transform, a rectangle widget appears in the image view and you can use it to guide the node values. This kind of widget would also be a good start for working with trackers.

A good example of this is Shake (or practically any compositing software), where it works exactly like this.
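The widget-to-node binding idea can be sketched in a few lines. Everything here (RectWidget, TransformNode and their methods) is a hypothetical illustration, not existing Blender API:

```python
class TransformNode:
    """Stand-in for a compositing transform node with x/y/scale values."""
    def __init__(self):
        self.x = 0.0
        self.y = 0.0
        self.scale = 1.0

class RectWidget:
    """A draggable rectangle in the image view, bound to a node."""
    def __init__(self, node, x, y, w, h):
        self.node = node
        self.x, self.y, self.w, self.h = x, y, w, h
        self._base_w = w  # remember the initial size so resizing maps to scale

    def drag(self, dx, dy):
        # Moving the widget writes the node's translation values.
        self.x += dx
        self.y += dy
        self.node.x = self.x
        self.node.y = self.y

    def resize(self, factor):
        # Resizing the widget writes the node's scale value.
        self.w = self._base_w * factor
        self.node.scale = factor

node = TransformNode()
widget = RectWidget(node, x=0.0, y=0.0, w=100.0, h=60.0)
widget.drag(12.0, -4.0)
widget.resize(2.0)
print(node.x, node.y, node.scale)  # → 12.0 -4.0 2.0
```

The point is just the direction of the data flow: manipulating the widget in the image view writes values into the node, the same way on-screen controls work in Shake.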

The other step would be to have a track view, where you can basically animate the tracks manually. It is a much-needed feature in matchmoving to have tools to edit the large number of tracks obtained, rather than having everything automagical. I guess the dopesheet would be totally enough for this, showing keys for selected tracks, without the need to edit curves.

The user then autotracks; when there is trouble he deletes/fixes/adds his own tracks. After a difficult phase that has to be done manually (imagine the middle of the shot), the process can continue in automatic mode again, taking the user-defined tracks into account, etc.
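That "autotrack, but never clobber the artist’s hand-placed keys" rule is easy to state as data. A toy sketch (all names are made up):

```python
class Track:
    """A single 2D track: per-frame marker positions, some hand-keyed."""
    def __init__(self, name):
        self.name = name
        self.markers = {}        # frame -> (x, y)
        self.user_keyed = set()  # frames the artist placed manually

    def set_manual(self, frame, pos):
        self.markers[frame] = pos
        self.user_keyed.add(frame)

    def autotrack(self, frame, pos):
        # The automatic pass never overwrites a hand-placed key.
        if frame not in self.user_keyed:
            self.markers[frame] = pos

t = Track("corner")
t.set_manual(10, (120.0, 80.0))
t.autotrack(10, (999.0, 999.0))  # ignored: frame 10 is hand-keyed
t.autotrack(11, (121.5, 79.0))
print(t.markers[10])  # → (120.0, 80.0)
```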

Otherwise, I think Blender has a lot of what’s needed: a camera can be connected to the output so the user can actually check the fit with the scene during the matchmoving process, maybe by displaying either the video as a background in the 3D view or overlaying the OpenGL scene in the image view.

That’s the bit of experience I have with matchmoving, and how I would like to use such a tool.

I have experience with PFTrack and SynthEyes. I could provide some input later on.

I have years of experience with 2D & 3D tracking, using Maya Live, PF Track, Combustion, After Effects & PF Hoe; you can check some of my trackings here:

So feel free to contact me if you need some advice :slight_smile:. Blender is actually my main tool.

I have no experience in tracking, but I think starting with widgets for the existing views is not a good basis for a solid ground-layer design.
IMHO, the UV/Image Editor already does a lot of things (painting, unwrapping, scopes, render result, viewer, textures, image references).

I think this is the occasion to create a new editor for displaying image sequences, built on the same basis as the Image Editor.
This image sequence editor could be used for tracking, rotoscoping, sequence preview, and compositing filters.
The libmv settings would then go in the header, properties panel, and tool columns of this editor.

Have worked with PFTrack and Syntheyes 3D trackers and AE and Mocha 2D trackers.

I’d be veeery interested to see how this develops…

I also believe that the new features should be kept within the current editors. The Image Editor, Dope Sheet, Graph Editor - as mentioned - already provide a good base to build upon.

In addition to several 3D and 2D packages (Maya, Blender, Max, XSI, Houdini / After FX, Shake, Nuke, Fusion), I have worked with matchmove software for more than ten years (3D Equalizer, Matchmover, PF Track, Boujou … and my favorite (and affordable) one: Syntheyes).

I’m very busy this month, but if I can help you, it will be a pleasure to give you a list of features that should be implemented in Blender. :wink:

Thanks for this Topic !

count me in and please feel free to bug me as much as you want. I want to be part of this.


So, if I understand this correctly, you want to add 3D matchmoving inside Blender so that you can then place objects in 3D space via Blender and obtain a video clip with the result.

We have two sides to the problem: a) the tracking/matchmoving, and b) the creation of objects in the 3D space calculated by the matchmoving.

Given that matchmoving is done on footage, it would seem logical to place that feature in the Blender NLE (VSE).
On the other hand the process requires switching to different points of view in a 3D space and adding test objects to check sliding etc. This makes it logical to integrate matchmoving in the 3D view of Blender.

For tracking you can have automatic and supervised matchmoving. Automatic does the guesswork for you; supervised is, well, manual. If you need to do something simple, then focusing on supervised tracking and providing a really good UI is probably best. Automatic tracking is nice but can be very tricky to do.
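For a sense of what the supervised core amounts to: one common approach (not necessarily what libmv does internally) is brute-force patch matching, where the pixels around the marker in one frame are searched for in the next frame. A minimal sum-of-squared-differences sketch:

```python
import numpy as np

def track_patch(prev, cur, pos, patch=7, search=10):
    """Find where the patch around `pos` in `prev` moved to in `cur`
    by brute-force sum-of-squared-differences over a search window."""
    x, y = pos
    p = patch // 2
    tpl = prev[y - p:y + p + 1, x - p:x + p + 1].astype(float)
    best_err, best_pos = None, pos
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cx, cy = x + dx, y + dy
            win = cur[cy - p:cy + p + 1, cx - p:cx + p + 1].astype(float)
            if win.shape != tpl.shape:
                continue  # search window fell off the image
            err = float(np.sum((win - tpl) ** 2))
            if best_err is None or err < best_err:
                best_err, best_pos = err, (cx, cy)
    return best_pos

# Demo: a bright dot moves from (10, 12) to (13, 14).
prev = np.zeros((40, 40)); prev[12, 10] = 255.0
cur = np.zeros((40, 40)); cur[14, 13] = 255.0
print(track_patch(prev, cur, (10, 12)))  # → (13, 14)
```

Real trackers refine this with sub-pixel interpolation and normalized correlation, which is exactly where the "tricky" part of automatic tracking starts.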

When working on tracking, given the very detailed and tedious nature of the job, you need a good UI. A magnified view of where the tracker is positioned is fundamental. Fine-tuned moving of the trackers, both by mouse and keyboard, should also be available. Very smooth playback controls for the footage are a must, and the clip must play in the 3D background. We might just need to create a locked view: a set of top, front, and side views plus the view from the footage camera, which will need to be added by the solver once the scene is tracked.

Unlike Syntheyes, Blender uses real-world units. A way of converting to meters is necessary.
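The conversion itself is one scale factor: if the artist knows the real distance between two tracked features (a ruler or marker on set), every solved camera and point position gets multiplied by it. A sketch (the function name is made up):

```python
import math

def scene_scale(solved_a, solved_b, real_distance_m):
    """Factor that maps solver units to meters, given two solved 3D
    points whose real-world separation is known."""
    return real_distance_m / math.dist(solved_a, solved_b)

# Two features the solver placed 0.5 units apart, known to be 2 m apart:
s = scene_scale((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 2.0)
print(s)  # → 4.0: multiply all solved positions by 4.0 to get meters
```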

On a related subject, during the shooting of the scenes to be tracked it will be helpful to have 3D markers that can aid the process. One way of doing that is to build a wireframe pyramid of a known dimension. The pyramid is just a wireframe and it’s painted green for easy removal. Placing a couple of those pyramids in the scene provides parallax information for the tracking and, being of a known size, they can also be used to give sizing information to the tracker.


There are probably 3 areas that libmv will be used for

  1. 2D tracking
  2. 3D tracking
  3. Camera stabilization
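Item 3 is the simplest to picture: given one reliable 2D track, stabilization amounts to shifting each frame so the tracked point stays put. A sketch:

```python
def stabilization_offsets(track):
    """track: list of (x, y) marker positions, one per frame.
    Returns the (dx, dy) shift to apply to each frame so the
    tracked point stays where it was on the first frame."""
    x0, y0 = track[0]
    return [(x0 - x, y0 - y) for (x, y) in track]

offsets = stabilization_offsets([(100, 50), (103, 48), (99, 55)])
print(offsets)  # → [(0, 0), (-3, 2), (1, -5)]
```

A real implementation would smooth rather than lock the camera path, and handle rotation/zoom from two or more tracks.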

we likely need interfaces for

  1. selection of what feature points in the image to track
  2. adding/removal of feature points
  3. correcting of bad tracks
  4. selection of frames to recalculate a tracking on
  5. deleting of a tracking
  6. visualization of a tracking
  7. tracker data scaling

There is probably plenty of other basic tracking functionality that needs an interface as well.
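Most of the list above is really operations on one small data structure. A toy model (all names hypothetical) just to make the interface surface concrete:

```python
class TrackSet:
    """Toy container for 2D tracks: name -> {frame: (x, y)}."""
    def __init__(self):
        self.tracks = {}

    def add(self, name):                      # 2. add a feature point
        self.tracks[name] = {}

    def remove(self, name):                   # 2./5. remove/delete a track
        self.tracks.pop(name, None)

    def correct(self, name, frame, pos):      # 3. fix a bad marker
        self.tracks[name][frame] = pos

    def clear_range(self, name, start, end):  # 4. clear frames to re-track
        for f in range(start, end + 1):
            self.tracks[name].pop(f, None)

    def scaled(self, name, factor):           # 7. tracker data scaling
        return {f: (x * factor, y * factor)
                for f, (x, y) in self.tracks[name].items()}

ts = TrackSet()
ts.add("corner")
ts.correct("corner", 1, (10.0, 20.0))
ts.correct("corner", 2, (11.0, 21.0))
ts.clear_range("corner", 2, 5)  # frames 2-5 will be re-tracked
print(ts.scaled("corner", 2.0))  # → {1: (20.0, 40.0)}
```

Items 1 and 6 (picking features, visualizing tracks) are the parts that actually need UI design rather than data.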

It needs to be decided where in Blender these should integrate and display: the sequencer? the compositor? the 2D image viewer? the 3D view?
Where should the panels go, and how should they look?
Is it another button in the buttons panel? How is the feature enabled/disabled?

All sorts of interfacing decisions that need to be made :slight_smile:

LetterRip, why not integrate it in a similar fashion as the game engine or the external renderers? By simply switching to “Compositor” from “Blender Internal”, one could fit a whole variety of tools in the Properties editor without affecting the current layout.

EDIT: Whoops! I meant “Match Moving” of course, not “Compositing”.


One important factor is that certain core Blender developers will ultimately have to approve the design. Another constraint is the desire to make the minimal UI changes that still accomplish the goal. A massive UI design doesn’t fit that goal.

Another topic, but not one for here :slight_smile:, is matting/rotoscoping. I’ll just put the links in this thread for now so I can easily find them again later.