RotoBezier Addon in Blender 2.5

Hey, sharing the new RotoBezier addon in 2.5 Trunk. I do a fair share of compositing in Blender and this functionality has been lacking badly, so I asked Campbell Barton for some new features and fixes in the 2.5 API and RNA system, and then I was able to come up with this. Hope you like it!

I didn’t get a chance to try it yet, but man, this is soo awesome. Thanks ZanQdo! A really great feature!

Awesome, I’m happy you like it, tell me if you use it for anything :slight_smile:

Nice work dude!
Combine this with some 2D tracking and bezier-based feathering, and we’re almost up there with the high-end compositing packages.

Awesome stuff!

Love it! Absolutely needed for live action compositing.

What do you mean by that? A double mask? I’ve been thinking about how to do that.

Things are getting better and better regarding compositing in Blender :slight_smile:

Awesome stuff, would have needed that quite some time ago.

I recently had to write (in C++) a 2D pattern matchmover with NCC and mean shift for adaptive pattern tracking. If the code would be of any help, I’ll be glad to mail it to ZanQdo or just post it up ^^
It uses boost::gil, which would be easy to get rid of if you don’t want or can’t use it. I am almost certain my code has an off-by-one error somewhere though; the integral image for tracking has some odd pixels, but it works, and I guess it’s more about the NCC, mean shift and integral image algorithms anyway.

I have some visual effects projects (live action/ sfx) to do next semester, so I’ll definitely be making use of this!

Thanks! :slight_smile:

Basic Blending.
To create a feathered mask for the roto, based on the bezier shape, so that it blends into the original footage:


  // Alpha-blend the roto object over the background, using the
  // feathered mask as the per-pixel weight (w = 0 -> background,
  // w = 1 -> object).  The +0.5 rounds to the nearest integer.
  for (int v = 0; v < objectView.height(); ++v)
  {
      for (int u = 0; u < objectView.width(); ++u)
      {
          float w = maskView(u, v) / 255.0f;
          for (int c = 0; c < 3; ++c)  // blend R, G and B channels
          {
              outputView(start_x + u, start_y + v)[c] = (unsigned char)(
                  (1 - w) * backgroundView(start_x + u, start_y + v)[c]
                  + w * objectView(u, v)[c] + 0.5f);
          }
      }
  }

Mmm, but can’t that be done easily in the compositor?

I’m working on some extra tools to facilitate typical roto work, like assigning matte materials and other stuff

Edit: Done, check out the new stuff in SVN :slight_smile:

Is there somewhere I can download just the add-on? I can’t seem to find it listed in the builds on graphicall. Which revision number would include it?

Oh and by the way awesome feature! I’ve been missing this as well.

You can download it from here:

https://svn.blender.org/svnroot/bf-extensions/trunk/py/scripts/addons/animation_rotobezier.py

Be sure to have a recent build, I think [33198] or newer.
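
On a 2.5x build you should also be able to drop the file into your scripts/addons folder and enable it from the Python console. A minimal sketch, assuming the 2.5-era wm.addon_enable operator:

    import bpy

    # Enable the addon by its module name, i.e. the .py file above
    # placed in 2.5x/scripts/addons/
    bpy.ops.wm.addon_enable(module="animation_rotobezier")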

cheers

Yayy! Fanboy reply: this is so awesome. At last, animated bezier shapes without headache. Thank you ZanQdo! About bezier feathering, would that be like the “inside/outside feathered matte” determined by secondary bezier curves derived from the original shape?
Check out Apple’s Color to see it in use:
http://documentation.apple.com/en/color/usermanual/index.html#chapter=15%26section=3%26tasks=true

So… how do you guys feather now? Just use blur?

In Flame (and I believe Nuke and Shake as well) you can actually define the feathering using beziers. By that I mean you have your main mask shape, and then an inner and outer shape that define the falloff. So each point has two other points attached that can be dragged closer or further away to control the falloff.
It’s a more precise way of doing feathering than just adding blur.
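
To make that falloff idea concrete, here is a minimal NumPy sketch (my own illustration, not code from any of those packages). Given a signed distance d from the main mask edge and hypothetical inner/outer feather offsets, the matte ramps linearly across the feather region:

    import numpy as np

    def feather_alpha(d, inner, outer):
        # d: signed distance from the main mask edge (negative = inside).
        # inner/outer: how far the inner and outer feather shapes sit
        # from the edge (inner <= 0 <= outer).
        # Alpha is 1 inside the inner boundary, 0 outside the outer one,
        # with a linear ramp in between.
        return np.clip((outer - d) / (outer - inner + 1e-9), 0.0, 1.0)

For example, feather_alpha(d, inner=-3.0, outer=5.0) takes the matte from fully opaque 3 pixels inside the curve to fully transparent 5 pixels outside it.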

@arexma: are you kidding? Wow, if that could make it into this workflow, you and ZanQdo would be instant stars!
You know that Project Mango is going to be a vfx piece, right?

Like here.
http://homepage.mac.com/juhatak/TEMP/blender/shake_roto.png

To be perfectly honest, I didn’t know about the focus of Mango =) Too busy unloading ToDos off the truck :slight_smile:

But no, no kidding.
It was actually part of a Computer Vision course at university… an assignment.
And I really don’t need to be a hero.

Here’s the source:
http://temphost.arexma.net/2dPatternTracking.zip

It’s commented and missing the university’s framework, which isn’t really needed; the framework basically handles the XML config and file I/O.
Somewhere in the NCC there is an off-by-one in all the GIL iterators. I haven’t had any time to debug it yet, nor the need to - the code earned me a 1 anyway, or an A for our non-imperialists :smiley:

What it basically does at the moment:
You feed it an image sequence, a pattern from that sequence and its coordinates. That would have to be replaced with reading the initial image under each bezier point and assigning the coordinates. It all becomes really clear once you look into one of the configs.
I am not sure if it is possible to track multiple patterns in one go; I think it has to be done sequentially, point by point. But once the integral images are computed, it’s no real effort to calculate the local maxima to find the pattern match - which is actually the whole trick to making it really, really fast.
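
For anyone unfamiliar with that trick, here is a minimal NumPy sketch of an integral image (my own illustration, not the code from the zip): after one cumulative-sum pass, any box sum the NCC needs comes from just four lookups, and the zero padding sidesteps the classic border off-by-ones:

    import numpy as np

    def integral_image(img):
        # Pad with a zero row/column so box_sum needs no boundary checks.
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return ii

    def box_sum(ii, y0, x0, y1, x1):
        # Sum of img[y0:y1, x0:x1] in O(1), from four corner lookups.
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
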
The mean shift algorithm prevents the tracking from jumping. Imagine you have a video and want to track a red ball. Now your red ball leaves the image and another red ball comes in from the other side. The algorithm will ignore the other ball and not track it; it will wait around the last local maximum for the first ball to reappear within a given threshold, which could be a parameter for the user to set.
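
Sketched in NumPy (again my own illustration, with a made-up radius parameter standing in for that threshold): instead of jumping to the global maximum of the match response, start from the last position and follow the local weighted centroid until it converges:

    import numpy as np

    def mean_shift(response, start, radius=8, iters=20):
        # response: 2D map of NCC scores; start: (y, x) of the last match.
        y, x = start
        for _ in range(iters):
            y0, y1 = max(y - radius, 0), min(y + radius + 1, response.shape[0])
            x0, x1 = max(x - radius, 0), min(x + radius + 1, response.shape[1])
            win = np.clip(response[y0:y1, x0:x1], 0.0, None)  # weights >= 0
            if win.sum() == 0:
                break  # no support nearby: wait at the last position
            ys, xs = np.mgrid[y0:y1, x0:x1]
            ny = int(round(float((ys * win).sum() / win.sum())))
            nx = int(round(float((xs * win).sum() / win.sum())))
            if (ny, nx) == (y, x):
                break  # converged on the local maximum
            y, x = ny, nx
        return y, x
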
It also offers adaptive patterns: you start with an initial pattern to track, and since the pattern obviously changes during the movement, after each successful track the pattern is updated adaptively to compensate for changes in the normalized pixel values.
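
The adaptive update itself can be as simple as a running average; a sketch, with a made-up learning rate alpha:

    def update_pattern(pattern, new_patch, alpha=0.1):
        # Blend the freshly tracked patch (float arrays) into the
        # reference pattern, so gradual appearance changes don't
        # break the normalized cross-correlation over time.
        return (1.0 - alpha) * pattern + alpha * new_patch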

It really is no magic once you get the concept - I guess I am just fortunate that the Computer Vision and Computer Graphics institutes at my university are really good ones. Just wait until next year, when I get to code a raytracer (which I should have done last year) =) Starting to study computer science at my old age was one of my best choices =) Too bad mathematics is one of my weaknesses…

Bezier algorithms are on my ToDo as well; I’ve already done texture filtering, texture mapping, blending and antialiasing (coded the algorithms myself, not used libraries), so developing a class to make feather beziers can’t be too hard either.

I am not sure, though, that I have the time between my studies and the work at the studio to jump on the code wagon as well.
To tell the truth, I haven’t compiled Blender once yet, let alone read into its code =)
But I’ll be glad to help Zan to the best of my knowledge, and I think knowledge is valuable.

I posted a very good book on computer vision some time ago; it might be useful as well:
http://blenderartists.org/forum/showthread.php?t=187900&pagenumber=

O/

If you haven’t already, I really urge you to please contact the people of the libmv project.
Tell them about your code and what it can (and can’t) do.

The libmv code seems to be pretty modular, so existing algorithms can be switched out quickly.
Their ultimate goal is to integrate the resulting tracking library into Blender :smiley:

Project homepage: http://code.google.com/p/libmv/
Mailing list at http://groups.google.com/group/libmv-devel

EDIT to stay on topic: very nice rotoscoping script, ZanQdo.
The only possible improvement I can think of (though it’s probably pretty hard to implement) is a variable number of curve handles/points for each keyframe.

Greetings from Vienna :slight_smile:
Martin

OK guys, I think we have something nice going on here with all of your interest. I never thought so many people were facing the same problems I had. And yes, I have been thinking about double edge masks all this time and recently got an idea on how to do it using some crazy UVs… BUT I believe we face a bigger problem ATM: interactivity! This is a brainstorm design I’ve made and later discussed with Campbell

It’s a way to separate the rotomask rendering from the main renderer, so it can feed the compositor interactively and fast, just like a sequence of prerendered images.

I would rather keep the current solution simple and try to solve the interactivity issue first. What do you think?