Video with CG Background (new test Dec. 11)

Lots of people use camera tracking to stick a 3D object into a video, but I’ve not yet seen anyone use camera tracking to stick footage into a 3D background. Since I’m in a film class and we want to try some stuff with 3D backgrounds, I’ve been tinkering around with Icarus and Blender to create a CG environment through which a live actor can walk.

Right now I’m still doing basic motion tests. Please note that these look like crap; they’re just quick tests to check the motion. Here is my first test. The subject is a friend.

I used Final Cut Pro, Icarus, Blender, and VirtualDub to create this test. Final Cut Pro wasn’t really necessary, as I only used it to capture. You should be able to capture with any video editing software.

Why it’s crappy
I used Final Cut Pro to capture and then brought the footage home to work with. The footage I brought home was not keyed in any way. To key out the green, I used the color key matte feature built into Icarus. I have little experience with keying, and I was also in a huge rush when I shot the footage, so I didn’t set up the green screen properly for optimal keying (edit: for one thing, I was manually holding the LED ring that illuminates the screen in front of the camera; you can see it in the corners). Hence, plenty of green is visible. As for the footage itself: I captured it as a .MOV in Final Cut Pro, exported it to an AVI with the Microsoft DV codec in FCP, brought it home on a CD, converted it to a raw AVI with VirtualDub, imported it into Blender (along with the matte from Icarus), rendered the background in Blender as an AVI-JPEG, reimported the background, and finally rendered again as an AVI-JPEG. With all that converting, I’m surprised the video looks decent. The keying, again, is terrible.
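For what it’s worth, if I’d had a command-line encoder handy I probably could have skipped a couple of those generations. Something roughly like this would go straight from the captured .MOV to a lossless AVI that Blender can read (purely hypothetical; I actually used FCP and VirtualDub, and the filenames are made up):

```python
import subprocess

# Hypothetical one-step conversion: captured QuickTime -> lossless HuffYUV AVI.
# I didn't actually do this; filenames are placeholders.
subprocess.run([
    "ffmpeg",
    "-i", "capture.mov",   # the .MOV captured in Final Cut Pro
    "-c:v", "huffyuv",     # lossless video codec, avoids another DV generation
    "-c:a", "pcm_s16le",   # uncompressed audio
    "footage.avi",
], check=True)
```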


Is there any way to reduce the amount of motion in the IPO curves? Right now, when I import, the motion is so exaggerated that I have to scale the scene up to hundreds of times a comfortable size to match the camera movement correctly. In one of my tests an object got so big it would no longer show up in the view (in either orthographic or perspective).
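If nobody knows a built-in way, I might just end up scripting it. Something roughly like this should squash the translation curves by a constant factor (untested sketch; this uses the Python API from newer Blender builds, where the IPOs show up as F-Curves, and the camera object name is an assumption):

```python
import bpy

SCALE = 0.01  # assumption: whatever factor brings the motion down to a sane size
cam = bpy.data.objects["Camera"]  # assumed name of the imported tracked camera

# Scale every location keyframe (and its handles) so the scene doesn't have to
# be blown up hundreds of times to match the tracked motion.
for fcurve in cam.animation_data.action.fcurves:
    if fcurve.data_path == "location":
        for kp in fcurve.keyframe_points:
            kp.co[1] *= SCALE
            kp.handle_left[1] *= SCALE
            kp.handle_right[1] *= SCALE
        fcurve.update()
```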

If people want me to, later I can explain the whole process in detail or write a tutorial. I’m not sure how soon I’ll have time. I also need to refine the process first.

Does anyone know an alternative video host? I don’t like releasing all rights to my videos, and I’m not going to upload anything worthwhile to Putfile.

edit: Second test - this file is a little over 8 megs and takes a while to load. You can clearly see the tape we used to mark the green screen so that Icarus would have something to track. There is a glitch in the camera motion because we didn’t put enough tracking crosses on the screen, but most of it is good.

These are good tests so far. The second has a bit too much green still around the subject, but seems to track well.

Looking forward to some more. :smiley:

Sonix.

Thanks!

Like I said, I keyed it with a matte from Icarus and I didn’t set the screen up well. For final videos we’ll be keying with Final Cut and the screen will be better.

I shot five different short sequences for tests, but in two of them not enough crosses are visible and Icarus won’t track them. In one of them I stepped in front of a light and ruined the whole shot with my shadow. So I guess we need to shoot more tests.

Does anyone know an alternative video host? I don’t like releasing all rights to my videos, and I’m not going to upload anything worthwhile to Putfile.

There are two that come to mind. Rapidshare and Megaupload.

This is good stuff.

As a Putfile alternative, try Google video:
http://video.google.com/
Putfile sometimes tends to freeze up Firefox, so I haven’t seen your animations; however, I am sure they are going to be good!

Thanks, guys!

Putfile sometimes tends to freeze up Firefox, so I haven’t seen your animations; however, I am sure they are going to be good!

Well, you could try I.E. just for five minutes :stuck_out_tongue: . But whatever.

Any ideas how I could make the actor cast a shadow without actually modeling him?

I’m also having some trouble with alpha. For the two previous renders, I have the video set as one texture layer and the black-and-white matte video set as a second texture layer. The matte is set to CalcAlpha (and NegAlpha, because the black and white are flipped). In the Map To tab in the materials window I have the matte layer set to Alpha. This didn’t do anything until I changed the blending mode to Subtract; then it made the correct area transparent, except that instead of seeing the rest of the model behind the video plane, it only showed the world. To work around this, I turned off the video plane, rendered the scene, imported that render as the world, and then turned the video plane back on and rendered again. It worked, but it’s a total pain in the ass, and to top it off I can’t see how the background and video line up until I render the background. How would I make the alpha area show the model behind the video plane instead of only the world?
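In case the texture-slot description above is hard to follow, here’s the same idea sketched with the node-based material system in newer Blender builds: the footage drives the plane’s colour and the Icarus matte drives its alpha, with an Invert node standing in for NegAlpha. This is a rough, untested sketch with placeholder image names, not what I actually clicked through in the Map To panel:

```python
import bpy

# Footage drives the plane's colour, the Icarus matte drives its alpha.
mat = bpy.data.materials.new("FootagePlane")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

footage = nodes.new("ShaderNodeTexImage")
footage.image = bpy.data.images.load("//footage.avi")  # placeholder path

matte = nodes.new("ShaderNodeTexImage")
matte.image = bpy.data.images.load("//matte.avi")      # placeholder path

invert = nodes.new("ShaderNodeInvert")  # stands in for the NegAlpha flip

links.new(footage.outputs["Color"], bsdf.inputs["Base Color"])
links.new(matte.outputs["Color"], invert.inputs["Color"])
links.new(invert.outputs["Color"], bsdf.inputs["Alpha"])
```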

Here is the third test. This is with the same footage as the first test; this time I was experimenting with lighting and shadows. I never got the lighting to a point where I felt it was close to the real lighting in the scene, even though I spent hours on it. I’m not good with lighting (and light in Blender doesn’t behave realistically at all).

Shadow test

As you can see, I figured out how to get the transparency and shadowing working, but using the footage plane as the shadow caster only works if the shadow falls essentially directly behind or in front of the subject. I doubt there’s going to be a simple way to make shadows for the left or right side.

I’d still like to know a way to reduce the amount of motion in IPO curves. I’m going to try looking this up some more.

Excellent work! You’re probably about a year ahead of me since I’m just getting started in Blender.

DV is crap when it comes to keying. If I were doing it more frequently than once every few years, I’d try building a PC to record via S-Video using HuffYUV or some other mostly lossless codec.

Thanks.

I started Blender in mid October. :stuck_out_tongue:

Yes, DV is bad for keying, even with good cameras. Gonna try HD for the next one.

I work at a TV station, and I am very interested in replicating your results. I’m excited about the possibilities of using virtual sets for LIVE TV broadcasts. What are some of the issues or hurdles that need to be addressed in order to accomplish REALTIME pixel tracking and Blender3D environment compositing? The Chief Engineer is also interested and is willing to help me cobble together a solution here. Any tips would be great inspiration.

Thanks a lot!

Trane

I don’t know anything about professional motion tracking. As for Icarus, there’s no way you’d ever get real-time tracking. Tracking takes a fairly long time, and if your subject fills a lot of the frame (like mine does), you have to matte the subject out so Icarus doesn’t try to track him or her. Also, we’ve been using tape on the screen for tracking, but the tape has to be a different color to be trackable, so it doesn’t key out well. I’m not sure how you’d get around that without keying both colors live, which I’ve never tried. I don’t know how you’d matte, track, and key in real time.

You’d also need something that can render in real time, or maybe just a 2D image that warps with the camera movement (but that wouldn’t look as good). If you wanted to render in real time, you’d need a faster renderer (probably something like a video-game engine; I’ve never tried Blender’s, so I don’t know how it performs) or a supercomputer.

I expect this would all be very expensive if possible. What station do you work at? Is it small or large?

Those are some very good points to bring up…I was thinking a game engine would be perfect as well.

I work at a very small-market CBS affiliate in Medford, Oregon. We have a three-camera set with a weather wall in chroma green. The wall is small, and it’s just painted plywood, so it has some irregularities. I would envision an entire set in chroma green with very basic furniture. We of course use a broadcast-quality Ross switcher, so I was thinking IT could do the “compositing,” like we do with an anchor on set and the computer-generated weather maps.

The issue seems to be: how do we get positional data from the REAL cameras to the VIRTUAL cameras? Pixel tracking takes too long, you say… that would be fine for “post” scenarios (like promos or special reports). So what if we had the camera(s) feeding position, angle, and zoom data? It’s just a matter of actuators with an output. I’m no engineer, but I work around them, and I’m sure this could be done.

Let’s see… one camera with 6 DOF (degrees of freedom):

  1. PAN
  2. TILT
  3. DOLLY X (HRZ)
  4. DOLLY Z (HRZ)
  5. PEDESTAL Z (VRT)
  6. ZOOM

That’s not so many variables… even times three cameras on the set.
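Just to make it concrete, the per-frame data each camera would have to spit out is tiny. A rough sketch of one such packet going over the wire (field names, units, and the address are all made up):

```python
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class CameraPose:
    """One frame of tracking data from a studio camera (hypothetical format)."""
    camera_id: int
    pan: float       # degrees
    tilt: float      # degrees
    dolly_x: float   # metres, horizontal
    dolly_z: float   # metres, horizontal
    pedestal: float  # metres, vertical travel
    zoom: float      # focal length in mm

def send_pose(pose: CameraPose, host: str = "192.168.0.10", port: int = 9000) -> None:
    """Send a pose to the machine rendering the virtual set (address is made up)."""
    packet = json.dumps(asdict(pose)).encode()
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (host, port))

send_pose(CameraPose(1, pan=12.5, tilt=-3.0, dolly_x=0.4, dolly_z=1.2,
                     pedestal=1.6, zoom=35.0))
```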

If we use Blender’s Game engine, we should be able to create VIRTUAL cameras (players) and use their viewpoints!

Now feed the camera(s) movement to three clients running a networked “game” of a news set… use the switcher to KEY the REAL reporters on set with the real-time coordinated cameras… and VOILÀ!

Sounds good on paper. Can it be done with Blender? Hmmmm. I am also learning Garagegames.com’s excellent game engine TORQUE… maybe it could be done with that, but I like Blender3D a lot.

Your thoughts … ANYONE?

What do you think the hardware requirements would look like? The cameras already have thick-ass wires leading into the control room; what’s a few more!? IMAGINE… a giant MOUSE BALL at the bottom of each camera pedestal.

[quote=“thelonesoldier”]Thanks, guys!

Putfile sometimes tends to freeze up Firefox, so I haven’t seen your animations; however, I am sure they are going to be good!

Well, you could try I.E. just for five minutes :stuck_out_tongue: . But whatever.
[/quote]

Not everybody uses Windows :-? . I hope you read the TOS at Putfile, ’cos anything you put on there no longer belongs to you.

We of course use a broadcast-quality Ross switcher, so I was thinking IT could do the “compositing,” like we do with an anchor on set,

I know that’s possible. As I said, I wasn’t sure how you’d key out two colors (the switcher we use at my school can only do one), but if you’re going to use camera movement data instead of pixel tracking, that wouldn’t be a problem.

It sounds like you know what you’re doing. I’m not good with that sort of thing; I could never turn a camera into a giant controller that feeds data into Blender.

Not everybody uses Windows. I hope you read the TOS at Putfile, ’cos anything you put on there no longer belongs to you.

I think Opera is free now; see if they have it for your OS (Linux?).