Blender Compositor vs. Adobe After Effects

I would like to use Blender's compositor instead of Adobe After Effects. Is it possible? Compared to After Effects, what can't Blender's compositor do?

According to en.wikipedia.org/wiki/Shake_(software), a Shake source-code license costs $50,000 USD. Would that make Blender's compositor better?

I’ve been an AE user for the last 12 years (since it was called CoSa After Effects!), so such a change is slow for me. But the better I get to know Blender's nodes, the more I believe every day that it's possible to replace AE with Blender. And I’m trying every day! :smiley:

It entirely depends on what you want to do. AE has a fantastic titler, and working with 2d text is a breeze. Personally I love Blender, and I couldn’t give it up, but I couldn’t give up AE either.

AE has an excellent keying plugin (keylight) but you can also do keying with Blender. There’s a great tutorial on it, although I don’t have the link right now.

If you can do without AE’s color correction, keying, text engine, and host of plugins, then you could replace it with Blender. You will probably even find much of what AE offers is doable in Blender, but not all of it.

There’s also the workflow angle. There are some things that are going to be faster to put together in AE. If tight deadlines aren’t a concern, then speed to finish just doesn’t matter as much.

Personally I see the two programs as complementary, and not directly comparable.

I am using Combustion alongside Blender. Blender has a little way to go before it can compete with 2D compositors like Apple Motion, Adobe After Effects, and Autodesk Combustion.

For me, Blender is a replacement for Softimage and Houdini: both are 3D effects suites with integrated compositors, which adds a whole new level to 3D.

Since you can’t keyframe node parameters, it’s difficult to compare.

As everyone has said, it’s a matter of workflow speed and efficiency. If you are learning both, and want to save the money, and are not having to work in a shop with other AE users, and since this IS a Blender forum, of course we are going to lean toward Blender.

You keyframe many node parameters using the Time-Multiply node combo.
You color correct using the color nodes (Color, HSV, RGB).
You key using the matte nodes (Difference, Chroma, Spill, etc.).
You title by creating title scenes with Blender’s text objects, then bringing them in through a RenderLayer input node into a Mix node to overlay the text.
You mask in much the same way: create a shape in a scene, set it to shadeless white, kill the world, and then import that black-and-white mask.
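The overlay step in the title and mask recipes above boils down to a linear mix driven by a factor. A minimal plain-Python sketch of that math (pixel values here are hypothetical, not tied to any particular Blender version):

```python
# Sketch of the math a Mix node performs when overlaying a rendered title
# (or applying a black-and-white mask): a per-pixel linear blend where the
# mask/alpha drives the mix factor.
def mix_over(bg, fg, fac):
    """Linear blend, like a Mix node with Fac driven by a mask."""
    return tuple(b * (1.0 - fac) + f * fac for b, f in zip(bg, fg))

footage_px = (0.2, 0.4, 0.6)   # background RGB (hypothetical)
title_px   = (1.0, 1.0, 1.0)   # shadeless white title
print(mix_over(footage_px, title_px, 0.0))  # mask black: footage unchanged
print(mix_over(footage_px, title_px, 1.0))  # mask white: pure title
```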

So there are ways to do most of the same stuff, but they are different, and some will find them more cumbersome than AE.

The speed of masking in AE is lost in Blender. Until you can actually draw 2D-style on footage, Blender cannot replace AE, or any compositing program, IMHO.

Also, After Effects manages keyframes in a much more efficient manner. If only the code gurus of Blender would make the IPO editor work like After Effects, workflow would improve.

That’s the point. I bet we could do this soon… Can somebody tell me I’m not wrong? :wink:

Um, you can. Use your footage as background, or as a texture on a plane that mimics the projection screen, and draw your curve. I use Beziers a lot to draw my masks, but you can also use a mesh… and you can draw multiple meshes/combos for different areas.

You keyframe many node parameters using the Time-Multiply node combo.

@PapaSmurf: do you have some clues about it? I really think I’ll get rid of After Effects soon… hehehehe

Um, you can. Use your footage as background, or as a texture on a plane that mimics the projection screen, and draw your curve. I use Beziers a lot to draw my masks, but you can also use a mesh… and you can draw multiple meshes/combos for different areas.

But you don’t get the immediate feedback that After Effects offers. I can draw my mask right on the image, drag my points around and see the masking taking place as I edit. I don’t think you can get that in Blender yet.

Also, what about animating the mask? In After Effects all I have to do is turn on the stopwatch for the shape of the mask and start moving mask points at various points in time. How would you do this in Blender? The only way I can think of is to put a hook on every vertex, and then you would have to select all the hook empties every frame and remember to press the I key to lock in their positions for that frame. Yes, it can be done, but not quickly.

Um, it works for me; I can do rotoscoping this way. I made a video on masking that explains how, and I think I posted it here somewhere.

In Blender you have shape keys, and you can also just move the whole mask via IPO Object. Yes, you can also hook the mesh and animate the hook as you say. I agree that the whole shape key user interface could be reworked to be more friendly akin to AE. No argument on the speed/workflow, as I have commented before. Also, multiple overlapping masks can be combined and animated as objects to avoid shape keys.
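As a rough illustration, a shape key amounts to a per-vertex linear blend between stored outlines; frames between two keys interpolate the key value. A plain-Python sketch with hypothetical mask vertices:

```python
# Sketch of what animating a mask via shape keys amounts to: each keyed
# shape stores a full set of vertex positions, and in-between frames blend
# them by the key's value. Vertex data here is hypothetical.
def blend_shapes(basis, key, value):
    """Linear blend of two vertex lists, like a shape key at 'value'."""
    return [
        tuple(b + (k - b) * value for b, k in zip(bv, kv))
        for bv, kv in zip(basis, key)
    ]

mask_frame_1  = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]   # outline keyed at frame 1
mask_frame_25 = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]   # outline keyed at frame 25
print(blend_shapes(mask_frame_1, mask_frame_25, 0.5))  # halfway in between
```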

Regarding the Time/Multiply node combo: suppose you want to do a fade to black from frames 100 to 148. You feed the image to a Color node, and the Color node's output to the composite output. Set the Color node to flat 0 (black). Add a Time node and set its Sta: and End: to 100 and 148. Thread that into the Color node's Fac input. The Color node will have no effect on any frames before 100, and will then increasingly influence the video after that.
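A plain-Python sketch of what this fade amounts to, assuming the Time node's default linear curve (frame numbers as above, pixel values hypothetical):

```python
# Sketch of the fade described above: the Time node emits a 0-to-1 factor
# between its Sta and End frames (assuming a linear curve), and the Color
# node, set to flat black, mixes the image toward black by that factor.
def time_fac(frame, sta=100, end=148):
    if frame <= sta:
        return 0.0
    if frame >= end:
        return 1.0
    return (frame - sta) / (end - sta)

def fade_to_black(pixel, fac):
    return tuple(c * (1.0 - fac) for c in pixel)

print(time_fac(99))    # 0.0 -- no effect before frame 100
print(time_fac(124))   # 0.5 -- halfway through the fade
print(fade_to_black((0.8, 0.6, 0.4), time_fac(148)))  # (0.0, 0.0, 0.0), fully black
```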

The Multiply node is needed where you want to animate something to less than 100% (or more than 100%, e.g. scaling something from 0 to 200%). Suppose you have a film that was shot outside, and from frames 100 to 148 the sun came out and the footage is overexposed, and you want to darken it by about 30%. You can either adjust the Time node's curve, or feed it into a Multiply node set to 0.3 and then feed the Multiply node into the Fac socket on the Color node.
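The same idea in plain Python: scaling the Time node's ramp through a Multiply node caps the effect at 30% instead of a full fade (frame numbers and values hypothetical):

```python
# Sketch of the Multiply trick: scale the Time node's 0-to-1 ramp down to
# a 0-to-0.3 range so the darkening tops out at 30%.
def scaled_fac(frame, sta=100, end=148, scale=0.3):
    raw = min(max((frame - sta) / (end - sta), 0.0), 1.0)  # Time node ramp
    return raw * scale   # what a Multiply node set to 0.3 would output

print(scaled_fac(100))  # 0.0 -- no darkening yet
print(scaled_fac(148))  # 0.3 -- never darkens more than 30%
```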

Blender’s compositor will probably never be able to compete with AE for speed and ease of use concerning 2D image files, and its bezier tools will probably never see that day either, just as AE will never be able to compete with Blender for flexibility of 3D image manipulation and creation, or custom effect creation. Apollos said it best when he called these programs “complementary”. This will probably always be the case.

AE is also superior in its native handling of alpha channels to all other programs, bar none. Blender is very good in this regard, but there is something a bit quirky in the code that you have to learn to work around. I suspect this has to do with the OpenEXR render pipeline and not Blender itself (it rears its ugly head where keyed alphas display an inability to properly “pass through”, resulting in a halo effect that you’ll see from time to time depending on how you configure your nodes, but it’s easy to circumvent once you understand what’s going on). This will likely have to be dealt with by the coders on ILM’s OpenEXR board.
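For illustration, a dark-fringe halo of this kind can be reproduced in a few lines of plain Python: an edge pixel whose color is already premultiplied (associated alpha, as in OpenEXR) gets multiplied by alpha a second time if it is pushed through a straight-alpha "over". This is a generic alpha-compositing sketch, not Blender's actual pipeline code:

```python
# An edge pixel with alpha 0.5. With premultiplied (associated) alpha the
# RGB is already scaled by alpha; running it through a straight-alpha
# 'over' multiplies by alpha again and darkens the edge -- the halo.
def over_straight(fg, a, bg):
    """Over operator for straight (unassociated) alpha."""
    return tuple(f * a + b * (1.0 - a) for f, b in zip(fg, bg))

def over_premul(fg_premul, a, bg):
    """Over operator for premultiplied (associated) alpha."""
    return tuple(f + b * (1.0 - a) for f, b in zip(fg_premul, bg))

white_edge_premul = (0.5, 0.5, 0.5)   # white pixel at alpha 0.5, premultiplied
bg = (0.0, 0.0, 0.0)

print(over_premul(white_edge_premul, 0.5, bg))    # correct: (0.5, 0.5, 0.5)
print(over_straight(white_edge_premul, 0.5, bg))  # wrong: (0.25, ...) -- dark fringe
```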

Well, the way I see it, the problem isn’t what each program can do better than the other, it’s how easy it is to do it. Until Blender gets a keying node that is as easy to use as AE’s Keylight effect, I’m sticking with AE. I do like the way masking is done in Blender a lot more than in AE, though.

I agree that the programs are complementary. There are some things that are just much easier in AE or Premiere, and vice versa. Primarily, masking and effect keyframing are cumbersome in Blender. Once Blender has N-Gons, masking will be a breeze, though. And in most cases you can keyframe a node’s result. I haven’t had to use the Time node yet, but it looks like it can suffice for a lot of situations.

It really depends on which program you’re more comfortable using for whatever task. I was doing AE-style editing ever since I was 9 years old or so, so when I came across nodes I freaked out. Now that I’m using them more often, though, I’m actually starting to like them a lot, even compared to layers.

Edit: Blender can do color correction quite nicely. The only feature I wish I had is Hue/Saturation adjustment of individual hues (as in Photoshop where you can affect either the reds, the blues, the greens, etc.).

You can. Nodes allow you to roll your own. This isn’t the only way to do it, either. You can see quite clearly via the split viewer image that only the reds are affected. The specular was separated from the combined image and added back at the end to keep the HSV node from playing tricks with it, since white is composed of R, G, and B.
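As a plain-Python sketch of the idea (using the standard colorsys module; the 30-degree "red band" threshold is an arbitrary choice for illustration, not taken from the node setup above):

```python
# Hue-restricted adjustment: boost saturation only for pixels whose hue
# falls in the red band, leaving greens and blues alone -- the same effect
# a hue-gated HSV node chain achieves in the compositor.
import colorsys

def boost_red_saturation(rgb, amount=0.5):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    # treat hues within ~30 degrees of pure red (h near 0 or 1) as "reds"
    if h < 30 / 360 or h > 330 / 360:
        s = min(s + amount, 1.0)
    return colorsys.hsv_to_rgb(h, s, v)

print(boost_red_saturation((0.8, 0.5, 0.5)))  # a red: saturation increases
print(boost_red_saturation((0.5, 0.8, 0.5)))  # a green: left alone
```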


Attachments: [node-setup screenshot with a split viewer showing only the reds affected]


I am really interested in using Blender for processing video, as it seems able to do everything I need with the compositor. There are many things I like about it, but I find that keyframing via the Time-Multiply combo creates a lengthy and complex page of nodes. Keyframing already exists in Blender and is very intuitive, so why is there no intention to connect the two, such that each individual parameter of a node could be controlled with keyframes?

A couple other things:
How does Blender control where a node is added? Often I add a node and it is off the viewable screen of nodes I am working upon.

Why does Blender limit how far you can zoom out in the node view? The real estate gets used up fast, I find, and then it gets annoying to scroll around.

When I step through frames to view the output with the viewer, each step takes up more memory, and when scanning through video to locate where to apply a color effect this quickly leads to running out of virtual memory. Assuming it’s due to caching frames for the viewer, is there some way to flush the cache or set an upper limit on it?

Thanks for any assistance with my questions!

How does Blender control where a node is added? Often I add a node and it is off the viewable screen of nodes I am working upon.
It’s fixed in SVN… now a node will appear where your mouse was when you RMB-click or press Spacebar to add it. (I heard that it bothered some crazy old crackpot enough that he actually came up with a patch to fix it…)

GREAT ! :smiley: