“Afterglow” or “fade away” node for the composition nodetree.
Before reading this, please keep in mind that this is nothing I personally need in any way, but it certainly sounds very cool to have, and I just wanted to put it somewhere before it’s lost in the attic of my brain.
At first I wanted to put this directly into the suggestion page in the wiki, but I believe there are quite a few people here who might have critiques & suggestions.
The effect is very similar to the motion/vector blur effect, but it takes the whole thing to a different (and in some ways more extensible) level. It may also need less CPU power. But it’s not a replacement for motion/vector blur.
I know that this may already be possible (I bet it is, somehow) with the existing composition nodes, but I never tried it.
User Interface
Imagine a new composition node with the following properties:
A ‘texture’ that is the same size as the output node. I’ll refer to this as the ‘buffer’.
Multiple inputs (objects & groups) that tell the node where (a mask?) to look for their color & intensity values. An actual separate node that gives you the mask layer of a specified object would be even better IMHO … see the end of this post.
An “intensity” value - this tells the node how fast the glow in the buffer fades away.
Maybe a “delay” and other values are needed here as well to control the fading behaviour better. I’ll exclude them from this proposal to keep it simple.
Each composited frame, the buffer is darkened by a certain amount (depending on “intensity”), and then the color & intensity of the given objects & groups are stamped into the buffer.
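A minimal numpy sketch of that per-frame update as I picture it (the function name, the decay formula, and the choice of a lighten-style `maximum` over plain addition are all assumptions on my part, not a spec):

```python
import numpy as np

def update_afterglow(buffer, frame_color, frame_mask, intensity):
    """One compositing step of the proposed node (illustrative sketch only).

    buffer      -- float RGB array kept between frames (the 'buffer' texture)
    frame_color -- RGB of the selected objects & groups this frame
    frame_mask  -- 0..1 alpha of those objects this frame
    intensity   -- how fast the glow fades (0 = never, 1 = instantly)
    """
    buffer = buffer * (1.0 - intensity)          # darken what is already there
    stamp = frame_color * frame_mask[..., None]  # current objects, masked
    return np.maximum(buffer, stamp)             # stamp them on top of the trail
```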
Examples

With this node you’ll be able to create very long light trails:
A still landscape and a moving bright light in front of a camera with a long exposure time - here you’ll see a light trail behind the light.
Very long lightsaber trails.
Anime-style light trails from car/motorcycle lights that last unrealistically long.

“Object Mask” node

And as mentioned above, a composition node that takes an object or group name as input and outputs its alpha mask would be quite a boon in a lot of cases (no extra scene or other tricks needed). You could combine multiple of these masks with the existing RGB nodes.
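As an aside, current Blender’s compositor has an ID Mask node that gets close to this; a rough bpy sketch (the object name, view-layer name, and the exact property names are assumptions based on today’s API, not on what existed when this was written):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
nodes = scene.node_tree.nodes
links = scene.node_tree.links

# Tag the object so the IndexOB pass can identify it
bpy.data.objects["Headlight"].pass_index = 1      # "Headlight" is a placeholder
scene.view_layers["ViewLayer"].use_pass_object_index = True

rl = nodes.new("CompositorNodeRLayers")           # Render Layers
idmask = nodes.new("CompositorNodeIDMask")        # ID Mask
idmask.index = 1                                  # matches pass_index above

links.new(rl.outputs["IndexOB"], idmask.inputs["ID value"])
# idmask.outputs["Alpha"] is the per-object mask; several of these
# can be combined with Mix/Math nodes as suggested above.
```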
It sorta sounds like you’re looking for an animation, not a compositing function. There is a texture node that applies a texture to an image, but it does not wipe the image by itself; that’s where you need to feed the time node into the X displacement socket. The time node can also feed into an alpha channel, making the texture become more visible and have more effect. It sounds like the texture you’re thinking of would be an image texture of a headlight pattern. You would alpha-over or mix/overlay this onto your static background.
The key to nodes is to think like Legos: use small building blocks to build up complex functions.
To be exact, it’s a compositing feature for animations (just like motion blur).
It doesn’t make sense for stills. (But it could be used to make nearly static glowing parts, such as flickering candlelight, or as a CPU-expensive “brighten” filter for static parts.)
There is a texture node that applies a texture to an image, but it does not wipe the image by itself; that’s where you need to feed the time node into the X displacement socket. The time node can also feed into an alpha channel, making the texture become more visible and have more effect. It sounds like the texture you’re thinking of would be an image texture of a headlight pattern. You would alpha-over or mix/overlay this onto your static background.
OK, this goes straight over my head … or at least I can’t really picture what you just said.
Why would I need an X displacement socket?
Where do I get the colors for the texture that gets mixed in this way?
What if I don’t have a static background, but a slowly (or even quickly) moving one?
If needed, I could draw an example node tree of what I imagined in my first post, but that takes some time :-/
OK, I’ve made a mockup of what I had in mind (slightly modified) in my first post. And as I already stated, I don’t really know how this would be done with existing nodes (nor do I need it). This is just a thought thrown out for discussion/critique …
Mind that this setup doesn’t need another scene to mask out the object.
I don’t think it can get any more modular.
EDIT: A side effect of this node setup would be that the trail would rotate with the camera.
+1. It’s mostly like motion blur and can be cheated with vector blur, but that doesn’t help when using an external source at a set frame rate.
It would be handy to add previous frames of a source clip to the current frame via a blend value. You could stack lots of duplicated sources with staggered start times, but sheesh, what a lot of work for little return.
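Roughly this, done offline in Python/numpy (the names and per-step fade are made up for illustration; older frames fade as blend**age and combine with a lighten-style max):

```python
import numpy as np

def trail_frame(frames, t, blend=0.8, tail=10):
    """Blend up to `tail` previous frames of a clip into frame t."""
    out = frames[t].copy()                  # current frame at full strength
    weight = blend
    for age in range(1, min(tail, t) + 1):
        faded = frames[t - age] * weight    # older frames contribute blend**age
        out = np.maximum(out, faded)        # lighten-style combine
        weight *= blend
    return out
```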
I animated 10 frames of the entire model and 100 of just the lights. Then I joined them in Gimp (which meant changing the alpha of every layer) and used Gimp’s motion blur to hide the frame stepping.
I’m quite sure there is a better way to do this… even with Gimp, but you know… once you’re halfway in …
Could you stick the source footage on a plane in the 3D view, then use Motion Blur (the render type with frame sampling) to add previous frames on top? The drawback is the limited number of frames you can add.
Without the frame limitation that might actually work. But as it is, you’d have to do it multiple times and combine the results - so the setup and patching is even more complicated than changing 100 layers (which is tedious, but simple).
And for short tails it might be enough to use Gimp’s motion blur anyway, so that’s not really a solution, I guess.
Just tried the scene-sampling motion blur, and guess what? It doesn’t work. It only samples internally generated textures back in time, not external sources such as movies.
Just tried it with nodes: a Movie input node (with a frame offset) into a Mix node, then duplicate that and daisy-chain. The problem is that the duplicated nodes cannot be made unique, so any change to one affects the others. That’s fine for the fade amount, I guess, but not for the frame offset - since they all share the same value, there is no effect.
It seems like the kind of effect that would suit Python coding (as it is just an iteration of the current nodes, grouped differently), but that is a way off for nodes, I believe?
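For illustration, building that daisy chain from Python on a current Blender might look something like this (the file path, the TAIL/FADE values, and the assumption that a negative frame offset reaches back in time are all mine):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
nodes, links = tree.nodes, tree.links

clip = bpy.data.images.load("//source.mov")  # placeholder path
clip.source = 'MOVIE'

TAIL = 8      # how many past frames to stack
FADE = 0.35   # mix factor per step

prev = None
for i in range(TAIL, 0, -1):                 # oldest offset first
    img = nodes.new("CompositorNodeImage")
    img.image = clip
    img.frame_duration = clip.frame_duration
    img.frame_offset = -i                    # assumed: negative = i frames back
    if prev is None:
        prev = img
        continue
    mix = nodes.new("CompositorNodeMixRGB")
    mix.inputs["Fac"].default_value = FADE
    links.new(prev.outputs["Image"], mix.inputs[1])
    links.new(img.outputs["Image"], mix.inputs[2])
    prev = mix

# Finally, the un-offset current frame on top
cur = nodes.new("CompositorNodeImage")
cur.image = clip
cur.frame_duration = clip.frame_duration
mix = nodes.new("CompositorNodeMixRGB")
mix.inputs["Fac"].default_value = FADE
links.new(prev.outputs["Image"], mix.inputs[1])
links.new(cur.outputs["Image"], mix.inputs[2])

comp = nodes.new("CompositorNodeComposite")
links.new(mix.outputs["Image"], comp.inputs["Image"])
```

Since every node is created in the loop, each one gets its own frame offset, which sidesteps the “duplicates can’t be made unique” problem above.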
I think all we’d need to do is render the frames of the animation onto one single picture. That would basically give the trail.
Then you put the picture of the current frame on top, so that the model isn’t blurred… sounds easy.
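In code, that two-step idea might look like this (a numpy sketch; it assumes float RGBA frames and uses a lighten-max for the accumulation, which is just my reading of “render all frames onto one picture”):

```python
import numpy as np

def trail_with_sharp_current(frames, t):
    """Collapse all frames up to t into one trail image, then alpha-over
    the un-blurred current frame so the model itself stays sharp."""
    trail = np.zeros_like(frames[0])
    for f in frames[: t + 1]:
        trail = np.maximum(trail, f)          # lighten-style accumulation
    cur = frames[t]
    a = cur[..., 3:4]                         # current frame's alpha
    trail[..., :3] = cur[..., :3] * a + trail[..., :3] * (1.0 - a)
    return trail
```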
What you propose requires rendering all frames first and then keying the part of the image you want to retain. It really is a nodes process, but I think (if it is generated from Blender geometry) a form of vector blur is waaaay more efficient.
Vector blur doesn’t work, because you don’t have a vanishing point (= no zoom setting).
Directional blur kind of works… or let’s say, it would work great for constant movements (even curves and so on) IF the zoom could be negative.
But because the zoom can only be positive, you have to duplicate the object, scale the copy down, and move it towards the vanishing point.
Then you can use directional blur on this copy and set it up pretty well.
At least, that’s what I thought of…
But still, this would only work for constant movement - it’s not a real trail of the motion.