Compositor node to return 2D element X Y value in frame for animated flare?

I’m looking for a way to get a node value of the X and Y frame position of the center of an object in a layer. I’m trying to do a 2D animated flare that has one central glare/flare effect, then others that move and scale with offsets computed with a Math node based on their distance from the 0,0 center X,Y of the frame.
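To make the idea concrete, this is roughly the arithmetic I want the Math nodes to end up doing. The hotspot position, frame size and scale factors below are just made-up example numbers:

```
# Rough sketch of the offset math for the secondary flare elements.
frame_w, frame_h = 512, 512
cx, cy = frame_w / 2.0, frame_h / 2.0    # frame centre

hotspot_x, hotspot_y = 380.0, 150.0      # hypothetical flare source position

# Vector from the frame centre to the hotspot.
dx, dy = hotspot_x - cx, hotspot_y - cy

# Each secondary element sits at a scaled offset along that line;
# a negative factor puts it on the opposite side of the centre.
for factor in (0.5, -0.25, -0.75, 1.5):
    ex, ey = cx + factor * dx, cy + factor * dy
    print("element at factor %.2f -> (%.1f, %.1f)" % (factor, ex, ey))
```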

Here’s my image, nodes and blend file (2.5). In the nodes screenshot, the two Value nodes at the bottom connected to Math nodes are where I want to put dynamic X,Y values based on a layer element’s 2D (or possibly a 3D object’s) position in the frame (not scene-space values). I want to use this for dynamic offsets via the Translate node.

Is there a way to get an X,Y value node from a 2D element or 3D object?

Ideally, I want to do this all in the compositor in 2D, so I can use this animated effect with elements from outside Blender.

Thanks for your help.

Attachments

flare_nodes03.blend (101 KB)



What a cool idea! If it’s image tracking from a source node, the answer is no. And I’m not aware of any value drivers available from the 3D View either, sorry.

That seems too complex. Usually for lens flares I use a single vertex with a halo material, and it works fine.

Thanks for confirming this. After my testing and searching, I suspected as much. I hope there might be some way to use custom Python script nodes in a future 2.5. Then it might be possible. Anyway, now I can stop spending time trying to find a function that doesn’t exist.

Yes, that would work for a 3D scene rendered in Blender. As I mentioned, I was hoping to use the compositor as a post processor for pre-rendered and external footage (2D image layers or video). I also wanted more custom control of all the flare elements: size, shape, pattern, color (or gradient) and movement.

It seems a shame that you can’t get simple coordinates for an X,Y reference point. In this case I would expect that you could easily isolate lamps in the shot by increasing contrast. Then you wouldn’t have to spend energy tracking so much (a hi-con pass eliminates visual noise), as all you would have is a white dot on a black field.

What you want to work with is the specular pass for that object. If you have multiple objects and want to isolate some, use the ID node. But, instead of blurring that second image, blur the specular pass.

Thanks for the suggestions. Ideally I want to apply these effects to external footage, so everything would be done starting from an Image Node in the compositor with external frames or video. No objects, no Render Layers node. I’m trying to set this up as if the compositor were a stand-alone compositor.

I added a few more layers with a Flip node, a Scale node and a Displace node driven by a grayscale image. That gives me something closer to what I was looking for, though I don’t have the control I wanted. The Displace node only seems to work as expected with 1:1 square frames, and the other flare element layers now show edge cropping, so I might have to start with a frame larger than the input footage, run it through all the filtering nodes, then crop back down to the original footage size.

Here’s the result of the attached nodes. Sorry, it’s very rough and the nodes are a mess.

http://imgur.com/hIQBh.gif

Attachments


I really like the result. Could the input (flare?) source have a variable size depending on the size of the footage? It would make a great little node group, I think.

Thanks. Yes, everything should be adjustable.

There’s no halo or flare; I’m just taking the specular hotspots of the scene by adjusting the black/white levels in RGB Curves. So you could use it with any live-action footage, render layer or light pass. You can adjust the size there, and you could probably make a garbage matte to isolate certain sources.

In this case, I started by making the hotspot black and white, then I added color from an RGB node, but you could stick with the color from the original frame or use a multicolor gradient ramp.

As for the size and shape of all the elements, you just adjust those in the settings of the Glare, Blur, Scale and Mix nodes. So you could have star shapes, streaks at different angles or something else. You could probably make mattes for custom shapes. You just keep mixing and changing the effects to add more element layers.

Where I’m still stuck is getting precise control over the secondary element scaling and I haven’t quite worked out how to precisely control the Displace node effects.

If there were a custom Python node for the compositor, or if I could somehow get X,Y or other numeric values for 2D layers, I would do things differently.

Here’s an update. I did the resizing and cropping I mentioned to fix the edge cropping on smaller elements. The comp is done at 1024 while the scene/footage is at 512, so all the filters can go outside the 512 edges without being cut off at the sides. After the frames are rendered at 1024, they all have to be cropped back down to 512, which could easily be done when encoding to video.
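If the crop-back step isn’t done at encode time, a small external script could handle it too. Here’s a rough sketch, assuming the 1024 frames are saved as PNGs and PIL is installed; the folder names are just placeholders:

```
import glob
import os
from PIL import Image

SRC_SIZE, DST_SIZE = 1024, 512
margin = (SRC_SIZE - DST_SIZE) // 2      # 256 px trimmed from each edge

if not os.path.isdir("comp_512"):
    os.makedirs("comp_512")

for path in sorted(glob.glob("comp_1024/*.png")):
    img = Image.open(path)
    # Crop box is (left, upper, right, lower): keep the centred 512x512 area.
    cropped = img.crop((margin, margin, margin + DST_SIZE, margin + DST_SIZE))
    cropped.save(os.path.join("comp_512", os.path.basename(path)))
```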

.blend file is attached. It runs pretty slow on my old system.

Displace images needed for the Blend: (Is there a way to package compositor images in a Blend?)



Feel free to post your results or new node edits.

EDIT: Here’s an example of the nodes applied to a photo, which you can think of like a frame of video.

Photo credit: dan_clements
Imgur

I just adjusted the input RGB Curves level and the RGB Color input. Otherwise the nodes are the same as the animated .gif.

Attachments

flare_nodes12.blend (106 KB)


Thanks for sharing, it looks great. I don’t think we’ll see a Python node for some time.

While I don’t know if it can be done specifically with BPython (Blender’s “dialect”), it seems that your X/Y value derivation could be done with some fairly straightforward graphics scripting. Here’s the “thought experiment” version:

  1. Take your source rendering/video and run it through the compositor to produce a hi-con animated “map” of the centers of interest of your objects. This can be done a number of different ways, such as image processing, or even by slaving special objects to your objects of interest and rendering those as proxies for the objects of interest’s centers (or whatever part you want, really). Distill this map down to all-black, all-white pixels.

  2. Write a script (likely external to Blender, but probably still in Python) that scans the raster of the map images looking for values of 0 and 1.0. With some clever coding you should be able to discriminate between separate map elements by setting limits to their possible extents in X & Y. Average the resulting max/min X & Y values for the “hot spots” to get a good mean center of any one hot spot (see the sketch after this list). That’s your numeric track of the points of interest from your original source.

This sounds feasible to me because I did something similar on a pre-OSX Mac many years ago while trying to develop a post-rendering DOF filter for another 3D app. But in that case I was reading full-range greyscale values, so dealing with B&W seems easy-peasy (hah!).

  3. Lastly, script up a BPython wonder that reads your mapped X/Y data back in as IPO curve values for new objects on Render Layers that will act as the source(s) for your modulating FX nodes.
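To make step 2 a bit more concrete, here’s a rough, untested sketch of the scanning idea. It assumes the map frames are plain greyscale PNGs readable with PIL, and that there’s only one hot spot per frame (multiple spots would need the extent limits mentioned above):

```
import glob
from PIL import Image

track = []  # one (frame, x, y) entry per map image

for frame, path in enumerate(sorted(glob.glob("hicon_maps/*.png"))):  # placeholder folder
    img = Image.open(path).convert("L")
    w, h = img.size
    pixels = img.load()

    min_x, min_y, max_x, max_y = w, h, -1, -1
    for y in range(h):
        for x in range(w):
            if pixels[x, y] > 127:          # treat near-white pixels as "hot"
                min_x, max_x = min(min_x, x), max(max_x, x)
                min_y, max_y = min(min_y, y), max(max_y, y)

    if max_x >= 0:  # found a hot spot: average the extents for its centre
        track.append((frame, (min_x + max_x) / 2.0, (min_y + max_y) / 2.0))

for frame, x, y in track:
    print(frame, x, y)
```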

Sound like fun to you? :wink:

Do you have to ask? But yes, that’s similar to what I had in mind for a hypothetical compositor PyNode script (depending on what values and elements the API could access). Thanks for the thoughts though, I hadn’t thought of automatically creating new objects or empties from the hotspots.
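For the record, something like this is what I imagine that last step could look like with the 2.5 API (untested, and the tracked values here are made up): create an empty and keyframe its location from the tracked X/Y data.

```
import bpy

track = [(1, 380.0, 150.0), (2, 384.5, 152.0), (3, 389.0, 154.5)]  # hypothetical data

empty = bpy.data.objects.new("flare_track", None)   # object data of None = empty
bpy.context.scene.objects.link(empty)

for frame, x, y in track:
    # Map pixel coordinates into whatever scene-space convention the setup uses;
    # here they are written straight into X/Y as a placeholder.
    empty.location = (x, y, 0.0)
    empty.keyframe_insert(data_path="location", frame=frame)
```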

I’m really just testing the limits of what I can do with the compositor as a stand-alone compositor, since it’s the only one I have. It has helped me to come up with a good wishlist of features.

I do have Photoshop though, so I’ve been thinking about trying a batch action and maybe some scripting to see how it compares. The Blender nodes solution isn’t very efficient and doesn’t quite have the level of control I want. For example, I probably wouldn’t use the Displace node if I could do stretching, scaling, position and rotation with dynamic numeric values.

On the other hand, these tests have shown that I probably can get something good enough out of the compositor for the shots I have in mind for no budget videos and animations.

aaahh a point tracker, how do I love thee?

I see this thread at the committers list asking about image value reads…
http://lists.blender.org/pipermail/bf-committers/2010-June/027909.html

Not good, I wonder what ZanQdo is up to?

Well, an interesting thread. But it does have a link to this:

While working on this effect, I found Matt Ebb’s page on the Displace node.

Most of the code logic was done using Blender’s Python Image API, as a means of quickly testing and prototyping.

I did run his original .py script to see how it compared to the current node. I might see if I can get any of those pixel reading techniques to work as the same sort of test script.
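If the 2.5 API behaves the way I think it does (an image’s pixels exposed as one flat list of RGBA floats), the test script might start out something like this; the image name is just a placeholder:

```
import bpy

img = bpy.data.images["hicon_frame"]   # hypothetical loaded image name
w, h = img.size
px = img.pixels[:]                     # copy once; repeated direct indexing is slow

def value_at(x, y):
    """Return the red channel at pixel (x, y), origin at the bottom left."""
    return px[(y * w + x) * 4]

print(value_at(w // 2, h // 2))        # sample the frame centre
```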

Yep, those look like the kind of scripting resources you’ll need.

Keep in mind there’s no real reason to keep everything inside Blender, other than the stuff that interfaces with it directly. Some of the image processing might be done faster externally. I did some graphics-oriented work with Python a few years back before finding Blender and found it a capable language that didn’t present a huge learning curve, given my background with other OOP-flavored scripting languages, though getting used to new APIs and libs is always fun :wink:

True. I wrote a simple file compression program in Python as an exercise and it was definitely slower than the same thing in C++. Of course, this exercise was meant to see what I could do in Blender. I know there are other compositors and plugins that will do the trick. If the Blender solution is to write something outside of Blender to fill the gaps, then I know I’ve hit the limit. Anyway, this will be a good side project to add to the todo list for when I need a change of pace from other things.

Just out of interest: if you displaced a mesh in the 3D View with a hi-con video, could you track the resulting distortion with an empty? If so, could you get coordinates? Very hacky, but interesting.

Interesting. I had a quick look, pulling in the hi-con frames. I don’t know how the displacement data is handled, since it’s render-time displacement rather than physical vertex displacement. Anyway, pulling in a sequence as a material like that would give scripting access, so it might be another option worth looking at.

I displaced a mesh and moved the displace texture around via an empty. Then I wondered, what would interact with those raised verts? So I tried soft bodies, trying to get another object to stick to the raised surface. Alas, I have no idea about soft bodies; I just thought they were a good fit for frame-by-frame analysis in real time.