# Deriving a single number from scene data? (Compositor)

I’m trying out an experiment in special effects using the Compositor, and I need a way to derive a single numeric value from the information in the current frame’s scene. The idea is to modulate that number with Math nodes, then feed it into Distort nodes like Translate and Scale, so the image ultimately produced is altered according to the information in the frame being rendered. For example, if I could convert the image to grayscale, blur the heck out of it, then derive an average grayscale value for the entire frame, I could use that final value. The first two steps can be done with nodes, but I can’t seem to find a way of distilling it all down into a single number to use as a Value input on a Math or Distort node.

Breaking out the RGB values comes close, but the result is still an image, and that doesn’t seem to affect the Distort nodes’ values. I guess it needs to be a single float.
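In pseudo-Python, what I’m picturing is something like this (purely illustrative, not a real node — the function name and data layout are made up):

```python
# Illustrative sketch only, not an actual Compositor node: collapse an
# RGB frame into one float by converting to grayscale (Rec. 709
# luminance weights) and averaging every pixel.
def scene_to_float(pixels):
    """pixels: a flat list of (r, g, b) tuples, channels in 0.0-1.0."""
    total = 0.0
    for r, g, b in pixels:
        total += 0.2126 * r + 0.7152 * g + 0.0722 * b  # per-pixel luminance
    return total / len(pixels)  # one float for the whole frame
```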

I’d do a search but I can’t even begin to think of a proper search term.

Thanks in advance for any suggestions.

This is an unusual and intriguing question.
The way I assume it works is that when you feed an image output into a Math input, the Math node treats it as an array of values rather than a single value.
Off the top of my head…
I have no idea how to turn that array into a single value.
However, you could make every value in the array the same by scaling the image down to 1 px × 1 px.

That should deliver an array containing only one value.

Messin’ around with the Compy’s nodes always gives me a bad case of “what ifs.”

The idea of a 1 px² image didn’t work, but it did lead me to understand more about the difference between an image output (regardless of scale) and a flat numeric. An image input into a Math node can completely suppress an effect that a plain number affects as expected (such as a 5-pixel Blur in Y), even when the image is heavily modulated, say by a Multiply node set to 5000 or higher.
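A toy model of that distinction, as I understand it (this is my assumption about the behavior, not Blender’s actual internals): an image socket carries an array that operations map over per-pixel, while a Value socket wants exactly one float, so an image never collapses into one on its own:

```python
# Toy model (assumed behaviour, not Blender source): an image is an
# array of floats; a Value socket expects a single float.
def math_node_multiply(image_or_value, factor):
    if isinstance(image_or_value, list):             # image input
        return [v * factor for v in image_or_value]  # result is still an image
    return image_or_value * factor                   # a float stays a float
```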

I got some interesting effects using some of the Vector nodes, but couldn’t find a way to have the values change from frame to frame; they mostly acted like constants, or the effect was so small as to be unnoticeable.

Even something as basic as a frame-number output, or a random number generator, or these two working together, would provide the variable numeric values I’m trying to achieve, but they don’t seem to exist. Am I missing some node functions?
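For the record, the kind of node I mean would be trivial to express in Python; something like this (names and the seed-mixing scheme are made up), where the same frame always produces the same value so renders stay repeatable:

```python
import random

# Hypothetical "random per frame" node: deterministic, so re-rendering
# a frame gives the same value every time.
def frame_random(frame, seed=0):
    rng = random.Random(seed * 1_000_003 + frame)  # mix seed with frame number
    return rng.random()  # float in [0.0, 1.0)
```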

I guess maybe pynodes could offer a solution but that’d mean having to learn how they work and I have so many fish frying now I’m in danger of something going up in flames if I try to do more. Like my brain.

So it goes.

Well, I found a way to do something like what I want, but due to another of those seemingly meaningless limits on certain Blender parameters, I can’t really use it effectively.

The key is a Time Input node. It acts in many ways like an IPO curve – time plotted on the X axis and a normalized output plotted on the Y. The output seems to be a simple float between 0.0 and 1.0, exactly what I need, and it can be modulated over time by adding points along the node’s curve.
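Conceptually (this is just my mental model of the node, simplified to straight-line segments), it normalizes the current frame into 0–1 over the node’s frame range, then looks that up on the user-drawn curve:

```python
# Simplified model of the Time node: clamp the frame into the node's
# range, normalize to 0-1, then linearly interpolate the curve points.
def time_node(frame, frame_start, frame_end,
              points=((0.0, 0.0), (1.0, 1.0))):
    """points are (time, value) pairs, both normalized to 0.0-1.0."""
    span = frame_end - frame_start
    t = min(max((frame - frame_start) / span, 0.0), 1.0)
    pts = sorted(points)
    for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            if t1 == t0:
                return v1
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)  # linear segment
    return pts[-1][1]
```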

But unlike an IPO curve, the Time curve can’t be scaled up past a certain arbitrary limit imposed on the size of the node itself. Which means that if you want to add points on the curve with any fine resolution you’re screwed, unless you daisy-chain nodes covering short frame ranges together, which is very clumsy.

The similarity between the Time node and an IPO curve led me to wonder why there cannot be an “IPO Curve” Input node. So I started to (gulp) look into pynodes and found they are really rather simple if you have a grounding in Python, which I do.

But according to the wiki, pynodes are not yet implemented for the Compositor.

Great. Another cool idea that’ll have to wait 'til the devs catch up.

> wonder why there cannot be an “IPO Curve” Input node

I guess you could cheat by creating a scene with an IPO on, say, one of the RGB values for the World or an object, and extracting/normalizing the values for that channel in the compositor. But I thought you wanted to extract the values from some aspect of your real scene?

That’s the ideal approach, but I couldn’t find a way to do it. AFAIK, all the nodes that affect color deal with it on an image level, which is a different kind of data output than I need (a single float).

I think to achieve what I want I’d need to create a pynode that takes all the scene’s pixel data and integrates it in some way that can be expressed as a single float. For example, averaging the per-pixel RGB values after converting to BW (grayscale) might work; I’m sure there’d be some fluctuation in the resulting value over time, based on image content.
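If such a pynode existed, the downstream math would be simple; e.g. re-centering the 0–1 average and scaling it into a pixel offset for a Translate node (the function name and the amplitude are placeholders, just to sketch the idea):

```python
# Hypothetical downstream step: map a 0-1 frame average to a Translate
# offset in pixels, centered so a mid-gray frame gives no displacement.
def average_to_offset(avg, amplitude=50.0):
    return (avg - 0.5) * 2.0 * amplitude  # range: -amplitude .. +amplitude
```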

The Time node offers the right kind of output, can be modulated over time (across a frame range), and is fairly easy to manipulate within the limitations of the node size. It’s a more “manual” process than using the existing scene data per frame, but at least it actually works.

I’ll post a demo later on if I get some time to rig one up and render it.

Another thing I think would be very useful is to be able to access the data structures output from the nodes. For example, what is the structure of the output from the Vector Blur node? Could each “channel” (x, y & z) of blur be used separately in some fashion? This could perhaps be used to build filters that affect imagery based on motion through a scene, like modulated smear, blur, or streak FX. I tried using the output from a Vector Blur node but could get no visible effect when modulating another node’s value (like Blur strength), so I assume the data structure is inappropriate for this kind of use.
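My guess (and it is only a guess) is that the speed/vector pass carries a per-pixel motion vector; if those channels were exposed, reducing them to one float would again be a small amount of code, e.g. the mean motion magnitude across the frame:

```python
import math

# Sketch, assuming the speed pass is a list of per-pixel (x, y) motion
# vectors: reduce it to one float, the average motion magnitude.
def mean_speed(vectors):
    return sum(math.hypot(x, y) for x, y in vectors) / len(vectors)
```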