Direction-Based Shape Blending (Solved)

What is the best way to achieve this effect:

(There are several “shape keys” (reproduced with geometry nodes in this case), and each represents a set direction in space. They are then interpolated based on some directional vector.) If these shapes are spread evenly, like one for each cardinal direction (my example scene), then it is not a problem: I can use dot products as weights in a weighted sum. But what do I do if they aren’t equidistant on a “coordinate sphere”, if that makes sense? Then each shape would need a different “domain range”, not just the 0–1 range of a dot product. Is there a more universal method?
DirectionalBlend.blend (1.2 MB)

Haven’t looked at your blend. From what I understand, you’re scaling cubes based on a weighted dot product of each cube’s origin (or median vertex position) with an arbitrary vector. (That’s what it sounds like, and that’s what it looks like.)

As a basic idea, this doesn’t depend whatsoever on them being equally spaced around a sphere. If they’re all in the same general direction, they’ll all scale. It sounds like you don’t want that. But then the question is, what do you want? Do you want only one to scale? Then take your weights to a very high power before normalizing, and you’ll basically only be scaling the one very close to that vector. The higher the power, the more the highest weight dominates.

(Looks like you’re already dealing with this, but there’s going to be a problem with a weighted operation when the weights are negative, and dot products can be negative. I feel like I should add this even though I think you’re already aware.)
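A minimal sketch of that idea in plain Python (standing in for the node setup; the direction list is just an illustration): clamp negative dot products to zero, raise them to a power, and normalize.

```python
import math

def blend_weights(direction, shape_dirs, power=8.0):
    # direction and shape_dirs are assumed to be unit (x, y, z) vectors.
    # Clamp negative dot products to zero, as discussed above.
    dots = [max(0.0, sum(d * s for d, s in zip(direction, v))) for v in shape_dirs]
    # Raising to a high power makes the closest direction dominate.
    powered = [d ** power for d in dots]
    total = sum(powered)
    if total == 0.0:
        return [0.0] * len(shape_dirs)
    return [w / total for w in powered]

# Six cardinal directions, as in the example scene
cardinals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
d = (0.0, 0.9487, 0.3162)  # roughly normalize((0, 3, 1)): mostly +Y, some +Z
print(blend_weights(d, cardinals, power=1))  # +Y gets ~0.75, +Z gets ~0.25
print(blend_weights(d, cardinals, power=8))  # +Y gets nearly all of the weight
```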


I am aware. I (generally speaking) discard the negative values and only use the range from 0 to 1.

Let’s say there is one “shape key” (scaling cubes is only used as a simplified example, BTW) assigned to the positive Y direction (0, 1, 0), another to positive Z, and a third shape representing a direction somewhere between them. If I use the 0–1 ranges of their dot products as the weight coefficient for each of them, they won’t blend correctly. When the blend direction matches the third vector, the result will be something like 0.5x shape A, 0.5x shape B, and 1x shape C, instead of just shape C.
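To make the problem concrete, a small plain-Python illustration of the above (the exact dot products at the halfway direction are ~0.71 rather than 0.5, but the issue is the same):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = (0.0, 1.0, 0.0)                        # shape A: +Y
B = (0.0, 0.0, 1.0)                        # shape B: +Z
C = (0.0, math.sqrt(0.5), math.sqrt(0.5))  # shape C: halfway between A and B

direction = C  # the blend direction matches shape C exactly
weights = [max(0.0, dot(direction, v)) for v in (A, B, C)]
print(weights)  # roughly [0.71, 0.71, 1.0]: A and B still contribute, not just C
```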

If you want it to shapekey only from the dominant direction, then don’t use weights; use the greatest of all the potential dot products. That’s what you’ll do if you want only C to show up. If you want, you can even assign all the dot products to an attribute and then use Attribute Statistic to find out what the greatest dot product actually is, for dealing with an arbitrary number of potential vectors.
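A minimal sketch of that winner-takes-all idea (plain Python standing in for the node setup; in nodes the maximum would come from Attribute Statistic as described):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dominant_only_weights(direction, shape_dirs):
    # Full weight to the single shape whose direction has the greatest
    # dot product with the blend direction; zero weight to all others.
    dots = [dot(direction, v) for v in shape_dirs]
    best = dots.index(max(dots))
    return [1.0 if i == best else 0.0 for i in range(len(shape_dirs))]
```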

If you want it to shapekey from the three closest, find the three closest, then get the angle to each of those three, and weight only those three (rather than all dot products; angle is probably a more intuitive measure than the dot product, and not much more expensive, just an arccos for each). 3 is a natural number for this, the same way an icosphere is made out of triangles and 3 points define a plane. (Edit: I’m wrong. See below.)
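For the record, a sketch of that closest-three, angle-based weighting (kept even though, per the edit, the approach turns out to be flawed; the inverse-angle weighting here is just one possible choice, not something specified above):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def closest_three_weights(direction, shape_dirs):
    # Angle to each candidate direction (arccos of the clamped dot product)
    angles = [math.acos(max(-1.0, min(1.0, dot(direction, v)))) for v in shape_dirs]
    # Keep only the three smallest angles
    closest = sorted(range(len(shape_dirs)), key=lambda i: angles[i])[:3]
    # Illustrative weighting: inverse angle, normalized over the three
    inv = {i: 1.0 / max(angles[i], 1e-6) for i in closest}
    total = sum(inv.values())
    return [inv.get(i, 0.0) / total for i in range(len(shape_dirs))]
```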

You should never end up with 1.0, 0.5, 0.5 from weights, which I think it’s safe to assume means weights normalized to sum to 1. You would add those three values, find that they add up to 2.0, and divide each value by that sum: 0.5, 0.25, 0.25.


Three “shape keys” was a number chosen off the top of my head, but I get your point. Looks like calculating angles between those vectors and working with them is unavoidable. Makes sense. I’ll test some “hack” solutions and then report here.

So, there is a hack which involves meshes and raycasts. Create a mesh which serves as a “coordinate sphere”, paint weights corresponding to the directions on it, then convert the vector into the mesh’s local space and read those weights using raycasts. It does work (I used it in an experimental “automorph” system for cartoon heads), but I’m afraid the blending is limited by the reference mesh’s topology.
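For anyone wanting to try that raycast hack, a hedged bpy sketch of the idea (the object name "CoordSphere", the one-vertex-group-per-direction setup, and the assumption that the sphere’s origin sits inside the mesh are all illustrative, not taken from the original files):

```python
import bpy
from mathutils import Vector
from mathutils.interpolate import poly_3d_calc

def read_direction_weights(direction, sphere_name="CoordSphere"):
    # Raycast from the sphere's center along `direction` and return the
    # painted weight of every vertex group at the hit point.
    obj = bpy.data.objects[sphere_name]
    mesh = obj.data
    # Convert the world-space direction into the sphere's local space
    local_dir = (obj.matrix_world.inverted().to_3x3() @ Vector(direction)).normalized()
    hit, location, _normal, face_index = obj.ray_cast(Vector((0.0, 0.0, 0.0)), local_dir)
    if not hit:
        return {}
    poly = mesh.polygons[face_index]
    corners = [mesh.vertices[i].co for i in poly.vertices]
    # Barycentric-style weights of the hit point within the hit face
    corner_weights = poly_3d_calc(corners, location)
    result = {}
    for group in obj.vertex_groups:
        total = 0.0
        for vert_index, w in zip(poly.vertices, corner_weights):
            try:
                total += w * group.weight(vert_index)
            except RuntimeError:
                pass  # vertex not assigned to this group
        result[group.name] = total
    return result
```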


Yeah, that makes sense for interpolating from 3 (and if it hadn’t already been replied to, I’d probably delete my original post, at least with regard to interpolating the closest 3, because it’s wrong.)

Think about what happens if we have a cube of 8 vectors and we interpolate across a “face” of that cube, across the diagonal. The interpolation is different depending on how we triangulate that face, just like vertex color is different depending on how we triangulate that face.
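A quick numeric illustration of that, with a quad face A-B-C-D of direction vectors where only corner A carries a value:

```python
# Corner values on a quad face A-B-C-D: only A contributes
vA, vB, vC, vD = 1.0, 0.0, 0.0, 0.0

# At the center of the face, the interpolated value depends entirely
# on which diagonal the quad was split along:
split_AC = 0.5 * vA + 0.5 * vC   # center lies on diagonal A-C -> 0.5
split_BD = 0.5 * vB + 0.5 * vD   # center lies on diagonal B-D -> 0.0
print(split_AC, split_BD)        # same point, two different results
```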

Additionally, if we cross an “edge” in our topology, our angle weights may suddenly change, and which shapekey we’re blending in will definitely suddenly change. That’s not what you want.

It makes perfect sense to me that you would need to use a mesh, because you need actual topology to indicate the edges. And then, nearest-face interpolation across that face gives barycentric coordinates, which exist in the proper 0–1 range (one vert’s influence reaches 0 as we cross its opposite edge), like the geometry/parametric coordinates in shader nodes.
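And a small plain-Python sketch of those barycentric weights (area-based form, assuming the hit point lies inside the face), showing the property mentioned above: each vertex’s influence reaches 0 along its opposite edge, and the three weights always sum to 1.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def tri_area(a, b, c):
    n = cross(sub(b, a), sub(c, a))
    return 0.5 * (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5

def barycentric(p, a, b, c):
    # Weight of each corner = area of the sub-triangle opposite it,
    # divided by the area of the whole triangle
    total = tri_area(a, b, c)
    return (tri_area(p, b, c) / total,
            tri_area(p, c, a) / total,
            tri_area(p, a, b) / total)

a, b, c = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(barycentric((0.5, 0.5, 0.0), a, b, c))  # point on edge b-c -> (0.0, 0.5, 0.5)
```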