Calculating a reflex angle between two objects

Hi,

I am trying to create a very basic Blender project that will output the angle between two objects so that I can then use that angle to pan a surround sound decoder.

I have this working to an extent by using the getVectTo() function, which gives me the cosine of the angle between the two objects relative to the owner object’s x axis, which is exactly what I want.

However, the problem is that this angle always stays between 0 and 180 degrees, whereas I need it to run from 0 to 360 degrees (measured anti-clockwise from the x axis) in order to work correctly with my sound decoder.

For example, if the second object is 90 degrees to the right of the owner object I want it to output 270 degrees and if it is 90 degrees to the left of the owner it should output 90 degrees.

Does anyone know how to achieve this? Unfortunately neither maths nor programming are my strong points so this has me quite confused! I think I may need to use a different reference vector rather than the x axis, but I am not sure how to implement this…

Sorry, I can’t help you there since I’m not much of a coder. But I would very much like to see this work. ^^

I think I misunderstood your problem initially, but the principle should be the same. Work out which way round the rotation has gone from the signs of the components of the vector, using something like:

if vector.x > 0:
    # the target is on the other side, so reflect the angle into the 180-360 range
    angle = 180 + (180 - angle)

I’m assuming that the vector that you’ll be using is in object coordinates (local) rather than world coordinates. If you are using world coordinates, you’d need to take object orientation into account too (so use local coordinates to simplify things).
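For reference, a minimal sketch of getting that local vector in the bge (the object name “Target” is just an example):

from bge import logic

own = logic.getCurrentController().owner
target = logic.getCurrentScene().objects["Target"]   # example name

# getVectTo() returns (distance, world-space unit vector, local-space unit vector)
distance, world_vec, local_vec = own.getVectTo(target)

# the same local vector, computed by hand from the world-space one
local_by_hand = own.worldOrientation.inverted() * world_vec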

Thanks, I solved it myself with a very similar method; it just took me ages to get there as I haven’t done any proper maths in years!

@John_tgh, I will definitely share the results of this experiment with the community once it’s finished; it’s part of an MSc project though, so it will be a while yet!

Although you’ve already marked this solved, I believe (let’s assume the two vectors lie in the xy plane) you can take the cross product of the two vectors, and if its z component is less than 0, subtract the arccos of the dot product from 2π, e.g.:

from math import pi

# v1, v2: mathutils Vectors lying in the xy plane
crossp = v1.cross(v2)
if crossp.z < 0:
    # v2 lies clockwise of v1, so take the reflex angle
    angle = 2 * pi - v1.angle(v2)
else:
    angle = v1.angle(v2)

Or something like that…
So it’s similar to FunkyWyrm’s method.

Looks like there are multiple ways to solve this problem! I did it by first converting the cosine to an angle in degrees, using acos and multiplying by (180/pi). Then I checked whether the y component of the local vector is < 0: if so, return (360 - angle); if not, just return the angle as is. Seems to work for me!
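For anyone finding this later, a minimal sketch of that approach, assuming the BGE getVectTo() API (the helper name is just illustrative):

from math import acos, degrees

def pan_angle(own, target):
    # getVectTo() returns (distance, world-space unit vector, local-space unit vector)
    distance, global_vec, local_vec = own.getVectTo(target)
    # the x component of the local unit vector is the cosine of the angle
    # between the owner's x axis and the direction to the target
    cos_a = max(-1.0, min(1.0, local_vec.x))   # clamp against floating-point drift
    angle = degrees(acos(cos_a))               # 0..180 degrees
    if local_vec.y < 0:
        angle = 360.0 - angle                  # reflex angle, measured anti-clockwise
    return angle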

Actually, there is one problem with it: if the owner object is close to the second object, the angle starts to get slightly larger instead of smaller. I think this is simply because the angle is measured between the object centres, which are treated as exact points, so when the two objects are close even a small rotation of the owner object is exaggerated in the angle. To solve this I think I would need to take the size of the second object into account and adjust the angle accordingly, so that if you are right up close to a big object it always outputs close to 0 degrees. If that makes any sense, haha!

Duh… “360 - angle” is much better than my shabby method, lol. :spin:

@jsr11:

Angles will always be measured between the object centres. One possible way around this is to set the object centre to the place the sound is emitted from, e.g. for a truck or car you could place the object centre at the engine, or parent an Empty to the vehicle, position it at the engine, and use the vector to that Empty in your calculation for the sounds.
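A rough sketch of the Empty approach (the name “EngineEmpty” is hypothetical, and the Empty is assumed to be parented to the vehicle at the engine):

from bge import logic

scene = logic.getCurrentScene()
listener = logic.getCurrentController().owner
emitter = scene.objects["EngineEmpty"]   # the Empty positioned at the engine

distance, global_vec, local_vec = listener.getVectTo(emitter)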

Yeah I do have the object centres set to the right places, it’s more a problem with how the user subjectively expects the sound to be localised rather than a problem with the method itself. For example, if you are facing a large, sound emitting object, close enough that it fills most of the screen, the method will return near enough 0 degrees, which will place the sound in the centre channel. However, the object appears large enough that the user will subjectively expect the sound to come out of all three front speakers to a certain extent.

It is a problem with the sound algorithm rather than the angle-finding one; if I can find a way of getting the size of the object in Python, it shouldn’t be too hard to fix!

bpy exposes an object’s dimensions as bpy.context.object.dimensions, which you can check through the interactive console.

This is not available in bge though.

Maybe you could store the dimensions of the object in some way then set them on the game object at runtime and then use these values in bge for your calculations.

A crude way is to set game properties dimX, dimY, dimZ in the logic editor so they will survive the conversion process (Blender data is converted to bge data before running the game).
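For example, a minimal sketch of reading those hand-set properties back in the bge:

from bge import logic

own = logic.getCurrentController().owner
# dimX, dimY, dimZ are the game properties added by hand in the logic editor
dims = (own["dimX"], own["dimY"], own["dimZ"])
longest_side = max(dims)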

So I would have to hard-code the object’s dimensions in beforehand?

I guess that would work for testing and proof of concept, though it’s not ideal for a real-world application! I’m surprised there is no way to get an object’s size in the bge.

Thanks for your help by the way!

Thanks for your help by the way!
No probs. I’m just making suggestions and letting you do the hard work. :wink:

Like I said, hard-coding the game properties is a crude way of achieving what you want as a proof of concept, but if you can write code then why hard code stuff? You could write a script to generate a data file from Blender data while you create your game and then write another script to read that data file into the game at runtime and store the data on the various game objects for later access. This game script could be quite small and would only need to run once when the object is first initialised (on game load in a mundane situation or whenever the object is added in a more dynamic situation). This would be a far more elegant solution and Python has all the tools that you need to achieve this.
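Something along these lines, as a sketch of the idea (the file name and property name are made up):

# Script 1: run inside Blender (bpy), while you create the game.
# Dumps every object's dimensions to a JSON file next to the .blend.
import json
import bpy

dims = {obj.name: list(obj.dimensions) for obj in bpy.data.objects}
with open(bpy.path.abspath("//dims.json"), "w") as f:
    json.dump(dims, f)

# Script 2: run once in the bge when the objects are initialised.
# Reads the file and stores the dimensions on the matching game objects.
import json
from bge import logic

with open(logic.expandPath("//dims.json")) as f:
    dims = json.load(f)

for obj in logic.getCurrentScene().objects:
    if obj.name in dims:
        obj["dimensions"] = dims[obj.name]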

It would kinda be a waste of resources for the game engine to store the dimensions of every object when they are hardly ever needed. We can give a game object an arbitrary number of properties, and it is our role as game creators to make sure that we have as much information in the game as is needed, but no more.

You can get an object’s scaling with the obj.scaling property. Also, you can loop through the vertices in the object’s mesh and find the distance between the two vertices that are furthest apart; that would give you the object’s size. If it’s not correct, it probably needs to be multiplied by obj.scaling to get the correct distance between the two points.
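A rough sketch of that idea using the BGE mesh API (I’ve used worldScale here, which should do the same job as the scaling mentioned above). It’s O(n²) over the vertices, so it’s only meant to run once, when the object is initialised:

from bge import logic

def mesh_size(obj):
    mesh = obj.meshes[0]
    verts = []
    for mat_id in range(len(mesh.materials)):
        for i in range(mesh.getVertexArrayLength(mat_id)):
            verts.append(mesh.getVertex(mat_id, i).XYZ)
    # find the largest distance between any pair of vertices
    largest = 0.0
    for i in range(len(verts)):
        for j in range(i + 1, len(verts)):
            d = (verts[i] - verts[j]).length
            if d > largest:
                largest = d
    # mesh data ignores the object's scale, so apply it afterwards
    return largest * max(obj.worldScale)

own = logic.getCurrentController().owner
own["size"] = mesh_size(own)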

SolarLune’s method is a good way of doing things entirely within the game engine. :slight_smile:

I have come to understand that there are many ways of doing things. The choice is entirely down to the game developer and what compromises they feel are most appropriate to their game.

The main compromise is between storage space and processor time. The first reduces the possible extent of the game, the second reduces the runtime speed of the game. The final decision needs to be made by the game developer based on what is most appropriate to their vision. Having said this, there are many more compromises that can be made such as development time versus perfection and realism versus control.

I’m getting a bit philosophical here, but I truly believe that game development is underestimated by those who know nothing about the process and industry professionals alike. The potential is limited only by the imagination (and that is infinite).

Thanks for the idea, but I’ve been thinking this through and believe I’ve been tackling this issue from the wrong angle. As I mentioned in my example above, I’ve got to balance the objective distances and angles that Blender provides against the more subjective fact that if an object fills the player’s screen, he or she will expect the sound to fill the audio spatial image too, even if that isn’t true according to the angles. So perhaps it isn’t so much the size of the object I’m concerned with as how much of the screen it takes up.

I probably just need to do more testing to see how much of a problem this is actually going to be. ‘Apparent source width’ is something that is well researched in the audio world, so that topic could possibly provide a few answers too!

Well, if you cast a ray from the camera to each of two opposite corners of the object, you will get the angle that the object subtends at the camera. If it’s on screen, then the closer the object is, the larger that angle, and you can go from there.
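Rather than casting actual rays, a simpler estimate of the same idea just uses the object’s size and its distance from the camera (a sketch; the size value would come from one of the methods above):

from math import atan, degrees

def subtended_angle(camera, obj, obj_size):
    # angular width of an object of width obj_size, seen face-on from that distance
    distance, _, _ = camera.getVectTo(obj)
    if distance == 0:
        return 180.0
    return degrees(2 * atan(obj_size / (2 * distance)))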

Alternatively, you could just adjust the heaviness of the effect by the distance, since it’s safe to assume that the closer the object is, the heavier the effect would be, as you already posted earlier, saying that the sound should come through all speakers when the object is very close to the camera. In other words, fake it. :smiley: