Getting Vertex global position

Greetings to all!
Until yesterday I believed that if you add a face's vertex to the object's position and multiply by the object's orientation, you'd get the global position of the vertex.
So far it doesn't work. The orientation returns a 3x3 matrix of per-axis rotation values (note that this is my understanding, and I'm not very good with maths), and the position is a vector. The multiplication is possible, but the result is not obvious, and the position is certainly not correct. http://dl.dropbox.com/u/6162142/VertexPosition.jpg
So, does any of you have any ideas, or proven concept of this?

In my VertexBasedAO script I just used: (vertex_position_x * object_scaling_x) + object_position_x
It worked just fine.
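As a quick plain-Python sketch of that per-axis formula (no rotation involved; the variable names are mine, not from the script):

```python
# per-axis: global = local * scale + position (ignores rotation entirely)
vertex_local = (1.0, 2.0, 0.5)
obj_scale = (2.0, 2.0, 2.0)
obj_pos = (10.0, 0.0, -3.0)

vertex_global = tuple(v * s + p for v, s, p in zip(vertex_local, obj_scale, obj_pos))
print(vertex_global)  # (12.0, 4.0, -2.0)
```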

So, you get each axis individually?
Scaling? Yes, that can give a position. But I need to get the vertex position of an object without applying its rotation first (this is where the orientation comes in). If I rotate the plane (red plane = blue plane rotated), I still want to get the global position of those vertices.

You can just use the worldTransform matrix to go from local to global space, since that includes position, orientation, and scale - all in one matrix.
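Outside the BGE, the idea of one combined matrix can be sketched in plain Python (helper names `make_trs`, `mat_mul`, and `transform` are made up for illustration): build a single 4x4 from translation, rotation, and scale, then apply it to a local vertex.

```python
import math

def mat_mul(a, b):
    # 4x4 matrix product (row-major storage, column-vector convention)
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    # apply a 4x4 matrix to a 3D point (homogeneous w = 1)
    p = (v[0], v[1], v[2], 1.0)
    return tuple(sum(m[i][k] * p[k] for k in range(4)) for i in range(3))

def make_trs(pos, angle_z, scale):
    # translation * rotation(Z) * scale, all as 4x4 matrices
    c, s = math.cos(angle_z), math.sin(angle_z)
    T = [[1, 0, 0, pos[0]], [0, 1, 0, pos[1]], [0, 0, 1, pos[2]], [0, 0, 0, 1]]
    R = [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    S = [[scale[0], 0, 0, 0], [0, scale[1], 0, 0], [0, 0, scale[2], 0], [0, 0, 0, 1]]
    return mat_mul(mat_mul(T, R), S)

# vertex at local (1, 0, 0); object at (2, 0, 0), rotated 90° about Z, scale 2
M = make_trs((2, 0, 0), math.pi / 2, (2, 2, 2))
print(transform(M, (1, 0, 0)))  # ≈ (2.0, 2.0, 0.0)
```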

The .blend example is attached.

Attachments

localToGlobal.blend (436 KB)

File "localToGlobal.blend\verts.py", line 25, in <lambda>
AttributeError: 'KX_GameObject' object has no attribute 'worldTransform'
Blender Game Engine Finished

Thank you Goran, but, I get this error!

I’ve tested this for one point, and it seems to work:

from bge import logic as GL
from mathutils import Vector

objs = GL.getCurrentScene().objects
obj = objs['Cube']

x, y, z = 1., 1., 1.
P = Vector([x, y, z])
M = obj.worldOrientation
globCoords = obj.worldPosition + M * P

print(globCoords)

where x, y, z are the coordinates of P in the object reference coordinate system.

One point, as in a single point of an object? The vertices are in local coordinates.
My approach is similar: I get the vertex's local position, then add the object's position and multiply by the orientation. Can you show me a working example, please?

hi

maybe you forgot to add the position of the object?

as mb10 says, one of the few things I understand about matrices (!!) is that:

posGlobal = posObject + (orientation * localPosition)

so for a vertex:

vertexGlobalPos = posMesh + (orientationMesh * localPositionVertex)
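That formula can be checked in plain Python, outside the BGE (the rotation helper below is my own, for illustration):

```python
import math

def rot_z(angle):
    # 3x3 rotation matrix about the Z axis
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_vec(m, v):
    # 3x3 matrix times 3-vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# object at (2, 0, 0), rotated 90 degrees about Z; vertex local position (1, 0, 0)
pos_mesh = [2, 0, 0]
orientation = rot_z(math.pi / 2)
local_vert = [1, 0, 0]

# vertexGlobalPos = posMesh + (orientationMesh * localPositionVertex)
global_vert = [p + r for p, r in zip(pos_mesh, mat_vec(orientation, local_vert))]
print(global_vert)  # ≈ [2.0, 1.0, 0.0]
```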

:wink:

one thing that can cause errors in the results is the order of the vertices, think about it!

Are you using 2.62? … I’m guessing worldTransform was just added.

If you really need to use older blender versions, you can just build your own worldTransform matrix:


from mathutils import Matrix

# start from the translation, then append the orientation as a 4x4
worldTransform = Matrix.Translation(own.worldPosition)
worldTransform *= own.worldOrientation.to_4x4()
# bake the object's world scale into the matrix
for i, s in enumerate(own.worldScale):
    worldTransform[i].xyz *= s

And then use that in the lambda expression instead of own.worldTransform.

I tell you, I did that; here is the example.
As the object rotates, the orientation matrix is filled with trigonometric values. So a rotation will register as something like 0.5 somewhere in the orientation matrix, and multiplying that by 1 gives an incorrect value, as the example shows!

Edit: I’m using 2.61…
Sorry Goran, do you mind redoing the example? I couldn't add the worldTransform part ^^!

Attached is the blend file (made with blender 2.62).
The sequence is: first apply the rotation matrix, then add the object position to the result.
To verify the result you can compare the printed coordinates with the global coordinates of the vertex, as shown in the transform panel in edit mode.
globalVertexCoords.blend (491 KB)

Oops … double post - Sorry.

Goran your blend here very amazing! :wink:

The transformation matrix is a 4x4 matrix!

The orientation matrix is a 3x3. You need to make it a 4x4 before multiplying it with the scale and translation matrices. All of them need to be 4x4.

the resulting vector is:

worldPosition = transformationMatrix * vertexPosition
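The resizing step can be sketched in plain Python (the mathutils equivalents are `Matrix.to_4x4()` / `resize_4x4()`; the helper name `to_4x4` here is mine):

```python
def to_4x4(m3):
    # embed a 3x3 rotation in a 4x4 homogeneous matrix:
    # copy the 3x3 block, leave translation zero, set the corner to 1
    m4 = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        for j in range(3):
            m4[i][j] = m3[i][j]
    m4[3][3] = 1.0
    return m4

identity3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
m4 = to_4x4(identity3)
print(m4[3])  # [0.0, 0.0, 0.0, 1.0]
```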

Goran’s method is the right one, but for 2.62.
In 2.61, use this code:


from bge import render
from mathutils import Vector, Matrix

def _getVerts(obj):
    
    mesh = obj.meshes[0]
    idx_mat = 0
    
    verts = []
    for i in range(mesh.getVertexArrayLength(idx_mat)):
        verts.append(mesh.getVertex(idx_mat, i))
    
    return verts

def _drawMarkerAt(vec_pos):
    
    vec_3 = vec_pos.xyz
    render.drawLine(vec_3, vec_3 + Vector((0, 0, 0.5)), [255, 0, 0])


def main(cont):
    
    own = cont.owner
       
    # build the local-to-world matrix once, then transform each vertex
    world = localToWorld(own)
    vec_verts = map(lambda v: world * v.XYZ, _getVerts(own))
    
    for v in vec_verts:
        _drawMarkerAt(v)
        
def localToWorld(gameObject):
    
    # 4x4 scale matrix (Matrix.Translation of a zero vector starts as a 4x4 identity)
    scale = Matrix.Translation(Vector([0, 0, 0]))
    for i in range(len(gameObject.worldScale)):
        scale[i][i] = gameObject.worldScale[i]
    
    # 4x4 translation matrix
    translation = Matrix.Translation(gameObject.worldPosition)
    
    # 3x3 orientation, resized to 4x4 so all three can be multiplied
    rotation = gameObject.worldOrientation.copy()
    rotation.resize_4x4()
    
    return translation * scale * rotation
    

@ Monster

I posted a solution for 2.61 as well - Just 4 lines.

@ MarcoIT

Thanks.

Thanks a lot for your assistance!
I've got one step closer to my objective; I'll now pass to part 2 of this thread!

A few weeks ago, I posted a thread about converting a set of planes into a sphere. This is for my planetary system. Monster was able to put together a working example. Unfortunately, I had to apply the plane's rotation for it to work. What I liked about it was that the vertex positions were very accurate. I still don't see why it works so well (in short, I can't figure it out), so I'm looking to get it in a way I understand!
In this example I got the planes to deform (thanks to you guys), but the corners are not connected and don't make a sphere. I think that multiplying the vector by the sphere radius is causing some sort of "bad" scaling.
So, if you have more ideas, please help me again!
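A common way to get a sphere out of cube faces (a plain-Python sketch of the math, not the .blend from this thread; `project_to_sphere` is a name I made up) is to take each vertex's offset from the sphere center, normalize it, and only then multiply by the radius. Skipping the normalization is what scales corners and face centers differently:

```python
import math

def project_to_sphere(vert, center, radius):
    # direction from the sphere center to the vertex
    d = [v - c for v, c in zip(vert, center)]
    length = math.sqrt(sum(x * x for x in d))
    # normalize, then push the vertex out to the sphere surface
    return [c + x / length * radius for c, x in zip(center, d)]

# a cube corner and a face center both land at distance `radius` from the center
corner = project_to_sphere([1, 1, 1], [0, 0, 0], 2.0)
face = project_to_sphere([1, 0, 0], [0, 0, 0], 2.0)
print(math.sqrt(sum(x * x for x in corner)))  # ≈ 2.0
print(math.sqrt(sum(x * x for x in face)))    # ≈ 2.0
```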

I posted a solution for 2.61 as well - Just 4 lines.

…where is it?

Use of 4x4 matrices and 4x1 vectors (i.e. the so-called homogeneous coordinates) allows transformations to be carried out using multiplications alone, and this is an advantage when dealing with complicated robotic geometries, for example.

But to avoid misinformation please note that rotations and translations can be perfectly handled with 3x3 matrices and 3x1 vectors; in fact, it can be shown that:
Matrix(4x4)*vector(4x1) = Matrix(3x3)*vector(3x1) + Translation(3x1)
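That identity is easy to check numerically in plain Python (an illustrative sketch; the 4x4 is built as [R | t] over [0 0 0 1]):

```python
import math

c, s = math.cos(0.7), math.sin(0.7)
R = [[c, -s, 0], [s, c, 0], [0, 0, 1]]   # 3x3 rotation
t = [1.0, 2.0, 3.0]                      # 3x1 translation
v = [0.5, -1.5, 2.0]                     # 3x1 point

# 4x4 homogeneous form applied to [v, 1]
M4 = [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]
v4 = v + [1.0]
lhs = [sum(M4[i][k] * v4[k] for k in range(4)) for i in range(3)]

# 3x3 form: R*v + t
rhs = [sum(R[i][k] * v[k] for k in range(3)) + t[i] for i in range(3)]

print(all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs)))  # True
```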

This method is for the final vector transformation [vectorB = transform(vectorA)]. It excludes further scale and further rotation, as you get a vector, not a matrix.
As you wrote, the remaining 4th coordinate (the vector scale) should be 1 and can be ignored.

The transformation matrix of affine transformations (translation/rotation/scale) in 3D space is at least a 3x4 matrix (2x3 in 2D). Since it is much easier to multiply homogeneous matrices, it is usually expressed as 4x4.

Besides the affine transformations there are the projection transformations, which fill the unused part of the 4x4. These are used to transform vectors from world space into camera space (as seen by the camera).

See wikipedia.