Navigating the Blender bpy.data.* structure

Up until recently, I’ve been using a C++ tool built with assimp to export objects from Blender files into a format that is useful for some other software. The problem is that it only works on 2.5x and earlier Blender files.

I wrote an export script in Blender 2.68 Python to replace the tool, but there are some pieces of data the old tool used which the new one doesn’t account for, in particular some transformation information that is important for the models I’m exporting to display correctly in relation to each other, both in size and position. I’m sure that SOMEWHERE in the bpy.data.* structure there is some additional position and scale information which I can extract and use in a 4x4 transformation matrix to duplicate the previous methodology, but it doesn’t seem to be obvious.

All of the objects that I care about are polygon meshes, no textures, no uv, no bones, no poses.

If anyone has good resources that expound on the subject, specifically in relation to Blender 2.68+, I would greatly appreciate any assistance. Beyond just solving my immediate problem, I’m hoping to understand the conceptual rhyme & reason for the way bpy.data.* is structured. I’ve been relying on a few people from various forum sites (mostly @CoDEmanX, many thanks), and I’d like to know where folks learned these things, in addition to the facts themselves, so I can stop bugging people for these tidbits.

Thanks in advance.

Are drivers, constraints or the like involved?

Object.location and Object.matrix_world.to_translation() may differ in that case.
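
A quick way to check the difference in the PyConsole (a minimal sketch, assuming an object is active):

import bpy

ob = bpy.context.object
print(ob.location)                       # the raw location channel
print(ob.matrix_world.to_translation())  # final position, including parenting, constraints and drivers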

I learned a lot from the API docs, existing scripts, IRC (special thanks to ideasman_42) and PyConsole autocomplete (plus lots of testing, crashing and cursing).

It was also important to understand how things are done in Blender, like how Vertex Group names need to match bone names for armature deformation (and that armature deform is actually a modifier). It’s really hard to script armature stuff if you don’t understand this relationship.
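
As a minimal sketch of that relationship (assuming the active object has an Armature modifier):

import bpy

ob = bpy.context.object

# armature deform is just a modifier on the mesh object
arm_mod = next(m for m in ob.modifiers if m.type == 'ARMATURE')

# a bone only deforms the vertices in the group that shares its name
for bone in arm_mod.object.data.bones:
    print(bone.name, '->', ob.vertex_groups.get(bone.name))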

Thanks for the reply.

As far as I know, we don’t use any drivers, but if you explain what they are in a Blender context, maybe it’ll make some more sense and I can give you more of an idea. I did just discover bpy.data.objects[id].scale and .location and will be working to modify my other external software to make use of this data and see if it helps with the problem.

So the last piece of information I am actually missing is any rotation information that might be stored anywhere.

I am hoping to use the 4x4 matrix that is returned by Object.matrix_world in combination with the rotation information to form a complete transformation matrix that I can multiply by every point in an object, in order to have the final “on the screen” result written to my file instead of the “in memory” result.
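
Something like this is what I have in mind (a rough sketch; in 2.6x mathutils, * multiplies a Matrix by a Vector):

import bpy

ob = bpy.data.objects[0]   # or whichever object is being exported
mat = ob.matrix_world      # already combines location, rotation and scale

for v in ob.data.vertices:
    world_co = mat * v.co  # the final "on the screen" position
    # write world_co to the file here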

Also, if there is an active, dedicated Blender Python IRC channel, what’s the server & channel name? I’d be happy to get in there as well.

Drivers are Python expressions that evaluate to some value (dynamically):
http://wiki.blender.org/index.php/Doc:2.6/Manual/Animation/Editors/Graph/Drivers
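
A minimal sketch of creating one from Python (the variable name zloc and the data path are just examples):

import bpy

ob = bpy.context.object
fcu = ob.driver_add("location", 0)  # put a driver on the X location channel

var = fcu.driver.variables.new()    # expressions read their inputs from driver variables
var.name = "zloc"
var.targets[0].id = ob
var.targets[0].data_path = "location.z"

fcu.driver.expression = "zloc * 2.0"  # X now follows twice the Z location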

bpy.context.object.matrix_world will give you the final 4x4 transformation matrix. It contains location, rotation and scale.

import bpy
from mathutils import Matrix, Vector

mat = bpy.context.object.matrix_world

loc = mat.to_translation()
rot = mat.to_3x3().normalized() # normalize the basis vectors to length 1 to remove the scaling
scale = mat.to_scale()


## or do it (sort of) ourselves ##

# first 3 entries of the 4th column
loc = mat.col[3][0:3]

# normalize each column by dividing by its length; the list comp builds rows, so transpose
rot = Matrix([mat.col[i].to_3d() / mat.col[i].length for i in range(3)]).transposed()

# get the column vector lengths
scale = Vector([mat.col[i].length for i in range(3)])
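
As a quick sanity check, you can rebuild the matrix from the three parts; in 2.6x there’s no helper to build a scale matrix from a vector, so do it by hand (a sketch):

# rebuild a 4x4 scale matrix from the scale vector
scale_mat = Matrix.Identity(4)
for i in range(3):
    scale_mat[i][i] = scale[i]

# translation * rotation * scale reproduces the original matrix
recomposed = Matrix.Translation(loc) * rot.to_4x4() * scale_mat
print(recomposed)
print(mat)  # should match up to floating point precision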

So I’m working with VTK and keep ending up with a mesh that is non-manifold after being processed through a vtkTriangleFilter and then a vtkStripper. Apparently one or more of the algorithms those filters use rely on normals to correctly calculate the triangles and then turn them into strips.

I think what I need to do is pull the normals from Blender and then have VTK use the Blender-calculated normals instead of calculating new ones.

So, in bpy.data.objects[n].data.vertices[n].normal I was able to find a normal vector. Do I need to apply the matrix_world transform to the normals too? It seems as though this wouldn’t be needed since they are relative to each point. What do you think?

Everything should be in object space, but if you call

Mesh.transform(Object.matrix_world)
Object.matrix_world = Matrix()

it should also update the normals.
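
For completeness: if you ever transform points by hand instead, normals should use the inverse transpose of the 3x3 part rather than matrix_world itself, otherwise non-uniform scaling skews them. A minimal sketch:

import bpy

ob = bpy.context.object
n = ob.data.vertices[0].normal  # an object-space normal

normal_mat = ob.matrix_world.to_3x3().inverted().transposed()
world_n = (normal_mat * n).normalized()  # renormalize, scaling changes the length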

Er, that still doesn’t tell me the answer to my question. I’m reading through all the points in each object and then multiplying them by the transform matrix. At the same time, I’m grabbing the normal (which I am perhaps incorrectly assuming is indexed the same as the vertices) and hoping that I can just do the same to the normals. But as I think about it, it likely doesn’t work like that. Are you saying that I have to figure out which Mesh is associated with the object, and then transform it by matrix_world? What does that second statement do? And how can I determine which mesh(es) are linked to the object at a specific index in the bpy.data structure?

There’s only a single Mesh datablock linked to an Object datablock, and you can access it like

ob = bpy.context.object # ref to the active object
me = ob.data # retrieve mesh ref from object (assuming ob.type == 'MESH')

You can call me’s transform() method to apply a 4x4 matrix. There’s no need to multiply every single point by the object’s transformation matrix. Call…

me.transform(ob.matrix_world)

once, and the entire mesh will be updated (points, normals, everything should change along).

The second statement assigns the 4x4 identity matrix to the object.

>>> Matrix()
Matrix(((1.0, 0.0, 0.0, 0.0),
        (0.0, 1.0, 0.0, 0.0),
        (0.0, 0.0, 1.0, 0.0),
        (0.0, 0.0, 0.0, 1.0)))

Since we transformed the mesh, we need to reset the object transformation to end up with the same visual output (otherwise it would sort of apply the object transformation twice).

This does the same as
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

So, I want to apply the transform to a copy of the mesh instead of to the main mesh; this process needs to be nondestructive and leave the original as it was. Does transform() change the original mesh or return a reference to a changed copy? And is there a way to make it behave in the latter way?

I just need the resulting vertex coordinates and normal values so I can properly export them, but I want them to remain as they are within Blender, without having to close Blender without saving the file or something like that.

In this case, use

export_me = ob.to_mesh(bpy.context.scene, apply_modifiers=True, settings='PREVIEW', calc_tessface=True, calc_undeformed=False)
export_me.transform(ob.matrix_world)
# do the export here
bpy.data.meshes.remove(export_me)

It will apply modifiers (or not) and return a new mesh datablock, which isn’t linked to an object / the scene. Transform it to world space and do the export. Don’t forget to remove that temp mesh.
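
Put together, a minimal nondestructive export loop could look like this (the print stands in for whatever file writing you do):

import bpy

ob = bpy.context.object
export_me = ob.to_mesh(bpy.context.scene, apply_modifiers=True,
                       settings='PREVIEW', calc_tessface=True)

export_me.transform(ob.matrix_world)  # move the copy into world space

for v in export_me.vertices:
    print(v.co, v.normal)  # write world-space coordinates and normals here

bpy.data.meshes.remove(export_me)  # clean up; the original mesh is untouched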

Thanks for your help! Excellent advice as always.

Seems to me that I wouldn’t want calc_tessface=True if I’m tessellating triangles externally after export. Am I misunderstanding what that flag is doing?

It does not triangulate the mesh, but it will make the tessellation cache available to Python (me.tessfaces). If you don’t need / access it, call to_mesh() with calc_tessface=False. If you need quads + triangles, set it to True and write out the .tessfaces. For pure tris, you’d need to split the tessface quads yourself, e.g. vertices 0,1,2 + 2,3,0 of each quad, as in the sketch below.
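
A minimal sketch of that split (assuming export_me from above, created with calc_tessface=True):

tris = []
for f in export_me.tessfaces:
    v = f.vertices                      # 3 or 4 vertex indices
    tris.append((v[0], v[1], v[2]))
    if len(v) == 4:
        tris.append((v[2], v[3], v[0])) # second triangle of the quad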