Blender for biologists - accessing vertex coordinates with the API

I’m trying to use Blender to analyze renderings of neurons traced in Tilt Brush. Blender has been awesome for finding the volume of structures; however, I also want to save length/width measurements. It seems like the easiest way to do this would be to drop two points onto the vertices of my mesh and get their X, Y, Z coordinates with the API. This has proven quite difficult, so I wanted to see if there is a better way to go about it.
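For reference, this is the kind of lookup I mean: a minimal sketch of reading vertex coordinates through the Python API, assuming Blender 2.8+ and an active mesh object:

```python
import bpy

# Vertex coordinates are stored in object space; multiply by matrix_world
# to get world-space (scene) coordinates. '@' is matrix multiply in 2.8+.
obj = bpy.context.active_object
for v in obj.data.vertices[:2]:   # first two vertices, just as an example
    world_co = obj.matrix_world @ v.co
    print(v.index, world_co.x, world_co.y, world_co.z)
```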

Figured it out:

  1. Enable the Add Mesh: Extra Objects add-on to create single-vertex objects (a scripted equivalent is sketched after this list):
    https://all3dp.com/2/blender-how-to-add-a-vertex-simply-explained/

  2. How to snap to vertices:
    https://blender.stackexchange.com/questions/36812/how-do-i-snap-one-vertex-to-the-position-of-another-vertex-in-a-different-object

Note that the Snapping Tool magnet icon has moved to the top of the screen in Blender 2.82.
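For anyone scripting this, here is a sketch of what step 1 does through the API; the object name and cursor placement are just placeholders:

```python
import bpy

# Build a one-vertex mesh object at the 3D cursor, which is what the
# Extra Objects add-on does from the UI.
mesh = bpy.data.meshes.new("MarkerMesh")
mesh.from_pydata([(0.0, 0.0, 0.0)], [], [])   # one vertex, no edges/faces
marker = bpy.data.objects.new("Marker", mesh)
bpy.context.collection.objects.link(marker)
marker.location = bpy.context.scene.cursor.location
```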

See the BMesh templates in the Text Editor (Templates ▸ Python).
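For instance, something along these lines (a sketch assuming the mesh is in Edit Mode):

```python
import bpy
import bmesh

obj = bpy.context.edit_object            # the mesh being edited
bm = bmesh.from_edit_mesh(obj.data)      # wrap it as a BMesh

# Print world-space coordinates of whatever vertices are selected.
for v in bm.verts:
    if v.select:
        print(v.index, obj.matrix_world @ v.co)
```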

Does the Measure tool not work for you? You can snap it to vertices by holding Ctrl while dragging.

We need a way to save the measurements and the points they were taken from. Saving two points and calculating the distance between them seemed like the easiest way to solve that problem. That said, I’m new to Blender and would love to learn alternatives.
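For example, a sketch of that two-point idea; the object names are hypothetical placeholders for two vertex objects snapped onto the mesh:

```python
import bpy

# World-space positions of two marker objects snapped onto the mesh.
p1 = bpy.data.objects["Point.001"].matrix_world.translation
p2 = bpy.data.objects["Point.002"].matrix_world.translation

distance = (p2 - p1).length   # Euclidean distance in scene units
print("p1:", tuple(p1), "p2:", tuple(p2), "distance:", distance)
```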

The closest thing to neurons in Blender is the tree-maker add-on; looking at its API techniques will be handy:
blender\2.83\scripts\addons\add_curve_sapling

However, I don’t know whether you plan to create some sort of vertex format of your own, or for what purpose. If you’re interested in doing calculations you will need one, but if it’s only for visualization and rendering you can use workarounds with the curve_sapling add-on.

You can extend the vertex format by using additional index arrays: for example, two FloatVectorProperty arrays (float lists), one for width and the other for length.
https://docs.blender.org/api/current/bpy.props.html?highlight=prop#module-bpy.props
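A minimal sketch of that idea using bpy.props; the property names are just examples:

```python
import bpy

# Attach two float-vector properties to every object: one packing the two
# XYZ endpoints of a width measurement, one for a length measurement.
bpy.types.Object.width_points = bpy.props.FloatVectorProperty(
    name="Width Points", size=6)
bpy.types.Object.length_points = bpy.props.FloatVectorProperty(
    name="Length Points", size=6)

obj = bpy.context.active_object
obj.width_points = (0.0, 0.0, 0.0, 1.0, 2.0, 3.0)   # p1.xyz then p2.xyz
```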

Those tree-making add-ons are very pretty, thank you for sharing. :slight_smile: We’re not using Blender in the traditional way at all, but it looks like it can do everything we need. We’re tracing our neurons in Tilt Brush and want to use Blender for annotating landmarks and to automate a couple of processes. The first process will separate all loose parts in a mesh (dendritic spines, in our use case). Each spine goes into a folder with four vertex objects, which the user snaps onto the mesh. A second process will write coordinate data from vertices, spine volume, center of mass, etc. to a Neurodata Without Borders (NWB) file.
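For the record, here is a rough sketch of that first process; the collection naming is a placeholder:

```python
import bpy

# Split the active (traced) mesh into loose parts: one object per spine.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.separate(type='LOOSE')
bpy.ops.object.mode_set(mode='OBJECT')

# Give each resulting spine its own collection ("folder").
for spine in bpy.context.selected_objects:
    coll = bpy.data.collections.new(spine.name + "_landmarks")
    bpy.context.scene.collection.children.link(coll)
    coll.objects.link(spine)
```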

Thank you for the advice - I’m currently ramping up on the API.

Perhaps another idea is not to involve Blender meshes and objects at all. The Blender API can become cluttered at some point: even if you have your own custom design, you still have to involve operators, properties, bmesh, and so on. All of that adds complexity, whereas doing the same thing in Processing is ten times easier, simply because you skip the entire application infrastructure and roll a custom solution that does five things really efficiently.

You can use Blender only for OpenGL rendering; see the code templates for how to do immediate-mode drawing. That way you can minimize interaction with the API to only the essential points.
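In 2.8x the templates use the gpu module rather than raw OpenGL; a sketch along those lines, where the point list is a made-up example:

```python
import bpy
import gpu
from gpu_extras.batch import batch_for_shader

coords = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]   # hypothetical measurement points
shader = gpu.shader.from_builtin('3D_UNIFORM_COLOR')
batch = batch_for_shader(shader, 'POINTS', {"pos": coords})

def draw():
    shader.bind()
    shader.uniform_float("color", (1.0, 0.2, 0.2, 1.0))
    batch.draw(shader)

# Draw the points in every 3D viewport, on top of the scene geometry.
handle = bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_VIEW')
```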

Any time you need access to some mesh operations or other features, you can create temporary meshes, operate on them, and then retrieve the results.
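Something like this, as a sketch; the cube is a stand-in for whatever geometry you actually load:

```python
import bpy
import bmesh

bm = bmesh.new()                      # temp mesh, never linked to the scene
bmesh.ops.create_cube(bm, size=1.0)   # stand-in geometry / operation
volume = bm.calc_volume()             # retrieve the result you need
bm.free()                             # discard the temp mesh
print("volume:", volume)
```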

I’m afraid that I don’t follow what you mean by doing something in Processing. We are using Blender for organizing and structuring data sets and won’t need to do any rendering. We tried Fusion 360, but it wasn’t flexible enough for our oddball use case.

“Processing” is a Java-based creative-coding framework.


Good to know, thank you! Blender’s Python API looks like it will do the trick for us. We are deeply excited that the Scene Collection can be used to match the destination structure in an NWB file. :smiley:
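For anyone curious, a sketch of walking that hierarchy so it can be mirrored into an NWB file’s group structure (the pynwb side is omitted here):

```python
import bpy

def walk(collection, depth=0):
    """Recursively print the Scene Collection tree."""
    print("  " * depth + collection.name)
    for obj in collection.objects:
        print("  " * (depth + 1) + "- " + obj.name)
    for child in collection.children:
        walk(child, depth + 1)

walk(bpy.context.scene.collection)
```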