Code for Motion Tracking - Measure Displacement

Hello!

I am new to Blender.

I need to track markers on the face. It is not for creating a mocap.

I am recording with tiny markers (4 mm) on the face to track facial expressions in opera singers.

I know how to track the markers, but I need a way to calculate their displacement. Each marker is 4 mm in size. So, from a neutral position (for example, the eyebrow at rest) to a higher position, I need to calculate the distance traveled in mm, and to get a curve of the movement plotted in mm. To explain: when I express an emotion like fear, my eyebrow tends to move to a higher position; the opposite happens with anger. That movement is what I am tracking. And since I know the marker size (4 mm), I would like to use it as a reference to calculate the distance traveled by the marker.

I know that I can see the graph in Blender, but I don't know how to make that graph show the distance in mm based on the marker size.

Another way to do it would be to write a script that calculates this and exports it to a CSV file, and then create the curve in other software.

Could anybody here help me understand how to do that, or share a script that does it automatically? I have more than 20 markers to track. It is part of my PhD.

Thank you.
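(A tiny worked example of the arithmetic being asked for here, with assumed numbers: if the 4 mm marker spans 12 px in the footage, the scale is 4/12, about 0.33 mm per pixel, so a 30 px displacement is 10 mm.)

# Worked example with assumed numbers: use the known marker size as scale.
marker_mm = 4.0                      # physical marker size
marker_px = 12.0                     # ASSUMED: its measured size in the footage, in pixels
mm_per_px = marker_mm / marker_px    # ~0.333 mm per pixel

displacement_px = 30.0               # e.g. an eyebrow marker moved 30 px upward
print(displacement_px * mm_per_px)   # -> 10.0 mm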

Hmm… :thinking: … yes, this is… special… and given the way 3D apps like Blender are built… this might be… complex…

AFAIK people sometimes export animations as one mesh per frame in Alembic (?) or something… (maybe even possible in OBJ… never done this…) IDK if the vertex indices stay the same… but if they do, someone could measure the distance between the frames… (see the sketch below)
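To illustrate that per-frame-mesh idea, a rough sketch only; the object names are hypothetical, and it stands or falls with the assumption that the vertex order survives the export:

import bpy

# Two hypothetical snapshots of the same mesh, e.g. imported from a
# per-frame Alembic/OBJ export; the object names are placeholders.
a = bpy.data.objects["face_frame_001"]
b = bpy.data.objects["face_frame_002"]

# Only meaningful if the vertex order is identical in both snapshots.
assert len(a.data.vertices) == len(b.data.vertices)

for i in range(len(a.data.vertices)):
    # World-space distance between corresponding vertices.
    pa = a.matrix_world @ a.data.vertices[i].co
    pb = b.matrix_world @ b.data.vertices[i].co
    print(i, (pa - pb).length)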

The only thing I found which may be of any help… but I didn't look too deeply into this… (maybe someone with better knowledge of geometry nodes…?)

https://blender.stackexchange.com/questions/247657/is-it-possible-to-calculate-the-distance-between-the-vertices-of-two-objects-in

Thank you! I will take a look.

The main thing is that I have used motion capture systems like the Qualisys Oqus 7 to track these facial expressions. But that involves bringing all subjects to a lab, which limits the research.

So, with a mocap helmet I can collect data anywhere using these markers and then track everything in Blender. I use an image sequence of PNGs. I could open every PNG in ImageJ and calculate the displacement one by one, but the amount of work would be very high.

That's why I am asking for help to do this in Blender or other software. I will give credit to the person or persons who help me and acknowledge them in every paper.

Thanks for your help.

Hello Guys!

I wrote a script with ChatGPT's help.

It partially works.

My problem now is how to tell Blender that each marker is 4 mm in size.
I tried to do it in the script, but it does not seem to work well.

Some markers are fixed, but when I run the script and look at the CSV file, the Y column (vertical displacement) still reports displacement, even though those markers do not move.

The Y values reported in the CSV file are all less than 0.01… I tried to tell the script the markers are 4 mm… ChatGPT said that Blender uses meters… so in the code the values were divided by 1000…

If someone can help me…
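(A note on those small values: Blender's tracker stores marker positions normalized to the clip dimensions, from 0 to 1, not in meters, so dividing by 1000 does not convert anything to mm. A minimal way to check this in the Python console, assuming a clip is loaded and the track has a marker at frame 1:)

import bpy

clip = bpy.data.movieclips[0]         # assumes at least one clip is loaded
track = clip.tracking.tracks[0]       # first tracker on the clip
marker = track.markers.find_frame(1)  # assumes tracking starts at frame 1

# marker.co is normalized: (0, 0) = lower left, (1, 1) = upper right.
# Multiplying by the clip resolution gives pixel coordinates.
print(marker.co.xy)
print(marker.co[0] * clip.size[0], marker.co[1] * clip.size[1])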

Here is the script:


If you embed the whole script in a code block (three backticks), then there is the problem (at least with Python code) that if it also includes triple quotes (used in Python for docstring-style remarks or for quoting strings)… then this gets mangled weirdly. → Delimiter collision

So you might be better off uploading the file… or you have to escape the backtick (grave accent) with its HTML entity, &#96; :rofl:

Sure. You are right! Sorry!

from __future__ import print_function
import bpy
import os
import csv

# Specify the desired output directory
output_directory = "C:/Users/tiago/OneDrive/Desktop/Piloto Teste/RAIVA"

# Create the output directory if it doesn't exist
os.makedirs(output_directory, exist_ok=True)

D = bpy.data

printFrameNums = False  # include frame numbers in the csv file
relativeCoords = False  # marker coords will be relative to the dimensions of the clip

# Set the marker size to 4mm
marker_size_mm = 4

# Convert the marker size from mm to the blender units (e.g., meters)
scale_factor = 0.001  # 1 Blender unit is 1 meter

# Camera shake factor (adjust as needed)
camera_shake_factor = 0.1  # Example: 0.1 Blender units (meters) of horizontal displacement per frame

f2 = open(os.path.join(output_directory, 'export-markers.log'), 'w')
print('First line test', file=f2)

for clip in D.movieclips:
    print('clip {0} found\n'.format(clip.name), file=f2)

    width = clip.size[0]
    height = clip.size[1]

    for ob in clip.tracking.objects:
        print('object {0} found\n'.format(ob.name), file=f2)

        for track in ob.tracks:
            print('track {0} found\n'.format(track.name), file=f2)
            fn = os.path.join(output_directory, '{0}_{1}_tr_{2}.csv'.format(clip.name.split('.')[0], ob.name, track.name))
            displacement_fn = os.path.join(output_directory, 'displacement_{0}_{1}_{2}.csv'.format(clip.name.split('.')[0], ob.name, track.name))

            with open(fn, 'w', newline='') as f, open(displacement_fn, 'w', newline='') as df:
                # Create CSV writers
                f_csv = csv.writer(f)
                df_csv = csv.writer(df)

                # Add header to the CSV files
                f_csv.writerow(["Time (s)", "Y"])  # Header for the position data
                df_csv.writerow(["Time (s)", "Y"])  # Header for the displacement data

                frame_start = 1
                frame_end = 435
                frame_rate = 30

                frame_range = range(frame_start, frame_end + 1)
                prev_coords = None

                for framenum in frame_range:
                    markerAtFrame = track.markers.find_frame(framenum)
                    if markerAtFrame:
                        coords = markerAtFrame.co.xy
                        if prev_coords is not None:
                            x_distance = coords[0] - prev_coords[0]
                            y_distance = coords[1] - prev_coords[1]
                        else:
                            x_distance = 0
                            y_distance = 0

                        # Adjust horizontal displacement for camera shake
                        x_distance -= camera_shake_factor * (framenum - frame_start)

                        # Calculate the time in seconds based on the frame rate
                        time_seconds = (framenum - frame_start) / frame_rate

                        # Adjust Y distance relative to frame 1
                        if framenum == frame_start:
                            y_distance = 0

                        if relativeCoords:
                            f_csv.writerow([time_seconds, y_distance * scale_factor])
                            df_csv.writerow([time_seconds, y_distance * scale_factor])
                        else:
                            f_csv.writerow([time_seconds, y_distance * marker_size_mm])
                            df_csv.writerow([time_seconds, y_distance * marker_size_mm])

                        prev_coords = coords

            # Clear all keyframes in the graph editor
            for area in bpy.context.screen.areas:
                if area.type == 'GRAPH_EDITOR':
                    override = bpy.context.copy()
                    override['area'] = area
                    override['space_data'] = area.spaces.active
                    bpy.ops.graph.select_all(override, action='SELECT')
                    bpy.ops.graph.delete(override)

f2.close()

Don’t bother…

So as far as i can see… this is:

Extracting the marker coordinates from the previous and the current frame and then just building the difference in x and y… okay… this of course does not give you the Euclidean distance… but then it's "adjusting" the y distance by the scale… if relativeCoords is True, but it is set to False and never changed…

Because of this it always "adjusts" the y distance by the size of the marker… ??? This doesn't make sense at all… the size of the marker has nothing to do with its distance from the previous position…

And then… only the Y value is written to the output files…

:question:

Also this double open thingy:

with open(fn, 'w', newline='') as f, open(displacement_fn, 'w', newline='') as df:

with the same rows then written twice via

                f_csv = csv.writer(f)
                df_csv = csv.writer(df)

makes no sense…

I don't know what part is shitGPT ("sorry") and what part you tried yourself without knowing programming?? (no offense meant…)

Except for the idea of directly reading the marker coordinates… this is "just not good"… you might have achieved more by directly searching for the motion-tracking data in the API docs… and doing the programming on your own…?? IDK…

So just calculating the Euclidean distance from the x and y differences and outputting it, like

import math  # needed for sqrt

x_distance = coords[0] - prev_coords[0]
y_distance = coords[1] - prev_coords[1]
distance = math.sqrt(x_distance**2 + y_distance**2)
doSomeOutputWith(distance)  # placeholder for whatever output you need

seems to be what you really want??

:person_shrugging:

:thinking:
…or… since you are talking about displacement… what you might really want is detecting the local changes (displacements) in the face of the singer while performing and meanwhile also moving the whole face…

This of course is "slightly" more complex…
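For what it's worth, a minimal runnable version of that distance idea, reading the tracker data straight from the clip (a sketch, assuming a single loaded clip; note the conversion to pixels first, since the normalized x and y are on different scales when width and height differ):

import math

import bpy

clip = bpy.data.movieclips[0]  # assumes one clip is loaded
w, h = clip.size               # clip resolution in pixels

for track in clip.tracking.tracks:
    markers = sorted(track.markers, key=lambda m: m.frame)
    for prev, curr in zip(markers, markers[1:]):
        # Convert normalized coordinates to pixels before measuring.
        dx = (curr.co[0] - prev.co[0]) * w
        dy = (curr.co[1] - prev.co[1]) * h
        step = math.sqrt(dx**2 + dy**2)  # frame-to-frame step in pixels
        print(track.name, curr.frame, step)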

First of all, thank you for replying to my message.

As I said, I don't know programming… only basic things in R.

I can calculate the distance using ImageJ, but that is done manually, so it is a lot of work. I am inserting a GIF here of what I am trying to calculate, to show the markers.

Each marker is one of the "strass" (rhinestone) dots you can see on the face. It is 4 mm.

So, since I can't use a ruler to measure the distance between the middle of a marker in one position and the middle of the same marker in another position, I tried to define the size of the marker as a reference.

For now, I see that I did it wrongly.

I would like to calculate the displacement of each marker you can see in the GIF.

The head is still. So I have markers going up and down (vertical) and left and right (horizontal). No Z movement.

If you or someone else can help me, I would appreciate it.

Thanks.

[GIF attachment "test anger": the tracked face markers during an anger expression]

Okay, you have footage of a human face with markers… and want the difference in the positions of the markers…
And… it seems (as far as I can see) you already have the x and y coordinates of the markers from this script… which are of course "only" the 2D markers of the tracking footage… (so I wonder what the distance will give you… because these aren't the movement vectors and lengths in 3D space… but that's not my concern…)

Anyway:
Also, the marker size is, as far as I can see, irrelevant… if you are interested in the distance between positions…
So I thought this would already be some kind of help…

Thank you again for your reply.

Yes. I want the 2D movement of the markers. In this case the head does not move in the anterior/posterior direction, and I kept the camera at the same distance from the head the whole time.

It is difficult to see 3D movement on the face because the eyebrow, for example, does not rotate. You have vertical and horizontal movement.

I know that I have the X, Y positions of the markers, but there is something in the script that is not right. The upper marker on the face barely moves, but when I look at its CSV data, it shows a lot of movement. I don't know why…

That's why I am searching for help.

Thank you.
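One plausible culprit, looking at the script above: the x values get a camera_shake_factor term subtracted that grows every frame, and the y values are frame-to-frame differences multiplied by the marker size, which is not a unit conversion, so even sub-pixel tracking jitter shows up as "movement". Below is a hedged sketch of the conversion this thread is circling around: displacement relative to the first (neutral) frame, turned into mm with one calibration number, the marker's diameter in pixels (MARKER_PX is an assumed value; measure it once in your footage, e.g. in ImageJ):

import csv
import math
import os

import bpy

MARKER_MM = 4.0    # known physical marker size
MARKER_PX = 12.0   # ASSUMED: marker diameter in pixels; measure in your footage
MM_PER_PX = MARKER_MM / MARKER_PX

clip = bpy.data.movieclips[0]  # assumes one clip is loaded
w, h = clip.size
fps = clip.fps

out_path = os.path.join(os.path.expanduser("~"), "displacement_mm.csv")
with open(out_path, "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["track", "time_s", "dx_mm", "dy_mm", "dist_mm"])
    for track in clip.tracking.tracks:
        markers = sorted((m for m in track.markers if not m.mute),
                         key=lambda m: m.frame)
        if not markers:
            continue
        # Neutral reference: this marker's first tracked frame.
        x0, y0, f0 = markers[0].co[0] * w, markers[0].co[1] * h, markers[0].frame
        for m in markers:
            dx = (m.co[0] * w - x0) * MM_PER_PX
            dy = (m.co[1] * h - y0) * MM_PER_PX
            out.writerow([track.name, (m.frame - f0) / fps,
                          dx, dy, math.hypot(dx, dy)])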

It's also somewhat difficult to tell… because there simply are no project data files… or whatsoever to investigate… etc…

If you have trackers which do match the markers… then it sounds a bit weird if the data shows different movements than the trackers…

This needs a bit more than just looking at a Blender screenshot or even a blend file to make suggestions or error corrections or give any tips…

You might be better off, for example, in a computer vision or even mathematics forum… but then again you already said you are not so deep into programming, except for some R (for statistical computing)… :person_shrugging:

I also know a little about ImageJ… for example the use of plugins to count cells from a photo etc… so there may even exist a plugin which can recognize the movement of contrast-recognizable image areas, alias trackers…
