I’ve never programmed in Python before (my background is in C++ and Matlab), which is why I find this kind of optimization problem extremely difficult. I’m trying to get better and to learn how data is handled in Python, but this problem is part of a two-month internship with many different parts, so I don’t really have the time to learn Python from scratch. I’m not trying to be lazy, though; I’m doing Python 12 hours a day, so I’m doing my best! Despite this I need a bit of additional help.
My main problem is that I need to read in a large data file and add its contents to a number of objects.
My algorithm looks something like this (a rough Python sketch follows the list):
Read the file
Add data from file to active object
If the data contains a separator ("@")
Duplicate the active object
Add the data after the "@" to the duplicated object
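In rough Python terms, the splitting part of the idea looks something like this (just an illustrative sketch, not my actual script, which is further down; the blocks list and the placeholder path are only there to show the intent):

# Collect rows into blocks, one block per object.
# A row starting with "@" means: duplicate and start filling the new object.
path = 'test.txt'  # placeholder path for this example
blocks = [[]]
with open(path, 'r') as f:
    for row in f:
        if row.strip().startswith("@"):
            blocks.append([])          # start a new block for the duplicated object
        else:
            blocks[-1].append(row)     # row belongs to the currently active object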
The code I have so far works; however, when the file contains a lot of data, processing it takes a huge amount of time. Processing 2003 rows of data takes 1800 seconds on my (fairly good) desktop computer, which simply is not acceptable. The worst part is that this is just a fraction of the data I eventually hope to read.
The data looks like this:
x,y,z,x_rot,y_rot,z_rot,frame
0 , 0 , 0 , 0 , 1.106572e-01 , 4.729842e+00 , 1
0 , 0 , 0 , 0 , 1.106572e-01 , 4.747296e+00 , 2
0 , 0 , 0 , 0 , 1.106572e-01 , 4.764749e+00 , 3
0 , 0 , 0 , 0 , 1.106572e-01 , 4.782202e+00 , 4
0 , 0 , 0 , 0 , 1.106572e-01 , 4.799655e+00 , 5
0 , 0 , 0 , 0 , 1.106572e-01 , 4.817109e+00 , 6
0 , 0 , 0 , 0 , 1.106572e-01 , 4.834562e+00 , 7
.
.
.
.
0 , 0 , 0 , 0 , 1.106572e-01 , 4.834562e+00 , 1000
@
0 , 0 , 0 , 0 , 1.106572e-01 , 4.729842e+00 , 1
0 , 0 , 0 , 0 , 1.106572e-01 , 4.747296e+00 , 2
0 , 0 , 0 , 0 , 1.106572e-01 , 4.764749e+00 , 3
0 , 0 , 0 , 0 , 1.106572e-01 , 4.782202e+00 , 4
0 , 0 , 0 , 0 , 1.106572e-01 , 4.799655e+00 , 5
0 , 0 , 0 , 0 , 1.106572e-01 , 4.817109e+00 , 6
0 , 0 , 0 , 0 , 1.106572e-01 , 4.834562e+00 , 7
.
.
.
0 , 0 , 0 , 0 , 1.106572e-01 , 4.834562e+00 , 1000
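Just to be explicit about the format: every row is comma-separated and is meant to map to a position, a rotation and a frame number, roughly like this (only an illustration of a single row, not part of the script):

row = "0 , 0 , 0 , 0 , 1.106572e-01 , 4.729842e+00 , 1"
elems = row.split(",")
x, y, z = float(elems[0]), float(elems[1]), float(elems[2])              # location
x_rot, y_rot, z_rot = float(elems[3]), float(elems[4]), float(elems[5])  # rotation (Euler angles)
frame = int(elems[6])                                                    # frame number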
The data is read via the script:
import bpy

anim = []
path = 'C:\\Users\\Inger\\Desktop\\Input\\test.txt'  # file path
bpy.context.active_object.animation_data_clear()  # clear existing keyframes

file = open(path, 'r')
data = file.read()
for row in data.splitlines():
    elems = row.split(",")
    if elems[0] != "@":
        anim.append((float(elems[0]), float(elems[1]), float(elems[2]), float(elems[3]),
                     float(elems[4]), float(elems[5]), int(elems[6])))
        ob = bpy.context.object
        for x, y, z, x_rot, y_rot, z_rot, frame in anim:
            ob.location = (x, y, z)
            ob.rotation_euler = (x_rot, y_rot, z_rot)
            ob.keyframe_insert(data_path="location", frame=frame)
            ob.keyframe_insert(data_path="rotation_euler", frame=frame)
    else:
        bpy.ops.object.duplicate()
As the “else” branch only occurs once in my test.txt file (there is only one “@”), the reading of the rows and the adding of data into the list for my first object must be the most time-consuming (or computationally heavy) part of the code, i.e. this part:
if elems[0] != "@":
    anim.append((float(elems[0]), float(elems[1]), float(elems[2]), float(elems[3]),
                 float(elems[4]), float(elems[5]), int(elems[6])))
    ob = bpy.context.object
    for x, y, z, x_rot, y_rot, z_rot, frame in anim:
        ob.location = (x, y, z)
        ob.rotation_euler = (x_rot, y_rot, z_rot)
        ob.keyframe_insert(data_path="location", frame=frame)
        ob.keyframe_insert(data_path="rotation_euler", frame=frame)
My question is: could you help me optimize this snippet to take less computational time, perhaps by eliminating unnecessary loops or by using different functions entirely?
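One idea I had, but have not managed to verify, is that the inner keyframe loop runs over the whole anim list again for every new row that is read, so the work grows with the square of the number of rows. If each row only needs to be keyframed once, maybe the loop could be flattened so every row is keyframed as it is read, roughly like this (only a sketch under that assumption; I am not sure it keeps the duplicate behaviour correct):

import bpy

path = 'C:\\Users\\Inger\\Desktop\\Input\\test.txt'
bpy.context.active_object.animation_data_clear()  # clear existing keyframes
ob = bpy.context.object

with open(path, 'r') as f:
    for row in f:
        elems = row.split(",")
        if elems[0].strip() == "@":
            bpy.ops.object.duplicate()   # the duplicate becomes the active object
            ob = bpy.context.object
            continue
        x, y, z = float(elems[0]), float(elems[1]), float(elems[2])
        x_rot, y_rot, z_rot = float(elems[3]), float(elems[4]), float(elems[5])
        frame = int(elems[6])
        ob.location = (x, y, z)
        ob.rotation_euler = (x_rot, y_rot, z_rot)
        ob.keyframe_insert(data_path="location", frame=frame)
        ob.keyframe_insert(data_path="rotation_euler", frame=frame)

Would something along those lines be the right direction, or is there a better way?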
I will attach an example file of my data that I need to read so you can test this out for yourself if necessary.
Attachments
Script and data.zip (20.7 KB)