Animating multiple objects from a text file

I’m trying to use Blender as a visualisation tool for another program. I have a text file which has the positions of multiple objects over many frames. The format is basically this:

 
Object 1 x,y,z frame 1
Object 2 x,y,z frame 1
Object 3 x,y,z frame 1
Object 1 x,y,z frame 2
Object 2 x,y,z frame 2
Object 3 x,y,z frame 2

And so on. (In fact what I have is a list of 250 random positions, just as proof of concept that the script will work.) So I am trying to do the following:

  1. Use the first 25 lines of the file to generate 25 spheres at the correct positions and set IPO keys. This bit works!
  2. Advance the frame (OK!) and use the next 25 lines to update the object positions.
  3. Repeat 2 until done.
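In plain Python (ignoring Blender for a moment), the reading logic I’m after is something like this sketch, where `nh` is the number of objects per frame as in my script:

```python
def read_frames(lines, nh):
    # Group a flat list of lines into frames of nh lines each.
    frames = []
    for start in range(0, len(lines), nh):
        frames.append(lines[start:start + nh])
    return frames

# 6 lines, 3 objects per frame -> 2 frames
lines = ['o1 f1', 'o2 f1', 'o3 f1', 'o1 f2', 'o2 f2', 'o3 f2']
frames = read_frames(lines, 3)
assert len(frames) == 2
assert frames[1] == ['o1 f2', 'o2 f2', 'o3 f2']
```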

I’m completely stuck on stage 2. Not knowing quite how to have Blender select the appropriate object to update (perhaps an array of object names generated along with the meshes? Haven’t figured this out yet either!), I tried to have it just update one sphere, so it should be getting its positions from lines 1, 26, 27, 28… and so on. What it actually comes back with is the positions from lines 1, 50, 75, 100…, which is obviously not right. I think there are probably simple ways to do this, I just can’t figure out what. The current script is below.
Hope someone can figure out what I’ve done wrong… I can attach the text file if need be.

import Blender as B
from Blender import *
def create_meshes( line ):
 # chop up line into name and coordinates
 
 x, y, z, r, XR, YR, ZR = line.split()
 scn = B.Scene.GetCurrent()
 subD=2
 diameter = float(r)*2
 me=Mesh.Primitives.Icosphere(subD,1.0)
 object=scn.objects.new(me,'Halo')
 object.setLocation(float(x), float(y), float(z))
 object.insertIpoKey(B.Object.LOC)
 object.setSize(diameter, diameter, diameter)
 object.insertIpoKey(B.Object.SIZE)
 object.Layer=1
 
 diameter=float(r)
 me=Mesh.Primitives.Plane(1.0)
 ob=scn.objects.new(me,'Disc')
 ob.setLocation(float(x), float(y), float(z))
 ob.setSize(diameter, diameter, diameter)
 ob.Layer=2
 ob.RotX=(float(XR))
 ob.RotY=(float(YR))
 ob.RotZ=(float(ZR))
 ob.insertIpoKey(B.Object.LOC)
 ob.insertIpoKey(B.Object.SIZE)
 
def move_meshes( line ):
 x, y, z, r, XR, YR, ZR = line.split()
 scn = B.Scene.GetCurrent()
 #for n in range (1,nh+1):   (This will be necessary to do all objects)
 ob=B.Object.Get('Halo')   # Somehow have to read in appropriate objects
 ob.setLocation(float(x), float(y), float(z))
 ob.insertIpoKey(B.Object.LOC)
 ob.setSize(float(r), float(r), float(r))
 ob.insertIpoKey(B.Object.SIZE)
 
 
#
# main
#
infile = open('250Halos2.txt', 'r')
nh = 25  #No. halos
nf = 10  #No. frames to read in
mf = 10   #Blender frame increment per frame increment of text file
frame=B.Get('curframe')
if frame == 1:      # Must be on frame 1 for initial conditions
 try:
  for i in range (1,nh+1):  # nh+1 due to silly Python convention
   line = infile.readline()
   create_meshes( line )  # Use first nh lines to create meshes
 except Exception, e:
  print 'Oops!', e
 
# This part reads in the next nh lines nf times(not nf+1 since have already
# done once)
 
for n in range(1,nf):    
 B.Set("curframe",frame+mf)
 frame=frame+mf
 try:
  for i in range (1,nh+1):  # Starts from next line after the last read.
   line = infile.readline()
   move_meshes( line )  # Use appropriate lines to move meshes
 except Exception, e:
   print 'Oops!', e
 
B.Redraw(-1)
 
# Pseudocode :
# Frame 1. Create nh spheres and planes of correct size and set IPOs.
# Halos are on layer 1 and discs are on layer 2. This all works.
# Next get all spheres and planes and update positions based on next nh
# lines in text file. For now just deals with first sphere.
# Since readline should remember previous line, for loop should work as
# is.
# Problems : Find what lines are actually being used for coords.
# 1, 50, 75, 100, 125, 150, 175...

You have to address each of the individual Objects with its frame and position.

Setting the global frame does not affect the IPO frame for an Object.
Likewise, setting the Object size in a given frame does not set the IPO values for that frame.

Also, you should perhaps use an array to store the Objects when you create them, and return this array from the creation process.
Afterwards you have to address the IPO of each individual Object directly.
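This also explains the 1, 50, 75, 100 pattern you see: every call in your inner loop keys the same single ‘Halo’ Object at the same frame, so only the last of the 25 lines read for that frame survives. A plain-Python sketch of the effect:

```python
# Simulate keying one object 25 times per frame: later keys at the
# same frame overwrite earlier ones, so only the last line "sticks".
keys = {}            # frame -> line number whose position was keyed
keys[1] = 1          # frame 1: sphere created from line 1
line_no = 25         # lines 1-25 were consumed creating the meshes
for frame in (2, 3, 4):
    for i in range(25):          # inner loop reads 25 lines
        line_no += 1
        keys[frame] = line_no    # same object, same frame: overwrite
assert [keys[f] for f in (1, 2, 3, 4)] == [1, 50, 75, 100]
```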

Maybe these code snippets will help you.


def getObjIpo(objn, name):
        ipo = objn.getIpo()
        erg = None
        if ipo == None:
                ipo = Ipo.New('Object',name)
                objn.setIpo(ipo)

        for v in ipo:
                if (v.getName() == name):
                        erg = v
        if erg == None:
                erg = ipo.addCurve(name)
        return erg

def delIpoCurve(objn, name):
        ipo = objn.getIpo()
        if ipo != None:
                erg = None
                for v in ipo:
                        if (v.getName() == name):
                                erg = v
                if erg != None:
                        ipo.delCurve(name)

def ipoSetPos(obj,t,x,y,z):
        erg = getObjIpo(obj, "LocX")
        erg.append((t,x))
        erg = getObjIpo(obj, "LocY")
        erg.append((t,y))
        erg = getObjIpo(obj, "LocZ")
        erg.append((t,z))

def ipoClearPos(obj):
        delIpoCurve(obj, "LocX")
        delIpoCurve(obj, "LocY")
        delIpoCurve(obj, "LocZ")
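Conceptually, each Ipo curve here is just a named list of (frame, value) points. Outside Blender the same idea looks like this (plain dicts standing in for the Ipo objects, just as illustration):

```python
def get_curve(curves, name):
    # Return the curve for `name`, creating it if missing
    # (plain-dict stand-in for getObjIpo).
    if name not in curves:
        curves[name] = []
    return curves[name]

def set_pos(curves, t, x, y, z):
    # Append one (frame, value) point per location channel,
    # like ipoSetPos does for LocX/LocY/LocZ.
    for name, v in (('LocX', x), ('LocY', y), ('LocZ', z)):
        get_curve(curves, name).append((t, v))

curves = {}
set_pos(curves, 1, 0.0, 0.0, 0.0)
set_pos(curves, 11, 2.0, 3.0, 4.0)
assert curves['LocX'] == [(1, 0.0), (11, 2.0)]
assert curves['LocZ'][1] == (11, 4.0)
```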

Setting the global frame does not affect the IPO frame for an Object.
Likewise, setting the Object size in a given frame does not set the IPO values for that frame.

Are you certain? I don’t think that’s true. When I try animating only one sphere, all on its own, from the same file by advancing the global frame and setting the IPO as in the above script, it works as expected. An IPO curve is generated where the position at every 10th frame corresponds to each line in the text file. I should mention I advance by 10 frames each time instead of 1, as I want Blender to interpolate positions, which it does quite nicely.

Also, you should perhaps use an array to store the Objects when you create them, and return this array from the creation process.

I’d like to but I don’t know how ! :o

Maybe ‘insertIpoKey’ does what you want, but it seems to me not to be the proper way to do this (in fact, I didn’t find this function in the API description; where did you get it from?)

But you still fail to address the Object.

untested



def create_meshes( line ):
  ...
  erg = []
 ...
  erg[] = Object
 ...
 return erg

def move_meshes( objectsInArray, line ):
 ...
 #ob=B.Object.Get('Halo') # is void
 ob = objectsInArray[index]


You can also give each Object a unique name like ‘Halo-125’ and address it by that name.

Sorry, I can’t remember where I found insertIpoKey, but it seems nice and intuitive to me.

Still having problems trying to work with a list of objects. I’ve simplified the script for testing. What I think this should do is create 25 spheres from the first 25 lines, then update their positions from the subsequent lines:

import Blender as B
from Blender import *
def create_meshes( line ):
 global index
 x, y, z, r, XR, YR, ZR = line.split()
 scn = B.Scene.GetCurrent()
 subD=2
 diameter = float(r)*2
 me=Mesh.Primitives.Icosphere(subD,1.0)
 object=scn.objects.new(me,'Halo')
 object.setLocation(float(x), float(y), float(z))
 object.insertIpoKey(B.Object.LOC)
 object.setSize(diameter, diameter, diameter)
 object.insertIpoKey(B.Object.SIZE)
 object.Layer=1
 erg[index] = object
 
def move_meshes( line ):
 global nh
 global n
 x, y, z, r, XR, YR, ZR = line.split()
 scn = B.Scene.GetCurrent()
   
 ob = erg[n]
 ob.setLocation(float(x), float(y), float(z))
 ob.insertIpoKey(B.Object.LOC)
 ob.setSize(float(r), float(r), float(r))
 ob.insertIpoKey(B.Object.SIZE)
  
#
# main
#
infile = open('250Halos2.txt', 'r')
nh = 25  #No. halos
nf = 10  #No. frames to read in
mf = 10   #Blender frame increment per frame increment of text file
index = 1
erg=[]
frame=B.Get('curframe')
if frame == 1:     
 try:
  for i in range (1,nh+1):  
   line = infile.readline()
   create_meshes( line )
   index = index + 1  
 except Exception, e:
  print 'Oops!', e
  
# This part reads in the next nh lines nf times(not nf+1 since have already
# done once)
 
for n in range(1,nf):    
 B.Set("curframe",frame+mf)
 frame=frame+mf
 try:
  for n in range (1,nh+1):
   line = infile.readline()
   move_meshes( line ) 
 except Exception, e:
   print 'Oops!', e
  
B.Redraw(-1)

erg[] = object gives a syntax error. I assumed there has to be an index number in the brackets. I tried making erg global to keep things simple (I tried it as you suggested but that didn’t work either). But all the script does now is create the first sphere and give the error ‘list index out of range’, though somehow the frame counter still advances to 91. :spin:

insertIpoKey() is a member function of the Object class.

When calling insertIpoKey it expects a value from the IpoKeyType dictionary, so:


...
object.insertIpoKey(Object.IpoKeyTypes.LOC)
...
object.insertIpoKey(Object.IpoKeyTypes.SIZE)

You may need to have an Ipo curve assigned to the object. I’m not sure if blender will automagically create an Ipo if you don’t have one assigned to the object and try to insert an ipo key. If it does not create the Ipo curve automatically then it will have to be done manually and linked to the object (not very hard, like making an object and linking to a scene).

You can eliminate the need for the index variable by using


def create_meshes( line ):
     global erg
     ...
     ...
     erg.append(object)

The only difference from your code would then be that everything is referenced 0-24 instead of 1-25. If you still wanted everything referenced 1-25, you could initialize erg as erg = [0]. That creates a single element at position 0, so that your objects will sit at 1-25.
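A quick plain-Python illustration of why append works where indexed assignment into an empty list does not (the names here are just for the example):

```python
erg = []
try:
    erg[1] = 'Halo'          # assigning past the end of a list fails
except IndexError:
    pass                     # this is the 'list index out of range' error

erg = [0]                    # dummy element at position 0
for name in ('Halo.001', 'Halo.002'):
    erg.append(name)         # objects now live at indices 1, 2, ...
assert erg[1] == 'Halo.001'
assert erg[2] == 'Halo.002'
```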

In your main “function” (we’ll call it that), you have a for loop:


     for i in range(1, nh+1):

…which becomes:

    for i in xrange(nh):

xrange objects are easier on memory (at least when building large lists) since they don’t produce a list filled with numbers, but rather are iterable objects themselves.

Next up in your code:


for n in range(1,nf):    
 B.Set("curframe",frame+mf)
 frame=frame+mf

It’s slightly more efficient (not touching everything that could be done with this statement) to write it as:

for n in xrange(1,nf):
 frame = frame + mf
 B.Set("curframe", frame)

This way, you are only calculating frame+mf once (saves a cpu cycle).

I think there were a couple other things I noticed, but can’t recall them right now…

Not 100% sure why exactly, but it works ! Many thanks !

I appreciate the optimisation advice too, I may eventually try this on vast numbers of particles.
(BTW, insertIpoKey doesn’t need to have an existing IPO curve to assign to, it works without any)

OK, everything’s working quite nicely now, but I’d like to get it optimised a bit. Ideally, I would like to import about 20,000 particles over 100 frames, about 10 times the amount of data I’m currently working with. At the moment it takes about 6min30s to run, but the first 100 frames only take about 1 or 2 seconds before it rapidly slows to 1 second per frame! I can’t see why this should be. Also, when I save the file it is over 80mb in size, which seems crazy; the text file read in is only 16mb. Finally, if I set it to read in 2500 particles over 100 frames, instead of the current 250 over 1000, it only takes 2min40s.

Below is the latest version; it only creates points instead of spheres. I found a thread on speeding up Python and tried some of the suggestions. I installed Psyco 1.5.2 and Python 2.5.1. Blender finds Python but I’m not sure about Psyco; I got some strange error messages when installing, but it seemed to finish OK.

The only thing I can think of at the moment is to remove the unnecessary columns from the text file; I really only need the first 3. Not sure how much this will help though, as if I simply read in the file and do nothing else, it finishes instantly, so it probably isn’t a read-in speed problem.
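Then again, the unused columns could just be ignored at parse time by slicing the split, so the file itself wouldn’t even need editing (the line below is made up to match my format):

```python
# x y z followed by unused columns; only the first three are kept
line = '1.0 2.0 3.0 0.5 0.1 0.25 0 0'
x, y, z = [float(v) for v in line.split()[:3]]
assert (x, y, z) == (1.0, 2.0, 3.0)
```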

import Blender as B
from Blender import *
from Blender.Mathutils import Rand
def create_meshes( line ):
 global SList
 global DList
 x, y, z, a, b, r, d, e = line.split()
 scn = B.Scene.GetCurrent()
 p = [0,0,0]
 verts = [p]
 ob = B.Object.New('Mesh', 'MeshOb')
 me = B.Mesh.New('myMesh')
 me.verts.extend( verts )
 ob=scn.objects.new(me,'Halo')
 ob.setLocation(float(x), float(y), float(z))
 ob.Layer=1
 ob.insertIpoKey(B.Object.LOC)
 SList.append(ob)
 
def move_meshes( line ):
 global nh
 global n
 x, y, z, a, b, r, d, e = line.split()
 scn = B.Scene.GetCurrent()
 
 ob = SList[n]
 ob.setLocation(float(x), float(y), float(z))
 ob.insertIpoKey(B.Object.LOC)
 ob.setSize(float(r)*2, float(r)*2, float(r)*2)
 ob.insertIpoKey(B.Object.SIZE)
try:
   import psyco
   psyco.full()
except:
   pass
infile = open('File.txt', 'r')
nh = 250  # No. halos
nf = 1000  # No. frames to read in
mf = 1   # Blender frame increment per frame increment of text file
SList=[0] # Array that will contain all the sphere objects
DList=[0] # Array that will contain all the 'disc' objects
frame=B.Get('curframe')
if frame == 1:     
 try:
  [create_meshes( infile.readline() ) for i in range (1,nh+1)]   
 except Exception, e:
  print 'Oops!', e
 
# This part reads in the next nh lines nf times(not nf+1 since have already
# done once)
 
for i in range(1,nf):    
 frame=frame+mf
 B.Set("curframe",frame)
 try:
  [move_meshes( infile.readline() ) for n in range (1,nh+1)]
 except Exception, e:
   print 'Oops!', e
 print i
 
B.Redraw(-1)

On file size: Due to how your script works, it can be expected that the file size will be larger. Think about the overhead difference between an object and a vertex. A good representation would be the difference in members between an mvert class and object class in python. Objects have many, many more members than a vertex, and therefore require more memory to hold them. On top of that there is storage of hundreds or thousands of ipos to go with them, which must store data points, etc. Files can get large.

Now to specific points in your code:
In the create_meshes function, you assign “p” to another list called “verts”. You don’t need to do this. You can go ahead and use “p” as the argument to the extend function.

Not sure if you tried out xrange and got worse results, but it could reduce memory costs while running (which can reduce time, since things aren’t having to be stored in memory as much).

On two occasions you have something like



try:
     [functioncall for n in range(1, nh+1)]


That creates a list but never assigns it to anything. Try doing this instead



try:
    for n in xrange(1, nh+1):
         functioncall

See if any of those suggestions help.
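To make that last point concrete: a comprehension always builds and returns a list, even when you only want the calls for their side effects:

```python
calls = []
def f(i):
    calls.append(i)          # side effect; f returns None

result = [f(n) for n in range(1, 4)]  # builds a throwaway list
assert result == [None, None, None]   # one None per call, all discarded
assert calls == [1, 2, 3]

calls = []
for n in range(1, 4):                 # same side effects, no list built
    f(n)
assert calls == [1, 2, 3]
```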

I realised I still had a setSize IPO in there; taking it out reduced the time to about 3.5 minutes and the file size to 40mb. I’m not clear on why the file is even that large though. The file being read in has some unnecessary columns and is only 16mb in size. Surely Blender can’t be storing any more information than is in there, since it’s only getting its information from the file?

I tried replacing :
me.verts.extend( verts )
with
me.verts.extend( p )
and removing verts = [p], but Blender didn’t like this, I kept getting an index out of range error.

Tried changing the functions as you suggested, but it doesn’t seem to make any difference.

I noticed when trying 10 times the particles and 10 times less frames that the main slowdown occured generating the meshes, thereafter updating their positions was relatively fast. Is it possible to have python duplicate one mesh in the initial frame, rather than creating new ones, and would this make any difference ?
Thanks again.

Still not sure about the filesize issue. The thing you can be sure of is that blender is storing considerably more data than is in your 16mb file. Objects have 116 data members, which each object is stored with. I’m not sure if each object stores only what it needs or not, though, so it may be something you just have to live with. This is not to mention data stored in the ipos (which is probably the major culprit) and mesh data (probably minimal).

There is a duplicate() function that can be called by objects. As for whether it would be significantly faster, I’m not sure. There would be somewhat less work being done by Python and more by the Blender internal code, so it may be faster. The worst thing that could happen is you try it and it’s slower, so you wind up going back to the way you are doing it now.

I can probably live with large file sizes, although I would like to run this at uni, where disk space is more limited.

I ran a few more tests. Just as it slows down moving the meshes as the frame advances, even though the operation is exactly the same, so it also slows down creating the meshes initially. I don’t understand why this should be. The file isn’t THAT large. Why should creating, say, the 2300th mesh (which is, after all, only 1 vertex) take much, much longer than creating the very first? I guess I can understand that moving 250 meshes in the 1,000th frame might take a bit longer than in the first frame since the file has more information to handle, but still, it ain’t that big.

I think understanding why things slow down after an initial fast burst is the key; since this seems consistent it would probably happen when using Duplicate instead (haven’t quite got that working yet, so I can’t test). I don’t think it can be array size getting large, since the arrays aren’t that big when only using 250 particles. I also don’t think it’s file size, as running the script twice in the same file doesn’t make it any slower. It’s definitely not due to reading in the file, since if I just read it in and do no other operations it’s able to read in 250,000 lines almost instantly (which is rather impressive).
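One guess I have (not verified against Blender’s internals): if each insertIpoKey has to scan or re-sort the whole curve to place the new point, the cost per key grows with the number of keys already present, which would give exactly this “fast at first, then crawling” profile. A plain-Python sketch of that quadratic behaviour:

```python
def insert_sorted(curve, frame, value):
    # Linear scan to keep keys ordered by frame: O(n) per insert.
    steps = 0
    i = 0
    while i < len(curve) and curve[i][0] < frame:
        i += 1
        steps += 1
    curve.insert(i, (frame, value))
    return steps

curve = []
total = 0
for frame in range(1, 1001):      # keys arrive in increasing frame order,
    total += insert_sorted(curve, frame, 0.0)  # worst case for the scan
assert len(curve) == 1000
assert total == 999 * 1000 // 2   # total work grows quadratically
```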

Thanks for any ideas.

I found something interesting. As a test I rewrote the script to use the first 25,000 lines of the file for the positions of vertices in a single mesh, then the next 25,000 for a new mesh, and so on. This is not only blindingly fast (27 seconds!) but the file size is only 4mb!

Also I tried manually making 50 objects with IPOs with 1000 keyframes each (not so hard with some duplication). The file size is consistent with the 40mb generated by the Python script for 250 animated objects. Still, I find it hard to understand why the .blend should be 10 times smaller by storing the same data in a slightly different form.

I suppose what I could probably do is to IPO each 25,000-vertex object so that it only appears on one particular layer for one particular frame. That would give the same effect as animating 25,000 individual objects. It would, however, lose several nice features, such as being able to follow individual particles by selecting them, interpolation between positions so that the .blend animation can last longer than the actual simulation, and being able to see individual particle “tracks” with the K key (which shows an object’s position for all its keyframes).

On the other hand… it’s probably not necessary to interpolate positions, and viewing tracks could probably be reconstructed by rearranging or reading the data in a different way. Still, if anyone has any clever ideas as to why this version of the script is so much faster, let me know !

import Blender as B
from Blender import *
def create_mesh():
 global infile
 scn=B.Scene.GetCurrent()
 me=B.Mesh.New('myMesh')
 
 for i in range(25000):   # one vertex per line of the file
  line = infile.readline()
  x,y,z,a,b,r,d,e=line.split()
  me.verts.extend(float(x),float(y),float(z))
 
 ob=scn.objects.new(me,'Halo')
 ob.Layer=1

infile=open('Rhys.txt','r')
for i in range(10):   # 10 meshes x 25,000 verts = 250,000 lines
 create_mesh()
B.Redraw(-1)