Rendering paths of moving many small spheres, disks, or particles

I have a math-art project I’m working on in which I’m trying to do a 3D version of graphics stuff I’ve done a lot of in 2D using Java. I’m trying to find out what 3D modeler can do what I want via scripting rather than manually. I’ve done a few simpler things in Blender using Python scripts and that’s the route I’d like to use here.

The basic thing is I have points that are coupled to one another in such a way that they each trace out a different curve as a single parameter in the system is changed. In 2D each point is given some shape, typically a small circle, and as the parameter changes each of these traces out its own curve (each frame is just a point cloud, but compositing the frames gives the path traces). I can control the color of each trace, the diameter of the shape, etc. to end up with a wild collection of these curves. I typically want a lot of them, so there are usually anywhere from 10 to 10,000 of these curves rendered into a single image.

I want to do this in 3D because the equations governing these are easily extendable to 3D, and a 3D animation of the curve set developing would be great for the animations I sometimes make to go along with talks about this kind of thing.

I am not far beyond newbie level in Blender, and I can’t seem to find out (a) whether I can do this using Blender, and (b) what the best way would be if it is possible.

Naively I imagine thin tubes being created as the trace of small spheres moving under the influence of the equations. I’m not really clear how to go about that, and I have a big concern that I would not be able to get a reasonable number of them (I’d be delighted with just 100 or so) without either crashing Blender or having to wait until I’m really old for a render to finish.

At first I thought maybe I could use Python to control a particle system down to the level of each particle’s path and somehow get finite-diameter traces out of their paths that could be rendered, but when I tried to look into that it seemed I was expecting too much.

I’m also willing to hear that I should consider using X instead of Blender for such a project (I was looking into 3ds Max last night), though I’d prefer Blender, as I like the price and that I can script pretty much anything it can do using Python. This is not a scripting question: I’m familiar enough with that part of using Blender, I just don’t know if Blender is a good choice for this.

In the end I want these “tubes” to be able to be semitransparent, or reflective, or some combination of the two rather than just opaque, but that’s down the road. For now I don’t know if I should be spending any time at all in Blender trying to do this particular thing.

Thanks a lot for any insight or suggestions anyone can offer.

Tom Bates

You could use Python to animate a series of spheres, running whatever code you need to make them move and storing the output as keyframes. Then you can render that with motion blur and composite many frames together for longer traces.
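As a rough sketch of that workflow: the `path_point` function below is a hypothetical stand-in for the coupled equations (points spaced around a rising circle), and the `bpy` import is guarded so the path math can be tried outside Blender.

```python
import math

try:
    import bpy
except ImportError:
    bpy = None  # lets the path math below run outside Blender too

def path_point(i, t):
    """Hypothetical stand-in for the coupled equations: position of point i
    at parameter value t (here, points spaced around a rising circle)."""
    phase = 2 * math.pi * i / 10
    return (math.cos(t + phase), math.sin(t + phase), 0.1 * t)

def keyframe_spheres(n_points=10, n_frames=100, dt=0.05):
    """One small sphere per point; keyframe each sphere's location per frame.
    Rendering with motion blur then smears each sphere along its path."""
    spheres = []
    for _ in range(n_points):
        bpy.ops.mesh.primitive_uv_sphere_add(radius=0.05)
        spheres.append(bpy.context.object)
    for frame in range(1, n_frames + 1):
        t = frame * dt
        for i, obj in enumerate(spheres):
            obj.location = path_point(i, t)
            obj.keyframe_insert(data_path="location", frame=frame)

if bpy is not None:
    keyframe_spheres()
```

Run inside Blender's scripting workspace; outside Blender it only defines the functions.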

You can also create curves point by point using python, and add or remove points for each curve for each frame to have longer curved geometry.
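A minimal sketch of that second route, assuming bpy's curve API (the helix sampler is an arbitrary stand-in for real data):

```python
import math

try:
    import bpy
except ImportError:
    bpy = None  # allows the point bookkeeping to run outside Blender

def sample_path(n, dt=0.1):
    """Stand-in path sampler: n points along a helix."""
    return [(math.cos(k * dt), math.sin(k * dt), 0.05 * k) for k in range(n)]

def make_trace_curve(name, points):
    """Create a poly-spline curve object through the given points."""
    curve = bpy.data.curves.new(name, type='CURVE')
    curve.dimensions = '3D'
    spline = curve.splines.new('POLY')
    spline.points.add(len(points) - 1)  # a new spline already has one point
    for p, (x, y, z) in zip(spline.points, points):
        p.co = (x, y, z, 1.0)           # poly points are 4D; w is usually 1
    obj = bpy.data.objects.new(name, curve)
    bpy.context.collection.objects.link(obj)
    return obj

def grow_trace(obj, new_point):
    """Extend an existing trace by one point (e.g. once per frame)."""
    spline = obj.data.splines[0]
    spline.points.add(1)
    x, y, z = new_point
    spline.points[-1].co = (x, y, z, 1.0)

if bpy is not None:
    make_trace_curve("trace", sample_path(20))
```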

Blender is definitely capable of this, it will just take some time to piece together the code to make it happen.

Thanks a lot SterlingRoth,

That sounds promising and I’m looking into this - an opportunity to learn about keyframes, motion blur and compositing, which I have almost zero experience with in Blender.

But I have also thought about an approach where I use Python to calculate the current tangent and normal to each developing path at intervals, use those to orient a polygon at the current point position representing a cross-section of the developing path, then make facets between each adjacent pair of polygons. Thus I would be growing a tube along each point’s path. My concern is that this might lead to so many facets (say, 500 points with 1000 locations along each point’s path) that Blender would be brought to a crawl or crash, which I’ve had happen with other experiments. Just wondering if you have any feel for which approach is likely to be better from a memory and render-time point of view. This method has the potential advantage for me that I’m currently far more experienced and comfortable with Python than with Blender.
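Roughly, the per-path frame bookkeeping I have in mind looks like this (pure Python, no Blender needed; central differences are my own assumption about how to approximate the derivatives):

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def tangents(points):
    """Unit tangents along a sampled path via central differences
    (falling back to one-sided differences at the endpoints)."""
    out = []
    for i in range(len(points)):
        a = points[max(i - 1, 0)]
        b = points[min(i + 1, len(points) - 1)]
        out.append(_normalize(tuple(bc - ac for ac, bc in zip(a, b))))
    return out

def normals(points):
    """Approximate unit normals as the normalized change in tangent.
    Degenerate on perfectly straight runs (the tangent never changes);
    a robust version would need a fallback direction there."""
    t = tangents(points)
    out = []
    for i in range(len(t)):
        a = t[max(i - 1, 0)]
        b = t[min(i + 1, len(t) - 1)]
        out.append(_normalize(tuple(bc - ac for ac, bc in zip(a, b))))
    return out
```

The cross-section polygons would then be placed in the plane spanned by the normal and the tangent-normal cross product at each sample.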

I wouldn’t worry about generating the tangents yourself. While you certainly could do that, it would be a fair bit of code that duplicates a lot of what Blender was really built to do.

Check out the below image:

On the left is a curve object with the interpolation mode set to Vector, and a handful of random points to demonstrate the concept.
In the middle, I set the interpolation mode to Automatic, which creates a much smoother path over the same handful of points.
On the right, I enabled bevel depth on the curve to give it thickness.

Those 2 steps are 2 clicks, and can be applied to multiple curves really easily, either manually or via Python. So if you can get a series of points out of Python, you can easily turn them into a smooth path. You could even generate your points externally and load them into Python via a .csv file, just to put some more options on the table.
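A sketch of those two steps from Python, plus the .csv route. The bevel settings and handle types are real curve properties; the three-column, no-header CSV layout is just an assumption for the example.

```python
import csv
import io

try:
    import bpy
except ImportError:
    bpy = None  # lets the CSV parsing run outside Blender

def load_points(csv_text):
    """Read x,y,z rows from CSV text into a list of float triples.
    (Assumes three numeric columns and no header row.)"""
    return [tuple(float(v) for v in row)
            for row in csv.reader(io.StringIO(csv_text)) if row]

def make_smooth_tube(name, points, thickness=0.02):
    """Bezier curve with automatic handles (the smooth interpolation above),
    given thickness via bevel depth (the second click above)."""
    curve = bpy.data.curves.new(name, type='CURVE')
    curve.dimensions = '3D'
    curve.bevel_depth = thickness   # tube radius
    curve.bevel_resolution = 4      # roundness of the tube cross-section
    spline = curve.splines.new('BEZIER')
    spline.bezier_points.add(len(points) - 1)
    for bp, co in zip(spline.bezier_points, points):
        bp.co = co
        bp.handle_left_type = 'AUTO'    # smooth path through the points
        bp.handle_right_type = 'AUTO'
    obj = bpy.data.objects.new(name, curve)
    bpy.context.collection.objects.link(obj)
    return obj

if bpy is not None:
    make_smooth_tube("tube", load_points("0,0,0\n1,0,0\n1,1,0.5\n"))
```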

Testing on a curve object with 6000+ handles, Blender seems fairly responsive. Duplicating that curve object 100 times, it still remains fairly responsive; you can feel it chugging a bit, but it’s nowhere near lockup/crash level. For reference, I have a decent but old system; it was high end 5 years ago.

Let me know if you have any questions and I’ll be happy to help.

Memory use and Blender crashing may well be the most important issue. I have written n-body simulations and rendered them with Blender. Blender keeps a list of objects, and one high-level question is whether you should use a workflow with n separate objects; the answer is probably no. You should use a single object and have your script act on its vertices (or control points if using curves), because, as @SterlingRoth mentions, depending on the system, using a lot of objects will crash Blender. Once you reach hundreds of objects, degradation sets in.
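In code, the single-object route can put every trace into one mesh, with edges linking consecutive samples of each trace. The edge bookkeeping below is plain Python; `from_pydata` is the bpy call that builds the mesh, and the import is guarded so the bookkeeping runs outside Blender.

```python
try:
    import bpy
except ImportError:
    bpy = None

def trace_edges(n_traces, samples_per_trace):
    """Edge index pairs connecting consecutive samples within each trace,
    assuming vertices are stored trace-by-trace in one flat list."""
    edges = []
    for t in range(n_traces):
        base = t * samples_per_trace
        for s in range(samples_per_trace - 1):
            edges.append((base + s, base + s + 1))
    return edges

def build_trace_mesh(verts, n_traces, samples_per_trace):
    """All traces as one mesh object: one object in the scene, not n."""
    mesh = bpy.data.meshes.new("traces")
    mesh.from_pydata(verts, trace_edges(n_traces, samples_per_trace), [])
    obj = bpy.data.objects.new("traces", mesh)
    bpy.context.collection.objects.link(obj)
    return obj
```

Edge-only meshes render nothing by themselves, but a Skin or Geometry Nodes setup (or converting to curves) can give them thickness.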

Very true. I was working on a python based astrology chart animation:

Generating those lines in between the moving icons worked great until I got about 20 frames deep; then Blender started chugging hard. Even though I was deleting the lines on frame changes, the data still stayed in memory. I ended up solving it by reusing the lines for each frame, just changing their geometry and materials, so I didn’t generate more and more objects every frame.
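A sketch of that reuse pattern with a frame-change handler. The per-frame geometry function is a stand-in, and the `"lines"` object name is assumed to exist already; `clear_geometry` and `frame_change_pre` are the real bpy hooks.

```python
try:
    import bpy
except ImportError:
    bpy = None

def line_verts_for_frame(frame):
    """Stand-in: whatever vertex positions the current frame needs."""
    return [(0.0, 0.0, 0.0), (float(frame), 1.0, 0.0)]

def update_lines(scene, depsgraph=None):
    """Rewrite the existing mesh in place instead of spawning new objects."""
    obj = bpy.data.objects["lines"]  # assumes this object already exists
    mesh = obj.data
    mesh.clear_geometry()            # 2.8x+: wipe verts/edges/faces in place
    mesh.from_pydata(line_verts_for_frame(scene.frame_current), [(0, 1)], [])

if bpy is not None:
    bpy.app.handlers.frame_change_pre.append(update_lines)
```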

If you have more than ~2000 objects, you start hitting performance problems. It’s definitely something to consider.