An idea

This is not related to Blender, and I may not be the first person to have thought about this, but here we go.

If each camera had an accelerometer and we could somehow feed its data into tracking software, wouldn't it make the camera tracking process much simpler and more effective?

Accelerometer data is too imprecise to get much meaningful information out of it for tracking. It could be used for seeding the solver, though. When exact data is needed, motion control rigs are usually used; they give accurate motion data and can also repeat motions with high precision.

Well… actual Augmented Reality systems like ARKit and ARCore do that: they mix visual information with the integrated IMU information to do real-time tracking and build a virtual world. So an IMU may simplify some things, like detecting incoherent movements and discarding implausible solutions (weird solutions for humans but mathematically plausible), but I'm not sure it could have such a big impact.
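To give a flavour of that fusion idea, here is a toy sketch (my own illustration, not ARKit/ARCore internals, which use far more sophisticated visual-inertial odometry): blend a smooth-but-drifting IMU estimate with a noisy-but-drift-free visual estimate of the same camera angle.

```python
# Toy complementary filter, illustration only.
def fuse_angle(visual_deg, imu_deg, alpha=0.98):
    # alpha near 1 trusts the fast IMU short-term; the small visual share
    # slowly corrects the IMU's accumulated drift.
    return alpha * imu_deg + (1.0 - alpha) * visual_deg

# Example: the IMU has drifted to 31.0 deg while the tracker says 30.2 deg.
print(fuse_angle(30.2, 31.0))  # -> 30.984, nudged back toward the visual fix
```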

Could it help? Maybe. Could it help A LOT? I'm not sure; maybe a specialist developer can chime in and add their two cents.

Cheers!

There are many sensor-data apps on the Android Play Store that send the readings via UDP or TCP/IP, whatever it may be… the main problem is that a script for Blender is not available. They provide a Python script. Does anybody know how to adapt it so Blender can use it to control the Blender camera?
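As a starting point, something along these lines might work (a minimal sketch, assuming the app streams comma-separated rotation values over UDP; the port number and packet format are placeholders you would need to match to whichever app you use):

```python
import socket
import bpy

PORT = 5555  # hypothetical; set to whatever your sensor app sends to

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))
sock.setblocking(False)  # never block Blender's UI waiting for a packet

def poll_sensor():
    cam = bpy.context.scene.camera
    try:
        data, _addr = sock.recvfrom(1024)
        # assumed packet format: "rx,ry,rz" in radians
        rx, ry, rz = (float(v) for v in data.decode().split(","))
        cam.rotation_euler = (rx, ry, rz)
    except BlockingIOError:
        pass  # no packet this tick
    return 0.05  # re-run in 50 ms

bpy.app.timers.register(poll_sensor)
```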

I made a new node for Animation Nodes that measures and outputs the speed and acceleration of any moving object in Blender; here is the node tree for one project:

Detail of my Speed Acceleration Node:

How to get it and where to put it can be found on this page of my website, if anyone is interested.

Here is the output being used to show the speed, altitude and direction of a glider; the speed comes from my node. I am not using the acceleration in this example:

As for feeding data in to animate objects, this could be done from a flat ASCII file quite simply in Animation Nodes (with all the caveats about how good the data would be accepted). I do this type of work to read MIDI files and feed them directly into animations of keyboards, for example; see other parts of my website, like the bit on my MIDI nodes. I see no reason why we cannot use the same principles to get coordinate data from a real camera to animate a virtual one. It doesn't have to be AN; it could just be a normal script, as my MIDI stuff was to start with.

Cheers, Clock. :beers:

EDIT:

Data of the format:

x-coord, y-coord, z-coord, x-rotation, y-rotation, z-rotation, timestamp

One record per timestamp in a sequential file could easily be read and used to keyframe an object, or even to make F-Curves stored in the blend file, as my MIDI stuff does.
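A minimal sketch of reading such a file and keyframing the camera (assuming rotations in radians, timestamps in seconds, and a placeholder file path):

```python
import bpy

FPS = bpy.context.scene.render.fps

def keyframe_from_file(obj, path):
    with open(path) as f:
        for line in f:
            x, y, z, rx, ry, rz, t = (float(v) for v in line.split(","))
            frame = t * FPS  # timestamp (seconds) to frame number
            obj.location = (x, y, z)
            obj.rotation_euler = (rx, ry, rz)
            obj.keyframe_insert("location", frame=frame)
            obj.keyframe_insert("rotation_euler", frame=frame)

keyframe_from_file(bpy.context.scene.camera, "/path/to/track_data.txt")
```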

Ehm, @clockmender, have you even read the first post? It was about an accelerometer in cameras for use in camera tracking, not object acceleration in Blender.

Well, how do I respond to that? Oh yes, I have thought a little:

The first part of my response demonstrated an understanding of how to derive acceleration from motion; it is safe to assume I can work the other way around, since the equations can be rearranged.

Q1) Do you have similar knowledge?

Q2) Did your post demonstrate that knowledge?

The second part of my post demonstrated an ability to use large complex external data to animate motion. Both parts demonstrated my ability to code this into Blender.

Q3) Do you have similar knowledge?

Q4) Did your post demonstrate that knowledge?

Q5) Did you read and understand all of my post, including its implications for solving this issue, or demonstrating that such data could not be used?

Q6) Have you provided information, or opinion, contributing meaningfully to the discussion, i.e. a route to a solution?

As the answer to each of these questions is apparently, in its simplest form, a two-letter word, I would offer this piece of historically quoted advice, somewhat paraphrased:

“It is sometimes better to remain silent and let the world think you ignorant of an issue than to speak and remove all doubt.” This might be reinterpreted as: “If you have nothing meaningful to contribute, contribute nothing.”

Despite your remark being grammatically incorrect English, I was able to decipher its meaning and, more easily, its unfortunate innuendo. This begs the question: how does the remark add meaningfully to the discussion and help to resolve the issue, and what was its purpose?

In the past I have written my most barbed remarks in Latin to spare the feelings of the recipient, this time I felt that offered no tangible benefit.

Cheers, Clock. :beers:

PS. To those who do understand the issue:

Can someone provide me with some raw camera accelerometer data, in flat text format, that I might use to further develop the work I started yesterday on this issue:

  1. Well yeah, I am not a native English speaker, but I think my English is fairly understandable.
  2. Sorry, I didn’t read your post well enough; it seems you did read the first post.
  3. Anyway, I think @Bone-Studio made a good point. The idea of using the accelerometer data to assist the camera tracking solver is a good one. Your idea of animating the camera in Blender with that data could work in theory, but I too think that the accelerometer data isn’t precise enough to prevent sliding in the final composited footage (at least when only this data is used).

Apology graciously accepted. :smiley:

I think it is still worth going through the exercise with the camera data; I anticipate one of three outcomes:

  1. We cannot use this data to animate the camera in any meaningful way, in which case we must look elsewhere.

  2. We can use the data to animate the camera and all is fine, although this is unlikely to be a complete solution.

  3. The animated camera might well form the basis on which we can proceed to fine-tune the camera tracking so it follows the original footage better than the tracking solver alone does.

I suspect, like you, that the accelerometer data is too imprecise to be entirely useful. This may, however, be down to the interpolation of the animation F-Curves between the known points of the camera track; they could be “tweaked”, but then we still wouldn’t have a good solution. Another thought: is it possible to get the track data from a camera dolly to use as a test exercise? Does anybody out there in “film-land” have any such data?

So, if I can get some raw accelerometer data and the footage from the camera to work with, I can develop this idea further and see where we get. At the moment I have no idea how this data is formatted: is it purely rotational and locational accelerations against time, or actual locations and rotations against time? Either way I can work with it. Ideally I would like locations and rotations against time, but either can be used to animate a camera, or even a camera “holder”. It would also be good to have focal length information against time, as varying the focal length will affect the tracking of the camera to the footage. I have sketched the maths for the acceleration case below.
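For the accelerations-against-time case, here is a minimal sketch of the double integration involved (my own illustration, assuming a fixed sample interval dt and known starting velocity and position; with real IMU data the drift accumulates quickly, which is why this would only seed a solver rather than replace it):

```python
def integrate_accel(samples, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
    """Turn (ax, ay, az) samples into positions by double integration."""
    v, p = list(v0), list(p0)
    prev = samples[0]
    positions = []
    for a in samples:
        for i in range(3):
            v[i] += 0.5 * (prev[i] + a[i]) * dt  # trapezoidal velocity step
            p[i] += v[i] * dt                    # simple position step
        prev = a
        positions.append(tuple(p))
    return positions

# Example: constant 1 m/s^2 along x for 1 s at 100 Hz ends near x = 0.5 m,
# matching x = a * t^2 / 2.
track = integrate_accel([(1.0, 0.0, 0.0)] * 100, 0.01)
print(track[-1][0])
```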

So, a flat text file with records that store camera location, rotation and focal length against time, combined with the footage, would be a great place to start.

Cheers, Clock. :beers: