Using technology that does motion capture, but only capturing the motion of the camera?

The goal would be to accomplish exactly what camera tracking does, but it would be less time-consuming and you’d never have bad tracks, even if your scene has no good track points or the camera moves too much.

I haven’t seen anyone do or talk about this, but I can’t imagine it’d be any harder than motion capture (I understand motion capture accomplishes a very different goal from this, but I think the process would be similar). In fact, it should be easier: instead of tracking so many points moving in relation to each other, you’d just need the motion of the camera in relation to a grounding point. Duct tape an Arduino or something onto the back of your phone and start filming.

But I don’t know anything about motion capture and I’ve never done it. With something this simple, I’m wondering if I could just buy a piece of hardware and download a Blender plugin. Can someone who knows more about this point me to exactly what hardware and plugin I’d need, and give me some basics about what my process might be?
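To make it concrete, this is roughly what I picture on the Blender side once I somehow have per-frame camera positions and rotations from whatever sensor I end up using. The CSV format and the script are just my guess at how it could work, not something any real hardware actually produces:

```python
# Rough sketch (hypothetical): read per-frame camera transforms from a CSV
# with columns frame, x, y, z, rx, ry, rz (rotation as XYZ Euler, radians)
# and keyframe the scene camera with them.
import csv
import bpy

def apply_camera_track(csv_path, camera=None):
    """Keyframe the scene camera from recorded transform samples."""
    cam = camera or bpy.context.scene.camera
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            frame = int(row["frame"])
            cam.location = (float(row["x"]), float(row["y"]), float(row["z"]))
            cam.rotation_euler = (float(row["rx"]), float(row["ry"]), float(row["rz"]))
            cam.keyframe_insert(data_path="location", frame=frame)
            cam.keyframe_insert(data_path="rotation_euler", frame=frame)

apply_camera_track("/tmp/camera_track.csv")  # path is just an example
```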

Try CamTrackAR for iOS.

Here’s a link on using it in Blender: https://youtu.be/1Rx4c9zPr_M
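If it gives you an FBX export (I think it can, but check the video for the exact export options), pulling the tracked camera into Blender from a script is roughly this; the file path and the imported camera's name are just placeholders:

```python
# Assumes the app exported the tracked camera as an FBX file.
import bpy

# Import the export; the tracked camera comes in as a keyframed object.
bpy.ops.import_scene.fbx(filepath="/path/to/camtrackar_export.fbx")

# Make the imported camera the active scene camera (name is a guess).
bpy.context.scene.camera = bpy.data.objects["Camera"]
```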

Thanks! That looks like something worth trying.

I kind of feel like an idiot, because in the research I did after asking my question I realized motion capture doesn’t work at all how I thought it did. I thought it used some kind of magical hardware to determine the relative positions of objects without any need for a camera, but now I see that the standard approach is optical (based on reflective or light-emitting markers and several cameras), so my solution wouldn’t have been as magical as I thought. What you’re proposing probably isn’t magical either, but it looks like a good thing to look into and might be as close as I can get.