Photogrammetry for VFX?

I saw this demonstration of flicker-free 3D photogrammetry https://www.youtube.com/watch?v=5Tia2oblJAg and, as someone who knows pretty much nothing about coding, photogrammetry, or deep learning, I was pretty much mind-blown. Not only did it make an amazing 3D reconstruction of the scene, it even adapted to movement and produced a perfect track of the camera!
I had no clue 3D photogrammetry had come so far. The last time I tried it was almost ten years ago with Autodesk's 123D Catch haha. Does anyone here know how soon we can expect to use these apps (or whatever form they take) for VFX work as done in Blender? It would be pretty neat if, instead of spending hours setting up dozens of track markers, you could just let an app take care of it and give you a sweet 3D reconstruction of your scene along with all the movement and camera work. Is it too much to wish for this to be a feature in Blender one day?

So, rather than a support question, this sounds more like a discussion of what the future of Blender could be. I am with you in drooling over new algorithms, methods, and technologies. I’ve subscribed to the Two Minute Papers YouTube channel (here: https://www.youtube.com/c/KárolyZsolnai) and Dr. Károly Zsolnai-Fehér covered this technique in a recent video. I’ve seen cool features and libraries added to Blender and other open source applications like Gimp over the years, and it seems like it is just a matter of time until someone with the time, energy, and motivation implements it.

For this one specifically, though, it seems more focused on real-time AR rather than VFX. I know that there is work currently being done to allow editing in Blender using VR, but I don’t know if AR is a target for Blender’s developers at this time.

Cool technique, though!


If the technique were faster, it could be used for photogrammetry directly within Blender, or to enhance tracking. With the reconstructed geometry of a video, it could be a lot simpler to work with footage, since you would actually have a 3D scene to work in.
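To give a rough picture of what that reconstructed geometry buys you: once every pixel of a frame has a depth value, you can back-project the frame into a 3D point cloud and move things around in actual 3D. A minimal sketch (my own illustration, not the code from the video), assuming a simple pinhole camera model with made-up intrinsics:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (H*W, 3) array of
    camera-space 3D points, using pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx  # pixel offset from principal point, scaled by depth
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy 4x4 depth map: every pixel is 2.0 units from the camera
depth = np.full((4, 4), 2.0)
points = depth_to_points(depth, fx=50.0, fy=50.0, cx=2.0, cy=2.0)
print(points.shape)  # (16, 3)
```

In a real pipeline the intrinsics would come from the camera solve, and you would also transform each frame's points by the tracked camera pose to place them in world space.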

Exactly! Even if it’s aimed at real-time AR at the moment, I can’t help but see what a great tool it could be within (or at least alongside) a software like Blender.

You don’t happen to know if the technique is already available in some form? Again, I’m a total amateur in the field and my attempts to google just lead to pages about coding, which is all Greek to me haha.

Nice to know I’m not the only one excited!

In theory the technique works and, if I remember correctly, the code is available. However, it is not yet practical, because it is too slow. What does work today is direct depth prediction from the images, but that is not very accurate.
For this technique to work, the neural network model needs to be trained on the actual scene. This takes a very long time, requires a deep learning framework to be installed, and needs an Nvidia graphics card (anything else is pretty much impractical, apart from running it in the cloud).
We have to wait a little longer… Besides the mentioned use cases, this could also be used to enhance motion capture. For VFX, this technique is going to be amazing.
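One way to see why per-scene training matters: a single-image depth network typically predicts depth only up to an unknown scale and shift, so out of the box its output won’t even agree with the sparse 3D points a camera tracker recovers. A toy sketch of the simplest possible per-scene correction (my own illustration, not the paper’s method): aligning a relative depth prediction to a few known depths with least squares.

```python
import numpy as np

def align_depth(pred, ref):
    """Solve pred * s + t ~= ref for scale s and shift t by least squares."""
    A = np.stack([pred, np.ones_like(pred)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, ref, rcond=None)
    return s, t

# Sparse depths recovered by tracking (ground truth for a few points)
ref = np.array([1.0, 2.0, 3.0, 4.0])
# Pretend the network's relative prediction is off by scale 2 and shift 0.5
pred = (ref - 0.5) / 2.0
s, t = align_depth(pred, ref)
print(round(s, 3), round(t, 3))  # 2.0 0.5
```

The real methods go much further than a global scale and shift (they fine-tune the whole network so the depth is consistent across frames), which is exactly the part that is slow and GPU-hungry.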


Thanks for the info! I guess it’s all about patience then. Can’t wait to see what the future may have in store “two papers down the line”. What a time to be alive! :star_struck: