How likely is Blender to adopt NeRF?

https://zju3dv.github.io/animatable_nerf/

Well, this paper says animation is possible, if I didn’t misunderstand it. The quality isn’t very good, but still.

If I understand it correctly (such papers and videos are hard to understand if you aren’t a researcher in the field :slight_smile:), they found a way to convert multi-frame NeRFs into animatable meshes. So it’s not a NeRF anymore once it’s animated.

2 Likes

That’s true. Research papers are often so promising, but nothing usable comes out even years after a paper is released.

1 Like

Well, that’s the whole point of research :slight_smile: I doubt many people will waste time researching something that looks obviously useless from the beginning

1 Like

I know. I’m not complaining. I’m just saying I’m aware research is different from usable products.

“even” :stuck_out_tongue: ?
Taking into account that developing novel software usually takes a few years, releasing something within a few years of a paper means they started working right after the paper was released, or even before (collaborated on it).

E.g. Nanite, released in 2022, is revolutionary even though the concept of meshlets has been around for a decade.

1 Like

I understand the difficulty. My wording isn’t meant to blame people for taking that long to turn a research paper into something usable, but to emphasise the difficulty of doing so. It’s like saying “it takes hours to fly across the world even on a supersonic plane.”

I have seen some animations, but I have no idea regarding the practical viability.

I wasn’t clear enough. Photogrammetry is often problematic when dealing with thin geometry. That’s one aspect where Gaussian Splatting is surprisingly good.
Just like NeRFs, Gaussian splats (not sure if that’s what they’re called :slight_smile:) can be converted to meshes. To my surprise, I haven’t seen that being done so far.
Instead of just visualizing the representation, you can take a similar approach to scan it and produce vertices, faces, etc. I forgot the name of the standard algorithm for this.
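For what it’s worth, the standard algorithm for extracting a surface from a density field is most likely Marching Cubes (or Poisson surface reconstruction if you start from an oriented point cloud). The first step either way is sampling the splats’ density on a voxel grid. Here’s a minimal sketch of that step, assuming isotropic Gaussians for simplicity (real 3DGS uses anisotropic covariances), with all names and numbers made up for illustration:

```python
import math

def gaussian_density(point, centers, opacities, sigma=0.1):
    """Summed isotropic Gaussian density at `point`.
    (Simplification: real splats have per-splat anisotropic covariance.)"""
    d = 0.0
    for (cx, cy, cz), a in zip(centers, opacities):
        r2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2 + (point[2] - cz) ** 2
        d += a * math.exp(-r2 / (2.0 * sigma ** 2))
    return d

def occupancy_grid(centers, opacities, n=16, lo=-1.0, hi=1.0, iso=0.5):
    """Sample the density on an n^3 grid and threshold at `iso`.
    A Marching Cubes pass over this field would then emit vertices/faces."""
    step = (hi - lo) / (n - 1)
    grid = {}
    for i in range(n):
        for j in range(n):
            for k in range(n):
                p = (lo + i * step, lo + j * step, lo + k * step)
                grid[(i, j, k)] = gaussian_density(p, centers, opacities) >= iso
    return grid

# Toy "splat cloud": a small cluster near the origin.
centers = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0)]
opacities = [1.0, 0.8, 0.9]
grid = occupancy_grid(centers, opacities)
occupied = sum(grid.values())
```

A real pipeline would hand this scalar field to something like `skimage.measure.marching_cubes` to get the actual mesh.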

1 Like

I can see this NeRF tech being potentially interesting for real-world scene capture/reconstruction for compositing and visual effects purposes in the film industry. But I don’t see it ever becoming a replacement for rasterization or raytracing or ‘traditional’ computer graphics in any other sense. I’m sure it will find and settle into its viable niches, though.

2 Likes

I was curious to find out how hard it is to convert Gaussian Splatting scenes to meshes, and literally the first search result is a GitHub repository that does exactly that:

Gaussian splatting is far more practical, and I imagine it is already being heavily researched for VFX pipelines. Splats are animatable: they are simply points in space representing color/opacity, with the possibility of relighting them using spherical harmonic data. Look into what Infinite Realities is doing with scanned human Gaussian splat models, and how they animate them. You bind these points with weights to an underlying mesh, and thus you can manipulate them. We may not be far, within our lifetime, from Ready Player One kinds of experiences with VR/AR in terms of realism and fidelity, I believe.
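The “bind points with weights to an underlying mesh” part is essentially linear blend skinning applied to the splat centers. A minimal sketch, assuming simple LBS on positions only (all function names hypothetical; a full implementation would also rotate each splat’s covariance and spherical harmonics):

```python
import math

def rot_z(theta):
    """4x4 homogeneous rotation about the Z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def apply(m, p):
    """Apply a 4x4 transform to a 3D point."""
    x, y, z = p
    return tuple(m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(3))

def skin_splats(centers, weights, bone_transforms):
    """Linear blend skinning on splat centers: p' = sum_b w_b * (M_b @ p)."""
    out = []
    for p, w in zip(centers, weights):
        acc = [0.0, 0.0, 0.0]
        for wb, M in zip(w, bone_transforms):
            q = apply(M, p)
            acc = [a + wb * qi for a, qi in zip(acc, q)]
        out.append(tuple(acc))
    return out

# Two bones: identity and a 90-degree rotation about Z.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
bones = [identity, rot_z(math.pi / 2)]
centers = [(1.0, 0.0, 0.0)]
fully_rigid = skin_splats(centers, [[0.0, 1.0]], bones)  # follows bone 2
halfway = skin_splats(centers, [[0.5, 0.5]], bones)      # 50/50 blend
```

With the weights coming from the underlying mesh’s rig, the splats simply follow the armature like any skinned vertices would.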

I’ve recently developed some scripts where I can generate data sets from Blender and train them with ease. These are early tests, but the results are promising:

Blender Generated Gaussian Splatting
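For anyone wanting to try something similar: NeRF/3DGS trainers typically consume the NeRF-synthetic `transforms.json` format (a `camera_angle_x` plus per-frame camera-to-world matrices), which is straightforward to emit from Blender. A minimal sketch of building one entry, with `make_frame` and the matrix values made up for illustration (inside Blender the matrix would come from `cam.matrix_world` via `bpy`):

```python
import json
import math

def camera_angle_x(focal_mm, sensor_width_mm=36.0):
    """Horizontal field of view from focal length, as stored in transforms.json."""
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm))

def make_frame(file_path, matrix_world):
    """One frame entry: image path plus a 4x4 camera-to-world transform."""
    return {"file_path": file_path, "transform_matrix": matrix_world}

# Hypothetical single-frame dataset: a camera 4 units up the Z axis.
frames = [make_frame("./train/r_0",
                     [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 4], [0, 0, 0, 1]])]
dataset = {"camera_angle_x": camera_angle_x(50.0), "frames": frames}
blob = json.dumps(dataset, indent=2)
```

Rendering each frame with `bpy.ops.render.render()` while appending its camera matrix is then enough to produce a trainable dataset.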

2 Likes