How likely is Blender to adopt NeRF?

I’m no expert on this, just some guy who has been reading news here and there. NeRF has been talked about a lot recently. Some people even speculate it will replace ray tracing, or at least become an alternative to it. I know Blender already has two NeRF addons, but they’re missing some of the features that make NeRF potentially revolutionary.

From my understanding, NeRF is fundamentally different from traditional 3D, so it isn’t like switching from Blender Internal to Cycles, which was just changing the render engine. So I wonder: is it likely that Blender adopts NeRF? Or is it more likely that completely new software will use NeRF instead of traditional 3D?

From Wikipedia:

A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from sparse two-dimensional images.

…so why would someone build a complete 3D scene to render a few dozen 2D images, just to create a 3D scene again???

Also… AFAIK it shows a static scene, so how do you do any interaction with this… (yet)?

And yes… you need something like a “NeRF engine” to “render” the 3D data without the detour of using a “traditional” 3D render engine… (I do not know the addons you mentioned…)


One addon is called BlenderNeRF, and the other is called NeRFStudio.

It’s probably a bit too much for me to explain NeRF in words. There are many videos on different possibilities of NeRF. I’ll show you some.

NeRF was originally made for photogrammetry, hence the sentence about it creating a 3D scene from 2D images. In an Nvidia paper, https://research.nvidia.com/labs/rtr/publication/muller2021nrc/, if I understand it correctly, it seems to blend NeRF with path tracing to speed up render times by a lot, without needing 2D images generated beforehand.
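For anyone trying to picture what a NeRF actually is at render time: as far as I understand it, it’s just a small neural network that maps a 3D position and viewing direction to a colour and a density, and you get an image by volume-rendering that field along camera rays. Here is a rough, untrained toy sketch in PyTorch (my own illustration, not code from either addon or from the Nvidia paper):

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy radiance field: (3D position, view direction) -> (RGB colour, density)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 colour channels + 1 density
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colour in [0, 1]
        sigma = torch.relu(out[..., 3])     # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Volume rendering: sample the field along one camera ray and
    alpha-composite the samples front to back to get a pixel colour."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction          # (n_samples, 3) sample positions
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(pts, dirs)
    delta = (far - near) / n_samples               # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)        # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]  # light surviving so far
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)     # final RGB for this pixel

# One ray through an untrained (random) field, just to show the shapes involved.
model = TinyNeRF()
pixel = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
print(pixel)  # three values: R, G, B
```

Training just means optimizing the network’s weights until rays rendered this way reproduce the input photos, which is why it started out as a photogrammetry technique.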


Oh… in fact I do know what they are… and it is kind of fascinating… interesting to be able to “navigate” in the produced data…

But again… how do you “include” any actors in this 3D space…

…so meanwhile most things will be done the “traditional way”…


Remember when just a few years back AR/VR was this big hype?
Remember how everybody tried to act like it’s this big-time revolutionary thing and this time it’s here to stay and all that?

Does anybody actually give a shit about it anymore outside of gaming by now?

This whole neural-this-neural-that hype and all that research is really just driven by the fact that training these networks consumes tons of computing power, and companies such as Nvidia are actually in the ‘I’ll sell you tons of computing power if you give me a ton of money’ business.

Hint: the Blender Foundation does not have a ton of money like this.

greetings, Kologe

Everything has a chance of being overhyped. I’m not claiming NeRF is definitely the future of rendering, but it does look promising. I don’t think it’s fair to compare VR to NeRF, because VR is not a rendering technique.

I understand terms like “neural” and “AI” are thrown into everything these days to make it sound more advanced, but it’s also true that generative AI has made a big impact as LLMs, in 2D art, sound, translation, etc. Naturally I’d hope it could make a positive impact in 3D art.

I also understand that the companies doing this research are aiming to make a profit off it, but then any company is the same. Car companies, pharmaceutical companies, game companies. The purpose of any company is to make a profit. That doesn’t mean their research or products are not useful to you.

What function would NeRF have in Blender? As far as I know, there are no programs actually using NeRFs for large-scale practical use, so what would the point be?

To me, it seems that NeRFs are an interesting technology with little practical use currently. As far as I’m aware, there are very few use cases that aren’t already done better and faster by traditional polygonal models and shading (please correct me if that’s wrong).


Real-time or at least fast rendering and physics simulations, reduced texture size, supposedly.

Why NeRFs? I thought Gaussian splatting was the new thing?


They are both being researched. Sometimes Gaussian Splatting gets ahead, sometimes NeRF gets ahead. But you’re right, I actually got some of the papers mixed up. For example, this is a very recent paper on real-time fluid simulation in Gaussian Splatting.

So yeah, Gaussian Splatting could work too. Whichever is better.

Thanks… I was agonizing about this… that’s the “other one” I also wanted to note…

So why “adopt” something that is “in competition” with some other method… and both aren’t “feature ready” yet :wink:

Because Blender is so versatile, there is also something to do in some areas… so why add something “not completely researched yet”… and again… what about object animation within those methods…

https://zju3dv.github.io/animatable_nerf/

Well yeah, neither of them is ready yet. I just imagine, when they’re ready, is it likely for Blender to adopt them, or is Blender too fundamentally different to do so?

Like Doris Day sang…

Que Sera, Sera (Whatever Will Be, Will Be)

Yeah, I know. I was just hoping to see if someone who has more knowledge in these areas could provide a bit more insight. Well, regardless, Blender is already very good for 3D art. It’s just that there are some areas I hope can be drastically sped up, like fluid simulations or posing with ragdoll physics. Because of that, I’m a bit overly hopeful about new technologies.

In foretelling?? :wink:

…no… this is all experimental… AFAIK…

Contrary to many other hypes like AR/VR, I strongly believe that many of those neural-this-neural-that solutions are going to be used in practice.
I am pretty convinced that techniques like Gaussian Splatting are going to become a part of many photogrammetry pipelines. The current algorithms are very complicated, heavily optimized and as far as I can see very difficult to improve. Using Gaussian Splatting instead appears like a logical step forward to me.

Now, that doesn’t mean I believe it should become a part of Blender for the sake of it. But just like the denoiser, there are likely going to be small things here and there that become neural-this-neural-that, at least within people’s pipelines, if not as part of Blender.

It is great to see that the Blender Foundation doesn’t waste resources by jumping on every hype wagon for the sake of marketing.


Since the introduction of Gaussian Splatting, I pretty much never saw NeRFs being better (maybe with individual exceptions, but definitely not overall).
From a practical point of view, NeRFs also take a lot longer to compute, meaning the step of going from images to the 3D representation.

I didn’t know that. That’s good to know. And things like animation, are they just as good in Gaussian Splatting?

I think it largely depends on exactly how we define the neural-this-neural-that solution in question.

I doubt we will see anything practical which basically turns the entire rendering pipeline upside down (e.g. doing away with meshes and relying on Gaussian splatting instead).
This is also the spirit in which I read the original question in this thread.

That said, I do agree there might be interesting, practically viable solutions coming up in a more specialized sense. Maybe something will stem from the research into using machine learning in animation, for example.

Yes, absolutely. It should be noted that a few years back there was a proposal/idea to integrate machine-learning-driven fluid simulation into Blender, iirc.
It was never actually worked on, but it is another example which might actually come to fruition.

greetings, Kologe

Animation is not really possible, at least if you mean character animation. In the video you posted, the excavator had some kind of soft body physics applied.

The scenes basically are point clouds (with extras).

Gaussian splatting doesn’t use anything “neural” and seems to be better than Ne(ural)RFs for most things. :slight_smile:
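To make “point clouds (with extras)” a bit more concrete, here is a rough sketch of the kind of data a Gaussian Splatting scene stores per point, and how the splats covering one pixel get blended. The names and layout are just illustrative, not taken from any particular implementation:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Splat:
    """One 'point with extras' in a Gaussian Splatting scene."""
    position: np.ndarray   # 3D centre of the Gaussian
    scale: np.ndarray      # per-axis size (the Gaussian is anisotropic)
    rotation: np.ndarray   # orientation, e.g. as a quaternion (w, x, y, z)
    opacity: float         # base alpha of the splat
    sh_color: np.ndarray   # spherical-harmonic coefficients for view-dependent colour

def composite_pixel(splats_near_to_far, falloffs):
    """Blend the splats that cover one pixel, nearest first, with ordinary
    alpha blending; `falloffs` are the projected 2D Gaussian values at the pixel."""
    color = np.zeros(3)
    transmittance = 1.0
    for splat, falloff in zip(splats_near_to_far, falloffs):
        alpha = splat.opacity * falloff
        color += transmittance * alpha * splat.sh_color[:3]  # DC (view-independent) term only
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:   # pixel is effectively opaque, stop early
            break
    return color
```

The “nothing neural” part is visible here: rendering is just projecting, sorting, and blending these Gaussians, which is why it can run in real time, while the fitting step that produces the splats from photos still uses gradient-based optimization.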
