Does realtime raytracing make Eevee obsolete?

In a future where there is world peace [hey you never know it COULD happen]

“Today in Blender news marks another era for us. Remember back when we removed Blender Internal? For all you kids who don’t remember or even know what Blender Internal was, it was used back when dinosaurs ruled the Earth. Today marks another step as floating-head-Ton-in-a-jar announces the official removal of Blender Cycles, as real-time accurate GI, raytracing and shadows have been a thing for a decade now. Goodbye Blender Cycles, you will be missed… by no one still alive and in the industry. What a joke to think that Eevee started out with lots of fake hacks for what we now take for granted… Meanwhile in celebrity news, Madonna is still kicking it in her latest single ‘I Am an Android Now but Still Kicking It’. Fun times!”

It’s not based on Unreal Engine. The Unreal Engine devs just let the Blender developers study its code so that they could better understand how to implement the same features.

I vaguely recall reading that Blender would eventually support UE4’s material engine, so you could easily port objects between the two without having to redo your procedural work. Dunno if that’s true or not.

This is the end of an era. For a while, rasterization will only be supplemented by RT. But after a few generations of GPUs, they will have sufficient power to do full realtime raytracing. I won’t get into word games on whether Eevee is “outdated” by this or that definition, but it’s certainly built on technology that’s about to die (I’d say within a decade), just like fixed-function hardware died rather abruptly. It’s inevitable.

Eevee will just have to be updated soonish. It’s not that big a deal.

We’re coming closer to an agreement as we look further into the future.
I still think that rasterization and cheats will have value for as long as they require less energy (even though I personally always prefer to use the most accurate rendering technique), and I think they will keep requiring less, even when every GPU in the world can do RTRT.

It is a misconception that rasterisation is less accurate than ray tracing. Both are methods of solving the visibility problem. When implemented without bugs, both a rasteriser and a ray tracer will give identical output when given the same input.

The approximations come in when choosing algorithms that skip calculations in order to gain speed - shadow mapping and reflection maps, however, are not techniques restricted to rasterisation. They can and have been used with ray tracing too*. Vice versa, people have implemented bidirectional path traced GI using rasterisation instead of ray tracing.

There are plenty of approximating algorithms based on ray tracing currently in use too - for example screen space ray tracing for games or diffusion model based subsurface scattering for production.

The main advantage ray tracing has over rasterisation is that it’s comparatively easy and fast to calculate visibility for random and incoherent directions. A rasteriser is only efficient when you have a large batch of coherent directions to render.

* Ray traced reflection and ambient occlusion were developed to improve the realism of environment maps: http://www.spherevfx.com/downloads/ProductionReadyGI.pdf
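
As a toy illustration of the visibility-equivalence point above (everything here is invented for this post, not any real renderer’s code): the same sphere scene is solved both object-order and image-order, and both return identical depths for identical view rays. The only practical difference is that the ray tracer answers a single incoherent query cheaply, while the rasterizer is efficient because it amortizes per-object setup over a whole coherent batch of pixels.

```python
import math

# Toy scene: spheres as ((x, y, z) center, radius).
SPHERES = [((4.0, 4.0, -5.0), 3.0), ((10.0, 6.0, -2.0), 2.0)]

def trace(origin, direction, spheres):
    """Image-order (ray tracing): one arbitrary, incoherent visibility query
    is cheap - intersect this single ray against the scene."""
    ox, oy, oz = origin
    dx, dy, dz = direction  # assumed normalized
    best = math.inf
    for (cx, cy, cz), r in spheres:
        # Solve |o + t*d - c|^2 = r^2, a quadratic in t (a = 1).
        fx, fy, fz = ox - cx, oy - cy, oz - cz
        b = 2.0 * (dx * fx + dy * fy + dz * fz)
        c = fx * fx + fy * fy + fz * fz - r * r
        disc = b * b - 4.0 * c
        if disc >= 0.0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 1e-6 < t < best:
                best = t
    return best  # distance to the nearest hit, or inf

def rasterize(spheres, w, h):
    """Object-order (rasterization): only efficient because a whole coherent
    batch of w*h view rays (one per pixel, orthographic camera looking down
    -z) is answered at once, amortizing per-object setup over many pixels."""
    depth = [[math.inf] * w for _ in range(h)]
    for (cx, cy, cz), r in spheres:
        for y in range(max(0, int(cy - r)), min(h, int(cy + r) + 2)):
            for x in range(max(0, int(cx - r)), min(w, int(cx + r) + 2)):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                if d2 <= r * r:
                    z = -(cz + math.sqrt(r * r - d2))  # distance from z = 0
                    if z < depth[y][x]:
                        depth[y][x] = z
    return depth

# Same input, same answer: per pixel, the rasterized depth equals the
# traced depth for the corresponding view ray.
depth = rasterize(SPHERES, 16, 9)
assert depth[4][4] == trace((4.0, 4.0, 0.0), (0.0, 0.0, -1.0), SPHERES)
```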

I’ll add my point here. The big DCCs out there have some real-time PBR viewport integration to a certain extent, but it is really hard to set up intuitively unless you are on a big team at a big studio with talented tech artists… Ray tracing in real time is still not possible for average consumers yet. Maybe big studios can achieve it now, but that requires the DCC/ray-tracing developers to upgrade their tech. Personally I think this is happening, like with Arnold and Maya… they might have been secretly working on this for years now with Nvidia, it’s just not public.

IMHO fake/biased rendering will remain popular for many years more, as long as the technology stays the same (each ray calculated digitally).

There are two reasons:

  1. People always want faster/cheaper and more complex and beautiful scenes.
  2. Unrealistic can be much more beautiful - for example, the fake reflections in Judy’s eyes in Zootopia.

And a third reason, for Cycles: NPR is a big new breath of life for it - stylish and appealing.

So even if we get free, fast and awesome raytracing, Eevee will remain (and evolve) as a good NPR engine. It also works, and will keep working, as a fast preview engine.

Eevee already gives amazing results! The physically based rendering gives the engine a big bonus.
But it’s slow :frowning: They need to speed it up in the next Blender, 2.81, before starting to add new features.
And one feature that I miss is per-object motion blur for Eevee - this is the only feature lacking.

Eevee has a lot of potential, but what puts me off is the hassle of (optimally) preparing a scene. There are so many factors to deal with before a scene is optimized: all kinds of shadow parameters for a light, many intricate render settings, settings for each material, irradiance probe(s) with proper placement, correct use of reflection probe(s), baking indirect lighting, the works.

If you take all the Eevee scene preparation time into account, the total time spent will be about the same as using Cycles. :slightly_smiling_face: Unless you’re rendering animation of course.
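
For reference, here is roughly what that checklist looks like when scripted - a minimal bpy sketch assuming the 2.80 API (property and probe names may differ in later builds, and the values are arbitrary starting points, not recommendations):

```python
import bpy

scene = bpy.context.scene
eevee = scene.eevee

# Render settings that usually need a first pass of tweaking.
eevee.taa_render_samples = 64        # final-render AA samples
eevee.use_gtao = True                # ambient occlusion
eevee.use_ssr = True                 # screen-space reflections
eevee.shadow_cube_size = '2048'      # point/spot shadow map resolution
eevee.shadow_cascade_size = '2048'   # sun shadow map resolution

# Probes still have to be placed (and sized) by hand: an irradiance
# volume for diffuse GI, a reflection cubemap for glossy reflections.
bpy.ops.object.lightprobe_add(type='GRID', location=(0.0, 0.0, 1.0))
bpy.ops.object.lightprobe_add(type='CUBEMAP', location=(0.0, 0.0, 1.0))

# Indirect lighting only shows up after baking the light cache.
bpy.ops.scene.light_cache_bake()
```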

It’s all true. Additionally, Eevee becomes very sloooow (especially in the viewport) with more complex scenes, and you can’t speed it up because it is limited to one graphics card.
On the other hand for simple scenes Eevee can be incredibly effective compared to Cycles.

Clement has taken initial steps in making Eevee easier to use.
https://lists.blender.org/pipermail/bf-blender-cvs/2019-May/123442.html
https://lists.blender.org/pipermail/bf-blender-cvs/2019-May/123444.html

Even then, Eevee will likely not be as simple as Cycles because of the rasterization technology (which can be much faster in animation, but creates more setup time).


Keep in mind that while Eevee can be made to run at 60 FPS all the time, it would come at the cost of quality (as right now the target is quality over speed). Unity and Unreal for instance cheat on GI by pre-baking it with a raytracer and having the user place light-probes.

I don’t use Eevee much, but removing checkboxes for SSS and Volumetrics? I don’t see how that is supposed to make life that much easier. On the contrary, it feels like having them could make it easier, as we can turn off those effects completely with a single click instead of hunting through materials.
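
For what it’s worth, the material-hunting can at least be scripted. A hedged sketch, assuming 2.8x node and input names (there is no per-material SSS checkbox, so this zeroes the Principled BSDF’s Subsurface input everywhere):

```python
import bpy

# Zero out subsurface on every material's Principled BSDF - the scripted
# version of "hunting through materials".
for mat in bpy.data.materials:
    if not mat.use_nodes:
        continue
    for node in mat.node_tree.nodes:
        if node.type == 'BSDF_PRINCIPLED':
            node.inputs['Subsurface'].default_value = 0.0
```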

Shadows, leaks, and probes however, that’s where most of us struggle. I’ve had a couple of projects I would and could have done in Eevee, but gave up. Maybe it’s just education that is needed, but the default values in there (at least for my line of work, office spaces) are horrific. Horrific defaults, is that a thing in Blender, or is it just me? :slight_smile: (Looking at you, curve half-fill, which was finally given a useful default.)

Bad default values are one of those things almost exclusive to FOSS, unfortunately. I don’t know if this is finally changing for 2.8, but historically the devs just didn’t care whether a default was bad or not.

Though I know of one recent case where the default distance value of the Bump node in Cycles was actually changed from the good, normal-map-like value of 0.1 to the crappy value of 1.0. This is speculation, but apparently people would find it familiar if bump maps by default produce worse results than normal maps, like in other apps. :no_mouth:


That said, 2.8 is still going to be a major improvement across the board for Blender, as more and more possible headaches for users are eliminated.

The distance slider input itself is garbage; it should have the same sensitivity as the strength slider, which I rarely adjust. Although it supports 0.0001 (which itself is too coarse), it will truncate the display to 0.000. I typically have to use math nodes to divide it small enough to have an input slider that works.

I don’t know the technical difference between strength and distance (what is distance anyway, for something that modifies normals?), but every time I adjust strength for procedurally generated bumps (Musgrave is awesome, image maps are too limiting) I get trash. Distance works better, but I may complement it with strength if the floating-point accuracy suffers.
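
For anyone who wants to reproduce that workaround, here is a minimal bpy sketch of it (the material name is mine; the node identifiers are the standard 2.8x ones):

```python
import bpy

mat = bpy.data.materials.new("bump_divide_demo")  # hypothetical demo material
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

bump = nodes.new("ShaderNodeBump")
divide = nodes.new("ShaderNodeMath")
divide.operation = 'DIVIDE'
divide.inputs[1].default_value = 1000.0   # scales the usable range down

value = nodes.new("ShaderNodeValue")
value.outputs[0].default_value = 0.1      # drag this with normal sensitivity

# Value -> Divide -> Distance: the slider now effectively steps in 0.0001s
# without the UI truncating the display to 0.000.
links.new(value.outputs[0], divide.inputs[0])
links.new(divide.outputs[0], bump.inputs['Distance'])

# Feed the perturbed normal into the default Principled BSDF.
links.new(bump.outputs['Normal'], nodes["Principled BSDF"].inputs['Normal'])
```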

Hey, why not use deferred rendering as a post-processing step for Eevee and add attachment outputs like UPBGE master has?

Then we could have RTX as a post-processing step?
(fill buffers -> RTX does the raytracing (instead of SSR) - global illumination, reflections, etc.?)
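
Purely to illustrate the proposed data flow (none of these functions exist in Blender or any RTX API - this is just a sketch of the idea):

```python
from dataclasses import dataclass, field

@dataclass
class GBuffer:
    """Per-pixel attachments written by the rasterization pass
    (the 'attachment outputs' mentioned above)."""
    depth: list = field(default_factory=list)
    normal: list = field(default_factory=list)
    albedo: list = field(default_factory=list)
    roughness: list = field(default_factory=list)

def rasterize_gbuffer(scene):
    # Stage 1: a single raster pass fills all attachments.
    return GBuffer()

def trace_rays(gbuffer, scene, kind):
    # Stage 2: spawn real rays from each G-buffer sample and trace them in
    # world space on RT hardware, so reflections/GI are no longer limited
    # to what is visible on screen - the main weakness of SSR.
    return {}

def composite(gbuffer, **rt_passes):
    # Stage 3: blend the ray-traced passes over the rasterized shading.
    return gbuffer

def render_frame(scene):
    g = rasterize_gbuffer(scene)
    reflections = trace_rays(g, scene, kind='reflection')
    gi = trace_rays(g, scene, kind='diffuse_gi')
    return composite(g, reflections=reflections, gi=gi)
```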