Why can't Eevee do the same as CryEngine or Unreal Engine?

When I see people placing grass, trees etc. in CryEngine or Unreal Engine, they can make an entire landscape with forests, villages etc. and you don’t notice any lag of any kind.

If I do exactly the same thing in Blender’s Eevee, it becomes slow and the render times double - even when using alpha grass or alpha tree planes (I’m using an RTX 3070 Ti).

Why can’t Eevee just behave like a normal game render engine like CryEngine or Unreal Engine?

With current Eevee, I can’t even build environments as large as people do in other game engines. Please, let’s make Eevee faster!


The short answer is that Eevee isn’t a “game render engine” :slight_smile:


Game engines are able to handle heavier data than 3D modeling software because they don’t need to keep meshes editable - they can pre-process everything into optimized runtime formats. So Eevee will never be as fast, simply because of the app’s different purpose and needs.

Also, make sure you are using instancing when duplicating objects. Instead of using Shift+D, use Alt+D. This creates objects that share the same mesh data (instances). They perform better than plain copies as long as they don’t have modifiers on them (you will know it worked if duplicating doesn’t increase the scene’s polygon count).
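To see why linked duplicates (Alt+D) are cheaper than full copies (Shift+D), here is a minimal pure-Python sketch of the idea - instances reference the same mesh data block, so duplicating doesn’t grow the scene’s vertex count. (Illustration only, not actual bpy code; the class and function names are made up.)

```python
class Mesh:
    def __init__(self, vertex_count):
        self.vertex_count = vertex_count

class Object:
    def __init__(self, mesh):
        self.mesh = mesh  # a reference to a mesh data block, not a copy

def duplicate_linked(obj):
    """Like Alt+D: new object, same shared mesh data block."""
    return Object(obj.mesh)

def duplicate_full(obj):
    """Like Shift+D: new object with its own copy of the mesh."""
    return Object(Mesh(obj.mesh.vertex_count))

def unique_vertex_count(objects):
    """Memory-relevant count: vertices across distinct mesh data blocks."""
    distinct = {id(o.mesh): o.mesh for o in objects}.values()
    return sum(m.vertex_count for m in distinct)

grass = Object(Mesh(5000))
linked = [duplicate_linked(grass) for _ in range(999)]
copies = [duplicate_full(grass) for _ in range(999)]

print(unique_vertex_count([grass] + linked))  # 5000 (one shared mesh)
print(unique_vertex_count([grass] + copies))  # 5000000 (1000 separate meshes)
```

That is the "polygon count doesn’t increase" check from above: 1000 linked duplicates still count as one 5000-vertex mesh, while 1000 full copies count as five million vertices.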

Finally, Eevee is actually being rewritten right now, but it will take some time.


Blender is not a real-time engine, so Eevee is not real-time either.

DCCs such as Maya, Max, and Houdini do not conceive of 3D the way a game engine does; they are two extremely different workflows, two very distinct paths to creating in a 3D environment.

I often see users confused about this subject, especially when talking about adaptive LOD, which is typically a very important game engine feature. Here is a related answer from a few days ago.


Dude, is it really necessary to go through that discussion again?
If you want to take full advantage of the features of a game engine, you just use a game engine.

Thanks. This explained it to me.


Because large game engine companies usually hire at least a couple dozen rendering engineers who earn at least double the salary of Clement and are allowed to focus on just that for several years at a time. They simply have more resources; that’s the main reason.


I don’t think resources are the problem.
From a design & development point of view, it would be quite hard to implement game-engine-oriented functionality in a DCC.
For example, I hardly see how an adaptive LOD feature could be implemented while respecting the current work environment we have (object data/mesh data/material data etc.).


Game engines rely on tricks to be efficient and handle large scenes.

Game engines use techniques like LODs, culling, instancing, and optimized materials and textures to make the GPU render faster.
You can achieve similar effects in Blender using Geometry Nodes: you can create LODs and camera culling, and you can also employ texture optimizations to help improve rendering.
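To make the two tricks concrete, here is a hedged sketch of distance-based LOD selection and simple distance culling as plain functions. In Blender these would be built as Geometry Nodes trees; the thresholds below are made-up example values, not anything from the manual.

```python
import math

def select_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Return LOD index 0 (full detail) up to len(thresholds) (lowest).

    Each threshold is the far limit (in scene units) of that LOD level.
    """
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)

def cull(instances, camera, max_distance=100.0):
    """Drop instance positions farther than max_distance from the camera."""
    return [p for p in instances if math.dist(p, camera) <= max_distance]

print(select_lod(5.0))    # 0 - close to camera, full-detail mesh
print(select_lod(50.0))   # 2 - mid distance, reduced mesh
print(cull([(0, 0, 0), (500, 0, 0)], (0, 0, 0)))  # keeps only the near instance
```

In a Geometry Nodes setup the same logic maps to a Distance/Compare node driving a Switch between meshes (the LOD part) and a Delete Geometry node with a distance selection (the culling part).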


From what I’ve read, there are many factors that make Blender/Eevee slower than a game engine.
First, they are two different things, as mentioned, and a game engine has a more limited focus, whereas Blender has a broader scope, which adds complexity and rules out little tricks aimed purely at speed.

Also, many possible optimizations didn’t make it into the first version of Eevee because Blender had to support older hardware, so it can’t always use the latest tech available from graphics cards.
In some ways this is the same as my first point: Blender has a broader scope.
Each time a feature comes in, specific optimizations are sometimes used, but they need a fallback for older hardware.

And some parts are slow by nature and planned to be improved. I think instancing is one of them.
Instancing is great, but I always get massive slowdowns when I add a lot of grass elements over a broad area, whereas Unreal seems to do a far better job at it.
It’s a known issue, but it takes time to make all the changes needed to speed them up.

Eevee is also quite new compared to Unreal and, as said, has less manpower behind it, so it’s hard to compete on every front. In the end, some things will improve with time, but don’t expect the same results in Blender as in Unreal, because they are two different things.


A Geometry Nodes or potential Python implementation is a far cry from the technology implemented in a game engine. Game engines use shader transparency to fade the transition between levels of detail. They are also able to work with any kind of instance on a global scale. These are vital functionalities for implementing a proper LOD system. Right now we don’t have any of that.
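For context on the "shader transparency to fade the transition" part: a common engine trick is screen-door (ordered-dither) transparency, where during the cross-fade each LOD discards a complementary fraction of its pixels based on a dither-matrix threshold, so no alpha sorting is needed. Here is a hypothetical pure-Python sketch of that idea - not any particular engine’s actual shader code.

```python
# Normalized 2x2 Bayer ordered-dither thresholds.
BAYER_2X2 = [[0.00, 0.50],
             [0.75, 0.25]]

def keep_pixel(x, y, fade):
    """fade in [0, 1]: fraction of its pixels this LOD should still draw."""
    return fade > BAYER_2X2[y % 2][x % 2]

def coverage(fade, w=64, h=64):
    """Fraction of a w*h screen region actually drawn at this fade level."""
    kept = sum(keep_pixel(x, y, fade) for y in range(h) for x in range(w))
    return kept / (w * h)

print(coverage(1.0))  # 1.0 - LOD fully visible
print(coverage(0.3))  # 0.5 - roughly half the pixels survive the dither
```

The outgoing LOD is drawn with `fade` going 1→0 while the incoming LOD uses `1 - fade`, so together they always cover the surface without any blending.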


That’s exactly it. Nanite in UE5 is on its way to making the need for LODs obsolete, but developing Nanite took a lot of resources to pay many very competent people for a long time.

(Yes, I am aware Nanite currently has limitations, but they are going away extremely fast. Nanite in 5.1 will support opacity-mapped transparency.)

You can overcome the limitations - they are not set in stone - but it requires a tremendous amount of resources poured into it. Companies like Epic can afford that, but the BF, with its relatively small funding, can not. That’s what I meant.

If a game engine can have DCC tools (Unreal has modeling tools: you can edit meshes and UVs, rig, and animate), then a DCC can have a high-quality, high-performance render engine. Resources definitely are the problem.

Yeah, I know.

I was just letting the OP know there are certain possibilities, especially with Geometry Nodes, that can improve what he is trying to achieve in Blender.

I feel Blender’s Cycles and Eevee, combined with Geometry Nodes, are sufficient to achieve certain optimizations.

Lumen’s reflections are not quite there yet, especially for photorealistic results. UE5 does have the path tracer, but it doesn’t render Nanite meshes well - it renders the Nanite fallback mesh instead.

If I had just used Viewport Render Animation instead of Render Animation from the render tab in the first place, I wouldn’t have felt the need for this thread.

Viewport Render Animation can render your animations pretty fast, 1 sec. per frame. Just remember to disable Overlays.

It has a slight - very, very slight loss of quality but I think it is “good enough” for my renderings.
You can’t do compositing with it, though. If you want to composite the rendered frames, you have to set up a new project for the compositing and re-render. In total, you end up with 2 sec. per frame (including the compositing stage).

The problem is not the render itself but the pre-render operations. Eevee must do a lot of things before it begins rendering.

Do not forget, Blender is a 3D package, not just rendering software.

Maybe it will become a bit more optimized, but don’t expect too much.

Well, Eevee is very much a baby compared to a game engine like UE5. I should have been more cautious with my words about Blender compared to UE5 and CryEngine.

If the OP feels Eevee doesn’t meet his needs even with optimizations, he can give game engines a go.

Hopefully Eevee Next will bring some improvements, but it is still not a game engine.

If you have technical knowledge of 3D programming, I can tell you why Eevee will never be like UE5 or similar game engines.

A game engine is concerned about exactly one thing: frame rate. Speed. But they also know that you will not be too-closely studying those images, because they are shooting fireballs at you. :slight_smile:

EEVEE is using the same hardware to produce images that you will be studying for a long time. Although the hardware is the same, the algorithms are not.


With the old pipelines, rendering optimizations and tricks were unacceptable for "movie level of quality," because they tamper with the data and make it more ambiguous.

Having asked the same question over and over, I can say there is a philosophy among rendering purists that things must be done explicitly, without altering the data in any form. In fact, the slower and more accurate the rendering, the better. Better to spend months rendering your movie than mining for bitcoin. Don’t ask…