Will Cycles (and other path tracers) become redundant?

Now, I don’t game, so I rarely see the latest developments. However, I’ve been watching a couple of UE4 videos, and honestly, the graphics are astounding.

In particular, the human skin shaders used and showcased in https://www.youtube.com/watch?v=rT1FMqL4gek are far beyond what I have been able to achieve in Blender.

Before the naysayers jump up: I know there are some brilliant works made with Cycles out there that probably took hours to render. UE4 is doing this in real time, though.

So, is the day of the offline renderer coming to an end as real-time rendering improves? (I know Blender is getting Eevee, though it’s not there yet.)

Realtime rendering has advanced quite a lot in the last five years, but pathtracers are also seeing new technology in the form of ever more expensive and realistic shading (counteracted by optimizations and new sampling techniques).

For instance, Burley SSS is now doable in realtime, but pathtracers have moved on to an even newer iteration called Random Walk SSS. Let’s also not forget that Eevee actually has to resort to tracing rays in order to avoid some quality issues (which are very difficult to fix using rasterization alone). And even then, we still haven’t touched the problem of rendering detailed caustics in realtime.
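Out of curiosity, here’s a rough Python sketch of the idea behind random-walk SSS: light enters the surface and then scatters around inside the medium with exponentially distributed step lengths until it exits or is absorbed. Everything below (the flat-surface exit test, the constants) is illustrative and not how Cycles actually implements it:

```python
import math
import random

def random_walk_sss(entry_point, sigma_t=10.0, albedo=0.9, max_bounces=64):
    """Toy isotropic random walk inside a homogeneous medium below z = 0.

    sigma_t controls how dense the medium is, albedo how much energy survives
    each scattering event. Returns (exit_point, throughput) or None if the
    walk terminates inside. Purely illustrative: a real integrator tracks the
    actual surface boundary, uses per-channel coefficients and anisotropic
    phase functions, and importance-samples all of it.
    """
    p = list(entry_point)
    throughput = 1.0
    for _ in range(max_bounces):
        # Free-flight distance: exponentially distributed with mean 1/sigma_t.
        t = -math.log(1.0 - random.random()) / sigma_t
        # Isotropic scattering: uniform direction on the unit sphere.
        z = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        p = [p[i] + t * d[i] for i in range(3)]
        throughput *= albedo            # energy lost to absorption
        if p[2] > 0.0:                  # crude "exited the surface" test
            return p, throughput
    return None
```

Part of the appeal is that the same machinery handles thin ears and thick flesh alike, which diffusion-style approximations such as Burley tend to struggle with.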

I think we have less than two years before true realtime bidirectional path tracing is available to the rich, and four until it is mainstream.

Also, what about having an AI ‘wing it’ and render the scene from its own headspace?
(Describe what you want and hand the AI the models.)

The future is f’ing strange.

At the moment you still have to throw more artist time at an Unreal scene than at a Cycles scene. Realtime still needs more preparation than non-realtime rendering. I guess that is going to stay like this for a while, but sooner or later realtime renderers will be good enough.

Pathtracers can currently handle only a subset of the physical interactions of light (caustics, some materials, camera lenses, interference of light, etc.), and there is still a lot of active research to broaden this subset. Pathtracers handle it in a fairly simple yet mathematically correct framework.
Rasterization-based renderers have a lot of benefits, especially fast rendering of dynamic scenes. But their subset is a lot smaller, more difficult to develop, and a lot is “cheated”. You can create ultra-complicated and difficult algorithms that fake or at least approximate effects that a pathtracer gives you almost for free.
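To make the “almost for free” point concrete, here is a minimal Python sketch of the recursion a pathtracer evaluates: a one-sample Monte Carlo estimate of the rendering equation, diffuse-only for brevity. Soft shadows, color bleeding, and indirect light all fall out of repeating this one estimate, whereas a rasterizer needs a separate trick for each of those effects. The scene_intersect callable and the Hit record are hypothetical placeholders, not any particular renderer’s API:

```python
import math
import random
from collections import namedtuple

Hit = namedtuple("Hit", "point normal albedo emission")

def radiance(origin, direction, scene_intersect, depth=0, max_depth=5):
    """One-sample Monte Carlo estimate of the rendering equation (diffuse only).

    scene_intersect(origin, direction) is a user-supplied callable returning a
    Hit or None -- a placeholder, not a real engine's API.
    """
    hit = scene_intersect(origin, direction)
    if hit is None or depth >= max_depth:
        return (0.0, 0.0, 0.0)          # background, or path terminated

    # Cosine-weighted bounce: the cosine term and the 1/pi of the Lambertian
    # BRDF cancel against the sampling pdf, leaving just the albedo.
    bounce = _cosine_direction(hit.normal)
    incoming = radiance(hit.point, bounce, scene_intersect, depth + 1, max_depth)

    return tuple(e + a * li for e, a, li in zip(hit.emission, hit.albedo, incoming))

def _cosine_direction(normal):
    """Cosine-weighted direction about `normal` (unit-sphere-plus-normal trick)."""
    while True:
        x, y, z = (2.0 * random.random() - 1.0 for _ in range(3))
        r2 = x * x + y * y + z * z
        if 0.0 < r2 <= 1.0:
            break
    r = math.sqrt(r2)
    d = (normal[0] + x / r, normal[1] + y / r, normal[2] + z / r)
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)
```

Averaging many such samples per pixel converges to the correct image; the price is noise and render time rather than per-effect engineering.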
There is research in both areas, and both will get better in the coming years. I think the perceived distance might get smaller, but it will always be there. Pathtracing/movie-quality art is still not good enough to mix CGI with real footage without viewers noticing it. That will always be the big challenge.
And well, this discussion has existed for many years now, and the gap between both algorithms is still there…

This is where I was headed… it is a trade-off… but it is getting to the point where they will meet at a middle ground or singularity in the future, where the time spent preparing your scene and assets for RT will be equal to or less than the time spent on rendering. The matter of quality then comes up, and those margins are ever closing, but RT will never be as high quality as offline rendering; it’s just in the nature of how the rendering methods work.

So, no… PT is not going away in the next ten years, probably much longer… but the possibility exists that we could end up with a hybrid of RT + PT, even if it is just single-sample at first. That is already achievable for simple setups and low resolutions.

If you are looking for a more in-depth explanation of how to create photo-real characters in UE4, this link is pretty helpful.

My first experience of working with realtime rendering for animation video production was several years ago with CryEngine. It was a bit of a nightmare for us, back then at least, although to be fair we did all need to go through a last-minute crash course.
But even so, I think we were in agreement that things took much longer to set up and get working than they would have if they were pre-rendered and composited traditionally. Everything needed to be so carefully set up and precisely triggered to work together the right way. So while it was obviously great for the actual game, it seemed at the time there were so many hoops to jump through to render what were quite complex individual narrative animation scenes. So I don’t think the eventual render-time savings (once it was all set up) helped us there much.

But things do seem to be getting much easier and faster. And there comes a point with a particular job when you have to ask: is this simply… good enough? As opposed to: does it include all the bells-and-whistles options? I saw an architectural-viz style interior room render in Eevee recently which looked fantastic.
I’ve not looked at Eevee much myself yet, but Cycles itself is becoming faster and more usable all the time, as well as more feature-rich. The recent Random Walk SSS addition is a great example, and the recent speed-ups in Cycles render times have been phenomenal.

It’s interesting times for sure. Certainly it will be more horses for courses between the two in the near future, I think.

But now realtime renderers are good enough for some types of CG work. A lot of product visualisation could be done with Unreal or a similar engine and produce good enough quality. The question is: can this quality be achieved with less work than with a traditional render engine?
The vast majority of CG jobs don’t require Hollywood-budget quality.

Yes, that is true. But computation time gets cheaper, especially when using cloud services such as AWS. Interactivity is the main benefit of rasterizers, which makes them great for architectural work or previs. But there is no reason to choose rasterizers for non-interactive projects, as artist time is way more expensive than computation time (at least if you don’t want to render hours of footage).

One plus for RT is that once you have a scene set up, re-renders do not detract from the project :).

But if you’re rendering hours of non-repeating footage, then either your material is probably so monotonous that it should be far shorter than it is, or the artist(s) have enough work creating that much content that it still dwarfs the rendering budget (at least according to all the major studios, that is).

The reason studios want to decrease render-times is twofold:

  1. Artist time is more valuable than pretty much anything else, so the ability to preview a clip or a frame quickly is invaluable.
  2. To expand the limits of what is achievable within the medium. (Examples: Rapunzel’s hair, The Good Dinosaur’s clouds, and the sheer number of lights in Coco’s floating cities.)

That said, a realtime, near-realtime, or even realtime/offline hybrid rendering workflow would be a boon to the more casual rendering communities.

I said a long time ago that once graphene or similar CPUs arrive, Cycles could theoretically run in realtime.

It always leaves me kind of puzzled why some are so eager to believe the marketing hyperbole of companies trying to sell their realtime engines.
Yes, the tech is getting better. Yes, some of the screen-space-sss demos are impressive (for what they are).

But:
If I’m not mistaken, most of the realtime tech scales really badly. So you may be able to render a character closeup with nice-ish SSS in realtime, but try hundreds of orcs in a LOTR-style battle scene, and your realtime engine is gonna collapse like a house of cards during an earthquake.

Also notice how most of the characters in those RT-SSS demos hardly have any hair, and if they do, it’s short, it’s dreadlocks, or it really just sucks big time, visually.
So forget about Hollywood-grade fur in realtime,
forget about Hollywood-scale scenes in realtime (try rendering more than 3 trillion polygons in realtime; that’s a number from a shot of “Elysium”, and that flick is 5 years old now!),
and forget about Hollywood-scale VFX work. Or has anybody seen large numbers of layered transparent surfaces, large fluid sims, or large smoke sims rendered in realtime?

So is Weta gonna render stuff like those apes in realtime in 25 years, because they’ll have realized UE/Unity/what have you is so much better than Manuka by then? Hell no. Pathtracers ain’t going anywhere, and if they ever do, it won’t be because realtime engines replaced them.

I do understand it’s a nice thought for us CG folks that we could just render anything in realtime in 5 years. But it’s also a nice thought that we’ll all own $30 million in 2023, right? Not gonna happen, and you might still find some YouTube vids somewhere in which some gambling company praises its ‘services’, just like Unreal praises its engine.

greetings, Kologe

I don’t use any RT engines. As I said in my OP, I’m not even a gamer, so I rarely see RT stuff. My point was that, compared to what I’ve seen previously, the current engines are outstanding.

With this kind of progress in just a few short years, and with advances in HW, it’s not inconceivable that the future lies with RT.

Well, you should try to import a model with two or three million triangles into the Unreal Engine. The preprocessing takes 10 to 30 minutes. And then try to assign materials to it: each assignment of a material takes several seconds. Game engines have great performance, but they hide a lot of their costs in preprocessing steps. It is quite common for larger game productions to take a night to build/optimize the game content. Trying to work with billion-triangle scenes (and you reach that quite fast) would be a pain.
And you need to talk about computational complexity, even if you ignore all the fancy stuff like global illumination. A pathtracer does its calculations basically in O(pixels · log(triangles)), a rasterizer in O(pixels + triangles), so rasterizers scale better with resolution (in practice they often don’t, but that’s a different story), while raytracers scale better with the number of triangles (see the toy comparison below).
But you are right, realtime rendering is really great for having fast feedback and doing some preview renderings.
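To put rough numbers on that, here is a toy comparison of the two cost models (constant factors and all real-world effects ignored; the figures are purely illustrative):

```python
import math

def pathtracer_cost(pixels, triangles):
    # O(pixels * log(triangles)): each ray descends a BVH of logarithmic depth.
    return pixels * math.log2(triangles)

def rasterizer_cost(pixels, triangles):
    # O(pixels + triangles): every triangle has to be processed at least once.
    return pixels + triangles

pixels = 1920 * 1080  # one Full HD frame
for tris in (1_000_000, 100_000_000, 10_000_000_000):
    print(f"{tris:>14,} tris:  pathtracer ~{pathtracer_cost(pixels, tris):.1e},"
          f"  rasterizer ~{rasterizer_cost(pixels, tris):.1e}")
```

With a million triangles the rasterizer wins easily; at ten billion the pathtracer’s logarithm barely moves while the rasterizer’s cost explodes, which is exactly the “scales better with triangles” point above.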

What planet are you living on lol

I chuckle, because the same debate happened 7-10 years ago with final-frame renderers for VFX and animation. There were a few ray-traced renderers like mental ray back in the day, but generally everyone was scanline-based for major productions. In fact, the same things people are saying now about Eevee and other realtime engines, people said about RenderMan.

Then there was a move to a “hybrid” rendering, some amount of physical rendering/ray tracing mixed with the scanline rendering as the hardware got faster. People wanted more quality. However as time went on the setup of these effects became burdensome.

Finally everyone moved to fully raytraced. Of course this went with more hardware speedups. But the end result was less artist time to get good looking renders. I imagine the same sequence of events will happen in the real time engines as we’re already moving into phase 2!

I am playing Uncharted 4 now, arguably the best game of the past two years, with superb work in the lighting department, probably the best so far, with several key improvements over other games, like better handling of lighting under changing scene conditions, better tone mapping, and great colors. However, there is not any kind of rendered caustic component, realtime diffuse shadows, or RT participating media in the whole game, even though it could make a great difference visually, as there are lots of scenes with water. It only renders pre-baked stuff and speculars, and sometimes it looks like characters are rendered separately and composited onto the plate scene. The only key difference from, let’s say, 10 or 15 years ago is the amount of geometry and textures these engines can handle, and they are not there yet regarding physics and particle work. I mean, you cannot completely destroy a scenario using your light and heavy weapons.

There is no doubt that in the long term realtime will be 75% of the market and what most users need to accomplish what they want. I mean, you can already use the principles of ray tracing in realtime: UE4 has VXGI, and Unity, I believe, has something similar.

If all you need is a small scene area, as for studio product renders, you can already get 99% path-traced results with VXGI, but larger scenes are a no-no. That’s why I like that work on Eevee was started: if realtime gives you what you need, use it; if not, go the path-traced route. Different tools for different needs.

Realtime path tracing isn’t as far away as you may think. I got something near realtime with AMD’s Radeon Rays API and some heavily biased rules, and this was only running on one card with an old-architecture GPU. What we really need is cheap GPU power again, but miners are killing any hope right now of more mass-market cheap GPUs capable of realtime or near-realtime path-traced rendering.

This was an old test of my path tracer: