@bliblubli: I think you’re doing a great job. Buyers can find all the info they need inside this topic; if someone is not able to read and understand the data, that’s another problem. The same goes for someone who blindly buys a product after reading only the title, when there’s so much info about it. While some of the discussions here are constructive, I also see a lot of posts with unnecessary aggression and blind talk. I hope you won’t get too influenced by that and keep up the great work you’re doing.
Keep in mind that the ability to make a pathtraced animation on home hardware (with all of the accuracy) will be a huge testament to the advances in computer hardware and rendering technology. It’s already bordering on achievable with the RTX 20xx series, the Radeon VII, and the AMD Threadripper WX chips.
It’s not uncommon for the big Hollywood blockbusters to have VFX that takes many hours for one frame (but they have huge server farms that can render hundreds or even thousands of frames concurrently).
Hey @bliblubli ,
You should do a straight-up comparison: 2.80 vs E-Cycles, with side-by-side screencaps of the UI render settings along with the final render. You can stamp render times onto the render, but it shouldn’t matter as long as you also show the UI. Just saying, a couple of screengrabs will speak for themselves, because it sounds like some of these guys simply need a bit more info and clarification.
I did the above in the thread link that I sent you earlier, showing that my renders were right about 10 times faster than 2.80 at 1024 render samples, with no compositing tricks, on that Optimus Prime mesh. I double-checked; just look at the render times. During this bench is when I started using that tile-size trick I also mentioned to you earlier, so I know it offers a little speed boost when rendering with CPU + GPU.
Anyway, remember to include the UI with all pertinent settings. A screengrab is simpler and faster, and you don’t have to explain a single thing, so it’s less work for you and way fewer hassling questions. Like I said, this is how I do it on my thread, and not one person has asked me a rude question or treated me poorly, simply because all pertinent info is in my screengrabs, so nothing is misunderstood.
Keep doing what you’re doing man… you’re making magic.
You are extrapolating to V-Ray, Corona, etc. Looking at the comments in this thread https://blenderartists.org/t/1146048 , which happens to have been on the top row of this forum for several days, the definition of what a good render is is very elastic, with some people even arguing a lot (in other threads too) that Eevee is the future and is as good as Cycles but real-time, etc. My point was to show that if you find what Eevee produces good, you can obtain the same results in about the same time with Cycles. With Eevee, if your client says your render looks fake (and some will), you have to redo it in another engine. But with E-Cycles, you can also make your Eevee-level render look like V-Ray pretty fast, if you are OK with waiting 10 minutes instead of 20 seconds.
Now, about the renders: I totally agree that photorealistic is not always better looking. Increasing bounces also increases brightness in hard-to-reach areas and may lower contrast. Many people then use a higher-contrast Filmic profile to make the shadows darker again. The third render still uses 4 bounces, which is exactly the same as what Evermotion sells; the point here was to show how E-Cycles can perform when using the same level of tricks, while keeping most of the look of the full-GI render.
Actually, I think I should show the next renders without any information (neither render time nor bounces, etc.). Nearly every time I did the test, people would say the darker render is the most realistic one, although the brighter one was the one with full GI and no tricks.
Is it possible for you to create a Metal E-Cycles version for macOS? What would be the estimated total cost of creating such a version?
Everything is possible, but it would be a huge undertaking. I have enough to do at the moment, so it would be another developer anyway. A very, very rough estimate, based on the fact that it took about a year to reorganize the OpenCL split kernel and make it available for CPU and CUDA: that was only a subset of Cycles and still used the same API, so I would say rewriting all of it for a new API would take 3x more time. At current Blender Foundation rates, it would cost 36 × 5K = 180,000 euros, or about 210,000 $.
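The back-of-the-envelope math above can be checked in a few lines. The 36-month figure and the monthly rate are from the post; the EUR-to-USD rate of about 1.17 is my assumption to reproduce the quoted dollar figure:

```python
# Rough port-cost estimate: 3x the ~1 year the OpenCL
# split-kernel rework took, at the quoted monthly rate.
months = 12 * 3          # ~3 years of developer time
rate_eur = 5_000         # monthly rate in EUR, from the post
eur_to_usd = 1.17        # assumed exchange rate at the time

cost_eur = months * rate_eur
cost_usd = cost_eur * eur_to_usd

print(cost_eur)          # 180000
print(round(cost_usd))   # 210600
```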
Thanks for your answer. I expected more or less that amount of money, but three years seems like an eternity. I realize more developers could work on it, but it does indeed seem like a huge project.
We can only hope that Apple will stop sulking about Nvidia, and that the BI will notice that the Mac is suitable for work, as I did (and many others).
A small sneak peek of my work in progress. With a little bit of work, the Intel denoiser works pretty well with animation: 36 spp, 23 sec per frame in full HD using a single 1080 Ti.
Edit: By the way, this new node works on all architectures (CPU, OSL, OpenCL, CUDA), and there is a beta build that makes CPU rendering faster (about 11% on a 6700K and 17% on a Ryzen in this classroom scene) available on the product page.
These are indeed impressive results. I am curious about one more thing. Could you try to simply render the first image, exactly as it was set up by Evermotion with default Cycles, and then, without any modifications to settings, render it with E-Cycles?
One thing I am really curious to see is speed difference on identical image. I mean seeing the exact same image with different render times is what I am after.
Yesterday I tried E-Cycles for the first time: I opened a production scene of mine without touching anything and rendered with it.
Got 3 minutes and 20 secs.
With vanilla Cycles I got 4 minutes and 30 seconds (or even a bit more, I don’t remember exactly).
I will run more tests next weekend.
Keep in mind I haven’t touched anything in the settings. The thing I forgot was to compare the two images side by side; I haven’t noticed any big difference, but I can’t be sure. I will check.
All in all, the first tests with E-Cycles are promising, and Intel OIDN is pretty impressive, something vanilla Blender does not have at the moment.
The scene was a full interior with tens of lights, many shaders, and a few million polygons.
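For what it’s worth, those two times work out to roughly a 1.35x speedup; a quick sketch, using the render times quoted above:

```python
# E-Cycles vs vanilla Cycles render times from the test above.
ecycles_s = 3 * 60 + 20    # 3 min 20 s -> 200 s
vanilla_s = 4 * 60 + 30    # 4 min 30 s -> 270 s

speedup = vanilla_s / ecycles_s
print(f"{speedup:.2f}x")   # 1.35x
```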
Why post 2 identical images? Anyway, with the exact same noise pattern, it’s 1.4x faster in this scene, but actually nobody uses this option. It was broken for 2 weeks and nobody reported it.
There appears to be something wrong with the video. I can neither view it on my desktop nor on my laptop.
I used mov this time like @rawalanche suggested, although Blender defaults to mkv. At least with Firefox it works. It’s standard h264. Did you try with Chrome or Firefox? IE/Edge are known not to support standards well. I will add an mp4 version to see if it works better.
In my opinion (and I forgot to suggest it earlier), the mp4 container gives fewer problems browser- and platform-wise.
I’m on Firefox. My desktop is a Windows 10 machine, and I just get a blank video player. If I press play it disappears.
On the laptop, which is a Mac, the video plays, but it is just a green screen with some of the contours of the video visible.
I normally have no problem with mov or h264 codecs.
I hope it’s better now with mp4?
Another question for you @bliblubli: are these speed gains still there with command-line rendering? And what are your changes about: different sampling algorithms? Eliminating Blender’s bottlenecks?
It works with command-line rendering, of course. There are many points: better sampling, code that makes better use of parallelism, and auto tile size, which saves time both when rendering and when setting up the scene. And now a very powerful AI denoiser.
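For anyone who hasn’t tried command-line rendering, a minimal sketch of launching a background render from Python. The `.blend` filename and output pattern are made up for illustration; the flags are Blender’s standard CLI options (`-b` background, `-E` engine, `-o` output pattern with `####` frame padding, `-a` render the whole animation):

```python
import shlex

def build_render_cmd(blend_file, out_pattern="//frames/frame_####"):
    """Build a background-render command line for Blender."""
    return [
        "blender", "-b", blend_file,  # run headless on this file
        "-E", "CYCLES",               # choose the render engine
        "-o", out_pattern,            # where frames are written
        "-a",                         # render the full frame range
    ]

cmd = build_render_cmd("scene.blend")
print(shlex.join(cmd))
# Launch with: subprocess.run(cmd, check=True)
```

The function only builds the argument list; passing it to `subprocess.run` avoids shell-quoting issues with paths that contain spaces.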
Why? Because if you have 2 images with the same setup, you can truly see whether the optimizations impact only performance, or also quality. You previously posted one image that is brighter, with more bounces and a different setup. Sure, that’s fine, but that’s not how comparisons are done.
It’s good to see that E-Cycles not only renders faster, but also renders faster with more bounces. But first you need to establish a baseline for measuring performance alone, before you start comparing apples to bananas.