Is it possible for you to create a Metal E-Cycles version for macOS? What would be the estimated total cost of creating such a version?
Everything is possible, but it would be a huge undertaking. I have enough to do at the moment, so it would have to be another developer anyway. A very, very rough estimate, based on the fact that it took about a year to reorganize the OpenCL split kernel and make it available for CPU and CUDA: that was only a subset of Cycles and still used the same API, so I would say rewriting all of it against a new API would take 3x more time. At the current Blender Foundation rates, it would cost 36 * 5K = 180,000 euros, or about 210,000 $.
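For reference, the back-of-the-envelope math above can be sketched as follows; the EUR→USD factor of 1.17 is my own illustrative assumption to match the quoted figures, and the actual rate fluctuates:

```python
# Back-of-the-envelope cost estimate from the post above.
# Assumptions: 3 years of work (3x the one-year split-kernel rework),
# 5,000 EUR per developer-month, and an illustrative EUR->USD rate
# of about 1.17.
months = 3 * 12                 # three years of developer time
rate_eur = 5_000                # EUR per developer-month
total_eur = months * rate_eur   # 180,000 EUR
total_usd = total_eur * 1.17    # roughly 210,000 USD
print(total_eur, round(total_usd))
```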
Thanks for your answer. I expected more or less that amount of money but three years seem like an eternity. I realize that more developers could work but indeed it seems like a huge project.
It remains to be hoped that Apple will stop sulking at Nvidia, and that the BI will notice that the Mac is suitable for this kind of work, as I did (and many others).
Small sneak peek of my work in progress. With a little bit of work, the Intel denoiser handles animation pretty well: 36 spp, 23 sec per frame in full HD using a single 1080 Ti.
New builds for 2.8x and 2.7x will come soon with better performance when using this new denoiser.
Edit: By the way, this new node works for all architectures (CPU, OSL, OpenCL, CUDA), and there is a beta build that makes CPU rendering faster (about 11% on a 6700K and 17% on a Ryzen in this classroom scene), available on the product page.
These are indeed impressive results. I am curious about one more thing. Could you try to simply render the first image, exactly as it was set up by Evermotion with default Cycles, and then, without any modifications to settings, render it with E-Cycles?
One thing I am really curious to see is the speed difference on an identical image. Seeing the exact same image with different render times is what I am after.
Yesterday I tried E-Cycles for the first time: I opened a production scene of mine without touching anything and rendered it.
Got 3 minutes and 20 secs.
With vanilla Cycles I got 4 minutes 30 (or even a bit more, I don't remember exactly).
I will run more tests next weekend.
Keep in mind I haven't touched anything in the settings. The thing I forgot was to compare the two images side by side; I haven't noticed any big difference, but I can't be sure. I will check.
All in all, the first tests with E-Cycles are promising, and Intel OIDN is pretty impressive, something vanilla Blender does not have at the moment.
The scene was a full interior with tens of lights, many shaders, and a few million polygons.
Why post 2 identical images? Anyway, with the exact same noise pattern, it's 1.4x faster in this scene, but hardly anybody uses this option. It was broken for two weeks and nobody reported it.
There appears to be something wrong with the video. I can neither view it on my desktop nor on my laptop.
I used mov this time, as @rawalanche suggested, although Blender defaults to mkv. At least with Firefox it works; it's standard h264. Did you try Chrome or Firefox? IE/Edge are known not to support standards. I will add an mp4 version to see if it works better.
In my opinion (and I forgot to suggest it earlier), the mp4 container causes fewer problems browser- and platform-wise.
I'm on Firefox. My desktop is a Windows 10 machine, and I just get a blank video player; if I press play, it disappears.
On my laptop, which is a Mac, the video plays, but it is just a green screen with some of the contours of the video visible.
I normally have no problem with mov or h264 codecs.
I hope it's better now with mp4?
Another question for you @bliblubli: are these speed gains still there with command-line rendering? What is your coding about? Different sampling algorithms? Eliminating Blender's bottlenecks?
It works with command-line rendering, of course. There are many points: better sampling, code that makes better use of parallelism, and auto tile size, which saves time both during rendering and while setting up the scene. And now a very powerful AI denoiser.
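The E-Cycles internals are not public in this thread, so purely as an illustration of the auto-tile-size idea: a heuristic might pick the largest tile edge near a device-dependent target that divides the frame evenly, so no thin leftover tiles are produced. The function name and the exact rule below are my own hypothetical sketch, not the add-on's actual code:

```python
# Hypothetical auto-tile-size heuristic, NOT the actual E-Cycles code.
def auto_tile_size(width, height, gpu=True):
    # GPUs prefer big tiles (fewer kernel launches); CPUs prefer small
    # tiles so every core stays busy until the end of the render.
    target = 256 if gpu else 32

    def best_edge(dim):
        # Prefer edges that divide dim exactly; among those, pick the
        # one closest to the target size.
        return min(range(target, 8, -1),
                   key=lambda t: (dim % t, abs(t - target)))

    return best_edge(width), best_edge(height)

print(auto_tile_size(1920, 1080))             # GPU: (240, 216)
print(auto_tile_size(1920, 1080, gpu=False))  # CPU: (32, 30)
```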
Why? Because if you have 2 images with the same setup, you can truly see whether the optimizations impact only performance, or also quality. You previously posted one image that is brighter, with more bounces and a different setup. Sure, that's fine, but that's not how comparisons are done.
It's good to see that E-Cycles not only renders faster, but also renders faster with more bounces; still, you first need to establish some basis for measuring just performance, before you start comparing apples to bananas.
@rawalanche I updated my post, but that's so dumb: even the forum noticed it's the same image (it calculates hashes, and the hash is the same… so it just keeps the first one).
Edit: it even replaced the file name with the one from the other render, which makes it look like a fake. You can download the image twice if you want and compare the two downloads.
Thanks for the update bliblubli, so this comparison is vanilla vs. E-Cycles with the same sampling algorithm, 18:10 min vs. 13:07, right? I could imagine some people would like to see your new sampling algorithm in comparison (same settings for bounces, no caustics, clamping, etc.) to see the speed gains. Personally, I would love to see the images without denoising to judge noise quality, and afterwards separate tests comparing vanilla denoising vs. OIDN.
To make things clear in advance, I will be future E-cycles customer and I really appreciate your hard work!
Yes, as @rawalanche asked, I used the option nobody uses to get the exact same sampling as vanilla Blender (so the exact same rendering), which is why it's only 1.4x faster in this case, instead of 2.1x with the new algorithm. I will do a comparison of vanilla sampling/denoising vs. E-Cycles sampling/denoising if you want.
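As a quick sanity check, the 1.4x figure lines up with the 18:10 vs. 13:07 render times quoted earlier in the thread:

```python
# Convert the quoted render times (min:sec) to seconds and compute the
# speedup of E-Cycles over vanilla Cycles with identical sampling.
def to_seconds(mmss):
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

vanilla = to_seconds("18:10")   # 1090 s
ecycles = to_seconds("13:07")   # 787 s
print(f"{vanilla / ecycles:.2f}x faster")  # about 1.39x, i.e. the ~1.4x quoted
```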
Ok, cool!
It's not really about what I want; it is perhaps more about how you present the results of your work in the best way possible, to prevent speculation, misunderstandings, etc.
Are there speed gains through faster pre-processing yet? If so, please show analytic results.
The faster pre-processing is a work in progress. The next update will bring faster rendering in most scenes (about 20%) with the exact same output. The pre-processing speedup will come later.