There was a bug I encountered…but only when using Fusion with CUDA on Debian (regardless of Intel or AMD). It had to do with the CUDA library installation paths…
The fix there was to launch Resolve from a bash script like:
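The original post doesn't include the script itself, but a minimal sketch would be a wrapper that prepends the CUDA library directory to the loader path before starting Resolve. The paths below are assumptions (a typical `/usr/local/cuda` install and the default Resolve location); adjust them to your system.

```shell
#!/bin/bash
# Hypothetical wrapper -- both paths are assumptions, adjust to your install.
# Make sure the dynamic loader finds the CUDA libraries first.
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH:-}"
# Launch Resolve with the adjusted environment, passing any arguments through.
if [ -x /opt/resolve/bin/resolve ]; then
    exec /opt/resolve/bin/resolve "$@"
fi
```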
So cool that 980 Ti cards are tested and listed, and pretty kickass to know that my three EVGA 980 Ti Hybrids are still none-too-shabby! They have been serving me quite well with E-Cycles… maybe I do not need to upgrade anytime real soon! Thanks for posting that BB!
I have yet to put the Custom Build feature on Barista through its paces but it looks like it will work!
Here is a question, though. Could E-Cycles have an option to automatically cache all simulations etc before rendering? Then we could avoid uploading simulation cache files and use the power of AWS to simulate at super high res. That would be an awesome feature.
I have also asked one of the Barista devs about support for addons and how that works with the custom build feature. Maybe it would take a similar feature to above where the activation of addons happens in the custom build before rendering?
I apologize if any of my assumptions are mind numbingly stupid! Feel free to enlighten me haha.
For the cache, I think it would be hard to manage all cases correctly. Calling a simple script that does what you want (bake all caches, then render the animation) with blender -b your.blend -P your_script.py would be much better. There is an example here for baking a FLIP Fluids sim from the command line: https://github.com/rlguy/Blender-FLIP-Fluids/wiki/Baking-from-the-Command-Line. Depending on what you want to bake, the API call is different. You can hover over the UI button for the bake you want and the Python call will be shown in the tooltip (on 2.8, you may have to enable that option in User Preferences -> Interface).
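As a sketch of the script idea above, the following bakes every point cache in the file and then renders the animation. This assumes a point-cache based simulation (cloth, soft body, standard smoke); addon sims like FLIP Fluids use their own bake operators, as in the wiki link. The import guard just makes the file a no-op when it isn't run inside Blender.

```python
# Hypothetical bake-then-render script, run headless with:
#   blender -b your.blend -P bake_and_render.py
try:
    import bpy
except ImportError:
    bpy = None  # not running inside Blender; do nothing


def bake_and_render():
    # Bake every point cache in the file, overwriting any stale caches.
    bpy.ops.ptcache.bake_all(bake=True)
    # Then render the full animation using the scene's output settings.
    bpy.ops.render.render(animation=True)


if bpy is not None:
    bake_and_render()
```

The same two-step structure works for other bake operators: swap the `ptcache.bake_all` call for whatever operator the tooltip shows for your simulation type.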
For the addons, there is already a solution. Activate all the addons you want, then copy the 2.xx/config folder from your home folder to the 2.xx folder of the E-Cycles build, and voilà: you have a portable E-Cycles with all your configs wherever you start it, including from the command line, which combines with the script solution above.
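On Linux that copy might look like the sketch below. The version number and the E-Cycles install path are placeholders; the user config normally lives under ~/.config/blender/ on Linux, but adjust both to your setup.

```shell
#!/bin/bash
# Placeholders -- set these to match your Blender series and build location.
BLENDER_VER="2.83"
ECYCLES_DIR="${ECYCLES_DIR:-$HOME/e-cycles}"
SRC="$HOME/.config/blender/$BLENDER_VER/config"

# Ensure the version folder exists inside the E-Cycles build...
mkdir -p "$ECYCLES_DIR/$BLENDER_VER"
# ...then copy the user config (addons, prefs) into it, if present.
if [ -d "$SRC" ]; then
    cp -r "$SRC" "$ECYCLES_DIR/$BLENDER_VER/"
fi
```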
So everything is possible now
I will test tomorrow the releases on Debian 9 with i7-7800K and GTX1050 2G in the office…
just for testing…though I don’t expect any issues…just to be sure (o;
Otherwise the machine is just too slow for rendering…I mostly just use it for modelling and render at home (o;
I already have a fixed build for Ryzen for distros with glibc 2.27+ (Ubuntu 18.04 and up) available on the market. Still fighting with older builds as they seem to really prefer Intel processors
You’re welcome. Note that if your AWS instance has multiple GPUs, the 20190516 build for 2.8 would be the best one to render with. If you use E-Cycles 2.79x, the latest builds are the best ones.
The interior animations I render take around 10-20 sec per frame, so most of the time the upload and download times make farms useless in my case. But I’ll be happy to hear how that works for you.
It seems E-Cycles already brings great speed-up thanks to the AI denoiser on CPU instances, so GPU should be cool
I’ll let you know how things go once I actually get some heavier non test projects rendered! Thanks for the additional tips!
I can use my local machine (980 Ti + 1080 Ti) to render lots of stuff, but I am in love with microdisplacement/adaptive subdivision and Cycles volumetrics, and BVH prep time and volumetric render times take forever even with E-Cycles.
It’s also worth noting that when I say volumetrics I don’t mean standard smoke sims featured on camera - those render quickly enough! What I mean is the camera/scene being immersed in a huge volume, whether it is a sim or a noise-derived volumetric pattern. It does wonders to “sell” scenes but it’s slowwww. I have tried using mist passes and EEVEE volumetric output but it’s just not the same…
I’m working on a glibc 2.23 build so that it works on Debian 9 too
Regarding the notifications, @bartv already knows about this I think and had a fix?