@CarlG
YAFU is right, 3.0 even renders the BMW scene (400 samples, 32x32 tiles) a bit faster, strange.
One of my test files uses SSS; maybe the new SSS is slower.
I have to run more tests.
Cheers, mib
Blender 3.0 only has the Random Walk methods. They are supposed to be fast and accurate.
If your old file uses the Gaussian method, it would not be surprising for it to render faster in 2.93, but the result should not look as good as in 3.0.
Hi, does anyone have any news about caustics in Cycles X?
As far as I know, there hasn't been much work in that area. The first stage of development on Cycles-X was mainly to modernise the underlying code and get the existing Cycles features working.
Features such as path guiding may come later.
Is the front right wheel broken for you in the BMW scene?
It seems broken in all versions from 2.81 onwards. You could report the problem to see if this is a bug with some of those constraints.
EDIT:
Doing this fixes it, although I'm not sure whether the 2.7x file should open correctly by default anyway:
Hi, I tested both files, and the latest one from https://www.blender.org/download/demo-files/ really is a bit faster in 3.0, but only by 3-4%.
Cheers, mib
IMHO, the BMW scene is too small a test scene to give meaningful benchmark results on today's hardware. It's like test-driving a race car in a parking lot.
Some of us are still using 2014's hardware and don't do anything more complex than a stereotypical Keyshot render, so how fast the BMW scene renders on CPU on a Mac is very relevant.
Benchmarks based on the lowest common denominator hardware are irrelevant. So the more relevant it is for someone like you, the less relevant it is overall.
Hardware from 2014 was already benchmarked 7 years ago. Benchmarks are primarily used to benchmark newly released hardware.
That is why I had asked that. I don't get a CPU performance regression in the couple of official sample scenes that I tested. So it is difficult to deduce what the problem may be, or to know whether other users are having it, if the scene is not available for us to test.
Hi, I tested the barbershop scene on my old i5 with reduced settings:
Resolution 50%
Samples 128
2.93
Render time (without synchronization): 459.96
3.0
Render time (without synchronization): 404.442
It seems only my own files are faster in 2.93, fine.
Cheers, mib
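For what it's worth, the relative speedup from the render times posted above works out to roughly 12%; a quick sketch of that arithmetic:

```python
# Render times from the barbershop test above (seconds, sync excluded).
t_293 = 459.96
t_300 = 404.442

# Percentage improvement of 3.0 over 2.93.
speedup_pct = (t_293 - t_300) / t_293 * 100
print(f"3.0 renders this scene {speedup_pct:.1f}% faster")  # ~12.1%
```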
Along with Brecht and Sergey fixing regressions in CPU rendering performance (by having the CPU run different code that better fits its capabilities), there are also optimizations going in that improve CPU rendering speed a little further; in this case Optix benefits as well.
rB04857cc8efb3 (blender.org)
In other words, since Embree is now the way to build BVH trees, it stands to reason that code designed for the old custom implementation can be redone, removed, or changed. The only downside to this commit is that CUDA will see a slight performance regression, but the reduction in memory usage will be found on all platforms.
To note, the devs' reasoning for not giving CUDA nearly as much attention is that its usage has been dwindling considerably in favor of Optix and it is now becoming a distant third (behind CPU users as well), so there may come a time when it is regarded as a legacy platform as far as rendering goes.
I hope not. In my case, on a dual GTX1070 system, CUDA is still around 20% faster than Optix.
I haven't tested those yet; not sure when I'll be able to. I only tried my startup file, which is basically a simple room with a window, a couple of point lights, and no world. Pre Cycles-X I'm using a 48 x 54 tile size (1920/10/4 x 1080/10/2), min light bounces: 2, 8 diffuse bounces. Cycles-X takes 3 minutes vs 1 minute with these settings.
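In case the tile-size parenthetical above is unclear, the 48 x 54 tiles simply come from dividing the render resolution down; a one-liner sketch:

```python
# Tile size derived from a 1920x1080 render resolution,
# as in the (1920/10/4 x 1080/10/2) note above.
width, height = 1920, 1080
tile_w = width // 10 // 4   # 48
tile_h = height // 10 // 2  # 54
print(tile_w, tile_h)  # 48 54
```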
This is great news! Scrambling distance in master would be awesome. I have never rendered a frame without it in the last few years (using E…, K…, and whatever-Cycles). Octane has this feature too.
That would be fatal.
Maybe someone should show the devs the current graphics card prices.
Optix is supported down to Kepler (GTX 6xx-7xx).
How is this nonsense? A point in space has a coordinate, and this is giving you that coordinate in centimeters. If you want to locate or shift a point (incoming vertex coordinates, for instance) precisely in your scene in a shader, then you need that location in world space. A point in space is not an arbitrary vector; it has a location, so it is showing you that location in the scene units.
It won't show units for other types like vectors or normals, since they do not have a "location" to work with. I think the main issue here is that the Mapping node does not necessarily know the type of the incoming socket values, so a location in cm might make sense with some values and not with others. It is generally a good idea to set the mode of the Mapping node manually according to the incoming socket type.
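The point-vs-vector distinction above is the standard homogeneous-coordinates one; a minimal sketch (hypothetical helper, not Blender API) showing why a translation moves a point but leaves a direction vector untouched:

```python
# Sketch: in homogeneous coordinates a point has w = 1, so translation
# affects it; a direction vector has w = 0, so translation does not.

def transform(m, v, w):
    """Apply a 4x4 row-major matrix m to coordinates v with homogeneous w."""
    x, y, z = v
    return tuple(
        m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] * w
        for r in range(3)
    )

# Translation by 10 units along X (think scene units, e.g. centimeters).
M = [
    [1, 0, 0, 10],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

p = (1.0, 2.0, 3.0)
print(transform(M, p, 1))  # point:  (11.0, 2.0, 3.0) -- it moved
print(transform(M, p, 0))  # vector: (1.0, 2.0, 3.0)  -- unchanged
```

This is why a "location in cm" only makes sense for point-type sockets: a normal or direction vector has no position for the units to describe.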
Yes, Optix is supported, but on older (10th gen) cards, CUDA is still measurably faster.