Intel Rocket Lake alleged benchmarks; a regression in multi-core?

I am not making up the regression bit: while it appears that Intel has jumped ahead of Ryzen in single-core, the rumors suggest it will actually be slower in heavily threaded tasks than Comet Lake.

Considering how rumors often hype up an upcoming chip’s computing speed compared to the competition, this was an unusual sight. One possible reason is that the chip is based on a new architecture that was meant for the 7nm process but had to be backported to 14nm. I’m not saying the regression will be present on release, but it does throw some uncertainty on whether we can continue to expect a reliable upward trend in performance, especially as we do not yet know how today’s software will handle the big.LITTLE design Intel is planning with Alder Lake (which may also need to be backported).

Alder Lake will surely be a bad choice for rendering; those little Atom cores are just not going to be able to handle it. Sure, it might be good with the big cores, but it’s going to have a core limit if it’s a monolithic design, so that rules it out for high-end desktop and creative users. AMD Zen 4 will wipe the floor with it.


More Rocket Lake information: a 14 percent IPC increase is rumored.

Here’s the thing though: Rocket Lake’s Sunny Cove architecture should’ve given Intel what it needs to again be the unquestioned king of computing, if only they had gotten their 7nm process working. It could be that an otherwise huge leap in performance was cut back significantly by the backport to 14nm, so their slide shows them just eclipsing AMD by a few percentage points in single-core and in lightly threaded tasks.

Of course there is a regression in multi-core when you downgrade the core count. A 5 to 15-ish% IPC improvement doesn’t balance out a 20% loss in core count (it went from 10 cores last gen to 8 cores this gen).
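A rough back-of-envelope sketch of that tradeoff, treating multi-core throughput as simply cores times per-core performance (the IPC figures are the rumored ones from above, not measurements):

```python
# Naive multi-core throughput model: cores x relative per-core IPC.
# Illustrative only; real scaling also depends on clocks, power, and workload.
comet_lake = 10 * 1.00        # 10 cores, baseline IPC
rocket_low = 8 * 1.05         # 8 cores, +5% IPC (low end of rumor)
rocket_high = 8 * 1.15        # 8 cores, +15% IPC (high end of rumor)

print(rocket_low / comet_lake)   # 0.84 -> ~16% multi-core regression
print(rocket_high / comet_lake)  # 0.92 -> ~8% multi-core regression
```

Even at the optimistic end of the rumored IPC range, the model still lands below the 10-core part in throughput.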

But here’s the thing: most consumers don’t need that many cores anyway, and this is a product targeting gamers more than content creators. Single-core performance rules there, so the IPC increase is a relevant improvement for the consumer lineup.

If you are a prosumer who wants to go the Intel route, you are better off going for their workstation platform rather than the consumer one, i.e. get a Xeon.

Or, if you’re not so hung up on Intel, consider AMD. Zen 3-based Threadrippers should arrive this year.

CPU-based rendering is dead in Blender “for now” as far as I am concerned.

BMW scene:

64-core AMD -> 22.55 sec (my CPU renders this in around 238 sec)

RTX 3090 -> 9.96 sec (I can confirm this on my machine too)

Only the “Koro” scene is head to head between the 64-core AMD 3990X and the RTX 3090. Both rendered it in around 40 sec, which is an interesting case.
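To put those BMW numbers in perspective, here is a quick speedup calculation from the times quoted above (the times are this thread’s anecdotal figures, not a formal benchmark):

```python
# Render times (seconds) for the BMW scene, as quoted in this thread.
my_cpu = 238.0    # the poster's own CPU
tr_3990x = 22.55  # 64-core AMD 3990X
rtx_3090 = 9.96   # RTX 3090

print(f"3990X vs my CPU: {my_cpu / tr_3990x:.1f}x")  # ~10.6x
print(f"3090 vs 3990X: {tr_3990x / rtx_3090:.2f}x")  # ~2.26x
```

So even the 64-core flagship CPU is a bit over 2x slower than a single 3090 on this scene.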

That higher core-count performance will still be useful for the rest of the stuff in Blender.

That’s the previous-gen Zen 2-based TR; there’s also the CPU + GPU rendering option to consider. But ultimately it comes down to scene/production-specific needs.

Most scenes will probably render fine on the 3090, but you might hit esoteric issues in more niche cases.

I do reckon for most people, though, a consumer CPU should do fine and GPU rendering will cover their needs well enough, yes.

That is a good assessment. Naturally there are downsides to GPU rendering, like memory requirements. However, I am not seeing how these small incremental CPU improvements will beat Nvidia’s current line of top GPUs in the near future when it comes to rendering in Blender.

GPU+CPU rendering is nice, but it does not work in all cases because the CPU is generally slower than the GPU with tiles, and I found that disabling the CPU tends to lead to faster renders. However, I do not have the top-notch AMD or Intel CPUs at the moment. I am also seeing only marginal increases with the tile stealing that was implemented recently.

You can’t buy the 3090, which is the only viable GPU-only Blender rendering solution because it has adequate VRAM; it’s out of stock everywhere, and 2021 will likely bring worse stock issues.

Ideally I wouldn’t want a downgrade in memory, and I already have 32GB of RAM. So CPU rendering is not dead; it’s only dead for small scenes, and only if you manage to find an RTX 3000 series card in stock.

For large scenes a Threadripper with 128GB of RAM is much preferable, because although it’s slower than a 3090, it’s still going to render in a reasonable timeframe. If money were no object I think I’d opt for a dual-socket EPYC system.

Also worth mentioning: high-end cloud GPU nodes are very expensive, and I am not sure any cloud provider offers the current line of RTX cards as an on-demand service.

I naturally agree that a monster PC with a lot of RAM is a very stable choice for long-term rendering. However, something like the 3090 is still the king of desktop rendering in Blender. I got a 3090 and suddenly went from 56 sec (hybrid BMW scene with an i7-8700K + GTX 1070) to 10 sec without upgrading anything else. Plus the power usage is down, because this is GPU-only rendering. It would have been a much more expensive and involved upgrade if I had gone for a CPU-based solution to get to a number close to that.
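For what it’s worth, the single-upgrade speedup described above works out as follows (again just this thread’s anecdotal timings):

```python
# Speedup from swapping hybrid CPU+GPU rendering for a 3090-only render.
before = 56.0  # seconds: i7-8700K + GTX 1070 hybrid, BMW scene
after = 10.0   # seconds: RTX 3090 alone

print(f"{before / after:.1f}x faster")  # 5.6x
```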

I don’t know whether this is a bad sign for Intel’s confidence in their next two Lake products, but their current CEO has been shown the door.

By the looks of things, Intel is at least trying to get closer to having their own version of Lisa Su (i.e. an electronics guy who knows his way around computing). This could have interesting implications for the generations after Alder Lake.