Now this is a GeForce Titan!

GTX 750 Tis won't go over 70°C because they use so little power.

Are they supported already? I've read bug reports saying they don't work with Cycles.

If we go by the GTX 750 Ti's render times, Maxwell (GM) is going to be a great improvement for Cycles. Keep in mind that it's still on 28nm, so we can make some direct comparisons with Kepler (GK).

If Nvidia keeps its naming convention, this is the bottom-of-the-barrel Maxwell chip for desktops, so mid-range and high-end Maxwell cards will be blazing fast.

If we compare die sizes, the GTX 750 Ti at 148 mm² is about as fast as the GTX 660 (GK106) at 221 mm². Normalizing that equal performance by die area gives an index of 0.67 for the GTX 750 Ti versus 0.45 for the GTX 660, which works out to roughly 50% more Cycles performance per square millimetre. It's even more impressive when you take into account that the GTX 750 Ti's VRAM bandwidth is rated at 88 GB/s while the GTX 660's is rated at 144 GB/s.
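Here's a minimal back-of-the-envelope sketch of that per-area comparison, assuming the two cards deliver roughly equal Cycles performance and treating the index as relative performance per 100 mm² of die area; only the die sizes come from the post above, everything else is illustrative:

```python
# Hypothetical per-die-area comparison; assumes both cards render equally fast (1.0).
cards = {
    "GTX 750 Ti (GM107)": {"die_mm2": 148, "relative_perf": 1.0},
    "GTX 660 (GK106)":    {"die_mm2": 221, "relative_perf": 1.0},
}

for name, c in cards.items():
    # Index = relative performance per 100 mm^2 of die area
    index = 100 * c["relative_perf"] / c["die_mm2"]
    print(f"{name}: index {index:.2f}")

# Ratio of the two indices -> ~1.49, i.e. roughly 50% more performance per mm^2
ratio = (100 / 148) / (100 / 221)
print(f"Maxwell vs Kepler per-area efficiency: ~{ratio - 1:.0%} better")
```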

So I would expect the Maxwell equivalent of GK104 (the GTX 770) at ~300 mm² to be really close to the current GK110 chips.

If Nvidia keeps that jump in compute capability for the GM110, then the highest-end Maxwell is going to be really crazy for Cycles.

You seem terribly enthusiastic. The way I see it, it's going to be the first single card to offer a significant speedup for Cycles since Fermi launched in 2009.

WTF, $2900 for that. NVIDIA is getting nuts these days. Can't wait to see their SoC creations in other devices, like tablets or smartphones. I just saw their Tegra K1 and it's mind-blowing. Hopefully, more and more manufacturers will use their GPUs in next-gen devices. Actually, I'm planning to buy an HP Slate 7 Extreme or a Tegra Note, given that they have the Tegra 4 SoC.

I have an older, dual 9800 GX2 setup as a TV gaming station, and I was running dual 7990s for a while but was forced to take one of them out after the most recent driver updates caused irreversible instability. I've been thinking about upgrading to Windows 8 anyhow.

My opinions on the whole thing are mixed. Both computers required 1100-watt power supplies, and the heat from even a single 7990 means a case that hums like a jet engine. The worst part is the high-pitched harmonics that come out under load.

But for my rendering work, it’s worth it. GPU rendering speed was fantastic. I was able to do complex scenes in near-real time. It’s the only reason I stopped using Luxrender and moved to Cycles; I was able to have a rendered viewport.

Moreover, my power and temperature problems didn't manifest much during rendering. Ray-tracing is very easy on GPUs. Temperatures rarely exceeded 65 degrees, and total power draw for both 7990s was only about 750 watts.

Whether I was running a single 7970 GHz Edition, a 7990, or dual 7990s, my computer lagged while it was rendering (I should point out that this is a Windows 7 box). The advantage to having all of that power is that you are forced to give up your computer for less time.

A single GPU is always better than two, assuming equal performance. Thermal characteristics are easier to handle, power usage is lower, less space is used up, stability issues are fewer… and on and on. I’d recommend doing what I did: buy a single, top-end card (in my case, a 7970 a year and a half ago), and then plan to add more of that card as prices go down. And if you like Blender, go with Nvidia. I’m a total AMD fanboy, but Nvidia’s better for rendering right now. Support in Luxrender, Cycles, Octane, and better Linux drivers all make Nvidia a very compelling product.

And while on the subject of Nvidia, this friggin’ card is a money grab. I read speculation that this is because professionals were skipping over Quadro and Tesla cards in favor of the original Titan. And since Nvidia owns 80-90% of the professional GPU market, they feel safe in this move.

Because this makes no sense. They try to justify the hilarious price increase with this garbage from their press release:

“Unlike traditional dual-GPU cards, Titan Z’s twin GPUs are tuned to run at the same clock speed, and with dynamic power balancing. So neither GPU creates a performance bottleneck.”

That is PR nonsense. Every card and every processor has advanced power management at this point. AMD blathered on about the same power-saving features with the 7990. The end result? A card that drew less power, but primarily because AMD carefully binned the chips at the point of manufacture. And that card cost only a grand, so Nvidia's argument holds no water there.

And tuned to run at the same speed? So, that means that if one chip thermally throttles performance, then the other chip will follow suit and lower its core clock? Wow. That’s advanced.

It's just two Titans, slapped together. I wouldn't be surprised if it's even two Titans rather than Titan Blacks, since these dual-GPU cards usually sport lower core clocks than the standalone cards, and unsurprisingly so: dumping two GPUs' worth of heat into a single assembly is a lot of work for even a bunch of fans.

Nvidia can call me when they produce a card with 12GB of actual memory for less than my first-born. Because as I discovered with my 7990 rig, if I have the power, I will fill the memory.

I wonder: do you get 12GB shared across both GPUs, or is it split so each GPU has 6GB? That would be awful.

They are not getting nuts. Targeting businesses (read: organizations with money) is good business sense. CUDA is big in research, and there is a lot of money to be made there. The same goes for high-end computer graphics (medical, simulation, etc.).

6GB per GPU.

http://www.techpowerup.com/199247/nvidia-announces-the-geforce-gtx-titan-z.html

Ah, I was hoping they were both wired to the same VRAM, so a scene and its textures could be loaded into the full 12GB and both GPUs could draw from that pool to render.
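For what it's worth, multi-GPU renderers like Cycles generally upload a full copy of the scene to each device, so the usable pool is the smallest single GPU's VRAM, not the sum. A minimal sketch of that arithmetic, with a made-up scene size:

```python
# Sketch: each GPU on a dual-GPU card holds its own copy of the scene,
# so usable memory is per-GPU, not the advertised total.
# The scene footprint below is a made-up number for illustration.
gpus = [
    {"name": "Titan Z GPU 0", "vram_gb": 6},
    {"name": "Titan Z GPU 1", "vram_gb": 6},
]

scene_gb = 7.5  # hypothetical scene + textures footprint

usable_gb = min(g["vram_gb"] for g in gpus)  # not sum(): the scene is mirrored
print(f"Advertised VRAM: {sum(g['vram_gb'] for g in gpus)} GB")
print(f"Usable per render: {usable_gb} GB")
print("Scene fits:", scene_gb <= usable_gb)  # False -> the scene would need to shrink
```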

It's basically just two Titan cards… for the price of three. And there are 780s on the way with 6GB of VRAM. Hmm, Nvidia, well played… well played :smiley: I'm looking for an upgrade, and I'm leaning heavily towards the new 780s on the way, or waiting until summer to see what Maxwell cards get out there on the market.