Oops already posted
When I click on the link I get an HTTP error.
Yup, the link to the FAQ is broken. Is this info anywhere else on the web?
Hey guys! I was wondering, and searching on Google, about the possibility of using the Blender network renderer with the GPU. I know this feature was developed during the Mango project and that the complexity of scenes sometimes works against the GPU's capacity. But I've got a 4 GB graphics card which could easily handle all of my shots, and it seems such a waste to have a computer rendering on the CPU while the GPU is resting. Is there any way to send jobs from the network renderer to both CPU and GPU (almost like two separate slaves)? That hack would double everyone's render speed.
Well, if Cycles were hybrid this could be done. Not yet.
For now you can send a rendering job to another computer with different settings (GPU & noise), then composite the results into one image.
I'm not sure if I should be posting here, but I have read (most of) this thread and need to ask for an opinion.
I'm running Windows with an out-of-date Nvidia card (and a poor spec).
The trouble is, there are so many cards out there, it's difficult to know which way to go…
I'm looking at three cards at the mo:
Any advice on which to go for? I've also heard that the number of CUDA cores is a KPI?
Thanks
Go for the 960, more computing power per watt.
The GTX 960 has 1024 CUDA cores, whereas the GT 740 has 384.
The GT 740 draws 64 W while the GTX 960 draws 120 W. The GTX 960 is more than twice as powerful as the 740 and may use less energy over a long render (64 W for 20 minutes is more than 120 W for 10 minutes, right?).
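For what it's worth, a quick arithmetic check of that energy comparison (the 20- and 10-minute render times are illustrative guesses, not benchmarks):

```python
# Energy used = power draw x time. Compare a slower low-power card
# with a faster high-power one (hypothetical render times).
def energy_wh(watts, minutes):
    """Energy in watt-hours for a given draw and duration."""
    return watts * minutes / 60.0

gt740 = energy_wh(64, 20)    # slower card, longer render
gtx960 = energy_wh(120, 10)  # faster card, shorter render

print(f"GT 740:  {gt740:.1f} Wh")
print(f"GTX 960: {gtx960:.1f} Wh")
print("GTX 960 uses less energy:", gtx960 < gt740)
```

So yes, the faster card can come out ahead on total energy despite the higher draw, as long as it finishes in less than about half the time.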
Even though the 960 is far more expensive, it is the better choice. It is rumored that Nvidia will release a new card soon, probably either a dual-GPU 990 or a new Titan, which would lower the price of the current 980 and likely result in lower prices for the 970 as well. If they are going to, it has to be soon: they have already confirmed the release of the Pascal GPU architecture in 2016. If you can wait until then to upgrade (I know I can't), there will be a huge leap in GPU performance.
Definitely don't get the 1 GB version of the 740. I currently have a graphics card with 1 GB of VRAM, and it's not enough.
Thanks Phith
Exactly the answer I wanted and it also confirms views from other people too. I guess cores win over RAM.
To be honest, the power draw isn't critical as it's for business use, but I absolutely understand your explanation.
I'll also look into the new 990/Titan, although I need this card pretty quickly.
The next best thing is always on the horizon!
I'm not sure I fully understand the relevance of PCI Express 3, but I'm reading up on it now.
Thanks again.
Just to let everyone know, I went with the following card
Iām running on an i7 4770 with 16GB RAM
I had the following problems…
Firstly, on Blender 2.72b, it throws an immediate error:
"Blender CUDA binary kernel for this graphics card compute capability (5.2) not found"
This can be resolved by renaming the file "kernel_sm_50.cubin" (in Blender/2.72/scripts/addons/cycles/lib) to "kernel_sm_52.cubin"
That worked… great… but…
I work on large models with complex geometry, and if I try to switch to rendered view (Shift + Z), it runs out of memory because this card only has 2 GB versus the 16 GB of system RAM I had to start with… OK, never mind, I can work around this by doing a full F12 render using tiles, which only needs to commit the current tile to memory, and that works.
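To give a rough sense of why tiles help, here's a back-of-the-envelope estimate of render-buffer sizes (RGBA 32-bit floats only; the scene's geometry and textures are the real memory hog, so treat these as lower bounds):

```python
# Rough estimate of render-buffer memory: a full frame vs one tile.
# Assumes 4 channels (RGBA) of 32-bit floats per pixel.
def buffer_mb(width, height, channels=4, bytes_per=4):
    return width * height * channels * bytes_per / (1024 * 1024)

full = buffer_mb(1920, 1080)   # whole 1080p frame
tile = buffer_mb(256, 256)     # one 256x256 tile

print(f"full frame: {full:.1f} MB, one tile: {tile:.2f} MB")
```

The per-tile buffer is a tiny fraction of the full frame's, which is why the tiled F12 render squeezes into VRAM where the interactive full-frame preview doesn't.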
Lastly, on large .blend files (I am currently working on a 600 MB file… yes, 600 MB!), there is very little perceptible difference in render speed. I did a test using two identical PCs, one with standard CPU processing and one with my new GPU compute. The end times were very similar.
A disappointing result!
On smaller object files, there is a very noticeable improvement in speed on both the rendered view and in final F12 renders.
That said, a MAJOR advantage is that whilst using the GPU to do the rendering, the CPU is freed up to carry on with my daily tasks and the PC isn't clogged up trying to process the render.
All in all, in my case, I'm not sure it was worth it.
Hi,
I have a GeForce GTX 660 Ti GPU.
I am running Debian Jessie with up to date proprietary nvidia drivers (version 340.65-2).
When I check with nvidia-settings tool I see 1344 CUDA cores.
I don't see CUDA under User Preferences -> System -> Compute Device (only None and then CPU in the selection).
When I start Blender (v2.73a) from the command line it doesn't print any errors/warnings.
Do I need a special build for GPU rendering to work?
Edit:
I've found this, and installing the libcuda1 package solved my problem. CUDA works now.
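In case it helps anyone hitting the same thing, here's a quick sketch of checking from Python whether the driver library that libcuda1 ships is visible to the dynamic loader (`cuda_driver_present` is just my own helper name):

```python
from ctypes.util import find_library

def cuda_driver_present():
    """True if libcuda (the CUDA driver library, shipped by packages
    like libcuda1 on Debian) is visible to the dynamic loader."""
    return find_library("cuda") is not None

print("libcuda found:", cuda_driver_present())
```

If this prints False, installing the distro's libcuda package (or the proprietary driver bundle) is the likely fix, matching what worked for me above.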
Dear all, I would like to upgrade my graphics card. Should I go for the GTX 970 or the Quadro 5000? Besides Blender, the other software I'm using includes ZBrush, Mudbox, and Photoshop. Thanks, and looking forward to your reply.
Hi, there is some software that benefits from the special drivers for Quadro cards.
Maya does, for example. Blender does not profit from them.
CUDA render engines like Cycles or Octane are much faster on gaming cards.
There was a Quadro card in my benchmark for 2.72:
http://www.blenderartists.org/forum/attachment.php?attachmentid=354192&d=1420649558
Cheers, mib
So I am using a MacBook Pro 15-inch Retina Display; it has an Nvidia GeForce GT 750M CUDA-supported video card. I got the CUDA driver so that Blender would recognize it, and now I'm getting this error when rendering with the GPU in Cycles:
"CUDA error: Launch exceeded timeout in cuCtxSynchronize()"
I read that someone else has the same error but on a PC, so they say to go fix something in the registry (I probably wouldn't even do that if it told me to). But has any Mac user fixed this issue?
Also, the link in the first post gives me a 404 error.
The guide linked in the original post is a 404. Does anyone have a mirror?
I've got a Dell Latitude E5520.
It's got an embedded Intel graphics chip:
VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
(Someone in a different thread here mentioned the NVIDIA website as a better place to look up graphics card info. It shows how to find graphics card information on Windows or Mac, but not Linux.)
In Linux, this command works (run in the Terminal):
lspci -v -s $(lspci | awk '/VGA/{print $1}')
which for me returns this information:
VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
Subsystem: Dell Device 049a
Flags: bus master, fast devsel, latency 0, IRQ 41
Memory at e1c00000 (64-bit, non-prefetchable) [size=4M]
Memory at d0000000 (64-bit, prefetchable) [size=256M]
I/O ports at 7000 [size=64]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: i915
More info on Linux and Video Card Info here:
WebGL is broken on Chrome because of it.
Barely runs on Firefox.
(Latest versions)
LinuxMint 17.1 (one release behind the most current, 17.2)
Blender doesn't recognize my video card. There is no "Device" option
under Render.
It doesn't matter that it works with Nvidia and now (with 2.75) some ATI/AMD
video cards.
The latest Dell upgrade XPS13 for Linux uses the:
Intel (R) HD Graphics 5500
with NO options for video upgrades.
By default, ZAReasonās Chimera 2
http://zareason.com/shop/Chimera-2.html
offers the following specs / upgrades:
Nvidia GTX or Quadro Graphics
Nvidia GTX 970M dedicated graphics with 6GB video memory
Nvidia GTX 980M dedicated graphics with 8GB video memory (optional)
Nvidia Quadro K3100 workstation graphics with 4GB video memory (optional)
Nvidia Quadro K4100 workstation graphics with 4GB video memory (optional)
Blue Backlit Keyboard with on/off switch + brightness control
Two SATA 2.5" drive bays
Up to 32 GB of memory (4x SO-DIMM slots)
So Dell's BEST for Linux, its ONLY choice for Linux, falls far short of ZAReason's for
about the same price.
Memory is only PART of the equation. I've got 8 GB of memory. Does NOT matter.
Not enough is being said about video memory. I have no idea how good an Intel
5500 is. If it's anything like my current card, I can expect that in a few years I'll
be blacklisted again.
I'd like to buy a laptop (yes, a LAPTOP) designed for graphic design, something that
registers with a program like Blender.
And I'd like to get it RIGHT this time.
Got any suggestions?
Thanx
Hi,
I just downloaded Blender yesterday and have been looking at some tutorials etc. to get started.
However, when I am supposed to switch from "Blender Render" to "Cycles Render", Blender crashes. I haven't made a scene or done any editing (I only have the start scene with the grey box). I tried Google but couldn't find anyone with quite the same issue…
Does anyone have any idea why this is happening?
I am sorry if this is a very n00b question :o)
Hi, a while back I tried using the GPU for rendering on Ubuntu. I tried a ton of different things and couldn't get it to work. Finally, I saw a thread somewhere that mentioned running Blender as root (i.e. sudo blender), and it worked! I tested on a vanilla system with just the proprietary Nvidia driver installed and no other changes, and simply running Blender with sudo worked great. Don't know about Fedora, but it's worth a try!
Thank you! I got GPU rendering working fine on Fedora 22.3 the day after. Here's a walkthrough of the steps I went through (a condensed version is posted at the bottom as a comment; I don't know if it's awaiting moderation):
I can't open the link in the first post. It first says that it might be a security risk; when I ignore that, it asks for a username and password, then 401s. Is there an up-to-date guide anywhere?
For me it gives a "connection refused" error message.