Results 2,001 to 2,020 of 2793
  1. #2001
    I don't know what I'm doing wrong. I thought my investment in a very expensive PC would pay off, but sadly my results show quite the opposite. My results:

    CPU: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (24 CPUs), ~2.0GHz
    GPU: NVIDIA Quadro 4000
    OS: Windows 7 64bit | Blender 2.74
    Time: 4 min 14 seconds (GPU - CUDA)
    Time: 4 min 54 sec (CPU 24 threads)

  2. #2002
    Join Date
    May 2015
    Berlin, Germany
    The Quadro 4000 is technically the same hardware as a GTX 560, an outdated midrange gaming card and therefore not a good choice for GPU rendering anymore.
    Your CPU might have 24 (logical) cores, but only at 2 GHz. You could try to improve the CPU performance by using smaller tiles, like 32x32.

  3. #2003

    Which E5-2600 chips are they (v1/v2/v3)? I strongly feel that v2 and v3 should do it in way less than 2 minutes.

  4. #2004
    Member Grimm's Avatar
    Join Date
    Nov 2007
    Fairbanks, Alaska
    Ran another test with 2.75a and fiddled with the tile size:

    GPU: MSI GTX980
    OS: Linux Mint 17.1
    Time: 01:10.82 (GPU 240x540 tile size).
    Linux Mint 18 - I7 5820K 3.8 Ghz - 32 Gbytes GTX980

  5. #2005
    Originally Posted by l4gerardo View Post
    I don't know what I'm doing wrong. I thought my investment in a very expensive PC would pay off, but sadly my results show quite the opposite. My results:

    CPU: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (24 CPUs), ~2.0GHz
    GPU: NVIDIA Quadro 4000
    OS: Windows 7 64bit | Blender 2.74
    Time: 4 min 14 seconds (GPU - CUDA)
    Time: 4 min 54 sec (CPU 24 threads)
    Quadro cards have disabled CUDA cores, if I remember correctly. So they are much more expensive than gamer cards while being much slower for rendering. NVIDIA segments its market: they want you to buy one card per use case, and the Quadros are limited to being fit only for CAD/real-time OpenGL. We have many Quadros at work and they are all slower than CPU rendering.

  6. #2006
    Member mib2berlin's Avatar
    Join Date
    May 2008
    Hi l4gerardo, for CPU change the tiles to 32x32 or 16x16; for GPU use 240x540.
    Do you mean Quadro 4000, K4000 or K4200?
    The Quadro 4000 is old, and the E5-2620 is very old too. When did you buy this?

    Cheers, mib
    OpenSUSE Leap 42.1/64 i5-3570K 16 GB Blender 2.7 Octane 3.03
    GTX 760 4 GB, GTX 670 2 GB Driver 375.26 | Blender for Octane
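    For anyone who wants to apply the tile suggestions above without clicking through the UI, here is a minimal sketch using the Blender 2.7x-era Python API. It only runs inside Blender's own Python console (the bpy module does not exist outside Blender), and the exact values are just the ones suggested in this thread, not universally optimal settings.

    ```
    # Run inside Blender's Python console (2.7x API); bpy is not a standalone module.
    import bpy

    render = bpy.context.scene.render

    # Small tiles tend to suit CPU rendering:
    render.tile_x = 32
    render.tile_y = 32

    # Large tiles tend to suit GPU (CUDA) rendering, e.g.:
    # render.tile_x = 240
    # render.tile_y = 540

    # Switch the Cycles compute device ('CPU' or 'GPU'):
    bpy.context.scene.cycles.device = 'GPU'
    ```
    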

  7. #2007
    Join Date
    Jul 2015
    North Carolina
    EVGA 04G-P4-2768-KR G-SYNC Support GeForce GTX 760 4GB
    16GB RAM at 1866
    Windows 8.1 64bit
    I'm driving a 2k monitor - not sure if that matters.

    GPU Cuda: 2:19.19
    CPU: 3:51.12

    Standard Blend Settings from file

  8. #2008
    I thought my investment in a very expensive PC would pay off,
    Sadly, this never pays off, unless you are really making money with your work! It's like with expensive cars: at a certain point you pay double the price for 10% more power/speed.
    The best way is to buy upper-midrange hardware, for example 2x GTX 970 instead of one GTX 980 Ti. You can even resell midrange hardware much more easily than high-end.

  9. #2009
    Member BernhardS's Avatar
    Join Date
    Dec 2011
    Bonn, Germany
    Gigabyte R9 390 "G1 Gaming" 8GB
    Xeon 1231 v3
    Windows 7 64
    GPU: 00:59:13 (960x540)
    CPU: 04:30:49 (16x16)

    -> Keep in mind that OpenCL does not support transparent shadows [...] yet, so this result is not comparable to NVIDIA cards.

    2x MSI GTX 750ti "Twin Frozr" 2GB
    AMD PhenomII 1090t
    Windows 7 64
    GPU: 01:46:84 (256x256)
    CPU: 07:27:11 (16x16)

    In other scenes, the 2x750ti are just as fast as the R9 390.

  10. #2010
    Member Thomas Berglund's Avatar
    Join Date
    May 2009
    Oslo, Norway
    New BMW scene.

    Mac Pro (2013)
    CPU: Intel Xeon E5-1680 v2 - 3GHz
    GPU: 2x AMD FirePro D700
    OS: OS X 10.11 Developer Beta 4 (15A226f)

    Time: 2 min 43 seconds (GPU - OpenCL) - 128x128 tiles
    Time: 2 min 34 seconds (GPU - OpenCL) - 256x256 tiles
    Time: 2 min 12 sec (CPU) - 16x16 tiles

    Using blender-2.75-548e650-OSX-10.6-x86_64 from:
    Last edited by Thomas Berglund; 23-Jul-15 at 04:09.

  11. #2011
    Just starting out in the world of Blender, but I saw the instruction to share benchmark results, so thought I'd do so, although I don't understand the different tile settings people are talking about (see, told you I was new). I've just pressed F12 to render, once using the CPU and once with GPU CUDA enabled.

    Intel i7-5960X @ 3.00GHz and Nvidia GTX Titan X

    CPU Render 03:08.94
    GPU - CUDA 1:53.45

  12. #2012
    New BMW scene

    OS X 10.11 latest DP & blender-2.75-33bac1f-OSX-10.6-x86_64
    R9 290 (256x256 tiles): 04:04.64

  13. #2013
    Join Date
    Sep 2008
    TheIMP67, the tile settings are found in the render tab (the camera icon) under the subheading "Performance". You can change the x and y pixel dimensions of the tiles Blender will use to dice up your scene. Changing these settings will dramatically reduce your render times, you just have to experiment. You can see the different render times that happen with different tile sizes in my renders here.

    Generally the larger the tiles, the faster the render, however, this only works up to a certain point. For example, 240 x and 540 y tiles rendered faster than 480 x and 540 y tiles with my GTX 970.
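    The "dicing" described above can be illustrated with a small helper that counts how many tiles a frame splits into for a given tile size. This is a rough sketch for intuition only: the 960x540 frame and the tile values are example numbers taken from posts in this thread, not the benchmark scene's confirmed settings. A tile size that divides the frame evenly leaves no undersized leftover tiles for the GPU to churn through.

    ```python
    def tile_grid(width, height, tile_x, tile_y):
        """Return (columns, rows, total tiles) for a frame diced into tiles.

        Uses ceiling division, since a partial leftover strip still
        costs a whole tile's scheduling overhead.
        """
        cols = -(-width // tile_x)
        rows = -(-height // tile_y)
        return cols, rows, cols * rows

    # Example: a 960x540 frame with 240x540 tiles gives a clean 4x1 grid,
    # i.e. every tile is full-sized.
    print(tile_grid(960, 540, 240, 540))  # -> (4, 1, 4)

    # With 480x540 tiles the same frame gives only a 2x1 grid, which can
    # leave GPU capacity idle near the end of the render.
    print(tile_grid(960, 540, 480, 540))  # -> (2, 1, 2)
    ```
    
    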

  14. #2014
    Join Date
    Jun 2015
    London UK
    Just finished (2 days ago) building a new machine after a while of saving. Really impressed with the multi gpu scaling. No overclock on the gpus yet. I'll do another when they are overclocked.

    My time(s):
    CPU: Intel i7 5960x 4.8GHz
    GPU: Nvidia GTX 980 ti 2x
    OS: Windows 7 64bit
    Time: 50.71 sec (GPU - CUDA) tile size 128x128
    Time: 34.92 sec (GPU - CUDA) tile size 256x256
    Time: 31.05 sec (GPU - CUDA) tile size 480x270
    Time: 32.85 sec (GPU - CUDA) tile size 512x512
    Time: 34.48 sec (GPU - CUDA) tile size 480x540
    Time: 2 min 04 sec (CPU) tile size 128x128
    Time: 1 min 54 sec (CPU) tile size 64x64
    Last edited by nearleyg; 30-Jul-15 at 17:34.

  15. #2015
    Member Csokis's Avatar
    Join Date
    Jul 2015
    My time(s):
    CPU: Intel i7-3770 @ 3.4GHz
    GPU: Nvidia GeForce GTX 750 Ti
    OS: Ubuntu Studio Linux 14.04.2 LTS
    Time: 03:20:13 (GPU - CUDA)
    Time: 04:10:31 (CPU)

  16. #2016
    CPU: Intel Core i5-4690K
    GPU: AMD R9 390
    OS: Windows 10 Pro
    Time: 54 seconds (GPU - OpenCL) tile size 960x540
    Time: 5 min 46 sec (CPU)

  17. #2017
    (Times are for new BMW scene)

    CPU : Intel i7 4770k
    GPU : AMD 7870 OC
    OS: Windows 7 64bit Pro

    Time: 02:31:66 (GPU-OPENCL) tiles 256x256
    Time: 04:01:84 (CPU), tiles 16x16

    And a vray version I made for fun xD

    Last edited by johndoe123; 02-Aug-15 at 07:27.

  18. #2018
    CPU: Intel Core i7-3770k @ 4.2GHz
    GPU: AMD R9 295x2 @ 1.1GHz
    OS: Windows 10 Home
    Time: 1:23 min (GPU - OpenCL) - Default tile size (128x128)
    Time: 4:53 min (CPU) - Default tile size (128x128)

  19. #2019
    Join Date
    Aug 2014
    United States, Pa
    Is there a benchmark page for the gooseberry render scene found here

    BTW, my workstation did an hour for that render, so not too bad for a 200-dollar machine, I think.
    The Market, a CG and game dev discord chat.

  20. #2020
    CPU: Intel i7-5820K 3.8GHz
    GPU: Nvidia Quadro K5200
    OS: Windows 7 64bit | Blender 2.75a
    Time: 2 min 12 seconds (GPU - CUDA)
    Time: 3 min 29 sec (CPU 8 threads)

    Will try my 2 OC Titan X's in combination with this card later and then post. I am also getting AIO water blocks so I can crank up the Titans a bit more. It's more for Octane than Blender (Cycles), but nice to know how it works here as well.
