Page 3 of 3
Results 41 to 52 of 52
  1. #41
From what I've seen in benchmarks, there is no performance difference for a GPU between x8 and x16.



  2. #42
    Member JustinBarrett's Avatar
    Join Date
    Jul 2009
    Location
    Trier(near), Germany
    Posts
    2,158
I'm not really concerned with this... anyone with that kind of money probably has the resources to get it working with more cards. I mean, who here can even afford six 1080 Tis, for instance...
    "The crows seem to be calling my name." Thought Kaw.
    Myrlea, "The Shepherd's Quest" formerly "Valiant" [project]



  3. #43
    Member stanland's Avatar
    Join Date
    Oct 2013
    Location
    Finland
    Posts
    47
Originally Posted by animani
    From what I've seen on some benchmarks there is no difference in performance for a GPU between 8x and 16x.
I have 2 GTX Titans, and my motherboard supports x16/x16 with 2 GPUs; with 3 GPUs it splits the PCIe lanes to x16/x8/x8.
When I tested my system with 3 GPUs, I noticed some speed differences in Blender's viewport performance (in Rendered mode): it was definitely a little slower than with just 2 GPUs at x16/x16.



  4. #44
    Member
    Join Date
    Dec 2007
    Location
    Wroclaw, Poland
    Posts
    327
I have a dual Xeon E5-2687W (v1) setup with 4x RX 480s: 3 in x16 PCIe 3.0 slots and one in an x4 PCIe 2.0 slot. Render times between the cards, even on large scenes, are within seconds of each other, so PCIe bandwidth has no effect on final rendering. I still need to put in 2 more RX 480s (or maybe two Vegas) for a complete 6-GPU setup (probably 480s).

However, as stanland mentioned, I'll have to do some testing in the viewport to confirm. Is there a way to display FPS in the viewport so that I can compare results, or do I just have to judge it subjectively?
    https://blenderartists.org/forum/sho...69-UEA-Pelican
    2x E5-2687w :: 32GB :: 4x RX 480 :: SSD goodness



  5. #45
Originally Posted by stanland
I have 2 GTX Titans, and my motherboard supports x16/x16 with 2 GPUs; with 3 GPUs it splits the PCIe lanes to x16/x8/x8.
When I tested my system with 3 GPUs, I noticed some speed differences in Blender's viewport performance (in Rendered mode): it was definitely a little slower than with just 2 GPUs at x16/x16.
Are those 3 GPUs the exact same model? To really test whether there is any drop in performance, you should render a single tile of, say, 256 or 512 pixels on each card and compare the timings. I would not rely on viewport performance; I don't think it is as accurate as rendering to slots.
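The single-tile comparison described here is easy to script. A minimal plain-Python timing harness (a sketch; the per-device callables are hypothetical stand-ins for however you kick off a render on one card, e.g. a per-device Blender command-line invocation):

```python
import time

def timed_run(fn):
    """Run one render callable and return its wall-clock time in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def compare_devices(render_fns, repeats=3):
    """Best-of-N timing per device label.

    render_fns maps a device label to a zero-argument callable that
    renders one fixed tile (e.g. 256x256) on that device alone.
    Taking the minimum of several runs filters out warm-up noise.
    """
    return {dev: min(timed_run(fn) for _ in range(repeats))
            for dev, fn in render_fns.items()}
```

If all cards report times within a few percent of each other despite sitting in x16 vs x8 slots, that would confirm lane width doesn't matter for final renders.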



  6. #46
    Member
    Join Date
    Dec 2007
    Location
    Wroclaw, Poland
    Posts
    327
I have to agree with stanland, though.

Rendering is independent of the PCIe bus: once the scene data is sent, only the finished tile is sent back.

The viewport is a whole different ball of wax. First, does it actually use more than a single GPU? Second, any viewport update has to be sent across to all cards (if all are rendering), so PCIe bandwidth might be more critical there than for final renders.

I'll do some tests on my setup.

Still, as for Threadripper, the main point of this thread: I still haven't seen any motherboard with more than 5 PCIe slots... I'm probably more eager to see what Epyc might deliver (from a motherboard perspective) now that a new patch allows combined CPU+GPU rendering...
    https://blenderartists.org/forum/sho...69-UEA-Pelican
    2x E5-2687w :: 32GB :: 4x RX 480 :: SSD goodness



  7. #47
What do you guys think for a workstation with some rendering power: Threadripper or Coffee Lake?
I'm considering a new build and it's a big dilemma!

Threadripper: many cores for little money, so you have a CPU that can essentially pull its weight like a GPU in rendering.

Apparently the socket has longer longevity, but there's less compatibility and more risk if AMD doesn't sustain the momentum. One might speculate that they will focus on clock speed in the next iterations, making the CPU a candidate for future upgrades.

Coffee Lake: fewer cores but higher clock speeds, more efficient for modelling work, but not worth considering for rendering; it also seems the socket will be changed by the next iteration.
On the other hand, for rendering you can argue that you don't really want to use the CPU at all and should just add another GPU to the mix...

What are your thoughts, guys?



  8. #48
    Member Felix_Kütt's Avatar
    Join Date
    Apr 2005
    Location
    Hiiu, Nõmme, Tallinn, Harjumaa, Estonia, EU
    Posts
    4,319
Originally Posted by noktek
    threadripper or coffee lake?
You are asking about two completely different platforms.
As of now, all launched Coffee Lake parts are on the consumer platform, while Threadripper/TR4 is an HEDT/workstation platform.



  9. #49
I'm a freelance 3D modeller/illustrator, so I need a bit of both ends of the spectrum. I don't really need a full-blown dedicated workstation yet.



  10. #50
The question is: does anything prosumer even exist? And what are the scenarios where it would make sense to go Threadripper as opposed to a more common CPU? Rendering that needs a huge amount of RAM, or complicated computation?



  11. #51
    Member Ace Dragon's Avatar
    Join Date
    Feb 2006
    Location
    Wichita Kansas (USA)
    Posts
    28,344
Originally Posted by noktek
The question is: does anything prosumer even exist? And what are the scenarios where it would make sense to go Threadripper as opposed to a more common CPU? Rendering that needs a huge amount of RAM, or complicated computation?
If you don't need something like Coffee Lake right now, then next winter should see the release of a refreshed Ryzen processor line from AMD. It might be worth the wait, as the focus this time is on overall performance (without increasing the core count), and Coffee Lake's single-core performance increase over Kaby Lake is only a few percentage points at best.

If Ryzen+ ups the clock speed and the IPC, then the best choice could be an AMD CPU paired with an Nvidia GPU. Meanwhile, you might want to approach chips with more than 8 cores with caution, as some programs may not even run when presented with that many cores.
    Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
    Adventures in Cycles; My official sketchbook



  12. #52
    Depends on usage.
    https://www.pugetsystems.com/labs/ar...x8-vs-x16-851/

I think if you make use of system RAM for out-of-core rendering, it will make a difference.
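For rough numbers on when link width matters: PCIe 3.0 moves about 0.985 GB/s per lane after 128b/130b encoding overhead, so a back-of-the-envelope transfer-time estimate (a sketch, not a benchmark) looks like this:

```python
# Approximate usable PCIe 3.0 bandwidth: 8 GT/s * (128/130) / 8 bits ~= 0.985 GB/s per lane.
PER_LANE_GBS = 0.985

def transfer_time(gigabytes, lanes):
    """Seconds to move `gigabytes` of data over a PCIe 3.0 link with `lanes` lanes."""
    return gigabytes / (PER_LANE_GBS * lanes)

# Pushing 8 GB of out-of-core texture data:
# x16: ~0.51 s, x8: ~1.02 s -- only noticeable if the transfer recurs per frame or per sample.
```

A one-off half-second upload before a long render is negligible; repeated out-of-core fetches are where x8 vs x16 could start to show.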



