Is the Biostar TB250-BTC D+ motherboard any good for rendering?

I was browsing eBay for motherboards and the board in the title caught my attention. It's aimed at mining, but it looks like a good option for rendering as well. It has 8 full-size PCI-E 16X slots, but I'm not sure about the bandwidth of each one of them. I was hoping somebody here knows. The board is an LGA 1151 board and is single channel, which I don't think is optimal for rendering, but the 8 PCI-E slots look good, I think. What do you guys think about the idea? Thanks in advance for reading my post.

Bandwidth isn’t even stated in the manual, but since it is a mining board I expect 1 slot at 16x electrical and the remaining ones at 1x electrical.
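
If you do end up with a board like that, you can check what each slot actually negotiates from software instead of guessing. Here is a minimal sketch, assuming NVIDIA cards and the nvidia-ml-py (pynvml) package installed; GPU-Z or `nvidia-smi -q` will show the same numbers:

```python
# Print the negotiated PCIe link generation/width for every NVIDIA GPU.
# Assumes: pip install nvidia-ml-py  (provides the pynvml module)
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older pynvml builds return bytes
            name = name.decode()
        gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)  # e.g. 2 or 3
        width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)     # e.g. 1, 4, 8, 16
        print(f"GPU {i} ({name}): PCIe gen {gen} x{width}")
finally:
    pynvml.nvmlShutdown()
```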

As for Blender itself, I ran one GPU on a PCIe 2.0 x4 slot (same speed as PCIe 3.0 x2) and, compared to a full-blown PCIe 3.0 x16 slot, there was next to no difference in rendering speed.

The main thing is that rendering is done fully within the memory of the GPU, so outside of the initial upload (the full scene + textures) and the end result being sent back, there is not much communication; hence I expect next to no performance penalty.
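
Some rough back-of-the-envelope numbers for that one-off upload, assuming ~985 MB/s of usable bandwidth per PCIe 3.0 lane and ~500 MB/s per PCIe 2.0 lane (real-world throughput will be a bit lower, and the scene size is just an example):

```python
# Rough estimate of how long the initial scene + texture upload takes per link.
# Assumes ~985 MB/s per PCIe 3.0 lane and ~500 MB/s per PCIe 2.0 lane (theoretical).
PER_LANE_MB_S = {"3.0": 985.0, "2.0": 500.0}

def upload_seconds(scene_mb, gen, lanes):
    return scene_mb / (PER_LANE_MB_S[gen] * lanes)

scene_mb = 4000  # e.g. a 4 GB scene with textures
for gen, lanes in [("3.0", 16), ("3.0", 4), ("2.0", 4), ("3.0", 1)]:
    print(f"PCIe {gen} x{lanes}: ~{upload_seconds(scene_mb, gen, lanes):.1f} s "
          f"to upload {scene_mb} MB")
```

Even the x1 case only adds a few seconds on top of a render that usually takes minutes, which matches what I saw.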

I too was thinking of something similar (wish there was an AMD equivalent).

Personally, I had a very bad experience with a BIOSTAR graphics card a few years ago. Try ASRock if you are looking for something cheap and not so bad.

Edit:
Anyway, it was many years ago; perhaps BIOSTAR has improved since then. Also, I’m not sure how easy it is to find motherboards with that many PCI-e slots.

There are boards with 12 or even 19 PCIe slots… a 19-GPU rendering system… just drooling over the idea of it… Waiting till miners start dumping their hardware on eBay and elsewhere so it can be had for cheap.

In my opinion it is not a good idea.

  1. 1x PCIe is far too slow for comfortable viewport work. It is really frustrating.
  2. 1x PCIe is usually too slow for efficient rendering.
  3. It is good to have at least one fast CPU core per GPU card for efficient rendering.

I’ll forget about the Biostar board. I placed an order for an Asus Z87-WS from Amazon. That one has 4 PCI-E slots running at 8X, so I won’t be experiencing those problems. Thanks everybody for taking the time to answer my post.

blendest.

1 - I’ll need to try this on my setup and report back whether it does in fact slow down the viewport.

2 - 1x PCIe is sufficient for rendering, as I’ve tested that setup at home. Remember that PCIe bandwidth is only needed to load the scene and send the final image back; there is no interaction during rendering. So if the system is purely for rendering, it is more than enough.
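
If anyone wants to reproduce that comparison, a minimal timing script along these lines should do (a sketch; `render_time.py` and the .blend name are placeholders). Run it headless with `blender -b your_scene.blend -P render_time.py` with the card in different slots and compare the printed times:

```python
# render_time.py - time a single Cycles frame so slot-to-slot comparisons are easy.
import time
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'   # a GPU device must also be enabled in the preferences

start = time.time()
bpy.ops.render.render(write_still=False)  # render the current frame in memory
print(f"Render finished in {time.time() - start:.1f} s")
```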

3 - I’m curious about this one. If rendering is done totally within the GPU (except smoke and a few other parts), then why does each GPU need a fast CPU core?

As for Biostar vs. that Asus Z87-WS: I might recommend a Z9PE-WS with two Xeons (v2); then you have 7 PCIe slots, most at 8x and one or two at 16x.

Still, the fact that this board and other mining boards give you the opportunity to plug in more than 4 GPUs is quite interesting. One negative part is making sure the driver supports that many GPUs. Earlier on there was a limit of 6 or 8 GPUs for AMD or Nvidia (unless you mix them). Unsure if this is resolved by now, but I expect as much, since both AMD and Nvidia had much to gain and nothing to lose.

Stupid question time: if you’re going to be populating the system with 4/6/8+ GPUs, is it really worth spending time tracking down a second-hand motherboard to save a mere fraction of that value? Money saved is money saved and all, but with that kind of investment it would make sense to get something new and deliberately spec’d.

I might eventually get a Threadripper, but I’m on a budget, so I thought: why not get the Z87-WS for just a fraction of what the whole Threadripper system will cost me and upgrade my present i7-4790K system?

The Threadripper system will cost me about $2,000 USD, and since renders are sped up by GPUs rather than CPUs, I opted to spend the money on cards and install them in my Z97-A/USB 3.1 with the help of risers and ribbon cables. Later I got curious about the possible gains in rendering power if I added bandwidth with a workstation motherboard, so I ordered the Z87-WS. I think I’ll be all right this way for the next couple of years. When I’m finally charging for my work, I might save some money and get the whole Threadripper system instead of the piece-by-piece upgrades I’m doing right now.

I added two GTX 1070s, one 1300-watt power supply and the Z87-WS motherboard for about $1,800 total, and it was bought on credit. It would not have been possible to get credit to buy the Threadripper system, so I went for the upgrade instead. When I receive everything I’ll do some test renders and post the results.

For rendering comparisons, I highly recommend looking at: http://download.blender.org/institute/benchmark/latest_snapshot.html

Unfortunately it’s dated November 2017, so not the latest builds. I hope the Blender team will do an update on this (unless they did and I missed it).

Grzesiek

It is based on my own experience.
1. I used an Amfeltec GPU cluster with 4 GPUs on PCIe x1 and the viewport work was terrible.
2. The rendering was a lot better, but it was clear that for bigger projects PCIe x1 is a big waste of GPU power.
3. Even when GPU rendering is enabled, the CPU has to feed the GPUs with data. That is not just my theory but also my experience: with a slow core and/or more GPUs than CPU cores you waste GPU power because the GPUs constantly wait for data. Please look at the CPU core load while rendering (see the monitoring sketch below). For lightweight work it is not so evident, but for bigger Blender projects PCIe x1 is a waste of time.
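
To check that last point yourself, something like this can log the per-core load while a render runs in another window (a sketch, assuming the psutil package is installed):

```python
# Log per-core CPU load every second while a render runs in another process.
# Requires: pip install psutil
import psutil

try:
    while True:
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        loaded = sum(1 for p in per_core if p > 50)
        print(f"{loaded} cores above 50% load | " +
              " ".join(f"{p:4.0f}%" for p in per_core))
except KeyboardInterrupt:
    pass  # stop with Ctrl+C
```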

I use a GA-Z77X-UP7 (i7-3770K, 4xGPU) and an MSI X299 AC (i9-7920X, 4xGPU), and the i9 setup renders a lot faster than the i7 despite the same GPUs. The GA motherboard has 4 x16(x8) PCIe slots, but the i7 has only 16 PCIe lanes and uses a PCIe switch to connect the 4 slots.
Furthermore, the experimental Blender builds have a GPU+CPU rendering mode, and it shines when you have a fast multicore CPU and fast PCIe connections.
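
For reference, enabling that hybrid mode from a script looks roughly like this with the 2.8x-era Python API (a sketch; the preferences path and device types have moved around between Blender versions, so treat the exact names as assumptions on older builds):

```python
# Enable all GPUs *and* the CPU for Cycles, then let the scene render on 'GPU'.
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # or 'OPTIX' / 'OPENCL' depending on the cards
prefs.get_devices()                  # refresh the device list

for device in prefs.devices:
    device.use = True                # ticks every GPU and the CPU entry

bpy.context.scene.cycles.device = 'GPU'  # Cycles then splits work across GPU + CPU
```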

Understood. My experience could have been influenced by the PCIe 4x slot being just fast enough not to notice (when rendering). Viewport I’ll have to test once I reassemble the system.

Still, if 1x PCIe is that bad… then my plans have to be adjusted. Probably back to dual Xeons (or something similar) with 7 PCIe x16 slots (x8 electrical)…