[Dev] Checking Display List and Vertex Array Support on ATI cards (need testers)

The BGE has a check that keeps ATI cards from using Vertex Arrays with Display Lists. Instead (if Display Lists are enabled), the BGE will use Immediate Mode for rendering, which is rather slow.
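For anyone testing, it helps to run the same scene with Display Lists turned on and off so both code paths get exercised. The option can also be toggled from the Python console; this is just a quick sketch, and it assumes the 2.5x RNA property is exposed as game_settings.use_display_lists (matching the "Display Lists" checkbox in the game render settings):

import bpy

# Toggle the BGE "Display Lists" option for the current scene.
# (Property name assumed to be use_display_lists on scene.game_settings.)
gs = bpy.context.scene.game_settings
gs.use_display_lists = True   # set to False to benchmark the other path
print("Display Lists enabled:", gs.use_display_lists)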

I have uploaded a Win32 build here, and a Linux 32 build here. I don’t have an ATI card, so I can’t test this. So, could users with ATI cards test this build on some of their scenes and report back any differences (good or bad) between regular Blender and this build? Also, please post what graphics card you’re using (e.g., HD 2600).

Thanks,
Moguri

Downloading right now. Should I report the FPS change? Btw, if there were a standard scene to test on, we could see the differences between driver releases and card families (HD 4xxx, 5xxx, 6xxx, mobile and desktop).

An FPS difference would be good. I don’t have a benchmark file, though (if someone has one, please feel free to upload it). A good one would be heavy on geometry, since that’s what Display Lists, Vertex Arrays, etc. are mostly used for.
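If nobody has one handy, a quick way to get a heavy-geometry file is to generate a dense, material-less mesh from the Python console. This is only a rough sketch against the 2.5x API (the face count printout assumes the pre-BMesh mesh.faces collection):

import bpy

# A 400 x 400 UV sphere gives roughly 160k faces with no material assigned,
# so nearly all of the per-frame cost is geometry submission.
bpy.ops.mesh.primitive_uv_sphere_add(segments=400, ring_count=400)
print("faces:", len(bpy.context.object.data.faces))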

VCOMP90.dll is missing. Can you add it to the archive?

Btw, I have a project with lots of vertices. Here you are: http://www.mediafire.com/?ww2xodhhq21furr
(It adds terrain to the scene by loading it dynamically. The terrain has no material, so all the heavy part is geometry.)

EDIT:
In Costum_Thread_Class, replace

from io_utils import load_image, unpack_list, unpack_face_list

with

from bpy_extras.io_utils import unpack_list, unpack_face_list
from bpy_extras.image_utils import load_image

(API changes :slight_smile: )
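If the script has to keep running on both the old and the new API, one option is to wrap the imports in a try/except (same names as above, nothing new):

# Try the new bpy_extras locations first, fall back to the old io_utils module.
try:
    from bpy_extras.io_utils import unpack_list, unpack_face_list
    from bpy_extras.image_utils import load_image
except ImportError:
    from io_utils import load_image, unpack_list, unpack_face_list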
EDIT2:
It keeps giving errors; some API function must have changed. Can someone else post a heavy-geometry blend? :slight_smile:

Sorry about that, I’ve updated the Win32 build and I’ve also added a link to the Linux 32 build in the original post.

I have an ATI card too; I will test this build when I get back home and let you know the results.

A simple scene with one sphere of 400k verts, one light and a camera has the same performance… Maybe a slightly more complex scene would help show the difference.

Well, I’m not really looking for just gains in performance (although, any gains would be nice). The focus here is to figure out if there are any negative side-effects to removing the check. From your results, it seems like it is okay. Could you post your graphics card model please?

ATI Mobility Radeon HD 5470 (512 MB)

Made a UV sphere (400 by 400) in the centre of the default factory scene (deleted the cube), ~160k polys.

Default 2.58: GLSL @ 250 fps (rasterizer 0.8 ms); multitexture and texture face @ 30 fps (rasterizer 30 ms).

Your build: GLSL, multitexture and texture face all @ 250 fps (rasterizer 0.8 ms), so it has a large impact on multitexture and texture face :eek:

Didn’t notice any other differences…

The scene I am currently working on (just 2k polys but with complex textures, GLSL) went from 200 fps (2.5 ms rasterizer) to 250 fps (1.6 ms rasterizer) :smiley:

ATI HD 4850, Vista 32-bit, 4 GB RAM, AMD Athlon 64 X2 Dual Core 5400+ @ 2.8 GHz

Won’t run on Ubuntu 10.04.

Your libs are probably too old. I built the Linux build using 11.04. I’ve had problems with dynamic libs on older Ubuntu versions before…

Moguri: I’ll hop on IRC when I get home. I can make an OS X build and test my HD 2600 with your patch if you’re interested.

I’ve got an ATI Radeon X800XT (yeah, I know it’s old). Here are the specs:

I used a blend file of a subdivided cube; it had over 24,000 faces.

In GLSL mode, your test build and Blender 2.57 get around the same: 18-20% on the Rasterizer without Display Lists, and 6-7% with Display Lists. Interestingly, the newer build seems to fluctuate more, but the typical values come up often enough to discount the extremes.

In Multitexture mode, it’s the same. Looks like no bugs to me.