Finding the System Specifications of a PC through the Blender Game Engine?

When you install a game (usually a triple-A title), it assigns default settings according to your system specifications (screen resolution, graphical properties, sound options, control input, etc.). Most DirectX programs do this by pulling the information from dxdiag, which Windows generates automatically, so it’s an easy place to get the specs from.

A while back, games used to run a stress test and then adjust the settings. That’s easy enough to do: set up a benchmark, then use the average framerate to decide certain options (for example, if the framerate is over 60 it might enable anti-aliasing, while if it is below 60 it might lower the texture quality, etc.). The drawback of this method is that if the PC doesn’t support those features, the screen could just end up blank while the framerate is reported as high.

Is there a way to get the specifications of the client PC from the BGE/Python? The screen resolution can be obtained already (by running in fullscreen mode), but can you get, for example, how many cores the PC has, the clock speeds, the GPU model, the graphics memory, the RAM, etc.?

The main reason I am trying to do this is to determine whether or not to enable certain more demanding shaders in-game (such as FXAA), and to prevent the game from launching if the card does not support the OpenGL/GLSL shading mode.

…Or is there a simpler way to do this?

Thanks :slight_smile:

You can get a bit of the GLSL capabilities of the graphics card using logic.PrintGLInfo(). There’s also the not widely known (at least to me) PrintMemInfo(), which shows a few extra debug stats about the game, like the current number of assets (materials and meshes, for example). You can also get the profile information that the debug profiler prints to the screen with logic.getProfileInfo().
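For reference, these are all plain calls on bge.logic; a minimal sketch of using them from a controller script (exactly what getProfileInfo() returns depends on your Blender version):


from bge import logic

logic.PrintGLInfo()   # prints GL extension / GLSL capability info to the console
logic.PrintMemInfo()  # prints asset counts (meshes, materials, ...) to the console

profile = logic.getProfileInfo()  # the profiler data as a Python object
print(profile)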

The output of the PrintGLInfo() function is, in part:


...
 GL_ARB_vertex_shader supported?        yes.
 ----------Details----------
  Max uniform components.1024
  Max varying floats.128
  Max vertex texture units.32
  Max combined texture units.32


 GL_ARB_fragment_shader supported?      yes.
...

It’s very unfortunate that PrintGLInfo prints that information directly to the console rather than just giving you a dictionary, but I guess it’s not hard to work around. I believe your goal is pretty much to see whether the card can handle vertex and fragment shaders (GLSL shaders), right? So what I would recommend is to check the sys.stdout stream for those two strings (“GL_ARB_vertex_shader supported?” and “GL_ARB_fragment_shader supported?”) after calling the function. You would then be able to tell whether each is supported.
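Once you have that output as a string somehow, the check itself is simple; a rough sketch of a hypothetical helper, assuming the output format shown above (“GL_ARB_vertex_shader supported?        yes.”):


def shader_support(captured):
    # Parse captured PrintGLInfo() text for vertex/fragment shader support.
    support = {}
    for line in captured.splitlines():
        for ext in ("GL_ARB_vertex_shader", "GL_ARB_fragment_shader"):
            if ext + " supported?" in line:
                support[ext] = line.strip().endswith("yes.")
    return support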

Alternatively, you can try altering the source yourself so it returns a Python dictionary of the results. If you do want to try and aren’t sure how to go about it, you can talk to some BGE coders on IRC in #bgecoders on Freenode. If you succeed, you can make a patch from it to submit to a dev like Moguri, and he may be able to get it committed.

Awesome.

The entries for those functions are in KX_PythonInit.cpp at lines 866 & 877:


{"PrintGLInfo", (PyCFunction)pyPrintExt, METH_NOARGS, (const char *)"Prints GL Extension Info"},
{"PrintMemInfo", (PyCFunction)pyPrintStats, METH_NOARGS, (const char *)"Print engine statistics"},

So I’m guessing something like changing METH_NOARGS to METH_VARARGS…?
There are no other occurrences of “PrintGLInfo” in the entire source…

I think I’m going to have to just do a workaround. Don’t really feel like building Blender :wink:
Need help with getting that output into a string though…
You can do something like this, right?


from io import StringIO
import sys
from bge import logic


stdout = sys.stdout      # keep the real sys.stdout for later
sys.stdout = StringIO()  # swap in an in-memory "file" to capture prints
logic.PrintGLInfo()      # print out the GL information
gloutput = sys.stdout    # grab the StringIO that (hopefully) holds the output
sys.stdout = stdout      # restore sys.stdout to its previous state


print(gloutput.getvalue())

Suggestions?

Hm, I tried it, but it doesn’t seem to intercept the PrintGLInfo() call. So it’s probably pushing the debug info out directly from C++ to the console. I also tried reading sys.stdout directly, but it seems that it can’t be read. So you might have to alter the source to get the info out in an easily usable form, unfortunately.

Yeah. Unless there’s a way to intercept it and print it to a file… Which I am unaware of.

For a simple approach, you could maybe benchmark the processor with time() and some calculation loops, and benchmark the graphics by displaying an absolutely barebones scene (something that will run on ANYTHING) and measuring something like (FPS)^(1/n).
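A rough sketch of the CPU half of that idea, just as an illustration (the loop size and the cutoff are made-up placeholders that would need tuning per machine):


import time

def cpu_benchmark(iterations=2000000):
    # Time a simple arithmetic loop; a shorter time means a faster CPU.
    start = time.time()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.time() - start

elapsed = cpu_benchmark()
quality = "high" if elapsed < 0.5 else "low"  # arbitrary cutoff
print(elapsed, quality)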

While this is an interesting topic, do you really have a sophisticated enough game to need such an auto-detect feature? Just start the game with minimum settings and let the users crank up the quality if they want to. You could spend an eternity working on and debugging auto-detection, which will be hard with just one or a couple of machines.

Yeah, it’s probably not worth it. If it comes down to it, you can get the framerate and how much time is spent on graphics; if the game is in GLSL mode and it’s getting 5 FPS because the graphics are pulling that much on the computer (a high percentage for the Rasterizer), then you can show a popup stating that the game may not run well because the graphics card isn’t powerful enough.
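Something along these lines, as a rough sketch; the thresholds are made up, and I’m assuming getProfileInfo() returns a dictionary with a “Rasterizer” entry whose second element is a percentage (the exact structure may differ between Blender versions):


from bge import logic

def gpu_seems_too_weak(min_fps=15.0, rast_threshold=70.0):
    # Heuristic: low framerate AND most of the frame time spent in the rasterizer.
    fps = logic.getAverageFrameRate()
    profile = logic.getProfileInfo()  # assumed: {"Rasterizer": (time_ms, percent), ...}
    rast_percent = profile.get("Rasterizer", (0.0, 0.0))[1]
    return fps < min_fps and rast_percent > rast_threshold

if gpu_seems_too_weak():
    print("Warning: your graphics card may not be powerful enough for GLSL mode.")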

Still, it seems like a feature that would be worth adding to the BGE. Might work on it some day :slight_smile:

Yeah, the function’s there and the data’s there; it’s just not in a format that you can really read or use at the moment. You can take a look at getProfileInfo to see how it returns a dictionary. EDIT: It seems to do so from within the engine, so it shouldn’t be too difficult to dissect. /EDIT

I think the first string is the Python name of the function (“PrintGLInfo”), and the second is the C++ function it maps to (pyPrintExt). Doing a search in the code base for the second name turns up a function of the same name earlier in the same file.



static PyObject *pyPrintExt(PyObject *,PyObject *,PyObject *)
{
#define pprint(x) std::cout << x << std::endl;
    bool count=0;
    bool support=0;
    pprint("Supported Extensions...");
    pprint(" GL_ARB_shader_objects supported?       "<< (GLEW_ARB_shader_objects?"yes.":"no."));
    count = 1;


    support= GLEW_ARB_vertex_shader;
    pprint(" GL_ARB_vertex_shader supported?        "<< (support?"yes.":"no."));
    count = 1;
    if (support) {
        pprint(" ----------Details----------");
        int max=0;
        glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB, (GLint*)&max);
        pprint("  Max uniform components." << max);


        glGetIntegerv(GL_MAX_VARYING_FLOATS_ARB, (GLint*)&max);
        pprint("  Max varying floats." << max);


        glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB, (GLint*)&max);
        pprint("  Max vertex texture units." << max);
    
        glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS_ARB, (GLint*)&max);
        pprint("  Max combined texture units." << max);
        pprint("");
    }


    support=GLEW_ARB_fragment_shader;
    pprint(" GL_ARB_fragment_shader supported?      "<< (support?"yes.":"no."));
    count = 1;
    if (support) {
        pprint(" ----------Details----------");
        int max=0;
        glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS_ARB, (GLint*)&max);
        pprint("  Max uniform components." << max);
        pprint("");
    }


    support = GLEW_ARB_texture_cube_map;
    pprint(" GL_ARB_texture_cube_map supported?     "<< (support?"yes.":"no."));
    count = 1;
    if (support) {
        pprint(" ----------Details----------");
        int size=0;
        glGetIntegerv(GL_MAX_CUBE_MAP_TEXTURE_SIZE_ARB, (GLint*)&size);
        pprint("  Max cubemap size." << size);
        pprint("");
    }


    support = GLEW_ARB_multitexture;
    count = 1;
    pprint(" GL_ARB_multitexture supported?         "<< (support?"yes.":"no."));
    if (support) {
        pprint(" ----------Details----------");
        int units=0;
        glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, (GLint*)&units);
        pprint("  Max texture units available.  " << units);
        pprint("");
    }


    pprint(" GL_ARB_texture_env_combine supported?  "<< (GLEW_ARB_texture_env_combine?"yes.":"no."));
    count = 1;


    if (!count)
        pprint("No extenstions are used in this build");


    Py_RETURN_NONE;
}


You can see it outputs the information directly to the console. So, it shouldn’t be too difficult to create and return a Python dictionary consisting of that information instead.