BGE and Raspberry Pi

Hey yo. So I’d like to use the Raspberry Pi 3 to run a single Blender game continuously. I created a simple scene to benchmark the Pi’s performance, but there seems to be a bottleneck somewhere: the game runs at only ~18 fps, yet according to htop it is only using about 25% of the CPU and 20% of the memory. There are no other processes running (except for htop, that is). The in-game profiler shows a heavy GPU latency of over 80%.
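(In case anyone wants to reproduce this: the framerate/profile overlay can also be switched on from Python, so it shows up when the game runs standalone as well. A minimal sketch, assuming a Python controller wired to an Always sensor:)

    # Minimal sketch: enable the on-screen framerate and profiler overlay
    # from a Python controller, so it is also visible under blenderplayer.
    import bge

    bge.render.showFramerate(True)
    bge.render.showProfile(True)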

Here’s what I’ve done so far:

A. On the Pi 3:

1-Installed Raspbian Jessie, updated and upgraded the OS and firmware.
2-Installed Blender 2.72 (the Raspbian default) via apt-get.
3-Enabled experimental graphics support (Blender will barely run and/or show a visible image if this step is not taken).
4-Allocated 480 MB of memory to the GPU (my Pi won’t boot if I allocate more than this). This step seems more or less futile, since the game runs at ~16 fps on only 64 MB. (See the config.txt sketch below.)
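For reference, steps 3 and 4 boil down to a couple of lines in /boot/config.txt. A sketch of mine (the exact overlay name may vary with firmware version):

    # /boot/config.txt
    # Memory reserved for the GPU, in MB; 480 is the most my Pi 3 will
    # boot with.
    gpu_mem=480
    # Experimental OpenGL driver from step 3 (can also be toggled via
    # raspi-config). Note: this driver allocates GPU memory on the ARM
    # side via CMA, which may be why gpu_mem seems to make no difference.
    dtoverlay=vc4-kms-v3d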

B. In Blender 2.72:

0-A blank game gives me less than 30 fps.
1-Made a 20x20x20 array of cubes that have a rotate actuator on X, Y and Z. The cubes have a single material with default settings. I am not using textures, and there is only a single lamp. I started with one (1) cube, but there seems to be little to no difference with the extra geometry and logic. The graphics resolution is 1080p. (See the standalone test sketch after this list.)
2-Disabled GLSL, so I’m using multitexture. With GLSL enabled I get 5 fps on a blank game.
3-Disabled VSync but enabled it again, since there is no noticeable difference either way.
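One further test I still need to try: running the game standalone with blenderplayer in a smaller window, to see whether the 1080p fill rate is what’s holding things back. An untested sketch (the .blend name is a placeholder):

    # Untested sketch: run the game standalone in a 1280x720 window
    # instead of fullscreen 1080p, to check whether fill rate is the
    # bottleneck. "game.blend" stands in for the actual file.
    blenderplayer -w 1280 720 game.blend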

Besides the sluggish speed, I am also seeing some tearing.

So, my question is: am I missing something obvious that’s causing the bottleneck? And if it’s not obvious, how can I funnel more CPU/GPU resources into the game so as to improve its speed and graphics quality? Any and all replies will be very much appreciated.

Cheers!

I’m surprised it runs.
I imagine the problem is simply that the Pi’s GPU isn’t powerful at all; after all, it’s pretty much a mobile processor. This would explain why more logic doesn’t really do anything. Despite what the benchmarks show, there is in practice a huge distance between mobile, laptop and desktop GPUs.

That said, the Pi is running Raspbian, which is hardly efficient, particularly when it (in this case) exists simply to run a single program continuously.
I’d investigate dropping the window/desktop manager and ‘booting’ straight into your Blender game, using something like the .xinitrc file to define what to boot into. This means the Pi no longer tries to composite the Blender window on top of other things.
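Roughly what I have in mind, as an untested sketch (the .blend path is a placeholder):

    # ~/.xinitrc -- untested sketch: make blenderplayer the only X client,
    # so there is no desktop or compositor competing for the GPU.
    # /home/pi/game.blend is a placeholder path.
    exec blenderplayer -f /home/pi/game.blend

Then start X from a bare console with startx (or call startx from ~/.bash_profile to auto-start it on login).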

This is something I’m likely to investigate in the near future (not on a pi, but running BGE as the sole application on login), so I’d be interested to hear your results.

No… not really. They’re kind of mixed up. Some phones provide better GPUs than some Intel desktop GPUs; this is where the ranges blend into each other. My laptop’s GPU is better than most desktop GPUs (especially in terms of technology, but the speed is also great), so at this point laptop GPUs completely overlap desktop GPUs. There are no gaps between the usage categories: phone, laptop and desktop GPUs each span a large range of abilities. The weakest GPU of all belongs to a phone, but many phone GPUs are way better than the weakest laptop GPU. And even though the most powerful desktop GPU is better than any laptop GPU, there are tons of GPUs (GeForce GTX 970 and below) that are beaten by the GTX 980 laptop GPU.

There are tons of GPUs (GeForce GTX 970 and below) that are beaten by the GTX 980 laptop GPU.

That there are. I had the pleasure of using a GTX 980 laptop a little while back and was suitably impressed. But look at the cost: a $1000 phone has about the same GPU speed as a $700 laptop, which has about the same GPU speed as a $500 desktop.
As rough proof, a GTX 970M is rated the same as a GTX 670, so about four years behind. I haven’t seen any benchmarks comparing phone GPUs, but I would expect the difference to be the same again, placing phone GPUs at eight years behind, or around the GTX 2xx series.

Benchmarks don’t actually show this, but the main issue is… thermal. A $1000 phone GPU could perform just as well as a $1000 desktop GPU, for about three seconds before it melted (and your phone battery wouldn’t sustain it for more than about a minute).

A Raspberry Pi runs an ARM processor (similar to phones) and a Broadcom VideoCore IV GPU. A quick search on this GPU returns:

This GPU isn’t good for 3D graphics; it can run them, but not well. The GPU was made for HD video and simple tasks. I already tried a phone that has this GPU, and even the UI was slow. I didn’t try any games. I recommend you go with an Adreno, Mali, Tegra or PowerVR instead.
Yup, it actually is a phone GPU made in (gasp) 2010!
In a cellphone it scored ~3000 on a 3DMark test; in comparison, the GTX 260 came in at 10,000. No idea how comparable those values actually are, though, as the CPU is vastly different, etc.

So there you go, pretend you are making a game for a computer in 2005 and you’ll be fine!
Try Blender 2.49 in single-texture mode and see how you go.

Guys, thanks for the replies, but let me kindly ask you to stick to the theme. The theme ain’t whether or not the Pi has a cutting-edge CPU/GPU combo, or even a decent one for that matter. The challenge is to squeeze the most out of it using Blender, and that’s the theme of this thread.

Yeah, I know there’s an Nvidia SoC that’s cool and powerful. I also understand the limitations of the Pi, mind you. So once again, my request is that future comments stick to the aforementioned theme instead of digressing into how and why a Raspberry Pi provides graphics quality equivalent to a 1st-gen Xbox (I can use a search engine, too :wink: )

I am indeed not pretending but fully understand that I am working with a 10-year-old standard, so pointing that out does not really help me. It would help, however, if you helped me push it to its fullest :smiley:

Cheers!

it is only using about 25% of the CPU
Isn’t the Pi 3 using a quad-core CPU? Presumably the BGE is not multithreaded, so it’s just running on one of those cores.
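An easy way to check is to watch the per-core load while the game runs. A quick sketch (mpstat comes from the sysstat package, which may need installing first):

    # Untested sketch: per-core CPU stats every 2 seconds while the
    # game is running. If one core sits near 100% while the other three
    # idle, the engine is single-thread CPU-bound -- and 1 of 4 cores
    # saturated is exactly the ~25% total that htop reports.
    sudo apt-get install sysstat   # provides mpstat
    mpstat -P ALL 2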