[Dev] Call for files that take forever to convert

Have a file that takes ages to get through the converter? I’d like to see it! Part of my GSoC project has been to speed up the converter. I’ve already done some optimizations in the Swiss branch, but I need more test files to get some more profiling information. So, if you post your slow (conversion time only!) blend files here, I can take a look at them and try to speed them up. If you don’t want to share the file publicly, but don’t mind sharing with me (I won’t share it with anyone else), then feel free to send me a private message.

Thanks,
Moguri

Try the 7dfps unoptimised map in Dropbox.

Which one is the “un-optimized” one? map.blend? If I recall, the problem with that map was that it had a really hi-poly physics mesh, which Bullet was taking forever to build. Setting the map to No Collision sped up the conversion time considerably, but then you’d have to go back through and add (a) new physics mesh(es).
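In case anyone wants to try the same workaround on their own map, here’s a quick bpy sketch (the “Map” name prefix is just a placeholder for whatever your heavy collision geometry is called) that flips those meshes to No Collision from the text editor or console:

```python
# Quick sketch: switch the heavy map meshes to No Collision so Bullet
# doesn't build triangle-mesh shapes for them at conversion time.
# Assumes Blender 2.6x bpy; the "Map" prefix is just a placeholder.
import bpy

for obj in bpy.data.objects:
    if obj.type == 'MESH' and obj.name.startswith("Map"):
        obj.game.physics_type = 'NO_COLLISION'
```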

Is there any point in sending you unoptimised blends? I have a few maps that take a while to load when I press ‘P’, but I haven’t removed the high-poly models that are linked into the scene and used to bake down textures to the low-poly objects. Once I remove these, they seem to convert quite quickly.

What other situations take a long time to convert? You’ve already mentioned that high-poly collision meshes take a long time for Bullet to process, but I was wondering if there are other definite situations that make the conversion slow?

Well, unoptimized still gives me something to profile. However, if it’s unoptimized in the sense that it has hi-poly collision meshes, then it’s not very useful. I think large textures might also slow things down, but the cost of texture uploads would probably be reduced by using DXT compression on DDS textures. However, that currently isn’t supported in trunk (there is a patch, but it didn’t make it into 2.64; maybe 2.65).

So using DDS should help reduce the time it takes for images to be uncompressed and loaded onto the GPU? I’ve never heard of DDS; going to have to google it. I’m assuming Photoshop and the like can export DDS files?

Photoshop and The GIMP both have plugins available to save out to DDS. The beauty of DDS is that it can stay compressed on the graphics card. This means a few things: 1) no time is wasted decompressing, 2) less bandwidth is needed to upload the images, and 3) less VRAM is needed to store them.
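If you want to experiment with it, here’s a rough batch-conversion sketch. It assumes you have nvcompress (from the NVIDIA Texture Tools) on your PATH and a folder of PNG textures; the folder name is just a placeholder, and -bc3 is DXT5, which keeps the alpha channel:

```python
# Rough sketch: batch-convert PNG textures to DXT5-compressed DDS files.
# Assumes nvcompress (NVIDIA Texture Tools) is installed and on the PATH.
import os
import subprocess

texture_dir = "textures"  # placeholder folder

for name in os.listdir(texture_dir):
    if name.lower().endswith(".png"):
        src = os.path.join(texture_dir, name)
        dst = os.path.splitext(src)[0] + ".dds"
        subprocess.check_call(["nvcompress", "-bc3", src, dst])  # -bc3 = DXT5
```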

Oh, so DDS is quite helpful!
What about TGA? Many games use TGA.

As far as I know, TGA offers minimal (if any) speed improvement over other image formats. It does, however, take up more space on disk. In my opinion, you’re better off using an image format with lossless compression, such as PNG.

PNG is quite a pain to use when making textures because of slow exporting (yep, GIMP user here).
However, I don’t see much difference between TGA and PNG: both are large, and both support alpha.

I haven’t checked recently, but improperly sized images (non-power-of-2 dimensions) used to be a huge slowdown at scene conversion. IIRC, these images have to run through a scaling algorithm. I was debugging a file for someone once where some images were 2049x2049; they created a 30~40 second lag at load time. When the images were properly sized (2048x2048), the scene conversion time dropped to 1~2 seconds. (There were quite a few of these images, but the scene was mostly planes textured with them, so there were no physics-related issues that could have been to blame.)
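If you want to check your own file for this, here’s a small bpy sketch that lists every image in the .blend whose width or height is not a power of two (run it from Blender’s text editor or console):

```python
# List images in the .blend whose dimensions are not powers of two.
import bpy

def is_pow2(n):
    return n > 0 and (n & (n - 1)) == 0

for img in bpy.data.images:
    w, h = img.size
    if w and h and not (is_pow2(w) and is_pow2(h)):
        print("Non-power-of-2 image: %s (%dx%d)" % (img.name, w, h))
```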

I haven’t checked in 2.5x or 2.6x to see if the issue still exists, but here’s a suggestion: would it be possible to just pad out the image instead of scaling? For example, if it is 2049x2049, could you just pad it out to 4096x4096 with empty pixels and somehow instruct the GE that those pixels should be ignored? It seems to me that this would be much quicker, and I believe the scaling is already upsizing to 4096 anyway, so there would be no extra loss in the way of memory.
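Just to illustrate what I mean (this is not how the GE actually handles it, and the numbers are only from the example above): if the image were padded rather than scaled, the UVs would only need a uniform rescale so the mesh keeps sampling the original pixels:

```python
# Illustration only: pad a 2049x2049 image up to 4096x4096 and rescale the
# UVs so only the original (non-padded) region is ever sampled.
def next_pow2(n):
    p = 1
    while p < n:
        p *= 2
    return p

orig_w, orig_h = 2049, 2049
pad_w, pad_h = next_pow2(orig_w), next_pow2(orig_h)  # both 4096

def rescale_uv(u, v):
    # UVs that covered the whole 2049x2049 image now cover only the
    # 2049/4096 corner of the padded texture.
    return u * orig_w / float(pad_w), v * orig_h / float(pad_h)

print(rescale_uv(1.0, 1.0))  # roughly (0.5002, 0.5002)
```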