A BitmapFont editor

Hello everyone. For the game I’m “working” on I needed - surprisingly - some text. I’ve had some problems with the Text objects (namely, they are not rendered when LibLoad-ed), so I went back to the glorious bitmap fonts. More problems: I couldn’t find a working BitmapFont encoder, so I had to write one. I don’t know if anyone might be interested in it; here’s a video of what it does now:

I wrote it in Java under Linux, but it uses just standard APIs, so it should work wherever an OpenJDK 7 is available (though the font rasterizer of the Oracle Java was a lot better, at least the last time I checked).

Here’s the zip with the program (sources inside, there’s no need to say it’s open source):

http://www.tukano.it/blender/bitmapfont/JBFT.zip
(52.7kb, zip file with one jar file in it, sources are included in the jar)
10.22.2013 Update - V. 0.0.2 - Application warns when format boundaries are exceeded, minor UI tweaks, code cleaning

Extract the zip; there is an executable jar file in it. Open a terminal and run “java -jar jbft.jar”. The source files are included in the jar file, but they’re a mess - it was a wild coding session.

Known issues:

1. Blender’s bitmap font has some size constraints (in particular, the size of a single char is limited to 256x256 pixels) and the program doesn’t check them [Fixed in V0.0.2]
2. I can’t get Blender to accept 2-byte Unicode characters.

Next steps.

  1. I think I’ll add a dialog to create a “fake-font” structure to paint things like icons in external image editors. The characters could then be used as keys to select which icon to paint (like A -> a cup of coffee, B -> a torch). The program will output two layers, one with the bitmap-font image and one with a grid-like layer to show where to paint stuff. Maybe three layers: bitmap font + grid + character keys.
  2. I will check that Unicode thing, because I’m not convinced it is a bug.

Notes for coders on the BitmapFont format.

The format is bizarre but manageable.
To make it short, for those who might be interested in writing an encoder: the upper rows of the image are used to store the metadata of the font. Each pixel (made of 4 bytes in sRGB) carries one byte of metadata, in the first channel.
The first four bytes, retrieved from the first four pixels, form the magic number BFNT (the ASCII code of each letter).
Then there are two bytes (that is, read two pixels and take the byte value of the first color channel of each) for the version number, which is 0 (a 16-bit unsigned short).
After the version, one ushort (2 bytes taken from the pixels, as above) gives the number of characters encoded in the image (N).
Then we have two puzzling shorts, referred to as XSIZE and YSIZE, that seem to have no purpose - I store the size of the biggest char in the bitmap there, but random numbers work as well.
That closes the manifest of the file. The header goes on with the metadata of the individual characters.
For each of the N characters we read:
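The manifest part can be sketched like this. The helper names are mine, and I’m assuming the two bytes of each ushort are stored high byte first - verify the byte order against Blender’s loader before relying on it:

```java
import java.awt.image.BufferedImage;

public class BfntHeader {
    // Write one metadata byte into the first (red) channel of pixel (x, 0).
    static void putByte(BufferedImage img, int x, int value) {
        img.setRGB(x, 0, (value & 0xFF) << 16);
    }

    // Write a 16-bit value as two consecutive metadata bytes
    // (assumed high byte first); returns the next free pixel index.
    static int putUShort(BufferedImage img, int x, int value) {
        putByte(img, x, (value >> 8) & 0xFF);
        putByte(img, x + 1, value & 0xFF);
        return x + 2;
    }

    // Emit the BFNT manifest: magic, version 0, char count N, XSIZE, YSIZE.
    static void writeManifest(BufferedImage img, int charCount, int xsize, int ysize) {
        int x = 0;
        for (char c : "BFNT".toCharArray()) putByte(img, x++, c); // magic number
        x = putUShort(img, x, 0);          // version
        x = putUShort(img, x, charCount);  // N
        x = putUShort(img, x, xsize);      // XSIZE (size of the biggest glyph works)
        x = putUShort(img, x, ysize);      // YSIZE
    }
}
```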
1 ushort = the unicode value of the character
1 ushort = the X position of the upper left corner of the character subimage
1 ushort = the Y position of the upper left corner of the character subimage
1 byte = the offset of the character along the x axis (for horizontal spacing between characters)
1 byte = the offset of the character along the y axis (for baseline alignment of characters)
1 byte = the width of the character subimage
1 byte = the height of the character subimage
1 byte = the advance of the character (how much space is added between this character and the next one)
1 byte = 0 (reserved byte)
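Each record above is 12 bytes (3 ushorts + 6 single bytes). A serialization sketch, with my own assumptions: ushorts high byte first, and metadata bytes simply continuing left to right, wrapping to the next header row:

```java
import java.awt.image.BufferedImage;

public class BfntGlyph {
    int unicode, x, y;                            // ushort fields
    int xOffset, yOffset, width, height, advance; // single-byte fields

    // Serialize this record as 12 metadata bytes starting at linear
    // pixel index p; returns the index of the next free metadata pixel.
    int write(BufferedImage img, int p) {
        p = putUShort(img, p, unicode);
        p = putUShort(img, p, x);
        p = putUShort(img, p, y);
        for (int b : new int[]{xOffset, yOffset, width, height, advance, 0}) {
            putByte(img, p++, b);                 // last byte is the reserved 0
        }
        return p;
    }

    // Map a linear metadata index onto pixels, one byte per pixel,
    // in the first (red) channel.
    static void putByte(BufferedImage img, int p, int v) {
        img.setRGB(p % img.getWidth(), p / img.getWidth(), (v & 0xFF) << 16);
    }

    static int putUShort(BufferedImage img, int p, int v) {
        putByte(img, p, (v >> 8) & 0xFF);
        putByte(img, p + 1, v & 0xFF);
        return p + 2;
    }
}
```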

Characters can be painted anywhere in the image (as long as we put the correct coordinates in the metadata), but the character metadata must be ordered by Unicode value. Or so it seems from reading the Blender source: we have to write the metadata for the letter A before the metadata for the letter a, otherwise characters are not found. It’s a strange precondition.

The first encoded char “must” be the @ symbol. I use quotes because I didn’t really check why; I made a quick test omitting it and things went wrong.
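The ordering requirement boils down to sorting the codepoints before emitting their records; conveniently, ‘@’ (64) sorts below both ‘A’ (65) and ‘a’ (97), so putting it first is at least consistent with ascending order. A trivial sketch (the method name is mine):

```java
import java.util.Arrays;

public class GlyphOrder {
    // Blender appears to expect per-character records sorted by
    // Unicode value, so sort the codepoints before writing metadata.
    static int[] orderCodepoints(int[] codepoints) {
        int[] sorted = codepoints.clone();
        Arrays.sort(sorted); // ascending: '@' (64) < 'A' (65) < 'a' (97)
        return sorted;
    }
}
```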

The width of the image must be a power of 2.
There are no constraints on the height.
The coordinate system for the location of the characters in the image includes the metadata rows of the image.
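A small helper for the width constraint (the helper name is mine; the height needs no rounding):

```java
public class ImageSize {
    // Round the desired atlas width up to the next power of two.
    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }
}
```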

As I mentioned, there is something weird going on with the Unicode values. No matter how I order the bytes or how I write the values, 2-byte characters (like ò à ù) are not found when it is time to render them - Blender seems to fall back to the default 0 character.