How exactly do I generate the bitmask for font rendering?

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
ThatCodingGuy89
Posts: 18
Joined: Sat Apr 30, 2022 5:57 am

How exactly do I generate the bitmask for font rendering?

Post by ThatCodingGuy89 »

Post is to do with this page in the wiki: https://wiki.osdev.org/VGA_Fonts#Displaying_a_character

Specifically, the optimized version of the character-displaying routine. There is no explanation of how exactly this bitmask is generated (or hardcoded).

I don't know if I am an idiot and have missed some computer graphics term for bitmask that implies some structure, or if the page is just unclear on how to do things.
User avatar
iansjack
Member
Posts: 4703
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: How exactly do I generate the bitmask for font rendering

Post by iansjack »

I'm not quite sure what your problem is. The section "Decoding of bitmap fonts" shows you the structure of the character bitmap.

Edit: Ah - I think you're referring to the mask_table array. Sorry, I've no idea where that came from.
klange
Member
Posts: 679
Joined: Wed Mar 30, 2011 12:31 am
Libera.chat IRC: klange
Discord: klange

Re: How exactly do I generate the bitmask for font rendering

Post by klange »

While the math seems to have been left as an exercise for the reader, the mask table is basically the same data as the bitmap - just with whole bytes filled in instead of individual bits. Or rather, with whole pixels filled in - whatever that pixel size may be.

A brief explanation of the idea is that you have your bitmap representation of a glyph, made up of a byte per row, with bits representing columns, and you take this bitmap and expand it out so that each bit is now 8 bits. Note that for an 8-bit bitmap row, this means a mask row is 8 bytes - the example code seems to assume 32-bit values, so you need two of them, but I'm lazy and will use 64-bit values. So a row of 01101100b becomes 0x00FFFF00_FFFF0000. Except... we're probably on a little-endian machine, so this is actually backwards. It should be byte-reversed to 0x0000FFFF_00FFFF00. We do this for every row in our bitmap and then the masking can be used to write multiple pixels at a time, which is probably faster than individual calls to a set-pixel function.
ThatCodingGuy89
Posts: 18
Joined: Sat Apr 30, 2022 5:57 am

Re: How exactly do I generate the bitmask for font rendering

Post by ThatCodingGuy89 »

klange wrote:While the math seems to have been left as an exercise for the reader, the mask table is basically the same data as the bitmap - just with whole bytes filled in instead of individual bits. Or rather, with whole pixels filled in - whatever that pixel size may be.

A brief explanation of the idea is that you have your bitmap representation of a glyph, made up of a byte per row, with bits representing columns, and you take this bitmap and expand it out so that each bit is now 8 bits. Note that for an 8-bit bitmap row, this means a mask row is 8 bytes - the example code seems to assume 32-bit values, so you need two of them, but I'm lazy and will use 64-bit values. So a row of 01101100b becomes 0x00FFFF00_FFFF0000. Except... we're probably on a little-endian machine, so this is actually backwards. It should be byte-reversed to 0x0000FFFF_00FFFF00. We do this for every row in our bitmap and then the masking can be used to write multiple pixels at a time, which is probably faster than individual calls to a set-pixel function.
Ah, so 0b00100110 becomes 0x00FFFF00, 0x00FF0000. Just checking if I understood your explanation correctly.