This post concerns this page on the wiki: https://wiki.osdev.org/VGA_Fonts#Displaying_a_character
Specifically, the optimized version of the character-displaying routine. There is no explanation of how exactly this bitmask is generated (or hardcoded).
I don't know whether I've missed some computer-graphics term for "bitmask" that implies a particular structure, or whether the page is just unclear on how to do things.
How exactly do I generate the bitmask for font rendering?
- Posts: 18
- Joined: Sat Apr 30, 2022 5:57 am
Re: How exactly do I generate the bitmask for font rendering
I'm not quite sure what your problem is. The section "Decoding of bitmap fonts" shows you the structure of the character bitmap.
Edit: Ah - I think you're referring to the mask_table array. Sorry, I've no idea where that came from.
Re: How exactly do I generate the bitmask for font rendering
While the math seems to have been left as an exercise for the reader, the mask table is basically the same data as the bitmap - just with whole bytes filled in instead of individual bits. Or rather, with whole pixels filled in - whatever that pixel size may be.
A brief explanation of the idea: you have your bitmap representation of a glyph, made up of one byte per row, with bits representing columns, and you expand this bitmap so that each bit becomes 8 bits. Note that for an 8-bit bitmap row, this means a mask row is 8 bytes - the example code seems to assume 32-bit values, so you need two of them, but I'm lazy and will use 64-bit values. So a row of 01101100b becomes 0x00FFFF00_FFFF0000. Except... we're probably on a little-endian machine, so this is actually backwards; it should be byte-reversed to 0x0000FFFF_00FFFF00. We do this for every row in our bitmap, and then the mask can be used to write multiple pixels at a time, which is probably faster than individual calls to a set-pixel function.
Re: How exactly do I generate the bitmask for font rendering
klange wrote: While the math seems to have been left as an exercise for the reader, the mask table is basically the same data as the bitmap - just with whole bytes filled in instead of individual bits. [...]

Ah, so 0b00100110 becomes 0x00FFFF00, 0x00FF0000. Just checking if I understood your explanation correctly.