xeyes wrote:Does this squeeze the characters even more than 80x50 or it has a higher resolution for real (aka makes Qemu/Bochs window grow bigger)?
It will use an 8x8 font, so with 90x60 characters you get a real resolution of 720x480 pixels, and the Qemu/Bochs window grows accordingly.
xeyes wrote:Not too worried about code size as this is intended for a normal computer, but these don't seem trivial either for someone never dealt with graphics before.
Look, Scalable Screen Font is a fully featured font renderer that deals with both geometrically compressed and deflated fonts, and supports scalable vector fonts too. Of course it's not trivial. However, if you only have an uncompressed 8x16 bitmap font (the same as in the VGA ROM), then your code can be as trivial as:
Code: Select all
void drawchar(unsigned char c, int x, int y, int fgcolor, int bgcolor)
{
    int cx, cy;
    /* each glyph is 16 bytes, one byte per row, with the leftmost pixel in the most significant bit;
       (x, y) is the top left corner of the character cell */
    for(cy = 0; cy < 16; cy++){
        for(cx = 0; cx < 8; cx++){
            putpixel(font[c * 16 + cy] & (0x80 >> cx) ? fgcolor : bgcolor, x + cx, y + cy);
        }
    }
}
Basically you iterate through the bitmap and plot a pixel with either the foreground or the background color, depending on whether the corresponding bit in the font is set or not: font[c * 16 + cy] & (0x80 >> cx). (This isn't optimized code, it's trivial code. Here we take advantage of the fact that the font is 8 pixels wide, so one row is stored in a single byte, and the font is 16 pixels tall, so each and every glyph is stored in 16 bytes. That's c * 16. Then we add the current row to that, c * 16 + cy, and finally we check whether the cxth bit of that row is set, counting from the most significant bit because the leftmost pixel is stored there.)
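Drawing a whole string is then just a loop around drawchar, advancing x by the 8 pixel glyph width for each character. A minimal sketch (the drawstring name is just made up for this example):
Code: Select all
/* Hypothetical helper: draw a zero-terminated string starting at (x, y),
   calling the drawchar routine above for each character. */
void drawstring(const char *s, int x, int y, int fgcolor, int bgcolor)
{
    for(; *s; s++, x += 8)
        drawchar((unsigned char)*s, x, y, fgcolor, bgcolor);
}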
xeyes wrote:I assume linear frame buffer means the CPU writes pixels directly to a row major 2D array?
Well, there's no such thing as row-major in memory. That's just a notation on paper (this is a common confusion for DirectX and OpenGL developers too: their documentation uses different notations, but both actually use the same byte order in memory for a matrix).
And it's not 2D, just a 1D array with gaps. So first come the first row's pixels, then there might be a gap, then the second row's pixels, and so on. That's why it is important to get both the screen width and the pitch: the first tells how many pixels are visible in a row, the second how many bytes one row occupies in memory. The two often match, but not always. For example, take 800 x 600 in true color: on screen a row has 800 pixels, which is 3200 bytes at 4 bytes per pixel, but the second row might start not at byte 3200 but at byte 4096.
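To make that concrete, here's a minimal putpixel sketch matching the signature drawchar uses above. It assumes a 32 bpp linear framebuffer; fb and pitch are hypothetical globals you'd fill in from whatever mode info you queried (VBE, GOP, etc.):
Code: Select all
#include <stdint.h>

/* Hypothetical globals from the video mode info:
   fb    - virtual address the framebuffer is mapped at
   pitch - bytes per scanline in memory (can be more than width * 4) */
extern uint8_t  *fb;
extern uint32_t  pitch;

void putpixel(int color, int x, int y)
{
    /* row start is y * pitch bytes, then x pixels of 4 bytes each within that row */
    *(uint32_t *)(fb + y * pitch + x * 4) = (uint32_t)color;
}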
xeyes wrote:Maybe not every pixel need to be redrawn at each frame. But at the end of the day it seems that a good algorithm is needed to get reasonable performance without actually tying up a 5Ghz CPU just to get some text out.
Exactly. Only change the pixels that need to be changed. And never read the framebuffer, that's slow; only write to it.
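One simple way to do that (just a sketch, names made up) is to keep a shadow copy of the screen in ordinary RAM: compare against and update the shadow, and only when a pixel actually differs write it out with the putpixel sketch above, so the framebuffer itself is never read:
Code: Select all
#define WIDTH  720
#define HEIGHT 480

/* Hypothetical shadow copy of the screen in normal RAM; the real framebuffer stays write-only. */
static int shadow[HEIGHT][WIDTH];

void putpixel_cached(int color, int x, int y)
{
    if(shadow[y][x] == color)
        return;               /* nothing changed, skip the slow framebuffer access entirely */
    shadow[y][x] = color;     /* compare and update only the RAM copy... */
    putpixel(color, x, y);    /* ...and write (never read) the framebuffer */
}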
xeyes wrote:Maybe I'll stick with text mode for a long time until GUI is needed.
This isn't really a choice, as @Octocontrabass pointed out. On modern machines you only have pixel modes; text mode is already obsolete.
xeyes wrote:But isn't this case the other way around?
No, you have to make sure you use only known codes. That is:
Terminal to your app: key codes, the ones listed in termcap; you should interpret all of those.
Your app to terminal: only send ANSI CSI codes so you don't have to worry about the terminal's type.
Now some codes are the same for both: for example, if the user presses the [cursor up] key, then the terminal will send (escape)[A to your app. And if your app wishes to move the cursor on the terminal, it sends the same (escape)[A to the terminal. Likewise, if the user presses the [Home] key, then the terminal sends (escape)[H, and if you want to move the cursor to the top left corner of the screen you send the same (escape)[H sequence to the terminal.
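Sending such a sequence from your app is nothing more than writing the bytes to the serial line. A sketch, assuming a hypothetical serial_write() that pushes raw bytes to the port Putty is listening on (snprintf is used only for brevity; in a kernel you'd use your own formatter):
Code: Select all
#include <stdio.h>

/* Hypothetical: sends len raw bytes out on the serial port. */
void serial_write(const char *s, int len);

/* Move the terminal's cursor to the 1-based (row, col) position with the standard CSI "H" sequence. */
void term_goto(int row, int col)
{
    char buf[32];
    int len = snprintf(buf, sizeof(buf), "\033[%d;%dH", row, col);
    serial_write(buf, len);
}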
xeyes wrote:My code would be sending these output escape codes like move cursor to X and Y via serial to Putty running on the host in order to draw. They'll send me a different set of input codes for things such as "the user has pressed/released enter key" just like a PS/2 keyboard's scan codes?
Exactly. Except that there are codes common to both sets, there are no key-release codes, and it's not scan codes that are sent but ASCII codes. Special keys send control codes: ^H, ^M etc. (see the wiki), or a multi-character sequence, which always starts with ASCII 27 (Escape). What sequences the terminal sends depends on its configuration (whether Backspace sends ASCII 8 or ASCII 127, and what the function keys send; the rest is quite standard), and what sequences you send to the terminal depends on you detecting the terminal's capabilities (or on sticking to a limited set of ANSI escape sequences guaranteed to work on all terminals).
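On the input side the job is the reverse: read bytes, and whenever you see ASCII 27 collect the rest of the sequence before deciding which key it was. A minimal sketch with a hypothetical serial_read_byte(), handling only a few VT100 keys:
Code: Select all
enum key { KEY_CHAR, KEY_UP, KEY_DOWN, KEY_RIGHT, KEY_LEFT, KEY_HOME, KEY_UNKNOWN };

/* Hypothetical: blocks until one byte arrives from the terminal. */
unsigned char serial_read_byte(void);

enum key read_key(unsigned char *ch)
{
    unsigned char c = serial_read_byte();
    if(c != 27){ *ch = c; return KEY_CHAR; }    /* plain ASCII, including ^H, ^M, ^C... */
    if(serial_read_byte() != '[') return KEY_UNKNOWN;
    switch(serial_read_byte()){                 /* ESC [ x : the VT100 cursor keys */
        case 'A': return KEY_UP;
        case 'B': return KEY_DOWN;
        case 'C': return KEY_RIGHT;
        case 'D': return KEY_LEFT;
        case 'H': return KEY_HOME;
        default:  return KEY_UNKNOWN;
    }
}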
xeyes wrote:If that is the case I can easily forgo the eye candies such as colors, as long as I can interpret alphanumerical key inputs and a few control keys like ctrl, etc.
Key modifiers aren't sent either. The terminal is responsible for handling those, and it sends a different sequence instead. For example, pressing [C] will send ASCII 99 ("c"), pressing [Shift]+[C] will send ASCII 67 ("C"), and pressing [Ctrl]+[C] will send ASCII 3 (^C). But yes, your app doesn't interpret the colors, it only sends them. So if you don't send RGB sequences, only the 0, 1, 30-37, 40-47 codes, then you'll be limited to 16 colors, but you don't have to care about the terminal's type; all of them will interpret those color codes correctly.
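Setting those 16 colors is again just an SGR sequence. A sketch reusing the hypothetical serial_write() from above: 30-37 select the foreground, 40-47 the background, 1 makes it bright, 0 resets everything:
Code: Select all
#include <stdio.h>

void serial_write(const char *s, int len);   /* hypothetical, same as before */

/* Select foreground (0-7) and background (0-7) using only the portable SGR codes. */
void term_color(int fg, int bg, int bright)
{
    char buf[32];
    int len = snprintf(buf, sizeof(buf), "\033[%d;%d;%dm",
                       bright ? 1 : 0, 30 + fg, 40 + bg);
    serial_write(buf, len);
}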
My advice is: interpret all VT100 - VT220 key codes (you can forget about VT52), try to support all Fn combinations. And only emit VT100 sequences. This will guarantee the most compatibility without difficult terminal detection code.
Cheers,
bzt