Well, I guess this qualifies as an introduction
Even though I have OS-related questions, I suppose they'll go at the end; I didn't want to place a primarily-introduction thread into the OS topics and incur the wrath on my first post.
Name's Eli Fedele. I'm a computer science major, but I've been working in code for a while, in various languages (from C to C++, Obj-C to Java).
For the longest time, I've wanted to write a simple OS and later build it into something useful, but I've also held back because of the monumental amount of work and the inevitable question of "Why write your own OS when you can get one that's supported with [insert long list of feature sets]?". That's a question I had to take a long time to answer for myself, knowing it may be years of development before I get anything with any semblance of daily "usability". On top of that was reading the OSDev wiki - it takes a very strong 'no nonsense' approach, and even now I'm hoping the amount of research I've been doing counts as "enough". "Intimidation" might be a good word, even for someone with otherwise ample code-writing under his belt.
When I first started learning to program, it was simultaneously software- and hardware-based. I first learned on an Arduino, but wasn't satisfied until I realized how the thing actually did what I coded it to do (even though I didn't know much, I knew for a fact it didn't understand Serial.write() natively). From there it was on to faster processors, bigger chips, more complicated systems. I've worked with FPGAs, instructed raw x86 processors on sketchily-made dev boards, and I've been working as an iOS developer for some months.
To put it all in perspective, I'm 4 months shy of 21. I'm still very young in the eyes of the world, and almost undoubtedly carry some youthful buttheadedness about me. My exposure to computers and systems has spanned both hardware and software, the missing link being how the hardware and software interact (the OS). A few days ago I realized I wouldn't be happy unless I finally kicked myself into making an attempt at an OS.
So today, I whipped up some code in C a la the "Hello Kernel World" on the wiki. I tried for the most part to write it myself (the last thing I'd want to do is write it off the example, because then my code would just be a differently-worded carbon copy), using the source only as a reference. I've already (tentatively) implemented text scrolling and newlines (I'm working on adding other control characters like tab, etc.), and next I'm going to work on rendering colors differently.
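Roughly speaking, the core of it looks something like this - not my exact code, just a simplified sketch of the approach with made-up names, assuming the VGA text buffer lives at 0xB8000:

Code:
/* Sketch of VGA text-mode output with newline handling and scrolling.
 * Assumes the text buffer is accessible at physical 0xB8000. */
#include <stdint.h>
#include <stddef.h>

#define VGA_WIDTH  80
#define VGA_HEIGHT 25

static volatile uint16_t *const vga = (uint16_t *)0xB8000;
static size_t row = 0, col = 0;
static uint8_t color = 0x07;                 /* light grey on black */

static void scroll(void)
{
    /* Move every line up by one and blank the bottom line. */
    for (size_t y = 1; y < VGA_HEIGHT; y++)
        for (size_t x = 0; x < VGA_WIDTH; x++)
            vga[(y - 1) * VGA_WIDTH + x] = vga[y * VGA_WIDTH + x];
    for (size_t x = 0; x < VGA_WIDTH; x++)
        vga[(VGA_HEIGHT - 1) * VGA_WIDTH + x] = (uint16_t)(' ' | color << 8);
    row = VGA_HEIGHT - 1;
}

void term_putchar(char c)
{
    if (c == '\n') {                         /* newline: wrap to column 0 */
        col = 0;
        if (++row == VGA_HEIGHT)
            scroll();
        return;
    }
    vga[row * VGA_WIDTH + col] = (uint16_t)((uint8_t)c | color << 8);
    if (++col == VGA_WIDTH) {                /* end of line: behave like a newline */
        col = 0;
        if (++row == VGA_HEIGHT)
            scroll();
    }
}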
I intend to really muck with this blatantly-simple code for a good while before moving on. So many elements of coding are built around gross repetition and must be learned "hard and fast" the way they are before progressing. Rather than try to push the envelope to go for speed ("but I need those l33t graphix now!!") I want to get the core handling down before moving on. This leads me to a few initial questions:
1. How do I set up a handler for reading keystrokes? Obviously, reading into the keyboard buffer, just like writing to the text mode buffer, would be a start. What I want to work on is a slight modification of my existing "kernel" (if the abomination I've written so far can be called that) where it takes a user string input and echoes it back. Not very complex, but as keyboards are weighted heavily by the OS in terms of both processing and IRQ priority (not to mention their importance to the user), I want to get down the keyboard tracking logic as soon as possible.
2. How do I deal with the IRQs at this level? From having done a good amount of work on embedded systems/microcontrollers (not a super-great deal - I don't have a Master's in embedded systems, mind you), I find that writing an OS is much like writing code for a micro, because many of the standard library functions are unavailable and things often have to be triggered by direct writes to memory locations. Anyway, rambling aside, how do I work on the keyboard ISR at this point - such that when a key is pressed, it's pushed to the screen?
3. Are file systems a core function of the kernel - that is, does the kernel "natively" have the ability, when mature enough, to read into files? That may be what I look into next after I've got a good amount of practice - implementing FAT or some subset of it.
Final thoughts - if all of you have the patience to bear with me on many things, I'll learn quickly. It need not be said that operating system development is a much different realm from the modes of development I've been working in for a while now; one of the reasons I justified this leap to myself is that if my time on this forum bears fruit, even if my kernel never makes it out of the "hello world" stage, I'm hoping that it will make me a better, more versatile programmer - maybe, just maybe having an idea about how the OS works will help me write better software, if nothing more.
Regards,
Eli Fedele
blasthash
P.S. I've said this in a bunch of circles I'm a part of, as blasthash is pretty much my handle across the web - if you can take a stab at what it references, we'll be fast friends.
- Combuster
Re: Well, I guess this qualifies as an introduction
If you're lucky, the PS/2 Keyboard is what you need for a hardware reference. If not, then it's USB and you'll have quite a long way to go. Either way, getting a character on screen is the job of the app, because you might want to change what that same key does later. Typically you have a queue of such events, where the keyboard driver writes to the queue and some app reads from it. If you're familiar with threading primitives, writing an actual IRQ handler shouldn't be too hard.
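In very rough terms, something like this (only a sketch - the names are made up, and it assumes a small inb() helper for reading the PS/2 data port):

Code:
/* Sketch of the "driver writes, app reads" queue idea for a PS/2 keyboard. */
#include <stdint.h>
#include <stdbool.h>

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

#define KBD_QUEUE_SIZE 128
static volatile uint8_t kbd_queue[KBD_QUEUE_SIZE];
static volatile unsigned kbd_head, kbd_tail;

/* Called from the IRQ1 handler: grab the scancode and push it (drop it if full). */
void keyboard_irq(void)
{
    uint8_t scancode = inb(0x60);                 /* PS/2 data port */
    unsigned next = (kbd_head + 1) % KBD_QUEUE_SIZE;
    if (next != kbd_tail) {
        kbd_queue[kbd_head] = scancode;
        kbd_head = next;
    }
    /* ...then send EOI to the PIC before returning. */
}

/* Called by whatever "app" consumes input; returns false if the queue is empty. */
bool keyboard_pop(uint8_t *scancode)
{
    if (kbd_tail == kbd_head)
        return false;
    *scancode = kbd_queue[kbd_tail];
    kbd_tail = (kbd_tail + 1) % KBD_QUEUE_SIZE;
    return true;
}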
blasthash wrote:
3. Are file systems a core function of the kernel

That depends on design philosophy. You can isolate functional chunks from each other so that very little is part of the kernel, or you can decide not to have such barriers. Similarly, IRQ handling could be little more than the kernel sending a short message to a process outside the kernel and having that process read the device on its own.
Re: Well, I guess this qualifies as an introduction
Combuster wrote:
If you're lucky, the PS/2 Controller is what you need for a hardware reference. If not, then it's USB and you'll have quite a long way to go. Either way, getting a character on screen is the job of the app because you might want to change what that same key does later. Typically you have a queue of such events, where the keyboard driver writes to the queue and some app reads from it. If you're familiar with threading primitives, writing an actual IRQ handler shouldn't be too hard.

Luckily I'll probably break an old motherboard out of a giant bin of computing parts to use for testing. Dual PS/2 ports for mouse and keyboard.
From having to implement USB/HID-class drivers in both hardware (FPGAs) and microcontrollers I know all too well how ugly that picture can be. So for this picture, the keyboard driver simply pulls from the keyboard buffer and runs it to a queue for app usage?
For whatever reason (it might be that it's late at night, or that I've spent way too much time and brainpower in Obj-C for my own good), I'm drawing blanks on threading primitives, but I'll take a good long look at the wiki entries pertaining to ISRs/IRQs and see about getting something out of it.
And from doing a little bit of looking, I need to hit the books on x86 assembly - I've worked before in MIPS assembly, but that arch has much different interrupt units and methods of handling than x86.
- Bender
Re: Well, I guess this qualifies as an introduction
Hello and welcome to the Operating Systems Development Forum!
If you're up for it, I suggest starting from here: http://wiki.osdev.org/What_order_should ... _things_in
This should be of help too:
http://wiki.osdev.org/8259_PIC
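If it helps, the remap that page describes boils down to roughly this (just a sketch assuming an outb() helper; 0x20/0x28 are only the conventional vector offsets):

Code:
/* Sketch: remap the 8259 PICs so IRQs 0-15 land on vectors 0x20-0x2F
 * instead of colliding with CPU exception vectors. */
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

void pic_remap(void)
{
    outb(0x20, 0x11);   /* ICW1: start init, expect ICW4 (master) */
    outb(0xA0, 0x11);   /* ICW1 (slave) */
    outb(0x21, 0x20);   /* ICW2: master vector offset = 0x20 */
    outb(0xA1, 0x28);   /* ICW2: slave vector offset = 0x28 */
    outb(0x21, 0x04);   /* ICW3: slave attached to master IRQ2 */
    outb(0xA1, 0x02);   /* ICW3: slave's cascade identity */
    outb(0x21, 0x01);   /* ICW4: 8086 mode */
    outb(0xA1, 0x01);
    outb(0x21, 0xFB);   /* mask everything on the master except IRQ2 (cascade)... */
    outb(0xA1, 0xFF);   /* ...and everything on the slave; unmask per driver later */
}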
blasthash wrote:
And from doing a little bit of looking, I need to hit the books on x86 assembly - I've worked before in MIPS assembly, but that arch has much different interrupt units and methods of handling than x86.

Perhaps Art of Assembly?
"In a time of universal deceit - telling the truth is a revolutionary act." -- George Orwell
(R3X Runtime VM)(CHIP8 Interpreter OS)
Re: Well, I guess this qualifies as an introduction
blasthash wrote:
one of the reasons I justified this leap to myself is that if my time on this forum bears fruit, even if my kernel never makes it out of the "hello world" stage, I'm hoping that it will make me a better, more versatile programmer

Yes, aim for versatility. In the end (how many years later?) you will get a grip on the architecture and be able to finish your OS in a very nice manner. But to get there you should start by writing simple things. While doing so, don't forget about the main goal - the architecture. Your OS is just your vision; if the vision is too low-level, the OS will be ugly. But even an ugly OS can eventually become a Windows 8 or something like it - the amount of work you put in is what matters here.
Re: Well, I guess this qualifies as an introduction
Bender wrote:
Hello and welcome to the Operating Systems Development Forum!
If you're up for it, I suggest starting from here: http://wiki.osdev.org/What_order_should ... _things_in
This should be of help too:
http://wiki.osdev.org/8259_PIC
Perhaps Art of Assembly?

That looks like a good choice. I'll start grabbing some tutorials in the meantime.

embryo wrote:
Yes, aim for versatility. In the end (how many years later?) you will get a grip on the architecture and be able to finish your OS in a very nice manner. But to get there you should start by writing simple things. While doing so, don't forget about the main goal - the architecture. Your OS is just your vision; if the vision is too low-level, the OS will be ugly. But even an ugly OS can eventually become a Windows 8 or something like it - the amount of work you put in is what matters here.

That's one of the things I'm trying to keep in mind - extensibility. I'm trying to push myself to write code that works, but isn't bloated, because I don't want to have to rewrite it all later when the upper-layer stuff (windowing, what have you) causes the less-than-optimal code to crap out. I'm trying to keep the big picture in perspective.
I couldn't find anything about this on the wiki, mainly because I don't know how to refer to it. But how do I make my kernel "self-aware" in that it senses its own neighborhood of devices attached to the computer? Obviously, the first step to reading into the filesystem of a drive is knowing that the drive is there.
Re: Well, I guess this qualifies as an introduction
Hi,
blasthash wrote:
I couldn't find anything about this on the wiki, mainly because I don't know how to refer to it. But how do I make my kernel "self-aware" in that it senses its own neighborhood of devices attached to the computer? Obviously, the first step to reading into the filesystem of a drive is knowing that the drive is there.

An OS is many layers, where higher layers are built on lower layers. For the lowest layers you need to know:
- RAM (detected by asking firmware - e.g. BIOS "int 0x15, eax=0xE820")
- If there are PIC chips or not; and if there are IO APICs or not (detected by searching for, then parsing, ACPI tables)
- (Optional) How many CPUs there are and what their IDs are (also detected by searching for, then parsing, ACPI tables)
- Some sort of timer/s (local APIC timer, HPET, PIT or RTC) for scheduling. Note: This mostly needs high precision, lower accuracy and an IRQ, as it'll be used for things like "nanosleep()".
- Some sort of timer (local APIC timer, ACPI timer, HPET, PIT or RTC) for keeping track of "wall clock time" (may or may not be the same timer used for scheduling). Note: This mostly needs high accuracy (low "drift") and doesn't necessarily need high precision or any IRQ.
This is all you need for (physical, virtual and heap) memory management, scheduling, IPC/communication, etc.
The next layer involves some sort of "device manager" that searches for devices and starts the corresponding device drivers (if possible). This mostly begins by scanning PCI buses (and once PCI is under control, it's safe to start checking for legacy devices).
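As a very rough illustration of what such a scan can look like (a sketch using the legacy 0xCF8/0xCFC port mechanism; found_device() is a made-up hook for whatever your device manager does with a hit):

Code:
/* Sketch: brute-force PCI bus scan via the legacy I/O-port config mechanism. */
#include <stdint.h>

static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t v;
    __asm__ volatile ("inl %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Hypothetical hook supplied by the device manager. */
extern void found_device(unsigned bus, unsigned dev, unsigned fn,
                         uint16_t vendor, uint16_t device);

static uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t offset)
{
    uint32_t address = 0x80000000u | ((uint32_t)bus << 16)
                     | ((uint32_t)dev << 11) | ((uint32_t)fn << 8)
                     | (offset & 0xFC);
    outl(0xCF8, address);
    return inl(0xCFC);
}

void pci_scan_all(void)
{
    for (unsigned bus = 0; bus < 256; bus++)
        for (unsigned dev = 0; dev < 32; dev++)
            for (unsigned fn = 0; fn < 8; fn++) {
                uint32_t id = pci_config_read32(bus, dev, fn, 0x00);
                if ((id & 0xFFFF) == 0xFFFF)
                    continue;                    /* no function here */
                found_device(bus, dev, fn, id & 0xFFFF, id >> 16);
            }
}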
The point is, the higher layers depend on the lowest layers; and therefore you can (and should) completely forget about them until you've finished implementing the lowest layers. Basically, until you've got working memory management, then a working scheduler and some sort of communication (messages, pipes, whatever); the only hardware you'll need to care about is RAM, CPUs, timer/s and whatever is used for the timer/s IRQs. Otherwise it's a bit like building a house by starting with the roof - without walls to support it the roof is just going to fall on your head.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Well, I guess this qualifies as an introduction
Brendan wrote:
Hi,
blasthash wrote:
I couldn't find anything about this on the wiki, mainly because I don't know how to refer to it. But how do I make my kernel "self-aware" in that it senses its own neighborhood of devices attached to the computer? Obviously, the first step to reading into the filesystem of a drive is knowing that the drive is there.

An OS is many layers, where higher layers are built on lower layers. For the lowest layers you need to know:
- RAM (detected by asking firmware - e.g. BIOS "int 0x15, eax=0xE820")
- If there are PIC chips or not; and if there are IO APICs or not (detected by searching for, then parsing, ACPI tables)
- (Optional) How many CPUs there are and what their IDs are (also detected by searching for, then parsing, ACPI tables)
- Some sort of timer/s (local APIC timer, HPET, PIT or RTC) for scheduling. Note: This mostly needs high precision, lower accuracy and an IRQ, as it'll be used for things like "nanosleep()".
- Some sort of timer (local APIC timer, ACPI timer, HPET, PIT or RTC) for keeping track of "wall clock time" (may or may not be the same timer used for scheduling). Note: This mostly needs high accuracy (low "drift") and doesn't necessarily need high precision or any IRQ.
This is all you need for (physical, virtual and heap) memory management, scheduling, IPC/communication, etc.
The next layer involves some sort of "device manager" that searches for devices and starts the corresponding device drivers (if possible). This mostly begins by scanning PCI buses (and once PCI is under control, it's safe to start checking for legacy devices).
The point is, the higher layers depend on the lowest layers; and therefore you can (and should) completely forget about them until you've finished implementing the lowest layers. Basically, until you've got working memory management, then a working scheduler and some sort of communication (messages, pipes, whatever); the only hardware you'll need to care about is RAM, CPUs, timer/s and whatever is used for the timer/s IRQs. Otherwise it's a bit like building a house by starting with the roof - without walls to support it the roof is just going to fall on your head.
Cheers,
Brendan
And to gedit we go! Haha. I'll work on throwing in the E820 RAM detect before going through the RSDP/ACPI table chain. One step at a time.
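For the RSDP part, what I have in mind is roughly this (only a sketch - the names are placeholders, and it assumes those low physical ranges are still identity-mapped):

Code:
/* Sketch: find the ACPI RSDP by scanning the first KiB of the EBDA and the
 * BIOS area 0xE0000-0xFFFFF on 16-byte boundaries for "RSD PTR ", then
 * verifying the checksum over the 20-byte ACPI 1.0 structure. */
#include <stdint.h>

static int rsdp_sig_and_checksum_ok(const volatile uint8_t *p)
{
    static const char sig[9] = "RSD PTR ";
    uint8_t sum = 0;
    for (int i = 0; i < 8; i++)
        if (p[i] != (uint8_t)sig[i])
            return 0;
    for (int i = 0; i < 20; i++)       /* bytes of the ACPI 1.0 RSDP must sum to 0 */
        sum += p[i];
    return sum == 0;
}

static const volatile uint8_t *scan_range(uintptr_t start, uintptr_t end)
{
    for (uintptr_t a = start; a < end; a += 16)
        if (rsdp_sig_and_checksum_ok((const volatile uint8_t *)a))
            return (const volatile uint8_t *)a;
    return 0;
}

const volatile uint8_t *find_rsdp(void)
{
    /* The real-mode EBDA segment lives in the word at physical 0x40E. */
    uintptr_t ebda = (uintptr_t)(*(volatile uint16_t *)0x40E) << 4;
    const volatile uint8_t *p = ebda ? scan_range(ebda, ebda + 1024) : 0;
    return p ? p : scan_range(0xE0000, 0x100000);
}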
Re: Well, I guess this qualifies as an introduction
blasthash wrote:
It need not be said that operating system development is a much different realm from the modes of development I've been working in for a while now

Ask yourself that question in a year's time. Low-level programming isn't that different IMHO.
blasthash wrote:
I'm hoping that it will make me a better, more versatile programmer

Great code is quite rare in the realm of hardware configuration. There's just too much straight-line "put A in reg X, set bit B in reg Y, now poll reg Z until bit 0 is 1." My idea of great code seems to be seen more often at the highest levels of abstraction. The good thing about OS code is that it gets stuff done and makes hardware useful (and makes all that great high-level code able to run). So, more versatile - yes, I agree with that. Better - you could improve your skills working in any area; it's all down to you. The fact that you're expanding your horizons says you find this stuff interesting, so you're probably going to improve year on year from your programming experiments anyway. You'll leave those who are programming only for the money behind before too long.
blasthash wrote:
maybe, just maybe having an idea about how the OS works will help me write better software, if nothing more.

Yep, definitely true, this one.
Every universe of discourse has its logical structure --- S. K. Langer.
Re: Well, I guess this qualifies as an introduction
So here's a pertinent question. When looking at the E820 RAM-sniffing method, I decided to 'roll my own' implementation of the algorithm rather than rely on the examples there. Even if it makes things rougher on me in the short term, I'll sleep easier with my own code than with something that's just a rewording of example code.
Anyway, I get that the algo is designed to "step" through the address descriptors; how do I concatenate ES and DI each time? Is it simply ES comprising the high segment and DI the low segment, or is it formed into a physical address for the base address descriptor?
In code:
Is that a valid way of accomplishing it? Also, during successive calls, does EDX have to be restored to its initial value (0x534D4150) after it is mirrored in EAX?
Code:
# Word to the wary, I'm still improving my x86 ASM, so forgive me if some of
# these instructions aren't kosher. This isn't exactly GCC inline syntax; I
# cleaned it up a bit to make it easier to read:
xorl  %ebx, %ebx            # clear EBX (continuation value starts at 0)
movl  $0x534D4150, %edx     # 'SMAP' signature
movl  $0xE820, %eax
movl  $24, %ecx             # ask for a 24-byte descriptor
int   $0x15
movl  %eax, %0              # push out of inline to a var for checking against the initial EDX
movl  %es, %eax             # move ES in
shll  $16, %eax             # shift ES left 16 bits in EAX
movw  %di, %ax              # put DI into the lower word vacated by the shift
movl  %eax, %1              # push out of inline to the second output operand
bwat wrote:
Ask yourself that question in a year's time. Low-level programming isn't that different IMHO.
Great code is quite rare in the realm of hardware configuration. There's just too much straight-line "put A in reg X, set bit B in reg Y, now poll reg Z until bit 0 is 1." My idea of great code seems to be seen more often at the highest levels of abstraction. The good thing about OS code is that it gets stuff done and makes hardware useful (and makes all that great high-level code able to run). So, more versatile - yes, I agree with that. Better - you could improve your skills working in any area; it's all down to you. The fact that you're expanding your horizons says you find this stuff interesting, so you're probably going to improve year on year from your programming experiments anyway. You'll leave those who are programming only for the money behind before too long.
Yep, definitely true, this one.

True stuff.
It's hard to explain how it feels different - it may just be because it's a much different task - but it's the same as a lot of my microcontroller dealings in the past, where I again have to assume the device knows nothing. If anything, getting into the first steps makes me feel better, because it seems the system already has provisions out there just waiting to be accessed and used.
And that is correct - I do enjoy these things, even if just for the holistic experience. Who knows what'll become of my OS endeavors, whether this source I'm writing today will eventually become the next Windows or Linux or whether it just sits on a live USB drive for fun on rainy days. Whatever the result, it'll be one hell of a ride.
- Octocontrabass
Re: Well, I guess this qualifies as an introduction
blasthash wrote:
Anyway, I get that the algo is designed to "step" through the address descriptors; how do I concatenate ES and DI each time? Is it simply ES comprising the high segment and DI the low segment, or is it formed into a physical address for the base address descriptor?

I take it you haven't done any x86 programming in real mode before. Congrats, it's time for more learning!
Where exactly are you starting, anyways? Do you have a bootloader? Are you using a pre-made one, like GRUB? You can't go about detecting memory until your code is running, after all.
Re: Well, I guess this qualifies as an introduction
Octocontrabass wrote:
I take it you haven't done any x86 programming in real mode before. Congrats, it's time for more learning!
Where exactly are you starting, anyways? Do you have a bootloader? Are you using a pre-made one, like GRUB? You can't go about detecting memory until your code is running, after all.

I have a bit of experience with segmented-memory operation in real mode from attempts to engineer the 8086 in Verilog RTL. Other than that, it's a new concept.
I'll ideally work with GRUB to start, rolling my own later when I know I actually have something demanding a custom write.
- Octocontrabass
Re: Well, I guess this qualifies as an introduction
1. GRUB gives you a memory map.
2. GRUB leaves the CPU in protected mode.
3. The BIOS can only be called from real mode, and switching back to real mode is typically a very bad idea.
If you want to detect memory the hard way, write your own bootloader. Until then, just use the memory map from GRUB.
Re: Well, I guess this qualifies as an introduction
Octocontrabass wrote:
1. GRUB gives you a memory map.
2. GRUB leaves the CPU in protected mode.
3. The BIOS can only be called from real mode, and switching back to real mode is typically a very bad idea.
If you want to detect memory the hard way, write your own bootloader. Until then, just use the memory map from GRUB.

I feel like a bona fide idiot.
How does one get the memory map from GRUB? The wiki doesn't seem to spell it out clearly enough for my liking.
- Octocontrabass
Re: Well, I guess this qualifies as an introduction
I think this is a good starting point. (Beware: although it does say that the machine state should be unchanged from what the BIOS left it in, you can't guarantee that the machine you're running on has a BIOS at all! It's better to assume that the BIOS is not available once GRUB is finished.)
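For example, walking the map GRUB hands you looks roughly like this (a sketch - the struct layout follows the Multiboot 0.6.96 spec, the names are made up, and mbi is whatever EBX pointed to when your kernel was entered):

Code:
/* Sketch: iterate the Multiboot (v1) memory map passed by GRUB. */
#include <stdint.h>

/* Partial view of the Multiboot info struct; only the fields used here are named. */
typedef struct {
    uint32_t flags;          /* bit 6 set => mmap_* fields are valid   */
    uint32_t unused[10];     /* mem_lower .. syms, not needed here     */
    uint32_t mmap_length;    /* total size of the map buffer, in bytes */
    uint32_t mmap_addr;      /* physical address of the first entry    */
} __attribute__((packed)) multiboot_info_t;

typedef struct {
    uint32_t size;           /* size of the rest of this entry         */
    uint64_t addr;
    uint64_t len;
    uint32_t type;           /* 1 = usable RAM                         */
} __attribute__((packed)) mmap_entry_t;

/* Hypothetical hook for whatever the physical memory manager does. */
extern void claim_usable_ram(uint64_t base, uint64_t length);

void walk_multiboot_mmap(const multiboot_info_t *mbi)
{
    if (!(mbi->flags & (1 << 6)))
        return;                              /* bootloader gave us no map */

    uintptr_t entry = mbi->mmap_addr;
    uintptr_t end   = mbi->mmap_addr + mbi->mmap_length;

    while (entry < end) {
        const mmap_entry_t *e = (const mmap_entry_t *)entry;
        if (e->type == 1)
            claim_usable_ram(e->addr, e->len);
        /* 'size' doesn't count its own 4 bytes, so skip size + 4 */
        entry += e->size + sizeof(e->size);
    }
}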
And don't worry about feeling like an idiot - I've had plenty of those moments myself!