Economy in OSes

geezusfreeek

Economy in OSes

Post by geezusfreeek »

Split from My OS Design, which was intended to be a very slow-chat thread where you could post your design for others to read
-------------------
Yeah, long posts. I was just expecting a post about something coming along sooner than this, not even necessarily related to my posts.

Anyway, the point is not to be predictive, but adaptive. Applications could provide hints if they want, but the system would still work without knowing what exactly the application does. This is not a prefetching/caching scheme; in fact, it could be said that this is orthogonal to prefetching. This is about determining whether the resources used for caching/prefetching would benefit one task more than another, and about distributing those resources more efficiently.

To address your puzzlement over trading memory for energy, consider a system where power is virtually distributed among all the running tasks. Additionally, tasks are allocated a certain amount of memory (only so many pages can be in RAM at a time). By limiting the amount of power distributed in the system, usage of a battery's power could be limited, thus extending its lifetime. In a traditional operating system, RAM is just a huge shared pool from which all tasks may draw at will, regardless of its actual value to each task; by limiting the amount of RAM each task may use, it becomes a tradable resource with true value instead. If a task runs out of the virtual power I referred to, it is no longer allowed any CPU time or hard drive activity (or anything else that requires energy). In order to do anything else, one of its other system resources (in this case, usable pages of RAM) must be traded with another task for some power.
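To make the bookkeeping concrete, here is a rough, hypothetical sketch in C; the struct fields, the trade function, and the numbers are invented for illustration, not taken from any existing kernel. Each task holds a power budget and a page quota, and the only way to gain more of one is to sell some of the other.

Code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-task account: a share of the machine's "virtual
 * power" budget and a quota of pages the task may keep in RAM. */
struct task_account {
    uint32_t power_credits;   /* remaining virtual power this period */
    uint32_t resident_pages;  /* pages this task may keep resident   */
};

/* The buyer acquires power credits from the seller by giving up RAM
 * pages at an agreed price. Fails if either side cannot cover the deal. */
bool trade_pages_for_power(struct task_account *buyer,
                           struct task_account *seller,
                           uint32_t pages, uint32_t credits_per_page)
{
    uint32_t credits = pages * credits_per_page;

    if (buyer->resident_pages < pages || seller->power_credits < credits)
        return false;

    buyer->resident_pages  -= pages;    /* pages change hands...      */
    seller->resident_pages += pages;
    seller->power_credits  -= credits;  /* ...and so do power credits */
    buyer->power_credits   += credits;
    return true;
}

int main(void)
{
    /* A nearly powerless task with plenty of RAM buys power from an
     * idle task by selling 10 of its pages at 2 credits per page. */
    struct task_account busy = { .power_credits = 1,  .resident_pages = 200 };
    struct task_account idle = { .power_credits = 90, .resident_pages = 50  };

    if (trade_pages_for_power(&busy, &idle, 10, 2))
        printf("busy task: %u credits, %u pages\n",
               busy.power_credits, busy.resident_pages);   /* 21 credits, 190 pages */
    return 0;
}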

Sorry this isn't in the same low-level terms that many of you guys seem to be used to here. I am not explaining this very well at all, but to attempt a better explanation would basically require a few pages worth of text and diagrams. I am working on a simple simulation right now. Hopefully, it will be done fairly soon (a week maybe?) and I can post it here to make things a bit easier to see and prove its worth.
Colonel Kernel

Re:Your OS design

Post by Colonel Kernel »

Sounds like applying economic theory to OSes. I like it. :)

I thought of a "taxation" scheme for managing kernel memory a while ago that reminds me of this.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
geezusfreeek

Re:Your OS design

Post by geezusfreeek »

Exactly; it is economic theory. In fact, I thought I had mentioned that association earlier, but apparently not.

I think my biggest hurdle right now is not how to go about managing the resources, but deciding which resources can and should be included in this deal. There are those I already mentioned, like CPU, RAM, and power, and there is also stuff like network bandwidth, which seems like it would be a pretty good fit. I'm trying to decide what else to make markets for. GPU or sound card usage could also be sold as tradable resources, but I probably shouldn't put too much hope into being able to take advantage of graphics acceleration, and I don't really know much about how sound cards work, or even whether it would make sense to ration one off like a divisible resource.
Candy

Re:Your OS design

Post by Candy »

Colonel Kernel wrote: Sounds like applying economic theory to OSes. I like it. :)

I thought of a "taxation" scheme for managing kernel memory a while ago that reminds me of this.
If so, it's called the Banker's Algorithm, and it's detailed in, for example, Andrew Tanenbaum's Modern Operating Systems. I can recommend it :)
Colonel Kernel

Re:Your OS design

Post by Colonel Kernel »

Candy wrote:If so, it's called the Banker's Algorithm, and it's detailed in, for example, Andrew Tanenbaum's Modern Operating Systems. I can recommend it :)
I don't think it's the same thing. I read on Wikipedia that the Banker's Algorithm is a way to avoid deadlock. My idea was a way to push both user-level and kernel-level memory management policies to user space by forcing processes to sacrifice some of their own memory in exchange for kernel resources (anything that consumes kernel memory, such as thread stacks, TCBs, page tables for new address spaces, etc.)
Candy

Re:Your OS design

Post by Candy »

Colonel Kernel wrote:
Candy wrote:If so, it's called the Banker's Algorithm, and it's detailed in, for example, Andrew Tanenbaum's Modern Operating Systems. I can recommend it :)
I don't think it's the same thing. I read on Wikipedia that the Banker's Algorithm is a way to avoid deadlock. My idea was a way to push both user-level and kernel-level memory management policies to user space by forcing processes to sacrifice some of their own memory in exchange for kernel resources (anything that consumes kernel memory, such as thread stacks, TCBs, page tables for new address spaces, etc.)
IIRC (don't have the book here, DennisCGc, could you look it up?) it was both for resource management (or under another name, of course) and deadlock avoidance. If you know who allocates what and who is going to allocate what, you can prevent future situations from occurring in which a device is unavailable. If you ignore the future, you know who owns what and you can bill them for it.
gaf

Re:Economy in OSes

Post by gaf »

Hello,
I just had a look at the chapter about the banker's algorithm in "Modern Operating Systems", and the book, too, describes it as a way of avoiding deadlocks. In another chapter, however, an algorithm is mentioned that is meant to enforce load balancing on a multiprocessor by setting up a kind of microeconomy: tasks have to bid with their credits for a certain resource, and the highest bidder wins the auction. Unfortunately, the original paper about this idea doesn't seem to be available on the internet anymore.
Colonel Kernel wrote:My idea was a way to push both user-level and kernel-level memory management policies to user space by forcing processes to sacrifice some of their own memory in exchange for kernel resources (anything that consumes kernel memory such as thread stacks, TCBs, page tables for new address spaces, etc.)
I like the idea that tasks have to pay for kernel resources, but unfortunately paging makes it a bit hard to tax them for smaller structures. Maybe a lazy model could circumvent this limitation (a rough code sketch follows the list):
- Each task has two additional variables in its TCB: current, limit
- "current" is the amount of kernel memory the task holds right now
- "limit" is the maximum amount it may aquire
- Each time a task creates/destroys a kernel structure "current" is updated
- If a tasks reaches it's limit further systemcalls that would create kernel structures are denied
- Kernel memory is taken from the user-level memory manager
- On a low memory situation, the user-level server evicts a page from a task of its choice
- The user-level server may access the two variables in the TCB to enforce its policy
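A minimal C sketch of that accounting, assuming the two TCB fields above (all of the names here are invented for illustration):

Code:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical accounting fields kept in each task's TCB. */
struct tcb_accounting {
    size_t current;   /* kernel memory currently charged to the task */
    size_t limit;     /* maximum kernel memory it may acquire        */
};

/* Called whenever a system call is about to create a kernel structure
 * (thread stack, page table, ...) of 'size' bytes on the task's behalf.
 * Returning false means the system call is denied. */
bool kmem_charge(struct tcb_accounting *acct, size_t size)
{
    if (acct->current + size > acct->limit)
        return false;             /* over budget: deny the call */
    acct->current += size;
    return true;
}

/* Called when a kernel structure owned by the task is destroyed. */
void kmem_uncharge(struct tcb_accounting *acct, size_t size)
{
    acct->current = (size < acct->current) ? acct->current - size : 0;
}

/* In a low-memory situation, a user-level memory server could inspect
 * 'current' and 'limit' to pick a victim task and evict one of its pages. */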

regards,
gaf
Colonel Kernel

Re:Economy in OSes

Post by Colonel Kernel »

gaf wrote:I like the idea that tasks have to pay for kernel resources, but unfortunately paging makes it a bit hard to tax them for smaller structures.
The details are in the original thread -- I'm too lazy/busy to re-state them here. :) However, the idea is that each process keeps a pool of pages and it must put one or two of them up for possible sacrifice on every system call. The kernel can then steal one or two pages or not depending on how much kernel memory that process is using. This gets around the granularity problem that you mentioned.
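If I read that right, the per-call bookkeeping might look roughly like this; a hypothetical C sketch where the structure, the names, and the 4 KiB page size are my assumptions, not the original design:

Code:

#include <stddef.h>

#define PAGE_SIZE  4096u
#define PLEDGE_MAX 2        /* pages put up for possible sacrifice per call */

/* Hypothetical per-process state: the pages pledged with the current
 * system call, plus a tally of how much kernel memory the process
 * consumes and how much it has already paid for. */
struct syscall_pledge {
    void  *pledged[PLEDGE_MAX];
    size_t pledged_count;
    size_t kernel_bytes_used;   /* kernel memory charged to the process */
    size_t kernel_bytes_paid;   /* memory already surrendered to kernel */
};

/* At system-call entry the kernel keeps just enough pledged pages to
 * cover what the process owes; unclaimed pledges go back untouched. */
size_t kernel_collect(struct syscall_pledge *p)
{
    size_t taken = 0;

    while (taken < p->pledged_count &&
           p->kernel_bytes_paid < p->kernel_bytes_used) {
        p->pledged[taken] = NULL;        /* this page now belongs to the kernel */
        p->kernel_bytes_paid += PAGE_SIZE;
        taken++;
    }
    p->pledged_count = 0;
    return taken;                        /* number of pages actually kept */
}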
geezusfreeek

Re:Economy in OSes

Post by geezusfreeek »

Both my idea and Colonel Kernel's idea seem to be different from the Banker's Algorithm. The Banker's Algorithm seems to treat the system resources as though they belong to the system and are merely being loaned to the processes, thus providing the means for an algorithm (given that the processes' future resource needs are known, which is unrealistic in most cases) to guarantee that there is always at least one process with enough resources to do its job and return the resources.

Colonel's idea seems to be that each process has claims on some system resources, but must be willing to give up some number of them in order to use kernel services, thus guaranteeing that the kernel will always have enough resources to do its job without taking up unnecessary amounts of memory all the time.

My idea is that each process has claims on every system resource, but must give some away in order to obtain some of a different resource. That way, every process in the system is guaranteed that any resource taken from it will be instantly compensated with resources of equal or better value, and each task's holdings are constantly moved in the direction of whatever it values most.
Kemp

Re:Economy in OSes

Post by Kemp »

How would a system like this handle (for instance) a database server that requires an entire address space's worth of RAM (or as much as it can get its hands on) as well as potentially a lot of CPU time and I/O (hard disk) access during use? The database could be seriously crippled in a scenario like that. How about games that need a lot of RAM (current games pretty much force everything else to be paged out to disk) as well as a substantial portion of CPU time and, again, a lot of I/O access (to the graphics card and hard drive)?
geezusfreeek

Re:Economy in OSes

Post by geezusfreeek »

There are two ways this could be handled: one would be to simply give those types of tasks more resources or capital; the other would be to implement a scheme where other tasks can loan out their unused resources or capital with the guarantee that they will be repaid within some set time frame. I was going to try implementing both.

Consider the game scenario: we give it a higher percentage of the system's resources, which it then trades around with the other tasks until it reaches equilibrium with them (note: not compromising... it still holds the same value's worth of resources as before from the point of view of the system, but the resources it traded for likely have an even higher value to the game). No task could then take a single resource from the game without instantly replenishing it with an equal or better resource. Since the game would likely buy up a lot of RAM, CPU, and I/O, it would practically be guaranteed not to suffer. The same could be said of the database. No task that is given a reasonable share of the system's resources is crippled.

In fact, consider another scenario: we are running a database server on this OS and choose to give the database nearly 100% of the system's resources, even though there are other tasks we want to run as well. Whenever the database is not fully loading the system, it could loan out its unused resources to the other tasks so that they can get their work done too, but after that time slice (however long it is) is over, those tasks can't expend any more of their own resources until their debt is paid off. They could either use their own resources to pay it, or borrow from yet another task (or even from the same task they borrowed from before, assuming it has some spare resources again). Either way, no resource is wasted, since borrowing can only be done with resources that would have gone unused in the immediate time frame anyway.
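As a hypothetical illustration of the loan bookkeeping (the record layout and the names are mine, purely a sketch): the scheduler would track each outstanding loan and refuse to let an overdue borrower spend its own credits until the debt is settled.

Code:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical loan record: 'lender' hands 'amount' spare credits of some
 * resource (CPU time, say) to 'borrower', repayable by 'due_slice'. */
struct loan {
    int      lender;       /* task id of the lending task          */
    int      borrower;     /* task id of the borrowing task        */
    uint32_t amount;       /* credits still owed                   */
    uint32_t due_slice;    /* time slice by which repayment is due */
};

/* Checked by the scheduler each slice: an overdue borrower would be
 * barred from spending its own credits until the loan is repaid. */
bool loan_overdue(const struct loan *l, uint32_t current_slice)
{
    return l->amount > 0 && current_slice > l->due_slice;
}

/* Repay part (or all) of the loan, either out of the borrower's own
 * credits or out of a fresh loan from another task. */
void loan_repay(struct loan *l, uint32_t credits)
{
    l->amount = (credits < l->amount) ? l->amount - credits : 0;
}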
Kemp

Re:Economy in OSes

Post by Kemp »

Ah, I didn't notice that there would be user interaction in deciding resource levels (or levels set in the app's configuration somewhere, whichever). That makes more sense to me now.
Pype.Clicker

Re:Economy in OSes

Post by Pype.Clicker »

Giving more credits to the game sounds much like "renicing" it to me :P

My concern would be that most applications have a "minimum working set" below which they cannot run. Below a critical number of pages, e.g., your web browser cannot browse the web anymore: all it can do is swap pages in and out ... similarly, a process that has too few credits to buy CPU time away from the game has actually no use for its own pages of memory, so it could try to trade off memory for CPU time, but the price of the game's CPU time might be so high that the program no longer has the pages it requires to do what it has to do once it gets the CPU ...

And finally, "trading memory for CPU" means swapping pages out, right ... that means buying disk-access time and disk space for that operation :-/
DennisCGc

Re:Your OS design

Post by DennisCGc »

Candy wrote:IIRC (don't have the book here, DennisCGc, could you look it up?) it was both for resource management (or under another name of course) and deadlock avoidance.
@Candy:

Quoting from the book:
A scheduling algorithm that can avoid deadlocks is due to Dijkstra (1965) and is known as the banker's algorithm...
And later it's also discussed that it can be generalized to handle multiple resources... so, you're right in this case.

PS. Sorry for the off-topic part, haven't been here for a while.
Brendan

Re:Economy in OSes

Post by Brendan »

Hi,
Pype.Clicker wrote:My concern would be that most applications have a "minimum working set" below which they cannot run. Below a critical number of pages, e.g., your web browser cannot browse the web anymore: all it can do is swap pages in and out ...
Technically, the absolute minimum working set is about 5 pages (or less). If the application has at least this much it can still run, just incredibly slowly (i.e. between none and 4 page swaps per instruction).

This magic number is from the worst instruction I can think of "push dword [somewhere]", where the instruction itself and the dword at "somewhere" may cross page boundaries (e.g. 2 pages at EIP, one page at ESP and 2 more pages at "somewhere").

There are a few alternatives at 5 pages (e.g. "call [somewhere]"), but none that require more than 5 that I can think of. If all data is guaranteed to be properly aligned, the "worst case" can be reduced to 4 pages.
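Just to double-check that count, a small C program that adds up the pages touched in the worst case (the addresses below are made up purely to force the boundary crossings):

Code:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

/* Number of distinct 4 KiB pages touched by an access of 'size' bytes
 * starting at 'addr'. */
static unsigned pages_touched(uint32_t addr, uint32_t size)
{
    return (addr + size - 1) / PAGE_SIZE - addr / PAGE_SIZE + 1;
}

int main(void)
{
    /* Worst case for "push dword [somewhere]" (6 instruction bytes):
     * the instruction, the source dword and the stack write may each
     * straddle a page boundary. */
    unsigned code  = pages_touched(0x1000 - 3, 6);  /* instruction crosses a page: 2 */
    unsigned src   = pages_touched(0x3000 - 2, 4);  /* misaligned dword read:      2 */
    unsigned stack = pages_touched(0x5000 - 4, 4);  /* aligned dword push:         1 */

    printf("worst-case working set: %u pages\n", code + src + stack);   /* prints 5 */
    return 0;
}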

Of course the "minimum working set for acceptable performance" is a completely different issue... :)


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.