
Life without an MMU?

Posted: Sat Feb 02, 2019 10:24 am
by eekee
I know why systems written in C or certain other languages are much nicer with an MMU. What I'm looking for is opinions on developing an OS in other languages without an MMU, or using certain programming techniques to minimize the risk.

Actually, I'm wondering why a Lisper would only take hardware seriously if it has one, Schol-R-LEA. ;) I thought Lisp was a safe language.

As for my own opinions: I've read that there have been multi-tasking Forth systems since the 70s, and that most Forth code was safe because... something to do with using the stack. I'm not sure I believe it, because pointer math is just as feasible in Forth as it is in C. I can imagine some programming tricks. I'm thinking the worst bit will be array bounds-checking slowing things down. I guess I'll be finding out the hard way.

I have more experience with Inferno. Inferno is an operating system which can run on hardware or hosted under another operating system. In either case, it only has a single address space. User-space code is written in Limbo, which is a relatively safe language. (It's related to Go.) Limbo compiles to Dis, a bytecode without safety features. The kernel is written in C. The whole system seems rather stable to me, although I didn't exactly get experimental with it.

Re: Life without an MMU?

Posted: Sat Feb 02, 2019 11:56 am
by Schol-R-LEA
eekee wrote:I know why systems written in C or certain other languages are much nicer with an MMU. What I'm looking for is opinions on developing an OS in other languages without an MMU, or using certain programming techniques to minimize the risk.

Actually, I'm wondering why a Lisper would only take hardware seriously if it has one, Schol-R-LEA. ;)
It isn't so much that I personally don't take systems without an MMU seriously as that I would expect most of the others here wouldn't.

OTOH, Lisp systems have a long history of accelerated hardware and hardware memory support; the various Lisp Machines all had MMUs, for example, as well as tagged memory. I would certainly love to see tagged memory architectures return for that very reason (as well as hoping that hardware-based capability security will finally reach the mainstream). While the usual approach to memory management doesn't play well with the usual approaches to garbage collection, that doesn't necessarily have to be the case if the two are designed to work together. However, I intend to be flexible enough that, if a given system doesn't have an MMU, I can do without it.

On the gripping hand, most 32-bit systems today - even microcontrollers - have MMUs. Most 8-bit ones don't, of course, but I doubt I could fit enough of my design into a 64K memory space to make it work. I might have to look into that... but no, I mean to focus on 64-bit systems, actually.

As for Arduino: while the majority of Arduinos use an 8-bit AVR CPU, there are a number of official Arduino-branded SBCs (and Arduino-compatible SBCs) with microcontroller-class ARM core SoCs. There was also a plan for a RISC-V Arduino called the Arduino Cinque, but AFAICT that never happened. The HiFive 1 isn't an 'official' Arduino, but it is in the same form factor as the Arduino Uno, IIUC, and compatible with at least some shields; while the US$60 price point is high for a maker-class microcontroller, it isn't nearly as extreme as the HiFive Unleashed's $1000 price.
eekee wrote:I thought Lisp was a safe language.
Setting aside the question of whether there is such a thing, I am guessing that this ties into the misunderstanding of Lisp being primarily an interpreted language, as well as the matter of garbage collection implementations. Thing is, there's not really such a thing as 'an interpreted language' vs 'a compiled language' - with enough effort and run-time sleight-of-hand, pretty much any language can be compiled, even code which explicitly accesses the code translator. While many, many Lisp interpreters have been written as class projects (because it is dead easy, and a useful example project), most 'serious' Lisp systems are compiled (often to native code), and those designed after the early 1970s can usually mix and match interpreted and compiled code transparently (see the Lambda Papers for the breakthroughs that made this feasible). For some systems, even the Lisp Listeners (the interactive REPL) are compile-and-go rather than interpreted.

(As I understand it, most Forth implementations also mix interpretation and compilation, but in a different manner, with the interpreter walking through the words of a given named operation until it finds a built-in or compiled word, and then calling it as native code. Comments and corrections welcome.)
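
(To make that concrete, here is a toy version of such a dispatch loop in C: a "compiled" word is just a list of pointers to primitives, and the inner interpreter walks the list, calling each one as native code. Real Forths use threaded code and a dictionary; every name below is invented for the sketch.)

Code:

#include <stdio.h>

typedef void (*prim_t)(void);

static int stack[16], sp;

static void lit5(void) { stack[sp++] = 5; }
static void lit7(void) { stack[sp++] = 7; }
static void add(void)  { sp--; stack[sp-1] += stack[sp]; }
static void dot(void)  { printf("%d\n", stack[--sp]); }

/* The "compiled" body of  : demo 5 7 + . ;  */
static prim_t demo[] = { lit5, lit7, add, dot, NULL };

int main(void)
{
    for (prim_t *ip = demo; *ip; ip++)   /* the inner interpreter */
        (*ip)();                         /* run the primitive as native code */
    return 0;
}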

As for garbage collection, that helps a lot in minimizing the possibilities of trivial memory leaks and wild pointers, no question. However, it comes with trade-offs. First, you are replacing a large number of potential failure points with (conceptually at least) one much bigger one, and debugging garbage collectors is notoriously difficult. The bigger issue is that while the most common classes of memory problems are avoided, several less common ones still remain, most notably the problems that come when a very long-running process somehow doesn't release the memory it is using: as long as the memory still has live references, it won't be freed by the collector. While most memory references go stale as a matter of course, if the process is recursing heavily such that memory used in prior calls is still 'live', the result can be something akin to a memory leak. There are some ways around this, but that's just one possible problem.

Also, naively implemented garbage collection does not interact well with naively implemented paging. However, this wasn't a problem with the Lisp Machines (for example), because they used a more sophisticated approach which let the two facilities work together (and the systems had hardware support for GC that tweaked the performance of this even more). In my system, too, I intend for the paging and the garbage collectors (plural, as I mean to have a hierarchical memory manager) to be coordinated with each other, rather than fighting against each other.

Mind you, I mean to do some seriously odd things in my OS, such as using runtime code synthesis in a high-level language, Xanalogical storage instead of a conventional file system, and using a combination of a portable AST, code templates, and compiler hints as the fuel for a JIT compiler, rather than the more common bytecode approach for portability. Some of this may not work out. It is more a series of experiments than anything else.

Re: Life without an MMU?

Posted: Sat Feb 02, 2019 1:40 pm
by nullplan
eekee wrote:I know why systems written in C or certain other languages are much nicer with an MMU. What I'm looking for is opinions on developing an OS in other languages without an MMU, or using certain programming techniques to minimize the risk.
MMU or not is merely a matter of opinion for the OS :-). At work I deal with an e300-based system that has no virtual memory. Now, if you look up the Freescale e300, you will notice that it does, in fact, have an MMU. But it is used solely for memory protection: all page maps are identity maps.
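
To illustrate, here's a made-up sketch of that kind of setup; the PTE layout and constants are invented for illustration, not the real e300 programming model:

Code:

#include <stdint.h>

#define PAGE_SIZE 4096u
#define PTE_VALID (1u << 0)
#define PTE_WRITE (1u << 1)

static uint32_t page_table[1024];          /* covers the first 4 MB */

/* Every page maps to itself, so virtual == physical; only the
   permission bits do any real work. */
void identity_map(uint32_t ro_start, uint32_t ro_end)
{
    for (uint32_t page = 0; page < 1024; page++) {
        uint32_t addr = page * PAGE_SIZE;
        uint32_t pte  = addr | PTE_VALID | PTE_WRITE;
        if (addr >= ro_start && addr < ro_end)
            pte &= ~PTE_WRITE;             /* e.g. the code section */
        page_table[page] = pte;
    }
}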

The hardest part, which keeps us on a weird old proprietary compiler rather than gcc or clang, is the ABI. In that system, different instances of the same program still share the code section (and the read-only data is part of the code section). This means the code and data sections have to be able to move independently of each other, which is accomplished by having r13 point to the code section and r2 to the data section. That ABI is apparently so alien to GCC that porting it is a significant undertaking.
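
In C terms, it is as if every access to a global went through an explicit base pointer. Illustrative names only; in the real ABI the compiler does this implicitly through r2:

Code:

/* The "data section" of one program instance, reached via a base
   pointer the loader sets; a second instance gets its own copy of the
   data while sharing the code. */
struct data_section {
    int counter;
    char buf[64];
};

struct data_section *dsbase;   /* stand-in for r2 */

int bump(void)
{
    return ++dsbase->counter;  /* every global access is base + offset */
}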

The major sticking point is that the stack is part of the data section; it's allocated right at the end. Since the stack grows downwards even on PowerPC (it isn't enforced by the CPU; they could have done it differently), it can't be enlarged at run time, so the OS has to know the stack size at load time. If that's exceeded, the stack clobbers global variables. To prevent that, global variables hold stack information, and the compiler instruments every function (where it isn't explicitly turned off) to call a stack handler if the stack has grown. That handler will notice an overflow and terminate the program - except it does so in a way that can't be caught by the exception handler. So I had to use my ca. 3 grams of PowerPC assembly knowledge and write a better one. And that works! It crashes the program properly.
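
Conceptually, the instrumentation amounts to something like this; all names are invented for illustration:

Code:

#include <stdio.h>
#include <stdlib.h>

/* Set by the loader from the stack size it was given at load time. */
static char *stack_limit;

static void stack_probe(void)
{
    char marker;                  /* its address approximates the SP */
    if (&marker < stack_limit) {  /* stack grew past its allocation */
        fprintf(stderr, "stack overflow\n");
        abort();                  /* "crash the program properly" */
    }
}

void some_function(void)
{
    stack_probe();                /* compiler-inserted call at entry */
    /* ... function body ... */
}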

The OS (OS-9 by Microsys) is also very funny, in that it allows privileged programs to just grant themselves access to whatever memory they feel like. This means programs can overwrite OS data structures. Yeah, it happens. It also means we can write into the null page. In case you've never worked on PowerPC: the null page is where all the interrupt vectors are.

But the worst part is that by now significant parts of our programs use this weirdness as a means of IPC. You put a pointer into shared memory, then the other process can just allow itself to read the data there. Or write it.

So, to summarize: C programming without MMU is possible but painful.

Re: Life without an MMU?

Posted: Sat Feb 02, 2019 7:02 pm
by eekee
Schol-R-LEA wrote:
eekee wrote:I know why systems written in C or certain other languages are much nicer with an MMU. What I'm looking for is opinions on developing an OS in other languages without an MMU, or using certain programming techniques to minimize the risk.

Actually, I'm wondering why a Lisper would only take hardware seriously if it has one, Schol-R-LEA. ;)
It isn't so much that I personally don't take systems without an MMU seriously as that I would expect most of the others here wouldn't.
Most of them? Oh, okay.
Schol-R-LEA wrote:OTOH, Lisp systems have a long history of accelerated hardware and hardware memory support; the various Lisp Machines all had MMUs, for example, as well as tagged memory. I would certainly love to see tagged memory architectures return for that very reason (as well as hoping that hardware-based capability security will finally reach the mainstream). While the usual approach to memory management doesn't play well with the usual approaches to garbage collection, that doesn't necessarily have to be the case if the two are designed to work together. However, I intend to be flexible enough that, if a given system doesn't have an MMU, I can do without it.
I didn't know the Lisp Machines had MMUs. I don't know why I assumed they hadn't; foolish of me. Cool that you could do without it.
Schol-R-LEA wrote:On the gripping hand, most 32-bit systems today - even microcontrollers - have MMUs.
This has happened so fast! It doesn't seem long ago that few of them had MMUs. There were all these optional features which variants of a given SoC did or didn't have, an MMU sometimes being amongst them. I guess a few ARM SoCs have had MMUs for a long time, though. I still have my PXA-270 based Zaurus SL-C3200, which has an MMU. It's over 10 years old now.
Schol-R-LEA wrote:
eekee wrote:I thought Lisp was a safe language.
Setting aside the question of whether there is such a thing,
Yeah. I sort-of know there probably isn't, but the wishful thinking is strong with me. :)
Schol-R-LEA wrote:I am guessing that this ties into the misunderstanding of Lisp being primarily an interpreted language, as well as the matter of garbage collection implementations. Thing is, there's not really such a thing as 'an interpreted language' vs 'a compiled language' - with enough effort and run-time sleight-of-hand, pretty much any language can be compiled, even code which explicitly accesses the code translator. While many, many Lisp interpreters have been written as class projects (because it is dead easy, and a useful example project), most 'serious' Lisp systems are compiled (often to native code), and those designed after the early 1970s can usually mix and match interpreted and compiled code transparently (see the Lambda Papers for the breakthroughs that made this feasible). For some systems, even the Lisp Listeners (the interactive REPL) are compile-and-go rather than interpreted.
My misunderstanding was to do with the garbage collector and the way data is handled. I've known about compiled Lisps for a long time, although I didn't know the original intention was to compile the language until I read another of your posts today.
Schol-R-LEA wrote:(As I understand it, most Forth implementations also mix interpretation and compilation, but in a different manner, with the interpreter walking through the words of a given named operation until it finds a built-in or compiled word, and then calling it as native code. Comments and corrections welcome.)
That is the traditional way Forth is implemented, and no doubt the majority of implementations follow it, but I have four compiled Forths installed right now: two closed-source and two open-source. Like serious Lisp systems, they compile the code but have REPLs too. (Swift Forth, VFX Forth, GForth and SP-Forth.)

I do wish more than one of them (Swift Forth) would allow control structures at the prompt. One which doesn't (SP-Forth) does support anonymous functions [and I think lambdas?], so I guess I could work it that way. I'm planning an enhanced REPL whatever language I use. (APL was considered at one point.)
Schol-R-LEA wrote:As for garbage collection, that helps a lot in minimizing the possibilities of trivial memory leaks and wild pointers, no question. However, it comes with trade-offs. First, you are replacing a large number of potential failure points with (conceptually at least) one much bigger one, and debugging garbage collectors is notoriously difficult. The bigger issue is that while the most common classes of memory problems are avoided, several less common ones still remain, most notably the problems that come when a very long-running process somehow doesn't release the memory it is using: as long as the memory still has live references, it won't be freed by the collector. While most memory references go stale as a matter of course, if the process is recursing heavily such that memory used in prior calls is still 'live', the result can be something akin to a memory leak. There are some ways around this, but that's just one possible problem.
I admit I forgot about debugging the garbage collector. Didn't really know that it's a notorious problem. That's a shame.

I understand leaks, but was thinking of them as a different problem. I don't see much difference with or without an MMU. Without one, the leaking process gets stopped at allocation time rather than at use. It's not nice with programs which allocate a lot of space ahead of time, but that's not really a leak as such.

Dare I say I hate recursion? (I need my alien smileys again. :D) I'd rather use a little extra code and save bloating up the stack or putting the exit condition into the interface. Of course, sometimes it's there already. Tail call optimization doesn't fit every case, does it? Although I imagine it could with a wrapper function, which could also take care of the interface... *shrug* :)
Schol-R-LEA wrote:Also, naively implemented garbage collection does not interact well with naively implemented paging. However, this wasn't a problem with the Lisp Machines (for example), because they used a more sophisticated approach which let the two facilities work together (and the systems had hardware support for GC that tweaked the performance of this even more). In my system, too, I intend for the paging and the garbage collectors (plural, as I mean to have a hierarchical memory manager) to be coordinated with each other, rather than fighting against each other.
Am I right in assuming the hardware GC support worked with the memory tags?

I've been thinking about paging in the last couple of years. I couldn't see it working well with garbage collection. Coordinating them makes sense.
Schol-R-LEA wrote:Mind you, I mean to do some seriously odd things in my OS, such as using runtime code synthesis in a high-level language,
Awesome! :D Also slightly terrifying! :twisted:
Schol-R-LEA wrote:Xanalogical storage instead of a conventional file system,
This deserves its own thread; I have a lot to say about local hypertext, much of it based on 10 years' experience. I love it! I don't like the Xanadu project though...
Schol-R-LEA wrote:and using a combination of a portable AST, code templates, and compiler hints as the fuel for a JIT compiler, rather than the more common bytecode approach for portability.
What's AST? In any case, this sounds wild too!
Schol-R-LEA wrote:Some of this may not work out. It is more a series of experiments than anything else.
It sounds like a series of most fascinating experiments! Well, so long as you don't do as Ted Nelson did and start obsessing over something unworkable. But again, other thread, and I won't be posting it tonight because it's past midnight.

Re: Life without an MMU?

Posted: Sun Feb 03, 2019 3:58 pm
by azblue
eekee wrote: What's AST?
Abstract Syntax Tree; it's an intermediate representation of the code inside a compiler. Let's say you have:

Code:

x=a+b*c
This translates to a tree, something like this (with * binding tighter than +):

    =                 (variable = expression)
    |-- x
    `-- +             (term + term)
        |-- a         (the 1st term)
        `-- *         (the 2nd term: factor * factor)
            |-- b
            `-- c

with each factor having its variable below it.
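
In code, such a tree might be built from nodes like these; a minimal sketch with invented names, not any particular compiler's representation:

Code:

#include <stdio.h>

typedef enum { N_VAR, N_ADD, N_MUL, N_ASSIGN } kind_t;

typedef struct node {
    kind_t kind;
    char name;                   /* used by N_VAR */
    struct node *lhs, *rhs;      /* used by the operator nodes */
} node;

static node *mk(kind_t k, char name, node *l, node *r)
{
    static node pool[16];
    static int used;
    node *n = &pool[used++];
    n->kind = k; n->name = name; n->lhs = l; n->rhs = r;
    return n;
}

static void show(const node *n)  /* print the tree back as an expression */
{
    if (n->kind == N_VAR) { putchar(n->name); return; }
    if (n->kind == N_ASSIGN) { show(n->lhs); putchar('='); show(n->rhs); return; }
    putchar('('); show(n->lhs);
    putchar(n->kind == N_ADD ? '+' : '*');
    show(n->rhs); putchar(')');
}

int main(void)
{
    /* x = a + b * c, with * bound tighter than + */
    node *bc  = mk(N_MUL, 0, mk(N_VAR, 'b', 0, 0), mk(N_VAR, 'c', 0, 0));
    node *sum = mk(N_ADD, 0, mk(N_VAR, 'a', 0, 0), bc);
    node *t   = mk(N_ASSIGN, 0, mk(N_VAR, 'x', 0, 0), sum);
    show(t);
    putchar('\n');               /* prints x=(a+(b*c)) */
    return 0;
}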

Re: Life without an MMU?

Posted: Sun Feb 03, 2019 5:46 pm
by bzt
Hi,

If you're asking whether multitasking is possible, or has ever been done, without an MMU, then the answer is absolutely yes. Here are some examples:

- Minix, probably the most well-known open-source, educational, microkernel OS without MMU support. Source available online. Not much protection of any kind, though.

- AmigaOS was a very cutting-edge, microkernel-architecture OS back in its day, also without MMU support. This is not a hobby/research/educational OS; it's safe to consider it mainline (or it used to be). The source is closed, but you can download version 3.1 because of a source leak.

- Singularity was an interesting research OS from M$, succeeded by Midori. It's supposed to be effective and secure without MMU support. (Although I have to tell you I'm not convinced that its process protection (SIP) is sufficient; no arguments on the performance gain.)

As others have stated already, the MMU has nothing to do with the language itself in most cases. Most languages can be compiled into code which does run-time checks. For example, TCC is a C compiler (from the original author of qemu) which can compile memory and bounds checks into the output automatically. You are right, though, that some languages require more checks than others; Ada is the strictest of all in this regard. With Ada, you can define which arguments are input or output, and all parameters are copied on the stack, so if a function crashes (throwing an exception), the referenced memory is not corrupted. Memory is only modified if the function returns successfully. Also, shared memory objects, monitors, thread synchronization, and safety and security keywords are part of the language, which is unusual to say the least. The only downside is being Pascalish; otherwise Ada is the best in stability and security, MMU or no MMU. If you want to take a look, try GNAT (part of GNU GCC).
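
For example, this off-by-one bug compiles silently, but built with the bounds checker enabled (tcc -b oob.c) the bad store should be caught at run time:

Code:

/* oob.c */
int main(void)
{
    int a[4];
    for (int i = 0; i <= 4; i++)   /* <= is the bug: writes a[4] */
        a[i] = i;
    return a[0];
}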

Cheers,
bzt

Re: Life without an MMU?

Posted: Sun Feb 03, 2019 5:48 pm
by eekee
@Schol: Oh that makes sense... And leaves me wondering why bytecode even exists. (Not really :))

And I see there's a new post while I'm typing this little thing. :) Will have to reply to it tomorrow.

Re: Life without an MMU?

Posted: Mon Feb 04, 2019 4:49 am
by OSwhatever
bzt wrote:Hi,

If you're asking whether multitasking is possible, or has ever been done, without an MMU, then the answer is absolutely yes. Here are some examples:
There are loads and loads of MMU-less systems in the embedded world which have some kind of thread multitasking. Just a few examples: FreeRTOS, ThreadX, eCos. These operating systems are usually simple enough that vendors often choose to implement their own instead, so the real number of MMU-less operating systems out there must be very high. They are usually paired with CPUs like the ARM Cortex-M line, the Cortex-R line, or ARC.
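
To give a flavour, a task on one of these looks something like the sketch below. xTaskCreate, vTaskDelay, pdMS_TO_TICKS and vTaskStartScheduler are the actual FreeRTOS API; the LED helper is hypothetical:

Code:

#include "FreeRTOS.h"
#include "task.h"

/* One task out of several, all sharing a single flat address space. */
static void blink_task(void *params)
{
    (void)params;
    for (;;) {
        /* toggle_led();  -- hypothetical board-specific helper */
        vTaskDelay(pdMS_TO_TICKS(500));   /* sleep half a second */
    }
}

int main(void)
{
    xTaskCreate(blink_task, "blink", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, NULL);
    vTaskStartScheduler();   /* never returns on success */
    for (;;) ;               /* only reached if the scheduler failed to start */
}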

The common denominator for these is that users aren't in general allowed to install their own SW; thus, an embedded system. In this case an MMU is not that useful and just makes things more complicated. Also, these systems tend to be compiled into one binary monolith.

As soon as a user wants to add their own SW, or SW from some unknown source, HW protection is something you want. Not only for security but also for convenience. Remember the DOS days: when your program crashed, you had to reboot the computer and reload the OS from a floppy disk. Does anyone want to go back to those days?

I have seen embedded systems where the "user" (customization of the system) is allowed to make additions, but only in Lua or some other scripting language. Those systems are still embedded systems.

Re: Life without an MMU?

Posted: Mon Feb 04, 2019 11:10 am
by eekee
I'm looking for the practicalities of running without an MMU. Examples do help, but tips and tricks help more. ^.^'

Does Minix 3 really not have MMU support? I remember reading all this stuff about how it was going to be much more serious than the purely educational versions before it. It was going to be highly reliable and secure, even restarting drivers if they crash, etc etc. Looking it up now, I see on the front page:
MINIX 3 is a free, open-source, operating system designed to be highly reliable, flexible, and secure. [...] It runs on x86 and ARM CPUs, is compatible with NetBSD, and runs thousands of NetBSD packages.
Searching its wiki for "memory management" turns up things like this: "the example of faulted-in stack pages". Page faults can't be done without some sort of MMU, can they? It's from a page describing memory management problems.

A guy I know runs a clone of AmigaOS on old PPC Macs. (I can't remember which clone, but I'm pretty sure he told me about it before the leak.) It doesn't have MMU support. He says programs crash quite often, but most of them save their state so you can just carry on where you left off after restarting them. This is an option, but I don't really want my OS to be like this.

Singularity/Midori I might have to look up.


Oh! Haha, I had *no* idea TCC could compile those checks in. I could use it for simple performance tests; "Just how much does bounds-checking slow things down, anyway?" Then again, I could make those tests in Forth.

Hmm... what gets me is that you don't need bounds-checking for some jobs. For instance, when iterating over all the elements of an array, the code is only going to go over the end if it doesn't know the correct size of the array. You might make a mistake giving it the correct size in C, but not in many other languages; not even in Forth, if you do it right.
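
A trivial C sketch of the point: the loop can't run off the end as long as the caller passes the real size, and getting that size right is exactly the part C leaves to you:

Code:

#include <stddef.h>

int sum(const int *a, size_t n)      /* the caller supplies the size */
{
    int s = 0;
    for (size_t i = 0; i < n; i++)   /* can't overrun if n is correct */
        s += a[i];
    return s;
}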

What I'm thinking of is denying pointer access to user code in Forth (disabling @ and !), and implementing a bunch of APL array features instead. (Plus structs; APL lacks structs, which is annoying.) I've been thinking about those things separately for a while (disallowing pointer access to untrusted code), but I've just realised that the APL features might make it reasonable to cut off pointer access for all user code.


I was looking at the Ada barebones on the wiki the other day. It looks like a very easy language to read; it was all fairly obvious even though I've never seen Ada before. A lot to type, though. That matter of copying referenced memory seems like overkill. I'm sure there's some logic that applies, but I can't see the practicality of it in isolation. It makes sense if the caller is going to call something else with the same data when the callee fails, but how often does that happen? It would help with debugging, though: you could see what the data contained before the call and after the error.

OSwhatever wrote:The common denominator for these is that users aren't in general allowed to install their own SW; thus, an embedded system. In this case an MMU is not that useful and just makes things more complicated. Also, these systems tend to be compiled into one binary monolith.
That makes sense.
OSwhatever wrote:As soon as a user wants to add their own SW, or SW from some unknown source, HW protection is something you want. Not only for security but also for convenience. Remember the DOS days: when your program crashed, you had to reboot the computer and reload the OS from a floppy disk. Does anyone want to go back to those days?
Yes, I remember, but those programs were often written in assembly language. That's the opposite of increased safety. MS-DOS itself had a lot of bug fixes when it was rewritten in C. I've been using FreeDOS and some mature DOS programs lately. Strange problems aren't absent, but they're relatively rare. They also tend to be specific to certain programs. For instance, SetEdit breaks if the mouse moves while keys are pressed, but nothing else has that particular problem.
OSwhatever wrote:I have seen embedded systems where the "user" (customization of the system) is allowed to make additions, but only in Lua or some other scripting language. Those systems are still embedded systems.
Scripting languages are expected to be slower, and thus will have bounds checking etc. Limiting the scope of what scripts can do also makes for easier testing.


Edit: Forgot to mention Ada's multi-threading and shared memory. For years I've been in touch with languages and libraries which use channels as the primary method of communicating between threads. It seems to be an easier, safer way to do it. To take a catchphrase from Go, "Don't share memory to communicate, communicate to share memory." In other words, use channels to hand off ownership of the shared memory. Or don't use shared memory at all. If you want to be as safe as Ada, why would you even have shared memory and not just copy the data sent through the channel?
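
To make the hand-off idea concrete, here's a toy one-slot "channel" in C with pthreads. All names are invented, and the ownership rule is purely by convention: only the current holder of the pointer touches the memory behind it:

Code:

#include <stddef.h>
#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  empty = PTHREAD_COND_INITIALIZER;
static void *slot;                 /* the "channel": one message deep */

void chan_send(void *msg)          /* sender gives up ownership of msg */
{
    pthread_mutex_lock(&lock);
    while (slot != NULL)
        pthread_cond_wait(&empty, &lock);
    slot = msg;
    pthread_cond_signal(&full);
    pthread_mutex_unlock(&lock);
}

void *chan_recv(void)              /* receiver becomes the new owner */
{
    pthread_mutex_lock(&lock);
    while (slot == NULL)
        pthread_cond_wait(&full, &lock);
    void *msg = slot;
    slot = NULL;
    pthread_cond_signal(&empty);
    pthread_mutex_unlock(&lock);
    return msg;
}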

Re: Life without an MMU?

Posted: Mon Feb 04, 2019 6:48 pm
by bzt
eekee wrote:Does Minix 3 really not have MMU support?
Well, according to this documentation, "Memory management in MINIX 3 is simple: paging is not used at all.", and it wasn't the last time I checked. But just in case a student has implemented some sort of minimal MMU support lately, use MINIX 2; that won't be updated, for sure.

EDIT: according to Wikipedia, virtual memory support was added in MINIX 3.1.4, so use versions older than that.
eekee wrote:A guy I know runs a clone of AmigaOS on old PPC Macs. (I can't remember which clone, but I'm pretty sure he told me about it before the leak.) It doesn't have MMU support. He says programs crash quite often, but most of them save their state so you can just carry on where you left off after restarting them. This is an option, but I don't really want my OS to be like this.
TBH the same could happen with an OS that implements separate address spaces. I'm sure you have seen a segmentation fault under Linux. It's more of an application quality issue.
eekee wrote:I was looking at the Ada barebones on the wiki the other day. It looks like a very easy language to read; it was all fairly obvious even though I've never seen Ada before.
Well, everything you ever wanted from a language is already implemented in Ada :-) Java interfaces and C++ templates seem like a wild joke next to Ada generics. If only Ada had C-style syntax, it would be the King of Programming Languages of all time. For example, in Ada you can say that the integer variable you use to iterate over an array may contain only values in the range [0..array length], as an extra security measure. Or that the screen pointer may only hold values in the range [0xB8000..0xB8FA0]. Really cool, isn't it?
eekee wrote:That matter of copying referenced memory seems like overkill. I'm sure there's some logic that applies, but I can't see the practicality of it in isolation.
Think of this: you pass a kernel object to a function by reference. That function modifies some properties of the object, then throws an exception for some reason. In C++, that would mean the passed object remains in an inconsistent state when the exception handler is called, and it is extremely complicated to restore. In Ada, those properties remain unchanged, so the inner consistency of the kernel object is always guaranteed. This is similar to transaction rollback in databases, but for memory objects.
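
You can fake the same discipline by hand in C: work on a copy and commit only on success. A sketch with invented names (in Ada the compiler does the equivalent for you):

Code:

#include <stdbool.h>

struct kobj {
    int refcount;
    int state;
};

bool risky_update(struct kobj *obj)
{
    struct kobj tmp = *obj;    /* "copy in" to the stack */
    tmp.state++;
    tmp.refcount++;
    if (tmp.state > 100)       /* stand-in for "throws an exception" */
        return false;          /* *obj is untouched */
    *obj = tmp;                /* "copy out" only on success */
    return true;
}
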
eekee wrote:Edit: Forgot to mention Ada's multi-threading and shared memory. For years I've been in touch with languages and libraries which use channels as the primary method of communicating between threads. It seems to be an easier, safer way to do it. To take a catchphrase from Go, "Don't share memory to communicate, communicate to share memory." In other words, use channels to hand off ownership of the shared memory. Or don't use shared memory at all. If you want to be as safe as Ada, why would you even have shared memory and not just copy the data sent through the channel?
This is not a problem with Ada, because task synchronization primitives are part of the language, called rendezvous points. With this model, no ownership hand-off is required. This provides much better performance than channels: all tasks can run asynchronously and uninterrupted, provided they are modifying different objects at any given time. In contrast, a channel has limited throughput, while the number of shared objects and rendezvous conditions is unlimited.

Cheers,
bzt

Re: Life without an MMU?

Posted: Tue Feb 05, 2019 7:37 am
by Solar
bzt wrote:
eekee wrote:A guy I know runs a clone of AmigaOS on old PPC Macs. (I can't remember which clone, but I'm pretty sure he told me about it before the leak.) It doesn't have MMU support. He says programs crash quite often, but most of them save their state so you can just carry on where you left off after restarting them. This is an option, but I don't really want my OS to be like this.
TBH the same could happen with an OS that implements separate address spaces. I'm sure you have seen a segmentation fault under Linux. It's more of an application quality issue.
Funnily enough, the very absence of memory protection on AmigaOS made memory access checking tools like efence standard issue during development -- something that, in my experience, even professional development skimps on rather regularly today. The result was comparatively high application quality.

Bad Amiga software could crash, taking other applications or even the whole system down. Worse, bad software could taint other applications' memory, resulting in data corruption. AmigaOS users knew which software had a reputation for such behavior, and that usually meant they'd go looking for alternatives. There were many very well-behaved software titles available for AmigaOS, which worked to a very high degree of reliability.

Re: Life without an MMU?

Posted: Tue Feb 05, 2019 1:48 pm
by davidv1992
bzt wrote:
eekee wrote:Does Minix 3 really not have MMU support?
Well, according to this documentation, "Memory management in MINIX 3 is simple: paging is not used at all.", and it wasn't the last time I checked. But just in case a student has implemented some sort of minimal MMU support lately, use MINIX 2; that won't be updated, for sure.

EDIT: according to Wikipedia, virtual memory support was added in MINIX 3.1.4, so use versions older than that.
Although Minix 3 prior to 3.1.4 didn't use the paging MMU, it did use virtual memory and memory protection to some extent: rather than paging, it used the segmentation system of x86 processors to achieve similar results (at least in terms of protection), albeit with the disadvantage of being susceptible to memory fragmentation.

Re: Life without an MMU?

Posted: Wed Feb 06, 2019 4:05 am
by eekee
Oh yeah, fragmentation is another issue without an MMU.

Good to know that about Amiga software quality reputations. I can imagine such news going around. :)
bzt wrote:
eekee wrote:If you want to be as safe as Ada, why would you even have shared memory and not just copy the data sent through the channel?
This is not a problem with Ada, because task synchronization primitives are part of the language, called rendezvous points. With this model, no ownership hand-off is required. This provides much better performance than channels: all tasks can run asynchronously and uninterrupted, provided they are modifying different objects at any given time. In contrast, a channel has limited throughput, while the number of shared objects and rendezvous conditions is unlimited.
I don't think a channel is forced to be synchronous. *checks* No, even in that bastion of blocking I/O, Plan 9,[1] a call to alt() may be non-blocking. *ponders some more*

[1]: Non-blocking alt() is actually the only way to do non-blocking I/O in Plan 9: fork off another process (not just a co-routine) to do the I/O, communicating with its parent on a channel, then non-blockingly check that channel with alt(). ;)
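
From memory of the thread(2) manual, the non-blocking form looks roughly like this: terminating the Alt array with CHANNOBLK makes alt() return instead of waiting. Treat it as a sketch and check the man page for the details.

Code:

#include <u.h>
#include <libc.h>
#include <thread.h>

/* Poll a channel without blocking. */
void
checkresults(Channel *results)
{
    ulong msg;
    Alt a[] = {
        {results, &msg, CHANRCV},
        {nil, nil, CHANNOBLK},   /* terminator: don't block */
    };

    if(alt(a) == 0)
        print("got %lud\n", msg);
    else
        print("nothing ready yet\n");
}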

Re: Life without an MMU?

Posted: Wed Feb 06, 2019 2:38 pm
by bzt
eekee wrote:I don't think a channel is forced to be synchronous.
It's not a question of sync / async. I could try to explain, but it'd be better if you read more about rendezvous points. The Ada doc is really great and easy to understand.

Cheers,
bzt