
Re: Dawn

Posted: Thu Mar 16, 2017 3:05 pm
by Geri
2017, March 16 - Compiler and GUI bugs
Several bugs fixed in the compiler, related to inline function calls and the runtime library.
Also fixed a bug that sometimes caused text not to be re-rendered.

Re: Dawn

Posted: Fri Mar 17, 2017 7:23 am
by matt11235
Are you compiling another language into SUBLEQ bytecode, or do you write it by hand?

Re: Dawn

Posted: Fri Mar 17, 2017 12:06 pm
by Geri
No, I wrote it in C. I wrote a C compiler to compile the source to SUBLEQ machine code. The C compiler is also integrated into the operating system itself, so people can write programs in C and save them as executable native SUBLEQ binaries.

Re: Dawn

Posted: Sun Mar 19, 2017 10:33 am
by Schol-R-LEA
Let's look at the assertions in the first post with a bit of a critical eye, shall we? I don't want to be a wet blanket, or attack your ideas or you personally, but, well, I see a lot of problems with what you are saying, and I think you need to reconsider how you are explaining your ideas, even if the ideas themselves prove sound (which has yet to happen IMAO). I will try to keep this as constructive as possible, but, well, that may prove difficult.
Geri wrote:Dawn operating system is a revolutionary technology, the first high-level operating system built for URISC computers.
This isn't an assertion, it is marketing. We can safely ignore it until and unless something is stated to justify the claim.
Geri wrote:Other operating systems are designed to run on extremely complex hardware environments, but Dawn operating system is designed for the SUBLEQ architecture, which have no traditional instruction set. This allows a Dawn compatible processor and computer to be designed from a millionth costs than a traditional CPU.
This demonstrates a genuine lack of knowledge about hardware design, and about the costs and effort associated with it. The complexity of the instruction set has almost no bearing at all on the cost and time needed to implement the ISA; the real costs are in making it run efficiently. Even a highly CISC ISA such as the VAX can be implemented in a standard FPGA costing $25USD in about a week of work, but such an implementation will have terrible performance.

The fact that undergraduate Computer Architecture courses (such as the one I took at CSU-East Bay) can teach novices to construct a logic-emulated MIPS or DLX processor equivalent to any implementation of that ISA built in the mid-1980s is proof that implementing an ISA, regardless of complexity, is not a significant issue in CPU design at the professional level. Actually, the ISA itself was not really a factor in the course I took at all, which was mostly focused on the logical components needed by any CPU implementation (adders, barrel shifters, etc. - even an OISC needs at least an adder, a comparator, an instruction counter, and a memory interface, right?). Yes, we needed some hardware for decoding the instructions, which wouldn't be needed in SUBLEQ, but that was almost an afterthought and done mainly in a PLA.

The hard work in developing a new generation of CPU is mostly in development and debugging of the die process - improving the transistor density is not a small task, and not an automatic one despite the impression Moore's Law might give people. The x86 ISA? 90% of that was worked out in 1978; despite the heroic (and fundamentally futile, something even the companies involved are aware of) efforts Intel and AMD have made to extend its life, the basic ISA hasn't really changed all that much compared to things like the memory addressing, register file size, register width, caching, instruction pipelining, branch prediction (especially branch prediction!) and MMU - none of which are part of the ISA, even if they led to some of the changes in it. The ARM and MIPS designs have undergone even fewer changes; in effect, the ISA itself is a done deal.

As an aside, if memory serves, about half the die of the Kaby Lake design is taken up by cache, and about 10% each by the pipeline, instruction re-ordering, instruction simplification (the modern equivalent of microcode), and branch prediction logic. Actually implementing the ISA? Probably less than 5% of the die, even on CPUs with 6 or more cores.

The statement also shows an all-too-common misunderstanding of load/store architecture - the reason 'RISC' is more performant (in principle, though many of the advantages disappear or are less distinct once things like caching, register renaming, multi-path branch prediction, and so on are used) isn't because the ISA is small - several so-called 'Reduced Instruction Set' designs actually have pretty big ISAs - but because all of the instructions can be implemented without microcode; the use of load/store discipline reduces the frequency of memory accesses for data; regularizing the instruction set makes it easier for compilers to target it and optimize the generated code; and eliminating rarely-used instructions means that the whole can be fit onto smaller dies, leading to less propagation delay. The term RISC is really a very misleading and unfortunate one, and the idea that 'URISC' would be somehow inherently even better is a gross misunderstanding of the reasoning behind load/store discipline and elimination of low-usage-frequency instructions. OISC, by its very nature, is not actually a RISC design at all, because it isn't load/store - the single instruction is actually more CISCy than any of the 56 instructions in the MIPS 2000 ISA.

In practice, an efficient OISC implementation would need an incredibly hairy multi-branch-predicting code/data pipeline that would make Kaby Lake's instruction decoding look like the RCA 1802's. The die layout would be much larger and more complex than that of even the current Intel designs.

And, oh yeah, the fact that the code and data are deeply intertwingled means that conventional approaches to memory protection can't be applied. I have no problem with this - I am a Lisp fanatic, after all, so mixing code and data seems natural to me, and as a fan of the Synthesis kernel design I find the usual approaches to memory protection more an inconvenience than a benefit - but you do need to be aware of the trade-off.
Geri wrote:The goal of the Dawn operating system is to bring back the hardware and software development to the people.
What is this even supposed to mean? Especially since, well, 'the people' never had them in the first place. Seriously, I am all for new operating systems, for empowering users, and for making programming more accessible, but... come on, even if the Alto/Star had been a smashing success and we were all using Pilot OS today, probably less than 2% of users would ever learn Smalltalk, and even fewer would use it to any real extent, and that's on a system that bends over backwards to make it easy for users to write code! My own intended UX design (not OS design - design issues in kernels, ABIs, driver models, system libraries, and user interfaces are completely orthogonal to each other, even when they are all being built to work together as a whole, so stop confusing them!) follows similar principles, but I am well aware that most people will simply use the tools that are on hand, because programming isn't their goal, using programs to further their activities is.
Geri wrote:Currently, the market is dominated by x86/arm, where a CPU that is capable of blicking the cursor on the screen needs 20 billion usd investment to create, and 30 years of work by 30 corporations and 1 million developers worldwide.
Are you trying to say that the effort made in developing them was wasted somehow? I beg to differ; even the work on improving x86 so that people can put off losing most of the code base for one more design cycle has garnered a wealth of knowledge on CPU implementation (for Intel and AMD, anyway, though I am always pleasantly surprised at how much of it isn't kept as trade secrets).

In any case, the first assertion of this paragraph is, well, not wrong, but misleading, because it fails to consider why those two designs are dominant. Here's a hint: ISA has nothing to do with it. As I have said many times before, everyone - Intel and Microsoft included - has been praying for an opportunity to ditch the x86 ISA almost since it was released (the 8086 was intended as a stopgap until the ill-fated iAPX-432 was working - which never really happened - and Intel have tried on at least two more occasions to cook up a replacement for it only to see the replacement get ignored by the customers). The reason they haven't has nothing to do with CPUs and everything to do with the installed codebase, and specifically with the way far too many software vendors have written their code to depend on x86 in ways even Intel disapproves of (e.g., using undocumented instructions).

ARM, conversely, is widely used because a) the owners of the IP have given a lot of people inexpensive licensing for the architecture, and b) it lends itself to both low power consumption and low implementation cost. The same is true of several other CPU designs, most notably MIPS, but ARM came out on top mainly due to being readily available when the early PDAs were being built (the owners of the MIPS IP were reticent to license it at the time, and neither SPARC nor Alpha were as suited to the task despite being load/store designs).
Geri wrote:Traditional IT corporations created a technologic singularity, which was not able to show up any innovation in the last 20 years.
Well... no. Software developers did that. A stable software base requires a stable hardware platform (or at least a stable way of transitioning between them, something that is less dependent on the hardware and more on the way the software itself is written). The hardware manufacturers themselves are the ones trapped by it.
Geri wrote:We arrived to a point, where painting a colored texture on the screen requires 1600 line of initialization and weeks of debugging - and if you are not want to do that, you can only access child toys, design apps, half gbyte sized libraries, and they are still crashing at initialization in most cases.
That has absolutely nothing to do with the hardware. This is a fault of the OSes and the languages used.
Geri wrote:The internet, global forum of the free speech - actually controlled by the ISP-s and governments.
You do know that free speech was never part of the design goals of the Internet, right? Well, not that it really was designed at all - the various protocols were, but the 'Internet' as a whole, no. Anyway, my point is that 'free speech' on the Internet was due entirely to the way it blindsided the people who came up with it, and the neglect of it by those who might have an interest in monitoring it. Back when 'the Internet' was just a way for academic researchers to send messages between themselves by piggybacking on what was in reality the medium of the US military's C3 infrastructure, no one thought it would matter what people were saying, so none of those involved put in any checks and balances for either monitoring it, or conversely, for ensuring privacy. This attitude carried over after it was released for public use, and persisted far longer than one would have expected because the people who would want to clamp down on it were ignorant of what it could do, and too busy flinging poo (and impeachments) at each other to notice what was going on.

In other words, had the creators of ARPAnet, NSFnet, MILNET, and so forth known what it would lead to, the projects would have been buried and the records of the proposals destroyed. Free speech isn't a feature, it is a bug, though one which we fortunately can (to some extent) still exploit.
Geri wrote:A wifi device is driven by 5 million of source code lines, and nobody actually fully understants, how they work, a TCP stack is 500.000 lines of code. There are no experts at this area any more - even professionals are just typing random things in consoles to get it working if something is broken, hoping that it will randomly cure itself, because they cant debug 30 and 40 million code lines that is responsible for sending a bit on the cable.
All of this is true - but the CPU designs have nothing at all to do with it, nor do the operating system designs. You seem to think that because the OISC itself is simple, and the OS running on it is simple, that the software will be simple - which is a non-sequitur, because the complexity of wifi has nothing to do at all with either of those things. If anything, OISC would worsen the problem, because of the heroic amount of effort needed to make it do anything!
Geri wrote:Dawn operating system is different. Emulating the cpu itself is 6 source code line in C, understanding the hardware set is very simple.
The simplicity of the emulated CPU says nothing at all about the complexity of the hardware underlying the emulation layer. That wifi router would still need the 5 MLOC, plus the emulator, because about 150 KLOC are needed to talk to the hardware and 4.85 MLOC to implement the remote web interface - both of which will be true with or without the emulation layer, since the former would have to be part of the implementation of the emulator itself (or else it wouldn't be able to access the hardware), while the latter would be true regardless of the hardware (or hardware emulation) it is running on.
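To put a number on that 'six lines': a minimal sketch of the kind of interpreter core being described, in C (the word width, memory size, and halt convention here are my own assumptions, not whatever Dawn actually specifies), really is about this small:

    #include <stdint.h>

    #define MEM_WORDS (1 << 20)             /* assumed memory size            */
    static int64_t mem[MEM_WORDS];          /* code and data share this array */

    static void run(int64_t pc)
    {
        while (pc >= 0) {                        /* assumption: negative pc halts */
            int64_t a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];                    /* the single operation          */
            pc = (mem[b] <= 0) ? c : pc + 3;     /* ...which is also a branch     */
        }
    }

But that loop is exactly the point: it says nothing about the hundreds of thousands of lines still needed on the other side of it to drive the actual hardware.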
Geri wrote:Dawn operating system itself does not supports any technology, that is enemy of the freedom, while it still offers a nice graphics user-friendly graphics interface with the most common elements.
I really want to snark about the placement of the first comma in that first sentence, as it seems hilariously appropriate. However, I will point out that you seem to be mixing up your assertions - and design goals - here.
Geri wrote:It is easy to create hardware and software for Dawn - the hardware design is well documented, and simple.
Create hardware... to support... an operating system?
Geri wrote:Dawn have a built-in C compiler that also offers connectivity to the Dawn platform to create textured sprites, texts, play music, get data from the joystics, from webcamera, or manage the files of the computer.
So, it is about as capable as, say, GEOS circa 1985, then? OK, that was uncalled for, sorry.

However, I do need to say that, first off, that has nothing to do with the fact that it is built on an OISC interpreter, as that just pushes the complexity down into the hardware emulation; the emulator becomes just another sort of abstraction layer. Second, you aren't saying anything about how it is supposed to be easier to write code for (presumably in C, given the prominence you gave the compiler - which means that a lot of the things that make GUI design simple, such as a widget class hierarchy and an event system, won't be readily available...).

Re: Dawn

Posted: Sun Mar 19, 2017 3:59 pm
by Geri
Basically I don't have time to properly read your comment and properly answer it, because I get brutal amounts of comments on the system; I spent more than an hour today just typing answers to Dawn-related questions. I will reply in very short form - not because I don't want to reply in long form, and I don't want to hurt you, I appreciate that you wrote a lot of questions - I just can't fit long answers into my time right now.

I quickly checked through your comment, and in several parts you are right, in several parts you are wrong, and the problem is that you try to draw consequences from a good statement through a bad statement, resulting in bad conclusions.

For example, you underestimate the complexity of the x86 ISA by orders of magnitude - and you overestimate the complexity of having a good branch predictor in a URISC architecture, creating the impression that making an x86 / ARM / MIPS CPU is not much more difficult than creating a SUBLEQ CPU, which is totally wrong.

Also, there were no I/O specifications for SUBLEQ, so the existing SUBLEQ implementations must be redesigned to follow the Dawn specifications to boot it, and they WILL. PERIOD. That's not a question of opinion; the frame buffer output, for example, must have a memory location assigned to it, etc. A computer is pretty much useless if you don't have an operating system to run on it.
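To illustrate the frame buffer point (the base address, resolution and host_put_pixel call below are placeholder values just for the illustration, not the actual Dawn layout), an emulator only has to mirror writes landing in the frame buffer region onto the host's screen:

    #include <stdint.h>

    extern int64_t mem[];                          /* the emulator's memory array    */
    void host_put_pixel(int64_t off, int64_t val); /* hypothetical host-side helper  */

    #define FB_BASE  0x100000                      /* assumed frame buffer location  */
    #define FB_WORDS (320 * 200)                   /* assumed resolution             */

    static void store(int64_t addr, int64_t value)
    {
        mem[addr] = value;                              /* normal memory write       */
        if (addr >= FB_BASE && addr < FB_BASE + FB_WORDS)
            host_put_pixel(addr - FB_BASE, value);      /* mirror it to the screen   */
    }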

Re: Dawn

Posted: Sun Mar 19, 2017 4:30 pm
by Schol-R-LEA
Geri wrote:For example, you underestimate the complexity of the x86 ISA by orders of magnitude - and you overestimate the complexity of having a good branch predictor in a URISC architecture, creating the impression that making an x86 / ARM / MIPS CPU is not much more difficult than creating a SUBLEQ CPU, which is totally wrong.
I have to disagree. First, 'URISC' is a hopelessly inaccurate name for an OISC. Using it indicates a total failure to understand what load/store architecture (which is what 'RISC' really means in practice - reducing the number of instructions is not the defining quality of the approach, which is why the term RISC is such a terrible name for those types of ISAs) is all about.

Second, most of what makes x86 complicated to implement is not the ISA itself - far more complex ISAs have been implemented at the bleeding edge of performance for their time, including CADR and VAX - and the complexities that do arise from it are primarily from the limitations set in the design, with the asymmetric register file being the biggest offender. The dinky little register file, combined with the fact that there are several instructions which cannot work with specific registers (and others which only work with specific registers) adds a lot of performance-robbing qualities.

The x86 is actually harder to implement than a lot of ISAs with bigger instruction sets (VAX, CADR and the later LispMs, 68000) due to its irregular addressing schemes, special-purpose registers, memory segmentation, and a whole host of other idiocies which Intel only settled on because the 8086 was thrown together in a rush with the idea of giving the 8080 design a couple more years of usefulness while they finished the designs they actually wanted. They thought it would be used in microcontrollers, not general-purpose systems, and they actively argued against using it when IBM came to them about the PC - they thought home computers were a dead end, and wanted to focus on creating a line for professional workstations (the iAPX-432) and a separate line for embedded systems (the name of which escapes me, though it may have become the 8051). Don't you just love the irony of that?

Third, I really doubt that I am wrong about the branch prediction; if anything, I understated my case. While implementing an OISC isn't difficult, implementing a performant OISC would be damn near impossible even with current die densities, because it would require a branch weighting for every effective instruction, since they are all, in effect, branches. This means that you would need at least as many predictive paths as you have pipeline stages, and would probably need something closer to n! predictive paths for an n-stage pipeline. That's just not going to work, at least not with a naive implementation. There are ways to cut down the number of branch predictions needed (most of which wouldn't be applicable, however, as they use the ordering of the opcodes - and OISCs don't have opcodes...), but the implementation would still be dominated by the branch prediction.

Well, it would dominate the part of the die that isn't occupied by the multi-level caching which a system that has only memory-operand instructions would require, at any rate. That, BTW, is also why stack machines such as the JVM rarely run well in hardware - it can be done, but the cost in terms of die real estate is prohibitive. They do present a lot of advantages for developing a pseudo-machine, as the implementation is simpler, and since simulated registers aren't any faster than simulated memory accesses, there is no real difference in performance. Indeed, a complex instruction set performs better in a p-machine, since the complexity is baked into the underlying implementation (that is to say, the implementation of each instruction is a block of native code, and having longer sections of native code running to completion without needing further interpretation results in improved performance).

This brings up another issue: the interpretation speed of any bytecode is dominated by the instruction decoding (since it would be meaningless to have a pipeline of any kind unless you could manage some sort of minimal-overhead parallelism) rather than the speed of performing the operations. For a p-machine, it makes more sense to have each instruction do as much as possible, to reduce the number of instruction decodings needed, which is the opposite of how it usually goes with hardware operations (especially since most CPU designs are synchronous, meaning that a large instruction would either need a long clock cycle, or more than one clock cycle).
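As a toy illustration of where that time goes (the opcodes and encoding here are invented for the example, not taken from any real VM), consider a minimal stack p-machine - the switch is the decode cost paid on every step, so the fewer decodes per unit of useful work, the faster the interpreter runs:

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        int stack[64], sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {                        /* decode: paid every step */
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);                                       /* prints 5                */
        return 0;
    }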

While an OISC does not have any real decoding, since there is only one operation, it does have to read and interpret the operands very, very frequently. This means that not only will an OISC always be under-performant in hardware (due to the branch prediction problem), but it will also be inherently under-performant in software as well.

Of course, performance is by no means the sole measure of a system. If you can provide some idea of the other advantages of using SUBLEQ and Dawn, I would be interested in hearing you out. Do remember, however, that I am deliberately being skeptical to try and find issues you will need to address.

As for developers of other SUBLEQ implementations converting theirs to support the Dawn hardware emulation model...

I would be astonished if even one person other than you ever did. You do know that most esolangs such as OISCs aren't meant to be production systems, right? No one else is going to bother, since, well, for most of them, the whole point of the exercise was code golfing (though not necessarily of the competitive variety), not developing a practical system.

The same applies to hobbyist hardware implementations - there is just no way that a TTL-based CPU can compete with one on a single chip as a practical product, and unless you have access to a silicon foundry and a team of engineers with years of experience in IC design, you aren't going to create a SoC out of it, either. The day may come when a turnkey additive-process IC fabricator can be purchased by ordinary individuals for less than the mortgage on the Burj Khalifa, but sadly, that day isn't here now and it probably won't be soon.

Sorry to be so harsh, but this is something you are in dire need of a reality check on.

It is one thing to tilt at windmills oneself; I do it all the time, so I do get where you are coming from. It is something else entirely to expect people to agree that they might be giants and join you at it.

I mean, I would, maybe, if you gave me a convincing argument for it, but I already have my eyes on a different set of windmills (and ride under the banner of the Lambda).

Re: Dawn

Posted: Tue Mar 21, 2017 7:45 am
by Schol-R-LEA
SpyderTL wrote:Is there anything that anyone is aware of that you could NOT do with a single SUBLEQ instruction?
No, or at least, it doesn't have any limitations that aren't shared by every other sequential computer that can be built or simulated. The whole point of an OISC is that they are minimal implementations of a Turing-equivalent computation system - an OISC is just as close an approximation of a Universal Turing Machine as any other physical computing method can be, and while the finite nature of memories does limit any real computer, in practice the significant limitations are mainly a factor of the implementation, not the theoretical computation power (since most things which cannot be computed with an LBA cannot be computed in a finite time anyway).
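To make that concrete: anything a conventional instruction set does can be ground down into short, fixed sequences of the one instruction - the textbook construction for addition, for instance, is subleq a,Z; subleq Z,b; subleq Z,Z with Z a scratch cell that holds zero, and moves and unconditional jumps are built the same way. A small self-contained C check of that addition sequence (the memory layout and the halt-on-negative-address convention are just illustrative choices on my part, nothing Dawn-specific):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        enum { A = 0, B = 1, Z = 2, CODE = 3 };          /* data cells, then code    */
        int64_t mem[32] = { [A] = 7, [B] = 35, [Z] = 0 };

        /* ADD A, B  ==  subleq A,Z ; subleq Z,B ; subleq Z,Z */
        const int64_t prog[] = { A, Z, CODE + 3,
                                 Z, B, CODE + 6,
                                 Z, Z, -1 };             /* -1 = halt, by convention */
        for (int i = 0; i < 9; i++) mem[CODE + i] = prog[i];

        int64_t pc = CODE;
        while (pc >= 0) {                                /* the entire "CPU"         */
            int64_t a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
        }
        printf("B = %lld\n", (long long)mem[B]);         /* prints B = 42            */
        return 0;
    }

The cost of building everything this way is purely in code size and in the number of memory accesses each 'real' operation ends up taking.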

I do have to give Geri that much - in principle, it should be possible to optimize a single implicit instruction to a far greater degree than one could with a larger set of explicit instructions. I am just not convinced that the theory can be borne out in practice. There is a reason why minimally Turing-equivalent languages are usually called 'Turing tarpits'. It is more acceptable in a low-level language meant as a target for a compiler, but it still is a case where smaller isn't necessarily better.

Also, there are many, many established UTM-equivalent models of computation (and probably infinitely more which aren't known), many (but not most) of which can be readily simulated in hardware (yes, simulated - all mechanical computation models such as Universal Turing Machines, Register Machines, and Stack Machines require at least one infinite resource - usually memory - to be universal, so any real computer can only be a linear-bounded automaton, not a true UTM equivalent). OISCs just happen to be a more familiar type of system, and one more readily cast into hardware than, say, the S combinator-K combinator pair.

Seriously, Turing-completeness is no big achievement; even Game of Life is Turing-complete (no, really, it is), as is Minecraft. That was sort of Turing's whole point, really: that it is possible to describe all computable mathematics (and conversely, help demonstrate when given problems aren't computable) within a very simple mechanical framework.

Finally, a real computer system does a lot more than just, uhm, compute. Most of the things you actually want a personal computer for aren't mathematical operations, even if the signal manipulations implemented in the CPU are designed to model computation. Things like processing speed, data storage capacity, user interface (both physical - keyboards, mice, monitors, etc. - and conceptual - windows, menus, etc.), networking, and so forth, are of greater concern to actual users. Any practical computer will in fact be an LBA, so theoretical computing power can't really serve to differentiate between computers (or programming languages for that matter).

Re: Dawn

Posted: Tue Mar 21, 2017 9:50 am
by Schol-R-LEA
I will further admit that the performance comparison video is compelling, but I am concerned that you may be misreading the results. You have to keep in mind that an OS such as Linux, Windows, or MacOS has to do a lot more during the boot cycle than a system such as Dawn does, for reasons which have very little to do with the hardware platform itself and more to do with the variability of said platform.

To clarify: I would estimate that 90% of the boot time in a production OS for stock hardware is taken up in checking the hardware configuration and initializing the different devices. What takes time (and memory) isn't the hardware support, but the process of determining which hardware is actually running, and how the system is set up to use it. Even the Macs, which have most of their hardware configuration fixed when the system is assembled and known when the OS is installed, have to do this. This will apply to any OS that doesn't severely restrict the hardware and configuration options - more so than even a system like modern Macintoshes, or even most smartphones and tablets do. Only a true embedded system, with software written for and installed to a fixed, unchangeable target, can ever truly avoid it - and then, only by avoiding the use of a stock OS design.

This becomes as much a matter of how the software is installed as anything else - the typical OS installer doesn't do nearly enough to check the configuration to limit the support to the things actually installed at the OS installation time - but even with a well-tailored installation, checking whether the hardware has changed since the last boot is still a necessary step.

Now, the natural reply to this would be that the Dawn hardware emulation means that there is no need to do all of this; but that's misleading. You aren't avoiding the configuration, you are pushing it into the emulator's boot cycle, instead - something that was a factor with things like UCSD Pascal back in the 1980s, or the various Java OSes in the 1990s. If you are running on a system where one or more pieces of hardware might change, this configuration will need to be done.

Now, it isn't all as dreary as I am making it out to be; the fact that Dawn is not meant to support bleeding-edge graphics, for example, means that you don't need to check as much. But as I said with some other things earlier, you do need to be aware of the tradeoffs you are making in order to judge whether they are worth it.

Re: Dawn

Posted: Tue Mar 21, 2017 10:26 am
by Geri
Schol-R-LEA: yes, Dawn pretty much has a fixed hardware set; this for example means it will never have to support 20k types of graphics cards/sound cards/webcams/etc, only one. This alone helps Dawn gain 5-10 seconds. But Linux and Windows are generally slow because they are slow; on x86, for example, KolibriOS boots much faster than Linux or Windows, while also supporting a large variety of hardware to be compatible with. But if you put an OS together from scripts and individual packages like Linux and Windows do, you generally lose performance.

Re: Dawn

Posted: Tue Mar 21, 2017 3:10 pm
by SpyderTL
Schol-R-LEA wrote:Now, the natural reply to this would be that the Dawn hardware emulation means that there is no need to do all of this; but that's misleading. You aren't avoiding the configuration, you are pushing it into the emulator's boot cycle, instead.
This is why my first thought when I saw this idea was to emulate only the processor, and nothing else. And it is where I disagree with Geri's approach, slightly.

Instead of (or in addition to) writing an executable that handles the video card emulation, why not simply create a driver for your video card, written in Subleq? Then, you could have support for things like 3D acceleration and playing MPEG video without the performance hit.

In short, can you get rid of the emulator executable, and just write a boot loader that loads your Subleq kernel into memory, switches to 64-bit mode, and then goes into a Subleq instruction loop executing the kernel?

You can even keep your emulated graphics card code in place in your executable, as long as you write a separate device driver for it.

This is essentially what I'm thinking about doing for my next project. I've already reached the "Hello, World" benchmark. :) I also have Add and Subtract functions that work! :)

Re: Dawn

Posted: Tue Mar 21, 2017 6:09 pm
by Geri
SpyderTL, I already suggested that you create a bootable emulator for my OS on top of your OS. With that you could get immediate results. Also, you can just spin the emulated CPU for a few thousand clocks, then return, because it's enough to check for hardware access once in a while. And you can do tricks, like wrapping the SVGA memory directly, with paging, onto its Dawn-equivalent location.
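The shape of that 'spin a few thousand clocks, then return' loop is roughly this (step(), halted and poll_devices() are just placeholder names for whatever the host side really provides):

    extern int halted;           /* set when the emulated machine stops          */
    void step(void);             /* placeholder: execute one SUBLEQ instruction  */
    void poll_devices(void);     /* placeholder: keyboard, mouse, screen flush   */

    static void main_loop(void)
    {
        while (!halted) {
            for (int i = 0; i < 4096 && !halted; i++)
                step();          /* run a burst of emulated instructions...      */
            poll_devices();      /* ...then look at the hardware once per burst  */
        }
    }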

Re: Dawn

Posted: Sun Mar 26, 2017 10:40 pm
by Geri
I started to create a bootable emulator, to have a bootable Dawn running on regular PCs.

But I will possibly not develop the bootable emulator. The reason for this is that basically all of the tools/tutorials/examples available are not capable of running properly on real machines. I did several tests, put together a bootable standalone emulator, and tested it on 4 different real machines; none of them worked properly, all of them had major issues. For example, of the 4 PCs, 3 lagged on PS/2 mouse emulation, one lacks USB booting, one can boot USB drives only in floppy emulation, one places parts of the BIOS where it really should not, and one does not recognize a bootable medium if it has no x86-style partition table, even if the boot sector has the magic values.

Basically, x86 computers can only run Windows, DOS, and Linux properly; they are not capable of hosting new operating system technologies any more - it's true that platforms like x86 and ARM are enemies of freedom - and that's why the Dawn operating system was born anyway - to take back control from the hardware-imperialists.

Re: Dawn

Posted: Mon Mar 27, 2017 1:21 am
by Brendan
Hi,
Geri wrote:I started to create a bootable emulator, to have a bootable Dawn running on regular PCs.

But I will possibly not develop the bootable emulator. The reason for this is that basically all of the tools/tutorials/examples available are not capable of running properly on real machines. I did several tests, put together a bootable standalone emulator, and tested it on 4 different real machines; none of them worked properly, all of them had major issues. For example, of the 4 PCs, 3 lagged on PS/2 mouse emulation, one lacks USB booting, one can boot USB drives only in floppy emulation, one places parts of the BIOS where it really should not, and one does not recognize a bootable medium if it has no x86-style partition table, even if the boot sector has the magic values.

Basically, x86 computers can only run Windows, DOS, and Linux properly; they are not capable of hosting new operating system technologies any more - it's true that platforms like x86 and ARM are enemies of freedom - and that's why the Dawn operating system was born anyway - to take back control from the hardware-imperialists.
Nonsense.

You tested on 4 real machines, and:
  • "3 lagged on PS/2 mouse emulation". I'm not sure if you mean "lagged" or "lacked", and I don't know if your OS is incredibly slow and causes lag (very likely) or if your OS doesn't support USB (also very likely), but both of these are problems with your OS and not problems with the computers.
  • "one lacks USB booting". This is very unlikely unless the computer is very old; but if it is correct it just means that your OS can't boot from CD or something else. Note that I expect this is your fault (see below).
  • "one does not recognize a bootable medium if it has no x86-style partition table, even if the boot sector has the magic values". I assume this is your fault (see below).
Note: For booting from USB there never really was a standard. Originally some BIOSs treated it as removable media (e.g. floppy) even though it's too large for that, and some treated it as larger media (e.g. hard drive) even though it's removable, and some BIOSs did both, with a BIOS setting that caused confusion/hassle for normal users; then BIOSs started trying to auto-detect what to treat the USB as, in the hope that it could just work (and avoid confusion/hassle for normal users). These "auto-detect what to treat the USB as" schemes rely on detecting if there's a BPB in the first sector (like a floppy and unlike a typical hard drive) and/or a partition table (like a hard drive and unlike a floppy). This means that if you want your USB to be (reliably) treated as a floppy you need a BPB and can't have a partition table; and if you want your USB to be (reliably) treated as a hard disk you need a partition table (and probably shouldn't have a BPB).
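For reference, the layout these heuristics poke at is just the first 512-byte sector; a simplified C view of it (field sizes rounded to the traditional FAT-era offsets, and by no means exhaustive):

    #include <stdint.h>

    #pragma pack(push, 1)
    struct first_sector {
        uint8_t  jump[3];                  /* x86 jump over the BPB                   */
        uint8_t  oem_name[8];
        uint8_t  bpb[51];                  /* BIOS Parameter Block area (bytes 11-61) */
        uint8_t  boot_code[384];
        uint8_t  partition_table[4 * 16];  /* 4 entries, starting at offset 446       */
        uint16_t boot_signature;           /* 0xAA55, at offset 510                   */
    };
    #pragma pack(pop)
    /* sizeof(struct first_sector) == 512; the auto-detection looks at the BPB area
       and the partition table to guess whether to treat the drive as floppy-like
       or hard-disk-like. */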

Essentially; everything you mentioned is your fault.

What you have is a "grass is always greener" scenario. Because your pretend hardware is pure hype/vapour that will never exist, it's impossible to see all the problems that your fantasy would have if it actually did exist; so you think your pretend hardware is a silver bullet that will magically solve all possible problems (e.g. it won't have firmware or devices with various "compatibility quirks" after 3 decades of backward compatibility, etc).


Cheers,

Brendan

Re: Dawn

Posted: Mon Mar 27, 2017 7:32 am
by Geri
Brendan wrote: These "auto-detect what to treat the USB as" schemes rely on detecting if there's a BPB in the first sector (like a floppy and unlike a typical hard drive) and/or a partition table (like a hard drive and unlike a floppy).
-chief, my horses are dying
-shepherd them up the river
Brendan wrote: This means that if you want your USB to be (reliably) treated as a floppy you need a BPB and can't have a partition table; and if you want your USB to be (reliably) treated as a hard disk you need a partition table (and probably shouldn't have a BPB).
-chief, my horses are still dying
-shepherd them down the river
Brendan wrote: Essentially; everything you mentioned is your fault.
And that's why everyone, including you, successfully made compatible OSes for x86. (lol, none.)
Your x86 is a dreamworld, where "ONLY" 4 computers is not enough to run the most simplified codebase possible. Maybe you should test it on 100 different machines, and you will find one that can work properly. x86, where handling USB is described by a 10000-page-long book; 2 billion transistors of imperialism.

Re: Dawn

Posted: Mon Mar 27, 2017 7:50 am
by MollenOS
x86, where handling USB is described by a 10000-page-long book; 2 billion transistors of imperialism.
Aside from the fact that your entire post made close to zero sense: you do know USB has very little to do with x86, right? The USB controller and the USB protocol work exactly the same way on other architectures.