How change non-PAE paging mode to PAE paging mode?

embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: How change non-PAE paging mode to PAE paging mode?

Post by embryo2 »

Brendan wrote:These settings are completely unnecessary for my distributed solution - all computers make their own decisions without settings.
For example, even small enterprises have security concerns: they want to be able to grant access rights to one user and deny them to another, and that is a global setting for the entire enterprise. How can your system manage this?
Brendan wrote:Or do you do the traditional "loss of service" method, where if a server or network connection goes down you're screwed until an (authorised) human intervenes?
Mostly it is inevitable, at least for enterprises. Every enterprise has its specific data, whether that is a database, a file server full of Excel files, or whatever. So your approach should cope with data distribution; otherwise, if a laptop is outside the enterprise network, it just stops working, despite any tricky algorithm installed on it.
Brendan wrote:I'm trying to provide the best end user experience (including performance, fault tolerance, and end user hassle/configuration) with whatever hardware the end user happens to let the OS run on.

Part of the goal is to be able to go to typical small/medium businesses and convince them to switch from the current relatively standard approach (e.g. one cheap/desktop computer per user, an additional expensive/fault tolerant server, a bunch of networking equipment, and a non-distributed OS like Windows/Linux) to my approach (e.g. one cheap/desktop computer per pair of users, no expensive server, half the networking equipment, and my distributed OS) because this will:
  • significantly reduce purchase costs
  • significantly reduce running costs (e.g. less power consumption, less air-conditioning)
  • significantly reduce maintenance/administration costs
  • increase fault tolerance
  • increase performance (by distributing load to "otherwise idle" computers).
Purchase costs and power consumption are things you should prove not only with theoretical reasoning but also with many "field tests". And tests can often be disappointing, because there is still a need for the database, the file server, the internet gateway, the enterprise web server, and so on.

Maybe it is possible to create a more fault-tolerant network, but existing technologies already do the same: when you move your laptop into an area where Wi-Fi is available, it just reconnects and becomes available for work. And the work is performed against centralized enterprise resources such as a web or database server.

Maybe it is possible to reduce purchase costs by using one PC instead of two or even three, but solutions like terminals or remote desktop on a single server already exist (and have existed for a long time). For some reason such solutions have not gained much traction; maybe it is the reduced flexibility. So you need to provide the flexibility of a system with one PC per user and a separate server for every important task. Maybe it is possible, but all previous attempts have ended badly.

And about fault tolerance: for the PC it is not important at all, because PCs are very reliable. For networking it is again not an important case, because cables and routers are reliable enough. And for servers... well, you think there should not be any server, and I still do not understand how a user can work with a database.

The same goes for performance. The hardware is fast. If a distributed solution replaces the processing power of a server, it does not replace its storage. If the solution replaces the storage, it does not replace the server applications. And if the solution replaces the applications, then it is the CEO's smartphone with the entire enterprise on it. Can you create a one-smartphone enterprise?
Brendan wrote:you mostly need a time-zone setting because (for stationary computers) there's a design flaw in firmware (OS can't ask firmware for the time zone when it's installed, where it can be set once for all OSs that are ever installed, possibly by the retailer before its purchased) and (for mobile devices and possibly stationary computers too) most of them have a design flaw in hardware (lack of GPS that would allow automatic time zone detection, even when user is moving between time zones).
There are costs. Everything can be flawless in a world with zero costs. But GPS works only outside a building, so we need more expensive hardware for it to work indoors. And we need it to work without interruption while travelling in planes and trains. And it drains the battery for every second it runs. So it is not that simple. And as you have already pointed out, an OS developer just can't do much about it.

But yes, there is a path toward extended fault tolerance and distributed processing. The only problem is the cost, and the value for that cost. If the value is small, then your investment of at least 10 years can be just a waste of time. And for the value to matter, there is a need for market research and the other work that marketing requires. Have you spent enough on marketing? I suspect you just assume that a smart thing is always welcome, but there are a lot of failure stories about very smart things.
Brendan wrote:If a person is writing a new web browser they need to write their new web browser's source code (and need to know what their source code does) and they might also need to write a plain text document that explains the internals of the code they've written (because different programmers implement the same thing in radically different ways, so plain text documentation for an existing/competing web browser is completely worthless).
In the case of bad documentation, it is possible to give developers the source code so they can understand how it works. But your solution is going to be closed source, so you have no option except documentation. And in the documentation you can describe your messaging in every detail, but if a developer has a high-level interface with function calls, he will most probably use that interface and only look at the messaging documentation when there is a really serious performance problem.
Brendan wrote:
embryo wrote:Well, it means your system can make very bad mistakes (5 orders of magnitude) if it isn't monitored closely by a human. Then why do you describe your messaging as a better solution than existing technologies?
You'll have to be more specific - which claims about "better" are you referring to?
As I understand from your description, your solution uses some kind of algorithm for selecting the best way to send messages. But then you tell me there is a spread of 5 orders of magnitude in messaging performance. So I suppose some help from a human is required for your solution to work more efficiently. And "better" here means existing solutions with the same requirements: they also need help from the developer, but they do not require the developer to learn a new solution.
Brendan wrote:if you use C++ operator overloading to make it harder for programmers to notice the difference between extremely expensive big rational number addition and extremely fast 32-bit integer addition, then you're contributing to the "high level programmers slapping together worthless puke without even realising it" problem.
Performance is important only when it is really important. You can spend a lot of time optimizing things for the next 0.001% of efficiency, but that is just unimportant. So people usually concentrate on things like a 100% or 200% efficiency increase.
Brendan wrote:
embryo2 wrote:
Brendan wrote:At a minimum you have to deal with the fact that (as soon as any networking is involved) there's no guarantee that the message will be delivered.
It's very simple, and you know many ways to add the failure information to your library.
It's not simple, I don't know any sane way to add the failure information to a library
You can return a result code, you can throw an exception, you can change some static field, you can raise an interrupt, you can show a full-screen message: "Surprise, you have an error! Congratulations!!!".
Brendan wrote:how about this: Instead of developing your OS by yourself, how about if I volunteer to help your project (and do this by continually insisting that all traces of Java and "managed puss" be removed)? The benefits of "more developers" would really help your project!
Well, if you prefer to concentrate on the worst-case scenario (no cooperation at all), even then I can ask: why do you think managed code is "puss"? And at least I see there is no viable argument against managed code.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: How change non-PAE paging mode to PAE paging mode?

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:These settings are completely unnecessary for my distributed solution - all computers make their own decisions without settings.
For example, even small enterprises have security concerns: they want to be able to grant access rights to one user and deny them to another, and that is a global setting for the entire enterprise. How can your system manage this?
It's just a small file (e.g. about 1 MiB for 10000 users) containing user login details (e.g. user name and password hash) for the entire cluster.
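A quick back-of-the-envelope check supports that size estimate; the record layout below is an assumption for illustration, not Brendan's actual file format:

```python
# Rough size check for a per-cluster user-details file.
# Assumed record layout (hypothetical): user name (64 bytes), salt (16),
# password hash (32, e.g. SHA-256), flags/ids (8).
record_bytes = 64 + 16 + 32 + 8      # 120 bytes per user
users = 10_000
total = record_bytes * users
print(total)                          # 1200000 bytes, about 1.14 MiB
```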

Note that my file system isn't what people are used to. It's a versioning file system (e.g. impossible to modify an existing file, and only possible to create a new version of an existing file), where each computer can have zero or more native file systems, and all native file systems in the cluster are superimposed on top of each other to create the virtual file system. The VFS tries to ensure redundancy (e.g. ensure there's 3 or more copies of each file on different computers/disks) when files are created. It also works a little bit like a file cache - if a computer reads a file that isn't present on that computer's disk/s, then it downloads the file from a remote computer and stores a copy on local disk/s so that the file doesn't need to be fetched again next time. The end result is that (multiple versions of) the small file containing user login details would end up duplicated on almost all computers (or at least, all computers where users log in).
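The versioning behaviour described above can be sketched in a few lines. This is a toy model under the stated assumptions (versions are immutable, a write always creates a new version), not the real on-disk design:

```python
# Toy model of a versioning file system: existing versions are immutable,
# and a "write" always creates a new version of the file.
class VersionedFS:
    def __init__(self):
        self._store = {}                    # path -> list of versions

    def write(self, path, data):
        """Create a new version; never overwrite an existing one."""
        self._store.setdefault(path, []).append(data)
        return len(self._store[path]) - 1   # new version number

    def read(self, path, version=None):
        versions = self._store[path]
        return versions[-1] if version is None else versions[version]

fs = VersionedFS()
v0 = fs.write("/etc/users", "alice")
fs.write("/etc/users", "alice\nbob")        # a new version, not an edit
print(fs.read("/etc/users"))                # latest version
print(fs.read("/etc/users", v0))            # old version is still readable
```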

For large enterprise you'd probably have multiple clusters (e.g. one for the marketing team, one for the accountants, etc); and you'd probably want some special "enterprise tools" for managing multiple clusters (including the files containing user login details). These tools are beyond the current scope of the project.
embryo2 wrote:
Brendan wrote:Or do you do the traditional "loss of service" method, where if a server or network connection goes down you're screwed until an (authorised) human intervenes?
Mostly it is inevitable, at least for enterprises. Every enterprise has its specific data, whether that is a database, a file server full of Excel files, or whatever. So your approach should cope with data distribution; otherwise, if a laptop is outside the enterprise network, it just stops working, despite any tricky algorithm installed on it.
If a laptop is outside the enterprise network, it'll still boot, users will still be able to log in, the GUI/s will still work, smaller apps (e.g. a word processor) would still work, etc. If there's a huge "thing" (e.g. a large-scale accountancy package) then that might work, or it might only partially work, or it might not work at all. I only provide the OS, the design of third-party applications (and how fault tolerant they are/aren't) is not an OS developer's responsibility.
embryo2 wrote:
Brendan wrote:I'm trying to provide the best end user experience (including performance, fault tolerance, and end user hassle/configuration) with whatever hardware the end user happens to let the OS run on.

Part of the goal is to be able to go to typical small/medium businesses and convince them to switch from the current relatively standard approach (e.g. one cheap/desktop computer per user, an additional expensive/fault tolerant server, a bunch of networking equipment, and a non-distributed OS like Windows/Linux) to my approach (e.g. one cheap/desktop computer per pair of users, no expensive server, half the networking equipment, and my distributed OS) because this will:
  • significantly reduce purchase costs
  • significantly reduce running costs (e.g. less power consumption, less air-conditioning)
  • significantly reduce maintenance/administration costs
  • increase fault tolerance
  • increase performance (by distributing load to "otherwise idle" computers).
Purchase costs and power consumption are things you should prove not only with theoretical reasoning but also with many "field tests". And tests can often be disappointing, because there is still a need for the database, the file server, the internet gateway, the enterprise web server, and so on.
Yes; field testing is useful for all OSs eventually (including mine, maybe in 10 years' time).
embryo2 wrote:Maybe it is possible to create a more fault-tolerant network, but existing technologies already do the same: when you move your laptop into an area where Wi-Fi is available, it just reconnects and becomes available for work. And the work is performed against centralized enterprise resources such as a web or database server.

Maybe it is possible to reduce purchase costs by using one PC instead of two or even three, but solutions like terminals or remote desktop on a single server already exist (and have existed for a long time). For some reason such solutions have not gained much traction; maybe it is the reduced flexibility. So you need to provide the flexibility of a system with one PC per user and a separate server for every important task. Maybe it is possible, but all previous attempts have ended badly.
Have you ever really looked at "thin client" hardware? You can buy a set of 10 thin clients at $500 each and a $5000 server; and the thin clients will have more processing power than the server. Why do you need the server at all?

Most software doesn't distribute load - it just has "clients" (where each client's processing power is wasted), and "server" (that struggles to cope with the load of many clients). It's idiotic.
embryo2 wrote:And about fault tolerance: for the PC it is not important at all, because PCs are very reliable. For networking it is again not an important case, because cables and routers are reliable enough. And for servers... well, you think there should not be any server, and I still do not understand how a user can work with a database.
Cheap/desktop PCs are "sort of reliable". A single PC might have a 1% chance of hardware failure per month; so for a LAN of 100 PCs you'd have to assume one will fail each month.
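The arithmetic behind that assumption, for anyone who wants to check it:

```python
# Expected failures for a LAN of n PCs, each with an independent
# 1% chance of hardware failure per month.
p = 0.01
n = 100
expected_failures = n * p                # 1.0 failure per month on average
p_at_least_one = 1 - (1 - p) ** n        # ~0.634: most months see a failure
print(expected_failures, round(p_at_least_one, 3))
```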

I haven't explained how a user can work with a database because I have no idea why you think a user might need to work with a database. For most of the many possible scenarios; I'd assume a database is used because nobody wants to trust a web developer with the data (and it'd be far better to get rid of the web developer, and use a native/distributed application that has no need for a back-end database in the first place).
embryo2 wrote:The same goes for performance. The hardware is fast. If a distributed solution replaces the processing power of a server, it does not replace its storage. If the solution replaces the storage, it does not replace the server applications. And if the solution replaces the applications, then it is the CEO's smartphone with the entire enterprise on it. Can you create a one-smartphone enterprise?
If you've got 10 hard disks, does it really matter if all 10 disks are plugged into a single server or if they're all plugged into 10 different desktop systems? You've still got the same 10 hard disks in both cases.
embryo2 wrote:But yes, there is a path toward extended fault tolerance and distributed processing. The only problem is the cost, and the value for that cost. If the value is small, then your investment of at least 10 years can be just a waste of time. And for the value to matter, there is a need for market research and the other work that marketing requires. Have you spent enough on marketing? I suspect you just assume that a smart thing is always welcome, but there are a lot of failure stories about very smart things.
There's also a lot of vague/meaningless hand-waving. There's a lot of failure stories with very smart things, and a lot of failure stories with very "not smart" things, and a lot of failure stories with "average" things. There's just a lot of failure stories.
embryo2 wrote:
Brendan wrote:If a person is writing a new web browser they need to write their new web browser's source code (and need to know what their source code does) and they might also need to write a plain text document that explains the internals of the code they've written (because different programmers implement the same thing in radically different ways, so plain text documentation for an existing/competing web browser is completely worthless).
In the case of bad documentation, it is possible to give developers the source code so they can understand how it works. But your solution is going to be closed source, so you have no option except documentation. And in the documentation you can describe your messaging in every detail, but if a developer has a high-level interface with function calls, he will most probably use that interface and only look at the messaging documentation when there is a really serious performance problem.
I have no idea what you're trying to say here, or how it corresponds to what you're replying to.

Are you suggesting that it's better for a programmer to make it hard to see what is/isn't expensive and then waste time writing documentation in a plain text file that says something like "Warning: The function call on line 123 of file "foo" is actually an expensive thing that sends/receives messages and isn't a normal/cheap function call at all." and then hope that anyone unfortunate enough to have to read the source code has 2 monitors so they can keep the retarded documentation open on one screen while they try to make sense of the source code on the other screen in an attempt to "de-obfuscate" the hideous mess you created by burying important information (like how expensive something is) under malicious syntactical sugar?
embryo2 wrote:
Brendan wrote:
embryo wrote:Well, it means your system can make very bad mistakes (5 orders of magnitude) if it isn't monitored closely by a human. Then why do you describe your messaging as a better solution than existing technologies?
You'll have to be more specific - which claims about "better" are you referring to?
As I understand from your description, your solution uses some kind of algorithm for selecting the best way to send messages. But then you tell me there is a spread of 5 orders of magnitude in messaging performance. So I suppose some help from a human is required for your solution to work more efficiently. And "better" here means existing solutions with the same requirements: they also need help from the developer, but they do not require the developer to learn a new solution.
If the destination/receiver is on another computer, then the message has to be sent to the other computer. Existing solutions have the same "variable network latency" problem.

What we're discussing here is whether or not a language should hide the (potentially high) latency. You seem to want to make it hard for programmers to see the difference between fast function calls and high latency networking in the hope of increasing the chance that they'll accidentally write extremely bad/inefficient software; while I prefer to make it impossible for programmers to be unaware of that (potentially high) latency in the hope of decreasing the chance that they'll accidentally write extremely bad/inefficient software.
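The contrast can be illustrated with two hypothetical APIs (all names here are invented for illustration, and come from neither project): one disguises a network round trip as a plain call, the other forces the caller to treat send and receive as separate, fallible steps:

```python
import queue
import threading
import time

# Style 1 (hidden latency): looks like a cheap local call, but may block
# for a network round trip; nothing in the signature warns the caller.
def get_balance(account_id):
    time.sleep(0.01)                     # stand-in for a remote request
    return 42

# Style 2 (explicit messaging): sending and receiving are separate steps,
# so the programmer cannot mistake this for a fast local operation.
def send_request(mailbox, account_id):
    def worker():
        time.sleep(0.01)                 # stand-in for network latency
        mailbox.put(("balance", account_id, 42))
    threading.Thread(target=worker).start()

mailbox = queue.Queue()
send_request(mailbox, "acct-1")
# ...the caller can do other useful work here instead of blocking...
kind, acct, value = mailbox.get(timeout=1.0)   # the wait, and a possible
                                               # timeout, are explicit
```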
embryo2 wrote:
Brendan wrote:if you use C++ operator overloading to make it harder for programmers to notice the difference between extremely expensive big rational number addition and extremely fast 32-bit integer addition, then you're contributing to the "high level programmers slapping together worthless puke without even realising it" problem.
Performance is important only when it is really important. You can spend a lot of time optimizing things for the next 0.001% of efficiency, but that is just unimportant. So people usually concentrate on things like a 100% or 200% efficiency increase.
For all cases where performance is not important, you can save developer time by using an infinite loop instead of wasting time writing useful code.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:It's very simple, and you know many ways to add the failure information to your library.
It's not simple, I don't know any sane way to add the failure information to a library, and don't have libraries in the first place. I have (potentially remote) services instead of libraries; where there's no guarantee a message that was successfully sent to a service/library will be successfully received by the service/library.
You can return a result code, you can throw an exception, you can change some static field, you can raise an interrupt, you can show a full-screen message: "Surprise, you have an error! Congratulations!!!".
Deliberately editing/removing information doesn't make your "solutions" relevant or useful, and doesn't make your reply any less idiotic.

Note that I do realise that most programmers are used to more traditional programming models (e.g. procedural programming and/or OOP), and that it takes a while to really understand something that's closer to the actor model.
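A minimal sketch of why the usual error-reporting mechanisms don't map cleanly onto message passing: on the sender's side, the only portable failure signal is the absence of a reply within a deadline (the queue objects here are stand-ins for whatever transport is actually used):

```python
import queue

# "Send" succeeding locally says nothing about whether the receiver ever
# gets the message; a timeout on the reply is the sender's only signal.
def request_with_timeout(outbox, inbox, message, timeout):
    outbox.put(message)                      # always "succeeds" locally
    try:
        return inbox.get(timeout=timeout)    # wait for a reply, if any
    except queue.Empty:
        return None                          # receiver down, link down, or slow

outbox, inbox = queue.Queue(), queue.Queue()
reply = request_with_timeout(outbox, inbox, "ping", timeout=0.05)
print(reply)    # None: nothing is servicing the outbox in this sketch
```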
embryo2 wrote:
Brendan wrote:how about this: Instead of developing your OS by yourself, how about if I volunteer to help your project (and do this by continually insisting that all traces of Java and "managed puss" be removed)? The benefits of "more developers" would really help your project!
Well, if you prefer to concentrate on the worst-case scenario (no cooperation at all), even then I can ask: why do you think managed code is "puss"? And at least I see there is no viable argument against managed code.
All I'm saying here is that "more developers makes development faster" is a false assumption during the initial stages of an OS. It's not until you're able to delegate tasks (e.g. ask one developer to write a network card driver, another to write a text editor, ...) that "more developers" translates into faster development time.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: How change non-PAE paging mode to PAE paging mode?

Post by embryo2 »

Brendan wrote:It's just a small file (e.g. about 1 MiB for 10000 users) containing user login details (e.g. user name and password hash) for the entire cluster.

Note that my file system isn't what people are used to. It's a versioning file system (e.g. impossible to modify an existing file, and only possible to create a new version of an existing file), where each computer can have zero or more native file systems, and all native file systems in the cluster are superimposed on top of each other to create the virtual file system. The VFS tries to ensure redundancy (e.g. ensure there's 3 or more copies of each file on different computers/disks) when files are created. It also works a little bit like a file cache - if a computer reads a file that isn't present on that computer's disk/s, then it downloads the file from a remote computer and stores a copy on local disk/s so that the file doesn't need to be fetched again next time. The end result is that (multiple versions of) the small file containing user login details would end up duplicated on almost all computers (or at least, all computers where users log in).
If there's a need for a new user, what should we do then? If we change your "small file", how would the whole cluster know there's a new user? How long will it take to distribute the change across the entire cluster? With a centralized approach we have no such problem.
Brendan wrote:I only provide the OS, the design of third-party applications (and how fault tolerant they are/aren't) is not an OS developer's responsibility.
And application developers decided (very long ago) to create a server, precisely because it is the developer's responsibility to create something for the cases where there is no fixed cluster. And it really works for all the laptops moving around; but it requires the server. So, since you have declined full responsibility, developers will have to do something with your OS. For example, they can create a new server, and the server will spread its load and data across the cluster. But there is a need for an immediate response to the user's request, so all changes should be accessible to the server just after they are made. And if we remember that the changes can be on the other side of the earth (with your distribution approach), we can easily imagine the latency for an enterprise with 100 employees: the server just has to ask all nodes whether there is a change. After the responses are collected we have, for example, 1 second passed. But during that second a few operators can hit the Enter key and there will be more changes. Well, what should we do with this mess of changes?
Brendan wrote:Have you ever really looked at "thin client" hardware? You can buy a set of 10 thin clients at $500 each and a $5000 server; and the thin clients will have more processing power than server. Why do you need to the server at all?

Most software doesn't distribute load - it just has "clients" (where each client's processing power is wasted), and "server" (that struggles to cope with the load of many clients). It's idiotic.
I see the problem with the server's cost. And I see the problem with idle clients. But do you see the problem with things that require centralization, which your solution does not allow?

It's a "some cost" versus "just does not work" problem, for things that require centralization.
Brendan wrote:Cheap/desktop PCs are "sort of reliable". A single PC might have a 1% chance of hardware failure per month; so for a LAN of 100 PCs you'd have to assume one will fail each month.
Once a month we need to restore a system from its image. Is that too hard? Is it a great advantage if such a tiny bit of work is eliminated?
Brendan wrote:I haven't explained how a user can work with a database because I have no idea why you think a user might need to work with a database. For most of the many possible scenarios; I'd assume a database is used because nobody wants to trust a web developer with the data (and it'd be far better to get rid of the web developer, and use a native/distributed application that has no need for a back-end database in the first place).
Well, you are redefining the world again. Now it's about databases. Surprisingly, there's a way of doing business without databases. It's great. But I just do not understand how it can work!
Brendan wrote:
embryo2 wrote:In the case of bad documentation, it is possible to give developers the source code so they can understand how it works. But your solution is going to be closed source, so you have no option except documentation. And in the documentation you can describe your messaging in every detail, but if a developer has a high-level interface with function calls, he will most probably use that interface and only look at the messaging documentation when there is a really serious performance problem.
I have no idea what you're trying to say here, or how it corresponds to what you're replying to.
You will provide no source code, yes? If so, then how can a developer learn the call details? The call here is something your OS provides as a service, or a library, or whatever you want to name it.
Brendan wrote:Are you suggesting that it's better for a programmer to make it hard to see what is/isn't expensive and then waste time writing documentation in a plain text file that says something like "Warning: The function call on line 123 of file "foo" is actually an expensive thing that sends/receives messages and isn't a normal/cheap function call at all." and then hope that anyone unfortunate enough to have to read the source code has 2 monitors so they can keep the retarded documentation open on one screen while they try to make sense of the source code on the other screen in an attempt to "de-obfuscate" the hideous mess you created by burying important information (like how expensive something is) under malicious syntactical sugar?
Syntactic sugar speeds things up. And refusing to offer the sugar means insisting that all developers use a more tedious way of doing things. The argument "let them feel the hardcore" won't make any sense to developers; they just won't use an inconvenient solution. So you had better look at the sugar-plus-documentation approach instead of flatly refusing it. It's like saying "let them use only assembly, because it's really insightful".
Brendan wrote:What we're discussing here is whether or not a language should hide the (potentially high) latency.
Why do you think that RMI calls hide the latency? Or web-service calls? Do you think all web-service developers are idiots? No, they know what is involved behind the scenes.
Brendan wrote:I prefer to make it impossible for programmers to be unaware of that (potentially high) latency in the hope of decreasing the chance that they'll accidentally write extremely bad/inefficient software
Your approach looks like: do it the hard way, so that you see things that could be seen just by understanding a simple concept; and no, even after you have that understanding, there is no way to make things simpler, just hardcore! Otherwise there might be a tiny performance decrease!
Brendan wrote:All I'm saying here is that "more developers makes development faster" is a false assumption during the initial stages of an OS. It's not until you're able to delegate tasks (e.g. ask one developer to write a network card driver, another to write a text editor, ...) that "more developers" translates into faster development time.
It's about how you can split the tasks. The initial stages include the architecture, and the architecture is the thing you prefer to keep full control of. But it's the most interesting part, while drivers and text editors are not. So you trade control of the architecture for help that arrives only during very late development stages. And for the help to appear at all, you need to create a lot of things to attract developers: a full-featured OS, rich libraries, and many applications. At least the OS can be done that way, but rich libraries and many applications? I doubt it. So by refusing cooperation on the architecture you actually create a lot of tasks to be done by yourself, without much chance of attracting developers at later stages.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:It's just a small file (e.g. about 1 MiB for 10000 users) containing user login details (e.g. user name and password hash) for the entire cluster.

Note that my file system isn't what people are used to. It's a versioning file system (e.g. impossible to modify an existing file, and only possible to create a new version of an existing file), where each computer can have zero or more native file systems, and all native file systems in the cluster are superimposed on top of each other to create the virtual file system. The VFS tries to ensure redundancy (e.g. ensure there's 3 or more copies of each file on different computers/disks) when files are created. It also works a little bit like a file cache - if a computer reads a file that isn't present on that computer's disk/s, then it downloads the file from a remote computer and stores a copy on local disk/s so that the file doesn't need to be fetched again next time. The end result is that (multiple versions of) the small file containing user login details would end up duplicated on almost all computers (or at least, all computers where users log in).
If there's a need for a new user, what should we do then? If we change your "small file", how would the whole cluster know there's a new user? How long would it take to distribute the change over the entire cluster? With a centralized approach we have no such problem.
If a new file (or a new version of an old file) is created, the computer that creates it tries to "push" the new file to 2 other computers. When a file is opened on one computer, that computer asks all other computers if they have a more recent version of the file (note: it's not quite that simple as there's directory info caching and notifications involved).

Therefore, in general; if a new user is added (a new version of the "user details" file is created), 2 other computers will know when it happens, and other computers will find out there's a new version when they open the file. However; this only applies for the normal case where all computers can communicate. If there's no connection from one group of computers to another, then a new version of a file can be created on one group of computers and the other group of computers won't know (and will continue using the old version of the file) until communication between the groups is possible.

Basically; it does the best job possible, which can include using older versions of files if it's impossible to reach newer versions. It's not like a retarded joke "central point of failure" system where any kind of fault anywhere leads to some or all of the computers becoming expensive and unusable ornaments; or where (worst case) that central server fails and the work of hundreds of employees is lost (all work from whenever the newest backup was taken up until the point where a new "central point of failure" can be put online), or where you need expensive solutions (e.g. RAID arrays, 2 or more servers with fail over, etc) to reduce the risks (that still doesn't work in most "network failure" cases despite the expense).
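The "push on create, check for newer versions on open" behaviour described above can be sketched roughly as follows. This is only an illustration of the idea; all class and method names are invented, and it is not Brendan's actual code:

```python
class VersionedStore:
    """Hypothetical sketch of one node's view of the versioning file system."""

    def __init__(self, node_id, peers):
        self.node_id = node_id
        self.peers = peers   # other reachable VersionedStore nodes
        self.files = {}      # name -> {version_number: content}

    def create_version(self, name, content):
        """Files are never modified in place; a new version is appended,
        then pushed to 2 peers for redundancy."""
        versions = self.files.setdefault(name, {})
        new_ver = max(versions, default=0) + 1
        versions[new_ver] = content
        for peer in self.peers[:2]:
            peer.receive_push(name, new_ver, content)
        return new_ver

    def receive_push(self, name, version, content):
        self.files.setdefault(name, {})[version] = content

    def newest(self, name):
        versions = self.files.get(name, {})
        ver = max(versions, default=0)
        return ver, versions.get(ver)

    def open(self, name):
        """Ask reachable peers for a newer version; fall back to the newest
        local copy if nobody has one (the network-partition case)."""
        local = self.files.get(name, {})
        best_ver = max(local, default=0)
        best_content = local.get(best_ver)
        for peer in self.peers:
            ver, content = peer.newest(name)
            if ver > best_ver:
                best_ver, best_content = ver, content
        if best_content is not None and best_ver not in local:
            # Cache the fetched version locally, like a file cache.
            self.files.setdefault(name, {})[best_ver] = best_content
        return best_ver, best_content
```

Note that when a node is partitioned away, `open` simply returns the newest version it can reach, which is exactly the "use older versions if newer ones are unreachable" behaviour described above.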
embryo2 wrote:
Brendan wrote:I only provide the OS, the design of third-party applications (and how fault tolerant they are/aren't) is not an OS developer's responsibility.
And application developers had decided (very long ago) to create a server.
This is like giving builders hammers and watching as they build everything with nails; and then assuming that if you gave the builders screwdrivers instead of hammers they'd just use screwdrivers to bang nails in.

For existing OSs, developers aren't given the tools to create fault tolerant/distributed software, so they (usually) don't create fault tolerant/distributed software. Of course sometimes they do, even though they aren't given appropriate tools.
embryo2 wrote:Just because it's the developer's responsibility to create something for cases where there's no fixed cluster. And it really works for all the laptops moving around - but it requires the server. So, as you have declined full responsibility, developers will have to do something with your OS. For example, they can create a new server, and the server will spread its load and data across the cluster. But there's a need for an immediate response to the user's request, so all changes should be accessible to the server just after they are made. And if we remember that the changes can be on the other side of the earth (with your distribution approach), we can easily imagine the latency for an enterprise with 100 employees - the server simply has to ask all nodes whether there is a change. After the requests are collected we have, for example, 1 second passed. But during that second a few operators can hit the enter button and there will be more changes. Well, what should we do with that mess of changes?
Imagine a piece of string that is intended to be used for some unknown purpose. Is that piece of string too short or too long? Is it better to use some sort of clamp instead of string? Unless the specific purpose is known; these questions are meaningless/unanswerable.

Imagine some sort of data being used for who knows what in an unknown way. Is this the best approach? Will latency be too high? Do you expect any kind of useful answer to your meaningless/unanswerable questions?
embryo2 wrote:
Brendan wrote:Have you ever really looked at "thin client" hardware? You can buy a set of 10 thin clients at $500 each and a $5000 server; and the thin clients will have more processing power than the server. Why do you need the server at all?

Most software doesn't distribute load - it just has "clients" (where each client's processing power is wasted), and "server" (that struggles to cope with the load of many clients). It's idiotic.
I see the problem with the server's cost. And I see the problem with idle clients. But do you see the problem with things that require centralization, which your solution does not allow?

It's a "some cost" vs "just doesn't work" problem, for things that require centralization.
I only see a problem with some existing programmers (who are used to technologies that suck) who don't understand that nothing actually requires centralisation in the first place.
embryo2 wrote:
Brendan wrote:Cheap/desktop PCs are "sort of reliable". For one PC it might have a 1% chance of hardware failure per month; and for a LAN of 100 PCs you'd have to assume one will fail each month.
Once a month we need to restore a system from its image. Is that too hard? Is it such a great advantage if this tiny bit of work is eliminated?
The entire point of a distributed system like mine is to make (e.g.) a network of 100 computers (with 200 simultaneous users) behave like a single massive virtual computer (with 200 simultaneous users). I don't want that single massive virtual computer to fail every month just because one piece of hardware failed and don't want those 200 users to lose all their work every month. I also don't want people to have to buy 100 expensive (fault tolerant) servers to reduce the risk. I have to design it as a fault tolerant system because otherwise "cheap/commodity PC" failure rates would make the entire OS unacceptably unreliable.
embryo2 wrote:
Brendan wrote:I haven't explained how a user can work with a database because I have no idea why you think a user might need to work with a database. For most of the many possible scenarios; I'd assume a database is used because nobody wants to trust a web developer with the data (and it'd be far better to get rid of the web developer, and use a native/distributed application that has no need for a back-end database in the first place).
Well, you are redefining the world again. Now it's about databases. Surprisingly, there's a way of doing business without databases. It's great. But I just do not understand how it can work!
My point is that for "unknown/unspecified purpose" nobody can know how it might work, including me, and including you.

If it was something for a known/specified purpose (e.g. word processor, payroll system, online shop, fleet/vehicle tracking system, ....) we'd be able to discuss ways it might be implemented (that might or might not include the use of a generic distributed database); but even then it's not an OS developer's responsibility (it's the responsibility of whoever designs third-party applications).
embryo2 wrote:
Brendan wrote:
embryo2 wrote:In case of bad documentation it is possible to provide developers with source code so they can understand how it works. But your solution is going to be a closed source one, so you have no way except the documentation. And in the documentation you can describe your messaging in every detail, but if a developer has a high-level interface with function calls, it is most probable he will use that interface and only look at the messaging documentation in case of a really important performance drawback.
I have no idea what you're trying to say here; or how it corresponds to what you're replying to.
You will provide no source code, yes? If so, then how can a developer know about the call details? The call here is something that your OS provides - as a service, or a library, or whatever you want to name it.
We're discussing 2 alternatives:
  • A programming language that makes the difference between (potentially expensive) messaging and (fast/cheap) function calls very obvious
  • A programming language that hides the difference between (potentially expensive) messaging and (fast/cheap) function calls in the hope of increasing the chance programmers write crappy software without being aware of it
It doesn't matter whether the programmer is writing proprietary code or working on open source code; in both cases it should be obvious to the programmer writing the code whether something is expensive. They should not have to be distracted by having to write, and then constantly refer to, documentation just to know which parts of their own code are/aren't expensive.

It'd also be very nice if the IDE had a special mode where source code is coloured according to profiler results (e.g. each line/statement of a function coloured such that the most expensive line/statement is bright red and the least expensive line/statement is blue, and all other lines/statements are "shades of purple" somewhere between). Of course the time a thread spends blocked (not executing) doesn't show up in profiler results, so something like this won't/can't help people see the time their code spends waiting for replies from asynchronous requests.
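The contrast between the two alternatives can be sketched roughly like this (all names are invented for illustration). In the "hidden" style a remote operation looks identical to a cheap local call; in the "explicit" style the request and the wait for the reply are separate, visible steps:

```python
import queue

# "Hidden" style: a remote operation looks exactly like a local method call,
# so nothing warns the programmer about a possible network round trip.
def hidden_style(stock):
    return stock.get_price("ACME")  # could take nanoseconds or 100 ms; no way to tell

# "Explicit" style: the programmer sends a request message and collects the
# reply separately, so the potential latency is impossible to overlook.
def explicit_style(outbox, inbox):
    outbox.put(("GET_PRICE", "ACME"))    # obviously sends a message
    # ...free to do other useful work here instead of blocking...
    tag, price = inbox.get(timeout=5.0)  # obviously waits for a reply
    return price
```

Both functions compute the same thing; the difference is purely in how visible the (potentially expensive) messaging is to whoever reads or writes the code.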
embryo2 wrote:
Brendan wrote:Are you suggesting that it's better for a programmer to make it hard to see what is/isn't expensive and then waste time writing documentation in a plain text file that says something like "Warning: The function call on line 123 of file "foo" is actually an expensive thing that sends/receives messages and isn't a normal/cheap function call at all." and then hope that anyone unfortunate enough to have to read the source code has 2 monitors so they can keep the retarded documentation open on one screen while they try to make sense of the source code on the other screen in an attempt to "de-obfuscate" the hideous mess you created by burying important information (like how expensive something is) under malicious syntactical sugar?
Syntactic sugar speeds things up. And refusing to offer the sugar means insisting that all developers use a more tedious way of doing things. And the argument "let them feel the hardcore" won't make any sense to developers - they just won't use an inconvenient solution. So you'd better look at the sugar-and-documentation approach instead of flatly refusing it. It's like "let them use only assembly because it's really insightful".
Things like syntactical sugar help speed up development time at the expense of reducing the quality of the resulting code. I want the opposite - I want to make it easier/faster/more convenient for developers to create high quality software, and I want to make it harder/slower/less convenient for developers to create low quality software.

I also want "too high level" retards that don't know or care what their software does (as long as it "works") to continue writing software for existing OSs, so that it's much much easier for software on my OS to compete against the "sea of puke" that these people will continue to create.
embryo2 wrote:
Brendan wrote:What we're discussing here is whether or not a language should hide the (potentially high) latency.
Why do you think that RMI calls hide the latency? Or web-service calls? Do you think all web-service developers are idiots? No, they know what's involved behind the scenes.
Yes; I very much do think that (in general) web-service developers are idiots, and that this is a gross understatement (e.g. "incompetent retards that need to be forcibly prevented from going anywhere near any computer" would be a much more accurate description). However, there may or may not be some "well above average" web developers who have wasted extra time learning and memorising information that things like (e.g.) RMI try to prevent them from knowing.
embryo2 wrote:
Brendan wrote:I prefer to make it impossible for programmers to be unaware of that (potentially high) latency in the hope of decreasing the chance that they'll accidentally write extremely bad/inefficient software
Your approach looks like: do it the hard way, so you see things that could be seen simply by understanding the concept; and no, once you have that understanding, there's no way to make things simpler - just hardcore! Otherwise you might cause a bit of a performance decrease!
If you think several orders of magnitude (the difference between round trip network latency and "call/ret") is just "a bit of a performance decrease"; then you are undeniably one of the many people who I never want working on any software for my OS.
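For a sense of the scale involved, here are some ballpark figures (illustrative assumptions, not measurements):

```python
# Rough latencies, in nanoseconds (illustrative ballpark values):
call_ret_ns = 2                   # a direct call/ret pair: a few CPU cycles
lan_round_trip_ns = 100_000       # ~0.1 ms round trip on a fast LAN
wan_round_trip_ns = 100_000_000   # ~100 ms round trip across the internet

# The gap is roughly 4-5 orders of magnitude on a LAN,
# and 7-8 orders of magnitude across a WAN:
lan_factor = lan_round_trip_ns / call_ret_ns  # ~50,000x slower than call/ret
wan_factor = wan_round_trip_ns / call_ret_ns  # ~50,000,000x slower than call/ret
```

With these (rounded) numbers a single remote round trip costs as much as tens of thousands of local calls, which is why hiding the distinction behind ordinary call syntax is dangerous.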
embryo2 wrote:
Brendan wrote:All I'm saying here is that "more developers makes development faster" is a false assumption during the initial stages of an OS. It's not until you're able to delegate tasks (e.g. ask one developer to write a network card driver, another to write a text editor, ...) that "more developers" translates into faster development time.
It's about how you can split the tasks. The initial stages include the architecture, and the architecture is the thing you prefer to keep full control of. But it's the most interesting part, while drivers and text editors are not. So you trade architectural control for help only during the very late development stages. And for that help to materialise at all, it requires you to create a lot of things to attract developers.
Yes.
embryo2 wrote:That includes a full-featured OS, rich libraries, and many applications. The OS, at least, can be done that way, but rich libraries and many applications - I doubt it. So by refusing architectural cooperation you actually create a lot of tasks to be done by yourself, without much chance of attracting developers at later stages.
No.

I don't need a full-featured OS, rich libraries and many applications just to begin attracting developers; I only need the fundamental parts of the OS, a relatively small number of device drivers, one simple GUI, the IDE/toolchain, and a set of specifications (some standardised messaging protocols and file formats). Something like (e.g.) a word processor isn't necessary, and is far less important than (e.g.) a "learn how to program" interactive tutorial.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Post by embryo2 »

Brendan wrote:My point is that for "unknown/unspecified purpose" nobody can know how it might work, including me, and including you.
Yes, if you don't know the real needs of the majority of users then you can't know how to write a good OS for them.
Brendan wrote:I very much do think that (in general) web-service developers are idiots, and that this is a gross understatement (e.g. "incompetent retards that need to be forcibly prevented from going anywhere near any computer" would be a much more accurate description).
Well, then reality will hit you in a somewhat unexpected (for you) way. You can learn it the hard way, if you prefer.
Brendan wrote:I don't need full featured OS, rich libraries and many applications just to begin attracting developers; I only need the fundamental parts of the OS, a relatively small number of device drivers, one simple GUI, the IDE/toolchain, and a set of specifications (some standardised messaging protocols and file formats). Something like (e.g.) a word processor isn't necessary, and is far less important than (e.g.) a "learn how to program" interactive tutorial.
Yes, you want to redefine the world. But does the world want to be redefined? Still, you can try it the hard way.
Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:My point is that for "unknown/unspecified purpose" nobody can know how it might work, including me, and including you.
Yes, if you don't know the real needs of the majority of users then you can't know how to write a good OS for them.
A third party application developer almost always does have a known/specified purpose for their application. It's not my fault if you were too busy to say what that third party application developer's purpose might be.

An OS does have a known purpose - to provide things that third party application developers may need (APIs, file systems, drivers, etc).
embryo2 wrote:
Brendan wrote:I very much do think that (in general) web-service developers are idiots, and that this is a gross understatement (e.g. "incompetent retards that need to be forcibly prevented from going anywhere near any computer" would be a much more accurate description).
Well, then reality will hit you in a somewhat unexpected (for you) way. You can learn it the hard way, if you prefer.
I'm planning a complete replacement of all existing technologies, with zero compatibility with anything. There's nothing unexpected about the difficulties this will involve.
embryo2 wrote:
Brendan wrote:I don't need full featured OS, rich libraries and many applications just to begin attracting developers; I only need the fundamental parts of the OS, a relatively small number of device drivers, one simple GUI, the IDE/toolchain, and a set of specifications (some standardised messaging protocols and file formats). Something like (e.g.) a word processor isn't necessary, and is far less important than (e.g.) a "learn how to program" interactive tutorial.
Yes, you want to redefine the world. But does the world want to be redefined? Still, you can try it the hard way.
Not really - I only want to create an alternative world, so that people that want to (including me) can escape.


Cheers,

Brendan