Concise Way to Describe Colour Spaces

Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Concise Way to Describe Colour Spaces

Post by Antti »

Brendan wrote:For most things we've got several decades of "bad enough" attempts to look at already. We can/should go directly to "perfect enough, until/unless there's a significant and unforeseeable change in hardware capabilities".
For this topic it makes sense, because the field is well studied and can be understood thoroughly. What I expressed confusingly and poorly is that sometimes "bad enough" is better than "good enough". For example, if you define the first version of your pixel format as "256 colors from a fixed CIE XYZ palette", it is clearly "bad enough" but still a proper subset of the master plan; it cannot possibly be mistaken for the final version. If you define your first pixel format as "sRGB", then it is "good enough" and people could get stuck with it. Another example is the difference between 16-bit and 32-bit unsigned integers for storing file sizes: the former is "bad enough" but the latter is "good enough", and the latter is more likely to cause problems in the long run precisely because it is "good enough" but not "perfect enough". The last example could be your graphics pipeline idea, "a list of commands". Instead of trying to define all the command types for the first version, what if there were a usable set of simple commands that is "bad enough" but has high potential?

For this color space topic, this rant does not make much sense and I definitely recommend you just keep up the good work. It should be possible to get this "perfect enough" and move on to the next challenges along the way.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan wrote:For legacy file formats my OS design uses "file format converters" to auto-convert files into my OS's native file formats. For example, if an application opens "test.jpg" (or "test.bmp" or "test.png" or ....) then the VFS finds a file format converter that converts JPG into my native graphics file format without the application knowing or caring; and the application itself never sees anything other than my native graphics file format.
If programs under your OS can read raw bytes, then developers will quickly port existing graphics libraries and read JPEGs directly. The libraries are well known to developers while your API isn't, and because it is often simpler to port something than to learn something, there will be a mess of your formats and the standard formats. Or you should take care of developer education and invest heavily in documentation, examples, blogs, forum talks and so on. But even after such an investment is done there will still be developers who prefer to use the well-known libraries.
Brendan wrote:If normal programmers (excluding device driver writers) have to deal with the hassles of colour space conversion then I've failed. If users have to deal with the hassles of colour space conversion then it's far worse than just failure.
Some advanced users genuinely need color management functionality to convert images for specific purposes, such as printing. Developers should also have color management tools to be able to support specific cases.

Your system should be flexible and feature rich at the same time. For example, Android is feature rich, but it lacks flexibility despite very large investments. A really good new system is expected to be more flexible than Android while having most of the features Android has. Are you sure you can do it? But it's just a side note; keep making what you think is good software, and maybe there will be something interesting in the end. At least you can learn from Android (and other OSes) how not to reproduce its problems, and as a result your system can be of greater quality and attractiveness for users and developers.
Brendan wrote:So, your suggestion is to fail without even attempting to succeed?
My suggestion was about assessing costs. But if you don't think efficiency is important, then of course the costs mean nothing to you.
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
xenos
Member
Posts: 1121
Joined: Thu Aug 11, 2005 11:00 pm
Libera.chat IRC: xenos1984
Location: Tartu, Estonia

Re: Concise Way to Describe Colour Spaces

Post by xenos »

Actually I like your idea of having a unified, simple, device-independent way to describe colour spaces. Somehow the situation reminds me a bit of this comic, so I hope the standard you are working on will be clearly distinguished by its advantages over the existing ones.
Programmers' Hardware Database // GitHub user: xenos1984; OS project: NOS
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:For legacy file formats my OS design uses "file format converters" to auto-convert files into my OS's native file formats. For example, if an application opens "test.jpg" (or "test.bmp" or "test.png" or ....) then the VFS finds a file format converter that converts JPG into my native graphics file format without the application knowing or caring; and the application itself never sees anything other than my native graphics file format.
If programs under your OS can read raw bytes, then developers will quickly port existing graphics libraries and read JPEGs directly. The libraries are well known to developers while your API isn't, and because it is often simpler to port something than to learn something, there will be a mess of your formats and the standard formats.
There won't be libraries or any support for them. Instead there's "shared services" that run as separate processes. There won't be C or C++ either. More importantly, the system uses "threads communicating via asynchronous messages" for everything, which requires a different way of programming (e.g. state machines and event handlers, rather than procedural programming). It will be very difficult to port anything (far more difficult than supporting my standard native graphics file format that all the legacy crud is seamlessly converted into).
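The "state machines and event handlers" style can be illustrated with a minimal sketch (all names here are hypothetical, not part of the actual design): a service holds explicit state and reacts to incoming messages one at a time, instead of being called procedurally.

```python
from collections import deque

class EchoService:
    """Sketch of a message-driven service: no procedure calls, only
    messages handled by an event handler that updates explicit state."""

    def __init__(self):
        self.state = "idle"
        self.outbox = deque()  # messages the service wants to send back

    def handle_message(self, msg):
        # Dispatch on the message type, state-machine style.
        kind = msg.get("type")
        if kind == "open":
            self.state = "open"
            self.outbox.append({"type": "opened"})
        elif kind == "request" and self.state == "open":
            self.outbox.append({"type": "reply", "data": msg["data"]})
        elif kind == "close":
            self.state = "idle"
            self.outbox.append({"type": "closed"})

svc = EchoService()
for m in [{"type": "open"}, {"type": "request", "data": 42}, {"type": "close"}]:
    svc.handle_message(m)
replies = list(svc.outbox)
```

Note there is no return value anywhere: every answer is itself a message placed in an outbox, which is what makes the style transparent to run across processes or machines.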

Software developers will also have a name and a public key. Their name determines where all of the software they wrote goes in the file system. The public key is needed to verify their digital signature. If a software developer supplies malicious software their key will be revoked, their software won't pass the digital signature check anymore, and nothing they wrote can be executed. There will also be strict guidelines for a variety of things (e.g. to ensure the file format standardisation process is followed, to ensure the "shared service" standardisation process is followed, etc). Anything that violates these strict guidelines is malicious software (e.g. maliciously trying to turn my OS into a hideous pile of steaming puke like existing OSs), and the software developer's key will be revoked.
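As a toy illustration of the revocation policy described above (the hashing below is only a stand-in for a real digital signature scheme, and every name is invented): once a developer's key is revoked, nothing they previously signed passes the check.

```python
import hashlib

# Toy model: each registered developer has a key; executables are
# "signed" (here, hashed together with the key, standing in for a real
# signature), and revoking the key blocks everything signed with it.
registry = {"alice": "alice-public-key"}
revoked = set()

def sign(key, blob):
    return hashlib.sha256((key + blob).encode()).hexdigest()

def may_execute(developer, blob, signature):
    key = registry.get(developer)
    if key is None or developer in revoked:
        return False
    return sign(key, blob) == signature

sig = sign("alice-public-key", "program-bytes")
ok_before = may_execute("alice", "program-bytes", sig)
revoked.add("alice")
ok_after = may_execute("alice", "program-bytes", sig)
```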
embryo2 wrote:Or you should take care of developer education and invest heavily in documentation, examples, blogs, forum talks and so on. But even after such an investment is done there will still be developers who prefer to use the well-known libraries.
Yes. More specifically, I'm planning to provide interactive tutorials for various things (installing/configuring the OS, my programming language and IDE, the standardisation processes, etc) and "exams". For software developers to register a "name and key" (and be able to publish software other people can execute) they'll have to pass several exams (those relevant for software development). All file formats and all "shared service" messaging protocols will use open standards (anyone can read the official specifications). For common executable types there will also be templates people can use to get started.

There will be developers who prefer using well known libraries; so I need (and will have) multiple ways to defend the OS from the damage these people would otherwise cause.
embryo2 wrote:
Brendan wrote:If normal programmers (excluding device driver writers) have to deal with the hassles of colour space conversion then I've failed. If users have to deal with the hassles of colour space conversion then it's far worse than just failure.
Some advanced users genuinely need color management functionality to convert images for specific purposes, such as printing. Developers should also have color management tools to be able to support specific cases.
No. It either works correctly (without people needing to become "advanced users" and without people needing colour management functionality) or it's not acceptable. The colour management functionality you see in existing OSs mostly only exists to work around design flaws.
embryo2 wrote:Your system should be flexible and feature rich at the same time. For example, Android is feature rich, but it lacks flexibility despite very large investments. A really good new system is expected to be more flexible than Android while having most of the features Android has. Are you sure you can do it? But it's just a side note; keep making what you think is good software, and maybe there will be something interesting in the end. At least you can learn from Android (and other OSes) how not to reproduce its problems, and as a result your system can be of greater quality and attractiveness for users and developers.
Yes; but it's more important to avoid hassle for users and programmers, and providing the flexibility necessary to deal with hassle that should've been avoided is a mistake.
embryo2 wrote:
Brendan wrote:So, your suggestion is to fail without even attempting to succeed?
My suggestion was about assessing costs. But if you don't think efficiency is important, then of course the costs mean nothing to you.
For costs; one thing I'm aware of is that it's going to cost me a minimum of 10 years just to get something that's barely usable (e.g. before anyone will be able to write drivers or applications for it).

Another thing I'm aware of is that people won't switch from what they're already using to something new unless that "something new" has significant benefits (enough to justify the cost of switching - e.g. learning a new system, replacing applications, etc). For example, if you wanted people to switch from Windows to another OS that's slightly better than Windows, then "slightly better" isn't enough incentive and it won't happen. The third thing I'm aware of is that "recycling the status quo" makes it extremely difficult to provide significant benefits.

Basically; the primary goal is to provide significant benefits (like, "colour space management that just works without hassle"). Things like efficiency are important, but not as important.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
XenOS wrote:Actually I like your idea of having a unified, simple, device-independent way to describe colour spaces. Somehow the situation reminds me a bit of this comic, so I hope the standard you are working on will be clearly distinguished by its advantages over the existing ones.
That's (part of) what my OS is designed to solve.

How it currently works is that anyone can "invent" a new standard, but there's no way to deprecate old standards, so you end up with software that has to support an ever increasing number of competing standards (and bloat, and compatibility problems, and "vendor lock-in", and end-user hassles).

My solution is to have a formal standardisation process to limit the proliferation of competing standards. Anyone would be able to suggest a new standard or suggest changes/additions to an existing standard; but nobody can "invent" their own standard and expect it to be adopted (without going through the formal standardisation process).

In addition to that, my "file format converters" would allow file formats to be effectively deprecated. For example, if one of my OS's file formats has to be replaced by something different; then it'd only take 2 file format converters - old applications will automatically continue working with the new files (as new files will be auto-converted into the previous file format) and new applications will automatically work with old files (as old files will be auto-converted into the new file format). This means that I can have a "grace period" (e.g. give applications 5 years to switch to the new file format) and at the end of that period remove support for converting new files back to the old format.
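A minimal sketch of this two-converter scheme (format names and data shapes invented for illustration): whichever format is actually on disk, each application sees the format it expects.

```python
# Two converters are enough for a transition period in which old apps
# see old-format files and new apps see new-format files, whichever
# format is actually stored.
converters = {
    ("imagev1", "imagev2"): lambda d: {"v": 2, "pixels": d["pixels"]},
    ("imagev2", "imagev1"): lambda d: {"v": 1, "pixels": d["pixels"]},
}

def vfs_open(file_data, stored_format, wanted_format):
    # The VFS converts transparently; the application never notices.
    if stored_format == wanted_format:
        return file_data
    return converters[(stored_format, wanted_format)](file_data)

old_file = {"v": 1, "pixels": [0, 1, 2]}
seen_by_new_app = vfs_open(old_file, "imagev1", "imagev2")
seen_by_old_app = vfs_open(seen_by_new_app, "imagev2", "imagev1")
```

Deprecation then amounts to deleting one entry from the `converters` table at the end of the grace period.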


Cheers,

Brendan
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan

The extent of your plans is so large that I am really afraid of something like an implementation that continues until the end of your life. Still, in 10 years a knowledgeable developer can accomplish a lot, and such a timescale is a serious claim on final victory. And one problem will always exist for people like you: when a feature is implemented you will often see a way to improve it and then start to re-implement everything again. Can you resist such a permanent black hole that sucks up developer time? If you can't, the time will be much more than 10 years. If you can, quality will be compromised.
Brendan wrote:There won't be libraries or any support for them. Instead there's "shared services" that run as separate processes.
The name is not important; you still have libraries, just with different names and calling conventions.
Brendan wrote:There won't be C or C++ either.
And will there be a universal bytecode? And a compiler from it to different machine languages? And compiler(s) from high-level language(s) into the bytecode? A lot of work is expected; it is possible to spend 10 years developing the mentioned set of compilers if the quality target is high enough.
Brendan wrote:More importantly, the system uses "threads communicating via asynchronous messages" for everything, which requires a different way of programming (e.g. state machines and event handlers, rather than procedural programming).
Here we see another area to study: the best programming paradigm. It can also take 10 or even more years. And it will never be studied completely, just as nature cannot be studied to unlimited depth. You are making a very serious claim.
Brendan wrote:It will be very difficult to port anything (far more difficult than supporting my standard native graphics file format that all the legacy crud is seamlessly converted into).
Then the whole world of algorithms has to be re-implemented for your OS from scratch. How many human-years will that take? For example, just a translation from C++ to Java (or back) takes a lot of time, and your case is much harder.
Brendan wrote:Software developers will also have a name and a public key. Their name determines where all of the software they wrote goes in the file system. The public key is needed to verify their digital signature. If a software developer supplies malicious software their key will be revoked, their software won't pass the digital signature check anymore, and nothing they wrote can be executed.
Such developers will just create a new account and keep making something bad.
Brendan wrote:There will also be strict guidelines for a variety of things (e.g. to ensure the file format standardisation process is followed, to ensure the "shared service" standardisation process is followed, etc). Anything that violates these strict guidelines is malicious software (e.g. maliciously trying to turn my OS into a hideous pile of steaming puke like existing OSs), and the software developer's key will be revoked.
Here I see the main point: your OS will be closely monitored by you to prevent anything that you dislike. It means the developer base will shrink drastically to those who match your very harsh standards. I can only hope that such developers really exist (besides you) and that they will be able to find your OS, understand it, and take it very seriously.

But anyway, if you manage to implement the proposed plan, even as the only user of your OS, it will be something really impressive.
Brendan wrote:I'm planning to provide interactive tutorials for various things (installing/configuring the OS, my programming language and IDE, the standardisation processes, etc) and "exams". For software developers to register a "name and key" (and be able to publish software other people can execute) they'll have to pass several exams (those relevant for software development). All file formats and all "shared service" messaging protocols will use open standards (anyone can read the official specifications). For common executable types there will also be templates people can use to get started.
Exams are good for something widely adopted, but for a new OS... Well, the number of developers decreases again, despite (I hope) really good documentation.
Brendan wrote:It either works correctly (without people needing to become "advanced users" and without people needing colour management functionality) or it's not acceptable. The colour management functionality you see in existing OSs mostly only exists to work around design flaws.
Imagine a vendor of a printer, and next imagine yourself as a buyer of that printer (because it's a really good printer); now what if the vendor refuses to write a driver for your OS? I suppose your answer will be: I won't buy such a printer. But it seems to me that no vendor at all will implement a driver for you, which means you will end up with something like Windows, which is used for printing despite its design flaws.
Brendan wrote:it's more important to avoid hassle for users and programmers, and providing the flexibility necessary to deal with hassle that should've been avoided is a mistake.
Avoidance can help, but sometimes you have only one exit and no window; how is it possible to avoid such a situation without the ability to see the future?
Brendan wrote:Another thing I'm aware of is that people won't switch from what they're already using to something new unless that "something new" has significant benefits (enough to justify the cost of switching - e.g. learning a new system, replacing applications, etc). For example, if you wanted people to switch from Windows to another OS that slightly better than Windows, then "slightly better" isn't enough incentive and it won't happen.
Yes, people like quality things. But the cost of quality is very high. Maybe you will find the time required for such quality, even if most people never find enough time.
Brendan wrote:The third thing I'm aware of is that "recycling the status quo" makes it extremely difficult to provide significant benefits.
And I hope you are also aware that big promises fail much more often than any of us would like.
Brendan wrote:Basically; the primary goal is to provide significant benefits (like, "colour space management that just works without hassle"). Things like efficiency are important, but not as important.
OK, just keep working; at least it is really possible to decrease some hassles, so your work can help even if there never is a "no hassle" OS.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan wrote:My solution is to have a formal standardisation process to limit the proliferation of competing standards. Anyone would be able to suggest a new standard or suggest changes/additions to an existing standard; but nobody can "invent" their own standard and expect it to be adopted (without going through the formal standardisation process).
If you are talking about many people (not you alone), then there is a new area for you to study: how to convince many people not only to use your system but also to participate in the standardization process. For many people a whole life is not enough to convince a viable number of human beings. It reminds me of imperial overstretch.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:There won't be libraries or any support for them. Instead there's "shared services" that run as separate processes.
The name is not important; you still have libraries, just with different names and calling conventions.
The important part is that they run as separate processes. This means that if they crash, an application using them can recover (e.g. just connect to a new instance of the service); and it means I can do things like triple redundancy (e.g. send the same request message to several different instances of the same service and make sure all the responses match), and "live update" (where a service is upgraded to a new version without stopping/restarting applications that are using it), and that I can have the service running on a different computer to the application (to spread load around the network). It also means that a malicious service only has access to what it's given, and doesn't have full access to all data for every application that uses it.
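The triple-redundancy idea can be sketched in a few lines (the replicas here are plain functions standing in for service instances; a real implementation would use the asynchronous messaging described earlier): send the same request to several instances and accept the majority answer.

```python
from collections import Counter

def redundant_call(replicas, request):
    # Send the same request to every instance and take the majority
    # answer; a single faulty instance is simply outvoted.
    answers = [replica(request) for replica in replicas]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes * 2 <= len(answers):
        raise RuntimeError("no majority among service instances")
    return answer

def good_instance(x):
    return x * x

def faulty_instance(x):
    return x * x + 1  # a misbehaving replica

result = redundant_call([good_instance, good_instance, faulty_instance], 7)
```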
embryo2 wrote:
Brendan wrote:There won't be C or C++ either.
And will there be a universal bytecode? And a compiler from it to different machine languages? And compiler(s) from high-level language(s) into the bytecode? A lot of work is expected; it is possible to spend 10 years developing the mentioned set of compilers if the quality target is high enough.
Yes, processes will use a portable byte-code. Those "file format converters" I've mentioned are also used to convert byte-code into native machine code (e.g. the OS asks the VFS to open "my_bytecode.exe" as a native executable for "Intel family 6, model 54", and the VFS finds a file format converter and caches the resulting file for next time).
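A toy model of this "convert on first open, cache the result" behaviour (all function and cache names are invented for illustration):

```python
compile_count = 0  # counts how often the converter actually runs

def to_native(bytecode, target):
    """Stand-in for the byte-code -> native-code file format converter."""
    global compile_count
    compile_count += 1
    return "native[{}]:{}".format(target, bytecode)

cache = {}

def open_executable(name, bytecode, target):
    # The VFS converts on the first open and caches per (file, target CPU),
    # so later opens of the same executable are free.
    key = (name, target)
    if key not in cache:
        cache[key] = to_native(bytecode, target)
    return cache[key]

first = open_executable("my_bytecode.exe", "OP1 OP2", "intel-6-54")
second = open_executable("my_bytecode.exe", "OP1 OP2", "intel-6-54")
```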

It is a lot of work. Fortunately I don't really need high quality for this - e.g. if the initial implementation of the compiler is lame and doesn't optimise much (but the generated code is correct), then that's fine (it can be improved later). Of course this isn't necessarily the only option - for example, I have been thinking of only doing a "byte-code interpreter" initially and not doing the compiler until much much later.
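A byte-code interpreter of the kind mentioned here could start as small as a stack machine (the instruction set is invented for illustration):

```python
def run(bytecode):
    """Minimal stack-machine interpreter: push operands, apply operators."""
    stack = []
    for op, *args in bytecode:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown opcode: " + op)
    return stack.pop()

# (2 + 3) * 4
value = run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
```

The same loop body is exactly what a later compiler would translate per-opcode into native code, which is why starting with an interpreter defers rather than wastes the work.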
embryo2 wrote:
Brendan wrote:More importantly, the system uses "threads communicating via asynchronous messages" for everything, which requires a different way of programming (e.g. state machines and event handlers, rather than procedural programming).
Here we see another area to study: the best programming paradigm. It can also take 10 or even more years. And it will never be studied completely, just as nature cannot be studied to unlimited depth. You are making a very serious claim.
I've already spent years on this. It's also not new (it's actually quite old) and there are multiple existing systems using this now (mostly because it's much easier to write scalable software and fault tolerant software).
embryo2 wrote:
Brendan wrote:Software developers will also have a name and a public key. Their name determines where all of the software they wrote goes in the file system. The public key is needed to verify their digital signature. If a software developer supplies malicious software their key will be revoked, their software won't pass the digital signature check anymore, and nothing they wrote can be executed.
Such developers will just create a new account and keep making something bad.
I doubt it. Humans are inherently lazy, and spending a large amount of time just to get your work blacklisted isn't too likely.
embryo2 wrote:
Brendan wrote:There will also be strict guidelines for a variety of things (e.g. to ensure the file format standardisation process is followed, to ensure the "shared service" standardisation process is followed, etc). Anything that violates these strict guidelines is malicious software (e.g. maliciously trying to turn my OS into a hideous pile of steaming puke like existing OSs), and the software developer's key will be revoked.
Here I see the main point: your OS will be closely monitored by you to prevent anything that you dislike. It means the developer base will shrink drastically to those who match your very harsh standards. I can only hope that such developers really exist (besides you) and that they will be able to find your OS, understand it, and take it very seriously.
To be honest; part of the reason why I want the OS to come with good tutorials, etc. is that I won't be looking for existing developers (they come with bad habits that are very hard to break). Instead I want to make it easy for people with no programming experience (and no bad habits) to learn how to develop software specifically for the OS and do it correctly.
embryo2 wrote:
Brendan wrote:It either works correctly (without people needing to become "advanced users" and without people needing colour management functionality) or it's not acceptable. The colour management functionality you see in existing OSs mostly only exists to work around design flaws.
Imagine a vendor of a printer, and next imagine yourself as a buyer of that printer (because it's a really good printer); now what if the vendor refuses to write a driver for your OS? I suppose your answer will be: I won't buy such a printer. But it seems to me that no vendor at all will implement a driver for you, which means you will end up with something like Windows, which is used for printing despite its design flaws.
If the vendor won't write a driver, I ask for information needed to write a driver myself. If that information is unobtainable then I find a different printer. Don't forget that a lot of printers use PCL, where the information needed to create a driver can be obtained by anyone.
embryo2 wrote:
Brendan wrote:The third thing I'm aware of is that "recycling the status quo" makes it extremely difficult to provide significant benefits.
And I hope you are also aware that big promises fail much more often than any of us would like.
If I try and fail, then people will still be able to look at what I've done and learn from it (including me). Best case is that the OS succeeds, and worst case is still something I'd consider a "partial victory" (increasing the number of things people can learn from).

If I don't try (e.g. "recycle the status quo"), then best case is that the OS is worthless (completed and working; but no significant advantages so nobody has a good reason to switch from existing OSs), and worst case is that the OS is worthless (not completed and/or not working, and nobody can learn anything from it because it's the same old crud as everything else).
embryo2 wrote:
Brendan wrote:My solution is to have a formal standardisation process to limit the proliferation of competing standards. Anyone would be able to suggest a new standard or suggest changes/additions to an existing standard; but nobody can "invent" their own standard and expect it to be adopted (without going through the formal standardisation process).
If you are talking about many people (not you alone), then there is a new area for you to study: how to convince many people not only to use your system but also to participate in the standardization process. For many people a whole life is not enough to convince a viable number of human beings. It reminds me of imperial overstretch.
I don't need to convince people to participate. Instead; I need to create an OS that's so interesting and so promising that people see it and want to participate.


Cheers,

Brendan
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Concise Way to Describe Colour Spaces

Post by Antti »

Brendan wrote:To be honest; part of the reason why I want the OS to come with good tutorials, etc. is that I won't be looking for existing developers (they come with bad habits that are very hard to break). Instead I want to make it easy for people with no programming experience (and no bad habits) to learn how to develop software specifically for the OS and do it correctly.
I would be tempted to say this is a good strategy. I am a little bit surprised to see mostly negative comments about your plans (the big picture, not the details) wherever you dare to write about them. I am not referring to this particular discussion but to what I have read here in general over a few years. Usually it goes that "newbies" are mostly positive about it. Pick a few advanced developers from (e.g.) Linux mailing lists and try to introduce your ideas; I can almost guarantee that what you would get is mostly negative. The more advanced the developer is, the less likely you are to get encouraging comments.

There are a lot of reasons for this. I guess the biggest is that people are not very willing to change their habits (like you mentioned), but there is more to it than that. People are not very willing to admit that their current work, on which they have spent a huge amount of resources and time, would look lame compared to this "new thing".
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan wrote:The important part is that they run as separate processes. This means that if they crash, an application using them can recover (e.g. just connect to a new instance of the service)
There was the discussion about managed code where you suggested a smart compiler that imposes some limits on a programmer and prevents problems in more than 99% of cases. You also proposed install-time compilation that makes it possible to insert safety checks and throw exceptions if something behaves unsafely. Then why do you see the need for even more safety?
Brendan wrote:and it means I can do things like triple redundancy (e.g. send the same request message to several different instances of the same service and make sure all the responses match)
The reason for such redundancy was the unreliability of hardware components. But if any number of copies of a program do the same thing then the result is also the same (excluding concurrency here, because it makes no sense to write a program that tries to match time-dependent results). So what is the purpose of the redundancy?
Brendan wrote:and "live update" (where a service is upgraded to a new version without stopping/restarting applications that are using it)
Why not simply replace addresses in the running code?
Brendan wrote:I can have the service running on a different computer to the application (to spread load around the network).
Why wouldn't a simple factory pattern, which returns a suitable service implementation stub, work? It can provide a local or remote service stub in a traditional application without the need for separate processes.
Brendan wrote:It also means that a malicious service only has access to what it's given, and doesn't have full access to all data for every application that uses it.
Runtime checks also enforce such a constraint. So why is the process-based solution better than the traditional one?
Brendan wrote:Fortunately I don't really need high quality for this - e.g. if the initial implementation of the compiler is lame and doesn't optimise much (but the generated code is correct), then that's fine
Well, that contradicts your perfectionist claims from previous messages. But despite the contradiction, it really can help to create something usable, because the development becomes manageable (while perfect software is completely unmanageable, because it is simply impossible).
Brendan wrote:To be honest; part of the reason why I want to OS to come with good tutorials, etc is that I won't be looking for existing developers
It's dangerous to expect to find "a different kind of people". Some people really are incompatible with some ideas (just as you are incompatible with quality that is less than perfect). So you need not only to teach, but also to foster the users/developers of your OS.
Brendan wrote:If I try and fail, then people will still be able to look at what I've done and learn from it (including me).
Yes. It's a considerable motivation.
Brendan wrote:I don't need to convince people to participate. Instead; I need to create an OS that's so interesting and so promising that people see it and want to participate.
Most often such an approach just doesn't work, but maybe your case will be a special one.
My previous account (embryo) was accidentally deleted, so I have no chance but to use something new. But may be it was a good lesson about software reliability :)

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:The important part is that they run as separate processes. This means that if they crash an application using them can recover (e.g. just connect to a new instance of the service)
There was the discussion about managed code where you suggested a smart compiler that imposes some limits on the programmer and prevents problems in more than 99% of cases. You also proposed install-time compilation, which makes it possible to insert safety checks and throw exceptions if something operates unsafely. So why do you see the need for even more safety?
Because it's not enough to detect that something has gone wrong; for high availability you also want to recover and continue normal operation after something goes wrong. For normal libraries, if one crashes, neither the OS nor the application can know how much was corrupted before or during the crash (even with "managed"), so you're screwed and can't recover. Also note that I'm thinking of both software failures and hardware failures here (where "hardware failures" includes user error - e.g. unplugging the wrong network cable).
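The "connect to a new instance" recovery idea can be sketched as a retry wrapper around the IPC layer. Everything here is hypothetical: the function pointer types and demo stubs stand in for whatever messaging primitives the OS would actually provide.

```c
/* Hypothetical recovery wrapper: if a request to the current service
 * instance fails, attach to a fresh instance and retry, so the caller
 * never notices the failure. */
typedef int (*connect_fn)(void);                      /* returns a new service handle */
typedef int (*send_fn)(int handle, const char *msg);  /* 0 on success, -1 if the instance died */

int send_with_recovery(int *handle, const char *msg,
                       connect_fn reconnect, send_fn send, int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        if (*handle >= 0 && send(*handle, msg) == 0)
            return 0;               /* normal operation: request delivered */
        *handle = reconnect();      /* instance crashed: attach to a new one */
    }
    return -1;                      /* gave up: no healthy instance found */
}

/* Demo stubs: instance 1 has "crashed" (every send to it fails), and
 * reconnecting hands out instances 2, 3, ... which are healthy. */
static int next_handle = 1;
static int demo_connect(void) { return ++next_handle; }
static int demo_send(int handle, const char *msg) { (void)msg; return handle == 1 ? -1 : 0; }
```

The application-facing call succeeds even though the first instance was dead, which is the whole point of "recover and nobody notices".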
embryo2 wrote:
Brendan wrote:and means I can do things like triple redundancy (e.g. send the same request message to several different instances of the same service and make sure all the responses match)
The reason for such redundancy was the unreliability of hardware components. But if any number of copies of a program do the same thing, then the result is also the same (excluding concurrency here, because it makes no sense to write a program that tries to match time-dependent results). So what is the purpose of the redundancy?
Because multiple copies of a process can give different results for many reasons (including dodgy software with things like race conditions, temporary hardware problems like transient RAM errors, dodgy hardware including CPU errata, etc). For a simple example; imagine 3 processes that do "return x/3.14159265359;" where one process is running on a modern 80x86 machine, one is running on an old Pentium (which has a well documented floating point division bug), and one is running on an ARM system where there's no built-in support for floating point and it has to be emulated in software.
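As a sketch of what the receiving end of triple redundancy might do with the three replies (a hypothetical 2-of-3 voter, not Brendan's actual design):

```c
#include <string.h>

/* Hypothetical 2-of-3 voter: given the reply buffers from three
 * instances of the same service, return the index of a reply that at
 * least two instances agree on, or -1 if all three differ (in which
 * case no result can be trusted). */
int vote_2_of_3(const void *r0, const void *r1, const void *r2, size_t len)
{
    if (memcmp(r0, r1, len) == 0) return 0;  /* r0 and r1 agree: majority */
    if (memcmp(r0, r2, len) == 0) return 0;  /* r0 and r2 agree: majority */
    if (memcmp(r1, r2, len) == 0) return 1;  /* r1 and r2 agree: majority */
    return -1;                               /* total disagreement */
}
```

In the FDIV example above, the old Pentium's wrong quotient would simply be outvoted by the two machines that agree.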
embryo2 wrote:
Brendan wrote:and "live update" (where a service is upgraded to a new version without stopping/restarting applications that are using it)
Why not simply replace addresses in the running code?
Because it's over-complicated and excessively risky (higher chance of corrupting something) and doesn't work at all when something has been changed significantly; and because (once you've got the ability to restart a crashed service without the application knowing) it's much, much easier to just restart a service that's being upgraded (without the application knowing).
embryo2 wrote:
Brendan wrote:I can have the service running on a different computer to the application (to spread load around the network).
Why wouldn't a simple factory pattern, which returns a suitable service implementation stub, work? It can provide a local or remote service stub in a traditional application without the need for separate processes.
For my case; communication is asynchronous - e.g. you can send 10 different requests to 10 different services; then do other things for a while; then get the 10 replies in "random" order. You could have a "local or remote service stub", but it'd end up being similar to what I'm doing anyway.
embryo2 wrote:
Brendan wrote:It also means that a malicious service only has access to what it's given, and doesn't have full access to all data for every application that uses it.
Runtime checks also enforce such a constraint. So why is the process-based solution better than the traditional one?
What runtime checks?
embryo2 wrote:
Brendan wrote:Fortunately I don't really need high quality for this - e.g. if the initial implementation of the compiler is lame and doesn't optimise much (but the generated code is correct), then that's fine
Well, that contradicts your perfectionist claims from previous messages. But despite the contradiction, it really can help to create something usable, because the development becomes manageable (while perfect software is completely unmanageable, because it is simply impossible).
I'm not too sure which previous messages you're referring to. The initial implementation of anything is almost always "less good" than later versions.

Please note that part of the problem I'm going to have is that I want a compiler for a new language that is written in that new language. This means that I have no choice but to write the compiler twice - once in some other language, then a second time in the new language. That first implementation will only be used to create the second implementation, and then it will be discarded/replaced by the second implementation. It would be silly to waste years making the first implementation as perfect as possible when it's destined to be thrown away. It makes perfect sense to do the least work possible on the first implementation (and worry about things like optimising well in the second implementation).
embryo2 wrote:
Brendan wrote:I don't need to convince people to participate. Instead; I need to create an OS that's so interesting and so promising that people see it and want to participate.
Most often such an approach just doesn't work, but maybe your case will be a special one.
If my OS isn't interesting/promising enough; then I'll just keep trying until it is interesting/promising enough. There are only 2 possible outcomes - either I will succeed before I die, or I will die before I succeed.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan wrote:
embryo2 wrote:There was the discussion about managed code where you suggested a smart compiler that imposes some limits on the programmer and prevents problems in more than 99% of cases. You also proposed install-time compilation, which makes it possible to insert safety checks and throw exceptions if something operates unsafely. So why do you see the need for even more safety?
Because it's not enough to detect that something has gone wrong; for high availability you also want to recover and continue normal operation after something goes wrong. For normal libraries, if one crashes, neither the OS nor the application can know how much was corrupted before or during the crash (even with "managed"), so you're screwed and can't recover. Also note that I'm thinking of both software failures and hardware failures here (where "hardware failures" includes user error - e.g. unplugging the wrong network cable).
What do you mean by the word "recover"? If it was a hardware problem, then most probably there is no way to recover. If it was a software problem in an application, then it will persist until the application is fixed. But a managed environment can throw an exception and, in the case of something like an event or request handler, the problem can be contained within some small part of the application. If it was a system software problem, then again, it most probably cannot be recovered from (unless we call the reboot process "recovery"). All temporary hardware problems fall into one of the categories above. If the problem happens during managed application execution and is contained within the application, then the exception is enough to stop it from spreading any further. Otherwise the problem simply cannot be contained.
Brendan wrote:multiple copies of a process can give different results for many reasons (including dodgy software with things like race conditions, temporary hardware problems like transient RAM errors, dodgy hardware including CPU errata, etc). For a simple example; imagine 3 processes that do "return x/3.14159265359;" where one process is running on a modern 80x86 machine, one is running on an old Pentium (which has a well documented floating point division bug), and one is running on an ARM system where there's no built-in support for floating point and it has to be emulated in software.
If somebody writes code with race conditions and expects it to deliver identical results every time, then I doubt your approach can help, because such a programmer can always manage to introduce bugs into his code despite your efforts.

And if the problem was in hardware, then we are back again to the situation where redundancy was used to improve system reliability. But in your case the improvement is taken to an extreme. Contemporary hardware has very good reliability, but you still want more. It's OK to have a lot of reserve, but the cost/efficiency ratio is too high in your case. Or are you just thinking of the seldom situations where the redundancy is cost-effective? But then why not contain the related complexity within a driver, for example? And if you are thinking about some distributed environment, then I see a lot of work that was done many years ago for all those web services and cloud computing. Why would your services be better than the means that big corporations throw billions at?
Brendan wrote:
embryo2 wrote:Why not simply replace addresses in the running code?
Because it's over-complicated and excessively risky (higher chance of corrupting something) and doesn't work at all when something has been changed significantly; and because (once you've got the ability to restart a crashed service without the application knowing) it's much, much easier to just restart a service that's being upgraded (without the application knowing).
The complexity of the address replacement is acceptable. And the risk here is not much greater than in any other complex software. Next, what problem do you see if "something changed significantly"? There is always a call that starts some processing of whatever complexity, so we can change the address of the call without any reliance on the significance of the changes.

And why do you want to restart a service? Here I mean the well-known library approach - why do we need to restart a library? And in the case of something bigger, like web services, there are already a lot of means to improve the reliability of such services. So why is your approach better than the already working solutions?
Brendan wrote:For my case; communication is asynchronous - e.g. you can send 10 different requests to 10 different services; then do other things for a while; then get the 10 replies in "random" order. You could have a "local or remote service stub", but it'd end up being similar to what I'm doing anyway.
In the case of a local (in-process) implementation, it differs from your "every library is a different process".
Brendan wrote:
embryo2 wrote:
Brendan wrote:It also means that a malicious service only has access to what it's given, and doesn't have full access to all data for every application that uses it.
Runtime checks also enforce such a constraint. So why is the process-based solution better than the traditional one?
What runtime checks?
Array bounds, memory and IO port access - such checks prevent badly behaved software from touching anything outside of a predefined box.
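The kind of runtime check meant here, written out by hand (a managed runtime would insert the equivalent automatically and throw an exception instead of aborting):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of a compiler-inserted bounds check: refuse to touch memory
 * outside the array instead of silently reading past the end. */
int checked_read(const int *arr, size_t len, size_t idx)
{
    if (idx >= len) {                /* the check the runtime inserts */
        fprintf(stderr, "index %zu out of bounds (len %zu)\n", idx, len);
        abort();                     /* a VM would throw an exception here */
    }
    return arr[idx];
}
```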
Brendan wrote:I'm not too sure which previous messages you're referring to. The initial implementation of anything is almost always "less good" than later versions.
OK, I see that you meant some ongoing process, but described just the first part of it.
Brendan wrote:If my OS isn't interesting/promising enough; then I'll just keep trying until it is interesting/promising enough. There are only 2 possible outcomes - either I will succeed before I die, or I will die before I succeed.
Unfortunately, really interesting things just sink in the sea of mediocrity. And heavy advertising helps float some trash to the surface instead of the interesting things.

But sometimes it is possible to emerge from the depths of the ocean. The only problem is that such events are too seldom. However, I wish you luck with your attempt.

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:
embryo2 wrote:There was the discussion about managed code where you suggested a smart compiler that imposes some limits on the programmer and prevents problems in more than 99% of cases. You also proposed install-time compilation, which makes it possible to insert safety checks and throw exceptions if something operates unsafely. So why do you see the need for even more safety?
Because it's not enough to detect that something has gone wrong; for high availability you also want to recover and continue normal operation after something goes wrong. For normal libraries, if one crashes, neither the OS nor the application can know how much was corrupted before or during the crash (even with "managed"), so you're screwed and can't recover. Also note that I'm thinking of both software failures and hardware failures here (where "hardware failures" includes user error - e.g. unplugging the wrong network cable).
What do you mean by the word "recover"?
"Recover" is where something fails and nobody notices. For example, when an application is using a service on another computer and that other computer explodes, and the OS just starts another version of the service on a third computer for the application to use and nobody using the application knows that anything happened.
embryo2 wrote:If it was a hardware problem, then most probably there is no way to recover. If it was a software problem in an application, then it will persist until the application is fixed. But a managed environment can throw an exception and, in the case of something like an event or request handler, the problem can be contained within some small part of the application.
Wrong. For hardware failures "managed" is a worthless joke. For software failures it might help in theory (in the rare cases where the software developer wrote decent exception handlers and tested them to ensure they actually work properly in all cases; which means "almost never"); however you also have to worry about bugs in the compiler and/or managed environment/runtime and these pieces are typically far more complex (and far more error prone and likely to have bugs) than the application itself (especially if performance doesn't suck); so in practice it just makes things worse and doesn't actually help at all.
embryo2 wrote:
Brendan wrote:multiple copies of a process can give different results for many reasons (including dodgy software with things like race conditions, temporary hardware problems like transient RAM errors, dodgy hardware including CPU errata, etc). For a simple example; imagine 3 processes that do "return x/3.14159265359;" where one process is running on a modern 80x86 machine, one is running on an old Pentium (which has a well documented floating point division bug), and one is running on an ARM system where there's no built-in support for floating point and it has to be emulated in software.
If somebody writes code with race conditions and expects it to deliver identical results every time, then I doubt your approach can help, because such a programmer can always manage to introduce bugs into his code despite your efforts.
I don't think you have any idea what you're talking about. The problem with race conditions (whether they're in software or hardware) is that typically everything works perfectly fine almost 100% of the time (except there's maybe one chance in a few million that the exact timing might cause something to go wrong).
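The point can be made concrete by replaying, by hand, the one unlucky interleaving of a classic lost-update race (the two "threads" are simulated sequentially here precisely so that the failure is reproducible, which it never is in real concurrent code):

```c
/* Two "threads" each perform load / increment / store on a shared
 * counter. In the unlucky interleaving below - both load before
 * either stores - one increment is lost and the counter ends up at 1
 * instead of 2. With almost any other timing the result is correct,
 * which is exactly why such bugs can hide for years. */
int lost_update_interleaving(void)
{
    int counter = 0;
    int t1_reg, t2_reg;

    t1_reg = counter;   /* thread 1 loads 0 */
    t2_reg = counter;   /* thread 2 loads 0 - the race window */
    t1_reg += 1;
    counter = t1_reg;   /* thread 1 stores 1 */
    t2_reg += 1;
    counter = t2_reg;   /* thread 2 stores 1, silently discarding thread 1's work */

    return counter;     /* 1, not the expected 2 */
}
```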

Now think of larger companies. For example, here where I live there's a smelter. If they shut down for 1 day it costs the company several million dollars. If there's a race condition in their software that causes a problem (on average) once per year; do you think they want to shut down for 1 month while someone tries to find and fix the bug; or do you think they'd prefer an OS that supports triple redundancy, where the problem will be detected if/when it happens, the dodgy results will be discarded without affecting anything, and the system will keep running reliably while the bug is being found and fixed?
embryo2 wrote:And if the problem was in hardware, then we are back again to the situation where redundancy was used to improve system reliability. But in your case the improvement is taken to an extreme. Contemporary hardware has very good reliability, but you still want more. It's OK to have a lot of reserve, but the cost/efficiency ratio is too high in your case. Or are you just thinking of the seldom situations where the redundancy is cost-effective? But then why not contain the related complexity within a driver, for example? And if you are thinking about some distributed environment, then I see a lot of work that was done many years ago for all those web services and cloud computing. Why would your services be better than the means that big corporations throw billions at?
Erm? Do you think big corporations throw billions at fault toleration/redundant systems because they have nothing better to do with their money?

My services won't be "better" than systems that use the exact same techniques. However, they will be better than systems that don't (Windows, Linux) for cases where very high reliability is needed.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:Why not simply replace addresses in the running code?
Because it's over-complicated and excessively risky (higher chance of corrupting something) and doesn't work at all when something has been changed significantly; and because (once you've got the ability to restart a crashed service without the application knowing) it's much, much easier to just restart a service that's being upgraded (without the application knowing).
The complexity of the address replacement is acceptable. And the risk here is not much greater than in any other complex software. Next, what problem do you see if "something changed significantly"? There is always a call that starts some processing of whatever complexity, so we can change the address of the call without any reliance on the significance of the changes.
Are you serious? Imagine changing a data structure from "linked list of things" to "balanced tree of things". Do you honestly think you can just modify the address of one function (while that function is being executed) and the data will be magically converted?
embryo2 wrote:And why do you want to restart a service? Here I mean the well-known library approach - why do we need to restart a library? And in the case of something bigger, like web services, there are already a lot of means to improve the reliability of such services. So why is your approach better than the already working solutions?
You want to restart a service if it crashed (either due to software or hardware problem), or because you want to upgrade its software, or because you want to upgrade the hardware it's running on.

There are already ways to improve the reliability of services; but they're all mostly doing the same thing as me. My approach is not better than the already working solutions that do the same thing to improve reliability; and is more reliable than solutions that don't.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:Runtime checks also enforce such a constraint. So why is the process-based solution better than the traditional one?
What runtime checks?
Array bounds, memory and IO port access - such checks prevent badly behaved software from touching anything outside of a predefined box.
These checks don't prevent an "in process library" from touching things inside the predefined box.
embryo2 wrote:
Brendan wrote:If my OS isn't interesting/promising enough; then I'll just keep trying until it is interesting/promising enough. There are only 2 possible outcomes - either I will succeed before I die, or I will die before I succeed.
Unfortunately, really interesting things just sink in the sea of mediocrity. And heavy advertising helps float some trash to the surface instead of the interesting things.

But sometimes it is possible to emerge from the depths of the ocean. The only problem is that such events are too seldom. However, I wish you luck with your attempt.
You're right - this is why "slightly better" isn't enough, and the OS has to be significantly better in multiple ways.


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan wrote:"Recover" is where something fails and nobody notices.
Nice definition, we can use it later.
Brendan wrote:
embryo2 wrote:If it was a hardware problem, then most probably there is no way to recover. If it was a software problem in an application, then it will persist until the application is fixed. But a managed environment can throw an exception and, in the case of something like an event or request handler, the problem can be contained within some small part of the application.
Wrong. For hardware failures "managed" is a worthless joke.
It can make things like problem monitoring easier to implement. If we have high-level code (or bytecode), then it is possible to insert some checks, specific to the particular hardware, when the code is being compiled.
Brendan wrote:For software failures it might help in theory (in the rare cases where the software developer wrote decent exception handlers and tested them to ensure they actually work properly in all cases; which means "almost never");
It helps in practice. If a handler has a bug, then its execution is aborted with an exception and its thread just stops running (if the VM was designed by competent architects), but all other threads still work and the application loses just some small part of its functionality.
Brendan wrote:you also have to worry about bugs in the compiler and/or managed environment/runtime and these pieces are typically far more complex (and far more error prone and likely to have bugs) than the application itself (especially if performance doesn't suck)
Yes, but the VM's bugs affect millions of programs, so they can be detected very quickly. Then, despite the bug's complexity, the required fix will be made in a short time.
Brendan wrote:The problem with race conditions (whether they're in software or hardware) is that typically everything works perfectly fine almost 100% of the time (except there's maybe one chance in a few million that the exact timing might cause something to go wrong).
And if you know about the problem, would you expect a program with a race condition to produce the same result every time? I think not. So first we should ensure there is no race condition, and only then can we compare the outputs of a program from different computers.
Brendan wrote:Now think of larger companies. For example, here where I live there's a smelter. If they shut down for 1 day it costs the company several million dollars. If there's a race condition in their software that causes a problem (on average) once per year; do you think they want to shut down for 1 month while someone tries to find and fix the bug; or do you think they'd prefer an OS that supports triple redundancy, where the problem will be detected if/when it happens, the dodgy results will be discarded without affecting anything, and the system will keep running reliably while the bug is being found and fixed?
Yes, there is a need for reliability. But for most users your redundancy will be too expensive. Only if you make a system for something like a smelter is the redundancy viable. So your quest for a better OS can end with something useless for the majority of OS users, while you still have to spend a lot of time developing such useless functionality.
Brendan wrote:My services won't be "better" than systems that use the exact same techniques. However, they will be better than systems that don't (Windows, Linux) for cases where very high reliability is needed.
OK, that's a good clarification. But for the majority of users fault tolerance is the least important problem. So, for the OS to be better than others, it is better to concentrate on something more useful for the majority of users.
Brendan wrote:Imagine changing a data structure from "linked list of things" to "balanced tree of things". Do you honestly think you can just modify the address of one function (while that function is being executed) and the data will be magically converted?
Just let the running function finish and prevent new calls from reaching the old function (by changing its address). It's simple.
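This scheme can be sketched with an atomic function pointer plus an in-flight counter (the names and the two versions are hypothetical). Note what the sketch does and does not do: it swaps the code path, but it does nothing to migrate any data the old version left behind, which is the objection raised in the reply.

```c
#include <stdatomic.h>

static int v1(int x) { return x + 1; }   /* old implementation */
static int v2(int x) { return x + 2; }   /* upgraded implementation */

static _Atomic(int (*)(int)) entry = v1;
static atomic_int in_flight = 0;

/* Every call goes through the atomic pointer, so new calls pick up
 * whichever version is current. */
int call_service(int x)
{
    atomic_fetch_add(&in_flight, 1);
    int r = atomic_load(&entry)(x);
    atomic_fetch_sub(&in_flight, 1);
    return r;
}

/* Redirect new calls, then wait for calls already running in the old
 * version to drain ("let the running function finish"). */
void upgrade(void)
{
    atomic_store(&entry, v2);
    while (atomic_load(&in_flight) > 0)
        ;   /* spin until the old version has quiesced */
}
```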
Brendan wrote:
embryo2 wrote:Array bounds, memory and IO port access - such checks prevent badly behaved software from touching anything outside of a predefined box.
These checks don't prevent an "in process library" from touching things inside the predefined box.
But what if the box excludes everything important? If we use a VM, then what can a library access besides a set of selected classes (or functions)?

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:
embryo2 wrote:If it was a hardware problem, then most probably there is no way to recover. If it was a software problem in an application, then it will persist until the application is fixed. But a managed environment can throw an exception and, in the case of something like an event or request handler, the problem can be contained within some small part of the application.
Wrong. For hardware failures "managed" is a worthless joke.
It can make things like problem monitoring easier to implement. If we have high-level code (or bytecode), then it is possible to insert some checks, specific to the particular hardware, when the code is being compiled.
Sure, every time you read a variable you'd check "somehow" that it's the same value that was last stored there; every time you add 2 numbers you follow that with a check (e.g. "c = a + b; if( c - a != b ) ..."); every time you multiply you follow it with a check (e.g. "c = a * b; if( c / a != b ) ..."); etc. Of course it's going to be much, much slower, you won't be able to use 2 or more CPUs to spread the load, and it's still going to fail when (e.g.) the code itself is corrupted, or the CPU fails, or someone unplugs the wrong power or network cable.
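The inline snippets above, expanded into runnable form - with the caveat that for floating point, ordinary rounding can also trip the inverse check (e.g. adding 1.0 to 1e16), so a real implementation would need a tolerance:

```c
#include <stdbool.h>

/* Self-checking arithmetic in the style sketched above: verify each
 * result by inverting the operation, so a transient fault in the
 * ALU/FPU is caught immediately. Returns false when the check fails,
 * leaving the caller to retry (ideally on different hardware). */
bool checked_add(double a, double b, double *out)
{
    double c = a + b;
    if (c - a != b)               /* inverse check: does subtraction agree? */
        return false;
    *out = c;
    return true;
}

bool checked_mul(double a, double b, double *out)
{
    double c = a * b;
    if (a != 0.0 && c / a != b)   /* inverse check via division */
        return false;
    *out = c;
    return true;
}
```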
embryo2 wrote:
Brendan wrote:For software failures it might help in theory (in the rare cases where the software developer wrote decent exception handlers and tested them to ensure they actually work properly in all cases; which means "almost never");
It helps in practice. If a handler has a bug, then its execution is aborted with an exception and its thread just stops running (if the VM was designed by competent architects), but all other threads still work and the application loses just some small part of its functionality.
That's extremely naive at best. In practice those threads will crash at "unfortunate" times (e.g. in the middle of modifying data while holding several locks) and you'll be screwed.
embryo2 wrote:
Brendan wrote:you also have to worry about bugs in the compiler and/or managed environment/runtime and these pieces are typically far more complex (and far more error prone and likely to have bugs) than the application itself (especially if performance doesn't suck)
Yes, but the VM's bugs affect millions of programs, so they can be detected very quickly. Then, despite the bug's complexity, the required fix will be made in a short time.
More like the opposite - every time they add new features to the VM they introduce more bugs that affect millions of programs.
embryo2 wrote:
Brendan wrote:The problem with race conditions (whether they're in software or hardware) is that typically everything works perfectly fine almost 100% of the time (except there's maybe one chance in a few million that the exact timing might cause something to go wrong).
And if you know about the problem, would you expect a program with a race condition to produce the same result every time? I think not. So first we should ensure there is no race condition, and only then can we compare the outputs of a program from different computers.
First we ensure there are no race conditions "somehow" (with magic or prayer?); then we compare the outputs of a program from different computers to both detect and avoid problems caused by race conditions (which makes it easy to detect and correct those race conditions that we "ensured" couldn't happen "somehow")?
embryo2 wrote:
Brendan wrote:Now think of larger companies. For example, here where I live there's a smelter. If they shut down for 1 day it costs the company several million dollars. If there's a race condition in their software that causes a problem (on average) once per year; do you think they want to shut down for 1 month while someone tries to find and fix the bug; or do you think they'd prefer an OS that supports triple redundancy, where the problem will be detected if/when it happens, the dodgy results will be discarded without affecting anything, and the system will keep running reliably while the bug is being found and fixed?
Yes, there is a need for reliability. But for most users your redundancy will be too expensive. Only if you build a system for something like a smelter is the redundancy viable. So, your quest for a better OS can end with something useless for the majority of OS users, while you still have to spend a lot of time developing that useless functionality.
Yes, for some/most things (e.g. where the additional reliability isn't needed) the redundancy won't be used. However, for other things it will be, and it's better to give users the ability to choose whether to use it or not (instead of simply assuming all the software that will ever be run on the OS will always be "not important enough").
embryo2 wrote:
Brendan wrote:My services won't be "better" than systems that use the exact same techniques. However, they will be better than systems that don't (Windows, Linux) for cases where very high reliability is needed.
Ok, it's a good clarification. But for the majority of users fault tolerance is the least important problem. So, for the OS to be better than others, it would be better to concentrate on something more useful for the majority of users.
Fault tolerance plus other advantages is better than those other advantages alone.

Also note that I'm planning a distributed system. The chance of one computer failing might be "acceptably low"; but when you have 100 computers working together the chance that one of those computers will fail is 100 times higher than "acceptably low".
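(My illustration, not Brendan's figures.) The "100 times higher" claim is a good approximation whenever the per-machine failure probability is small, since the chance that at least one of n machines fails is 1 - (1 - p)^n ≈ n·p. A quick check with an assumed failure rate:

```python
# Assumed per-machine daily failure probability (made-up number).
p = 0.0001
n = 100

# Chance that at least one of the n machines fails on a given day.
p_any = 1 - (1 - p) ** n

# For small p this is very close to n * p, i.e. "100 times higher".
print(round(p_any / p, 1))   # 99.5
```

So a failure rate that's negligible per machine becomes a routine event across a large cluster, which is exactly why a distributed system has to treat failure as normal rather than exceptional.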
embryo2 wrote:
Brendan wrote:Imagine changing a data structure from "linked list of things" to "balanced tree of things". Do you honestly think you can just modify the address of one function (while that function is being executed) and the data will be magically converted?
Just let the running function finish and prevent new calls from reaching the old function (by changing its address). It's simple.
So now you need some sort of synchronisation point at the start and end of every function, plus some way to determine which functions use which data structures; and then you're still completely screwed if the function has some sort of main loop and you never leave that function (until/unless you exit the process). It's "simple" (like winning the national lottery is simple - you just buy a ticket)!
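The "main loop" objection can be made concrete with a small sketch (Python standing in for a native jump-table swap; all names are made up):

```python
# Dispatch table playing the role of "the function's address".
dispatch = {"process": lambda x: x + 1}   # version 1

def main_loop(items):
    """A long-running server loop: it looks the function up once at entry
    and then never returns to a point where a swap could take effect."""
    fn = dispatch["process"]    # bound before any hot-swap happens
    results = []
    for item in items:
        # ...imagine blocking here for the next request, forever...
        results.append(fn(item))
    return results

# Simulate a swap occurring while the loop is already running: the loop
# captured version 1 at entry, so updating the table changes nothing for it.
fn_in_flight = dispatch["process"]
dispatch["process"] = lambda x: x + 100   # version 2, "hot-swapped"

print(fn_in_flight(1))           # 2: in-flight code still runs version 1
print(dispatch["process"](1))    # 101: only brand-new calls see version 2
```

And note what the sketch deliberately leaves out: even when the swap *does* take effect for new calls, nothing here migrates the existing "linked list" data into the "balanced tree" layout that version 2 expects.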
embryo2 wrote:
Brendan wrote:
embryo2 wrote:Array bounds, memory and IO port access - such checks prevent a bad behaving software from touching anything outside of a predefined box.
These checks don't prevent an "in process library" from touching things inside the predefined box.
But what if the box excludes everything important? If we use a VM, then what can a library access besides a set of selected classes (or functions)?
If the box excludes everything unnecessary (e.g. one "box"/virtual address space for the application and a separate "box"/virtual address space for the library); and if there's a way to transfer information between "boxes" (e.g. messages) then it'd work, because it's what I'm doing. The only difference is that you're using bloated/inefficient VMs to create the boxes; which creates a whole new "VM can touch everything" security problem that's just as bad as the "library can touch everything" problem that you were trying to solve. Of course now I guess you'll just run the VM inside a VM, and put that in a VM within a VM. Yay!
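The "separate address spaces plus messages" approach is easy to demonstrate on any existing OS (an illustrative sketch, not Brendan's actual design; Python's `subprocess` stands in for the message transport):

```python
import json
import subprocess
import sys

# The "library" is an ordinary program in its own address space; the
# only channel between the two boxes is the message it reads on stdin
# and the message it writes on stdout.
LIBRARY_SOURCE = """
import json, sys
request = json.loads(sys.stdin.read())        # message in
response = {"result": request["text"].upper()}
print(json.dumps(response))                   # message out
"""

secret = "application private data"   # exists only in the caller's box

proc = subprocess.run(
    [sys.executable, "-c", LIBRARY_SOURCE],
    input=json.dumps({"text": "hello"}),
    capture_output=True, text=True, check=True,
)
reply = json.loads(proc.stdout)["result"]
print(reply)   # HELLO
# 'secret' was never reachable from the library's box at all: no bounds
# check or bytecode verifier needed, just hardware address spaces.
```

The isolation here comes from the MMU, not from trusting a VM to police every memory access, which is the whole point of the comparison above.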


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.