which do you think is better user experience?

OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Re: which do you think is better user experience?

Post by OSwhatever »

Kazinsal wrote:I think the best user experience when it comes to command-line arguments is having proactive autocompletion and not having any dashes, in an effort to make your command line flow better. Compare this:

Code:

ping -W5 -Ienp1s0 -s512 -c12 172.168.24.1
to this:

Code:

ping 172.168.24.1 timeout 5 interface ethernet1/0 size 512 repeat 12
Which one is immediately parsable? Which one do you need to pull out the manual to figure out the switches for?

The problem with the command line isn't its existence. The problem is the trend of making it intentionally as terse as possible with magic switches that are about as self-descriptive as a strained can of spaghetti-os and that are hopelessly inconsistent even between programs from the same authors and historical origin (-n, -c, and -s on unixlikes are particularly bad about this because between programs you can never guarantee what each one does).

Code:

ping 172.168.24.1 timeout 5 interface ethernet1/0 size 512 repeat 12
This is absolutely horrible.

If I want to see what the switches do, I should be able to just type "command -h" or something. Nothing wrong with word options, but not having dashes makes it unclear what's an option and what is not.
dozniak
Member
Posts: 723
Joined: Thu Jul 12, 2012 7:29 am
Location: Tallinn, Estonia

Re: which do you think is better user experience?

Post by dozniak »

OSwhatever wrote:
Kazinsal wrote:proactive autocompletion

Code:

ping 172.168.24.1 timeout 5 interface ethernet1/0 size 512 repeat 12
This is absolutely horrible.
Do you see now why this is a non-constructive statement? It is absolutely much better than the default ping dash mess.
OSwhatever wrote:If I want to see what the switches do, I should be able to just type "command -h" or something. Nothing wrong with word options, but not having dashes makes it unclear what's an option and what is not.
Right. And for the example provided here you do NOT need any "-h" at all. Proactive autocompletion almost completely replaces the "-h" horror. Especially if it provides a description of the parameter and expected values as well as the parameter name itself. Docs as you type.
Learn to read.
OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Re: which do you think is better user experience?

Post by OSwhatever »

dozniak wrote:Right. And for the example provided here you do NOT need any "-h" at all. Proactive autocompletion almost completely replaces the "-h" horror. Especially if it provides a description of the parameter and expected values as well as the parameter name itself. Docs as you type.
I'm not convinced. There is no real reason that you can't implement auto completion with the dashes as well. You can even have auto completion with "-option=path" if you program for it.
Sik
Member
Posts: 251
Joined: Wed Aug 17, 2016 4:55 am

Re: which do you think is better user experience?

Post by Sik »

dozniak wrote:Right. And for the example provided here you do NOT need any "-h" at all. Proactive autocompletion almost completely replaces the "-h" horror. Especially if it provides a description of the parameter and expected values as well as the parameter name itself. Docs as you type.
I think that the complaint was more that there's nothing obvious to distinguish at a glance the arguments and their values (especially important when skimming). You'd be resorting to syntax highlighting to work around this, which means somehow knowing exactly which arguments every single program can take. Oh, and this is going to be fun with text editors (don't forget editing shell scripts!) since they'll need access to that knowledge somehow (and unlike programming languages, there isn't a fixed set of rules they can just be bundled with).

Yeah, this can end up with a seriously complex setup which may end up causing more harm than benefits in the long term, at least if you intend to stick to passing arguments as a raw line of text instead of some other form of data structure. It's probably still worth looking into other alternatives (maybe something better comes up).
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: which do you think is better user experience?

Post by Korona »

This series of posts and the original question diverged quite a bit (maybe split it into another thread?) but I think it's worthwhile to talk about that.

First a note on the "Linux loads fs modules in order to be able to probe file systems": That is completely wrong. The reason that some distributions include many fs modules in their default kernels is that they want their boot process to "just work" if a user switches to a different file system. They don't want to build an individual initrd for every single configuration.

Loading kernel modules in a modern Linux system is not the kernel's responsibility; it is done by the user space udev daemon. It would be easy to write a fs recognition udev rule in user space and load the required modules during run time.
Brendan wrote:Note that for GPT, to auto-detect the correct type of file system you only need to look at the 16-byte "partition type GUID" in the partition table entry.
That is not entirely true: GPT's partition type GUID tells you the intended use (aka "this is a root partition" or "this is a generic data partition") but not the file system type (aka "this is ext2" or "this is ntfs"). You still need the "probe superblock" magic.
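For reference, that "probe superblock" step usually boils down to reading a handful of well-known on-disk offsets and comparing magic values. A minimal C sketch of the idea (read_at() is a hypothetical helper for reading raw bytes from a partition, not any real API):

Code:

/* A minimal sketch of file system probing by magic values; the offsets below
 * are the well-known ones for ext2/3/4, NTFS and FAT32. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* hypothetical: read `len` bytes at byte offset `off` of the partition */
extern int read_at(void *dev, uint64_t off, void *buf, size_t len);

const char *probe_fs(void *dev)
{
    uint8_t sb[2];
    uint8_t bs[96];

    /* ext2/3/4: 16-bit magic 0xEF53 at offset 56 of the superblock,
     * which itself starts 1024 bytes into the partition.             */
    if (read_at(dev, 1024 + 56, sb, 2) == 0 && sb[0] == 0x53 && sb[1] == 0xEF)
        return "ext2/3/4";

    /* NTFS and FAT32 both keep identifying strings in the boot sector. */
    if (read_at(dev, 0, bs, sizeof bs) == 0) {
        if (memcmp(bs + 3, "NTFS    ", 8) == 0)
            return "ntfs";
        if (memcmp(bs + 82, "FAT32   ", 8) == 0)   /* informational field only */
            return "fat32";
    }
    return NULL;   /* unknown: fall back to other heuristics */
}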
Brendan wrote:While that description could be applied to other OSs (with wildly varying degrees of applicability); Linux is unique and earns that description more than any other OS for multiple reasons; where the largest reason is their inability to effectively coordinate disparate groups of developers (the "herding headless chickens" problem).
I basically agree with that statement but I don't think that this is necessarily a bad thing. Linux generally prefers working implementations over abstract design processes. This often leads to sub-optimal designs and APIs that have to be revised and rewritten after limitations are hit. However it also opens up the development process: Device and CPU manufacturers can get their code into the kernel with relatively little coordination. The "herding headless chickens" development arguably led to Linux's success over other OSes like the BSDs, Solaris or Windows.
Brendan wrote:Note that I am mostly reacting to assumptions (that are false assumptions more often than not) that take the form "Linux does it like this, therefore ..." (e.g. "therefore any other way won't work", or "therefore that must be the best way", or "therefore I'm not going to bother to think for myself", etc).
Citing Linux as an optimal design choice is almost always wrong. However Linux implementations are often practical: The way Linux does something is usually not completely braindead. Looking at Linux is often a good starting point if you want to implement a new feature. And Linux has quite a few subsystems that are actually quite well thought out: For example the VFS layer is actually pretty good. Another quite remarkable algorithm that Linux introduced is the RCU subsystem's quiescent state garbage collection.

Yet for us hobby OS developers the most valuable feature of the Linux source code is its large collection of drivers, hardware quirks and errata that would otherwise not be available to the public. I lost count of how many differences between official documentation and Linux' implementation I encountered where Linux' implementation did the right thing and the docs were just plain wrong.
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
zaval
Member
Posts: 656
Joined: Fri Feb 17, 2017 4:01 pm
Location: Ukraine, Bachmut
Contact:

Re: which do you think is better user experience?

Post by zaval »

Korona wrote:That is not entirely true: GPT's partition type GUID tells you the intended use (aka "this is a root partition" or "this is a generic data partition") but not the file system type (aka "this is ext2" or "this is ntfs"). You still need the "probe superblock" magic.
This is not true at all. Nothing prevents you from defining a GUID for a file system class - NTFS, FAT32, ext3, etc. That looks like the most valuable use of that field. The spec says:
"Unique ID that defines the purpose and type of this Partition. A value of zero defines that this partition entry is not being used."
It says nothing about a "root" partition or a "generic" partition or whatever. The spec defines the (EFI) System Partition for itself this way, but then again, that automatically implies its FS type.
The truth is you might use it for "intended use" without binding it to an FS class, but you may also use it as an FS type identifier. And that would be the best usage. The spec by itself says nothing more than quoted above.
ANT - NT-like OS for x64 and arm64.
efify - UEFI for a couple of boards (mips and arm). Suspended due to the loss of all the target boards (russians destroyed our town).
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: which do you think is better user experience?

Post by Korona »

zaval wrote:This is not true at all. Nothing prevents you from defining a GUID for a file system class - NTFS, FAT32, ext3, etc. That looks like the most valuable use of that field.
Sure, you can do that: Define your own private constants for each fs and disregard what every other OS does because you're too lazy to write that probing code. However, at that point you can just dump GPT completely and use your own partition table. What's the point in using GPT when you don't need interoperability?
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
zaval
Member
Posts: 656
Joined: Fri Feb 17, 2017 4:01 pm
Location: Ukraine, Bachmut
Contact:

Re: which do you think is better user experience?

Post by zaval »

Korona wrote:
zaval wrote:This is not true at all. Nothing prevents you from defining a GUID for a file system class - NTFS, FAT32, ext3, etc. That looks like the most valuable use of that field.
Sure, you can do that: Define your own private constants for each fs and disregard what every other OS does because you're too lazy to write that probing code. However, at that point you can just dump GPT completely and use your own partition table. What's the point in using GPT when you don't need interoperability?
Ah, "every other OS does" rule. But is that specified in the GPT spec?

It's not laziness, it's a confidence in what is the proper usage of this field.
About the "intended" use. There is the Attributes field - there are bits for the UEFI defined "intended use", and finally Bits 48-63 for the:
Reserved for GUID specific use. The use of these bits will vary depending on
the PartitionTypeGUID. Only the owner of the
PartitionTypeGUID is allowed to modify these bits. They must be
preserved if Bits 0–47 are modified.
I just am wondering - what else was missing for those "every other OS" to define their OS-specific partition usage there? Isn't this above field the very right place to put their OS-specific things? Yes of course it is! But they screwed up the PartitionTypeGUID field and now I should follow them for an imaginable "interoperability". But what if "intended use" on one OS isn't relevant for the other? Right, such an interoperability will have 0 value. The only thing it does efficiently - it f&cks up the idea of this field in GPTPE. Still using it as FS type ID would never be such an issue.
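To illustrate that suggestion in code: OS-specific "intended use" flags confined to Bits 48-63 of the Attributes field, with the UEFI-defined bits preserved. Only a sketch; the MYOS_* flag values are made up:

Code:

/* A minimal sketch of stashing OS-specific flags in the type-GUID-specific
 * range (Bits 48-63) of a GPT entry's Attributes field. */
#include <stdint.h>

#define GPT_ATTR_OS_SPECIFIC_MASK  (0xFFFFULL << 48)

/* hypothetical per-OS flags, living entirely inside Bits 48-63 */
#define MYOS_PART_ROOT  (1ULL << 48)
#define MYOS_PART_SWAP  (1ULL << 49)

uint64_t set_os_flags(uint64_t attributes, uint64_t flags)
{
    /* Bits 0-47 are defined/reserved by UEFI and must be preserved */
    return (attributes & ~GPT_ATTR_OS_SPECIFIC_MASK)
         | (flags      &  GPT_ATTR_OS_SPECIFIC_MASK);
}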
ANT - NT-like OS for x64 and arm64.
efify - UEFI for a couple of boards (mips and arm). Suspended due to the loss of all the target boards (russians destroyed our town).
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: which do you think is better user experience?

Post by Brendan »

Hi,
Korona wrote:It would be easy to write a fs recognition udev rule in user space and load the required modules during run time.
It could be easy, but how many distros actually do it (and why should they have to)?
Korona wrote:
Brendan wrote:Note that for GPT, to auto-detect the correct type of file system you only need to look at the 16-byte "partition type GUID" in the partition table entry.
That is not entirely true: GPT's partition type GUID tells you the intended use (aka "this is a root partition" or "this is a generic data partition") but not the file system type (aka "this is ext2" or "this is ntfs"). You still need the "probe superblock" magic.
The UEFI spec (that defines GPT) says: "Unique ID that defines the purpose and type of this Partition. A value of zero defines that this partition entry is not being used.". Note that "the purpose and type" does not mean "the purpose or type", and doesn't mean "the purpose (and not the type)" either.

Of course if you look at the list of partition type GUIDs on Wikipedia you'll notice that various OSs screw it up in different ways. However this does not mean that a new OS has to screw it up, and doesn't mean that it's not a useful hint even if it has been screwed up (e.g. Microsoft's "basic data partition" narrows the possibilities to either NTFS or one of the variations of FAT).
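As an illustration of "useful hint first, probe as a fallback", here is a minimal C sketch; the entry layout follows the UEFI spec, the GUID byte arrays are the on-disk (mixed-endian) encodings of two well-known type GUIDs, and probe_fs() is the hypothetical helper from the earlier sketch:

Code:

#include <stdint.h>
#include <string.h>

struct gpt_entry {               /* one standard 128-byte GPT partition entry */
    uint8_t  type_guid[16];      /* PartitionTypeGUID                         */
    uint8_t  unique_guid[16];    /* UniquePartitionGUID                       */
    uint64_t first_lba, last_lba;
    uint64_t attributes;
    uint16_t name[36];           /* UTF-16LE partition name                   */
};

/* EFI System Partition: C12A7328-F81F-11D2-BA4B-00A0C93EC93B */
static const uint8_t ESP_GUID[16] = {
    0x28,0x73,0x2A,0xC1, 0x1F,0xF8, 0xD2,0x11,
    0xBA,0x4B, 0x00,0xA0,0xC9,0x3E,0xC9,0x3B };

/* Microsoft basic data: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 */
static const uint8_t MS_BASIC_DATA[16] = {
    0xA2,0xA0,0xD0,0xEB, 0xE5,0xB9, 0x33,0x44,
    0x87,0xC0, 0x68,0xB6,0xB7,0x26,0x99,0xC7 };

extern const char *probe_fs(void *dev);   /* hypothetical fallback from above */

const char *guess_fs(const struct gpt_entry *e, void *dev)
{
    if (memcmp(e->type_guid, ESP_GUID, 16) == 0)
        return "fat";               /* the ESP is a FAT variant by definition */
    if (memcmp(e->type_guid, MS_BASIC_DATA, 16) == 0)
        return probe_fs(dev);       /* NTFS, exFAT or FAT: still must probe   */
    return probe_fs(dev);           /* unknown type GUID: just probe          */
}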
Korona wrote:
Brendan wrote:While that description could be applied to other OSs (with wildly varying degrees of applicability); Linux is unique and earns that description more than any other OS for multiple reasons; where the largest reason is their inability to effectively coordinate disparate groups of developers (the "herding headless chickens" problem).
I basically agree with that statement but I don't think that this is necessarily a bad thing. Linux generally prefers working implementations over abstract design processes. This often leads to sub-optimal designs and APIs that have to be revised and rewritten after limitations are hit. However it also opens up the development process: Device and CPU manufacturers can get their code into the kernel with relatively little coordination. The "herding headless chickens" development arguably led to Linux's success over other OSes like the BSDs, Solaris or Windows.
There are multiple factors that led to both Linux's success and its lack of success; and it's impossible to accurately determine what influence any factor had. However, given that it's just as easy for device and CPU manufacturers to get their code into multiple other kernels, I very much doubt that it's a significant factor. More likely is "popularity creates popularity" - e.g. developers are more likely to support Linux than (random e.g.) FreeBSD, because Linux is more popular than FreeBSD, because more developers support Linux than FreeBSD. This would imply that the largest factor(s) are whatever caused Linux's original popularity (which can be traced back to a combination of "GNU politics" and a "BSD vs. SCO" lawsuit casting doubt on the future of BSD; both with "extremely lucky for early Linux" timing).

Note that if you look at the most successful OSs/distros that use the Linux kernel; you'll notice that most of them involve some kind of corporate governance/structure (Google/Android, Red Hat/Red Hat, Canonical/Ubuntu) that mitigates at least some of (and in Google's case, most of) the "herding headless chickens" problem.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: which do you think is better user experience?

Post by Korona »

Brendan wrote:Of course if you look at the list of partition type GUIDs on Wikipedia you'll notice that various OSs screw it up in different ways. However this does not mean that a new OS has to screw it up, and doesn't mean that it's not a useful hint even if it has been screwed up (e.g. Microsoft's "basic data partition" narrows the possibilities to either NTFS or one of the variations of FAT).
If you want your OS to be capable of reading foreign storage media you'll have to implement probing anyways. Why bother with the GPT field and use it in a way that is different from any other OS when doing that does not have any advantage?
Brendan wrote:
Korona wrote:It would be easy to write a fs recognition udev rule in user space and load the required modules during run time.
It could be easy, but how many distros actually do it (and why should they have to)?
Linux tries not to stuff policy into the kernel; it delegates those things to distros. The location (e.g. path name) of kernel modules is seen as policy and thus handled by user space. I do think that this actually makes sense: In particular if you're writing a microkernel there is no other way to do it.
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

Re: which do you think is better user experience?

Post by SpyderTL »

OSwhatever wrote:
dozniak wrote:Right. And for the example provided here you do NOT need any "-h" at all. Proactive autocompletion almost completely replaces the "-h" horror. Especially if it provides a description of the parameter and expected values as well as the parameter name itself. Docs as you type.
I'm not convinced. There is no real reason that you can't implement auto completion with the dashes as well. You can even have auto completion with "-option=path" if you program for it.
The problem is not necessarily with the dash... it's with the "-h". It's the same problem that ultimately caused me to write my own (simple) programming language instead of using Assembly. And it comes from working backwards from one of the most important requirements for a language, or a console shell -- user assistance.

You simply can not have intellisense or autocomplete functionality with 1 or 2 letter commands/switches. You actually need longer commands, so that the user can effectively navigate through the virtual ocean of commands and options that they have available.

Most command line utilities nowadays give you two sets of switches -- a single character and a verbose option -- and allow you to choose which to use. Once you have mastered a specific tool, the one character option makes the most sense. But, initially, having switches that are actually human-readable makes more sense. Autocomplete is just an alternate approach to the same problem -- give the user human-readable commands and parameters, but allow them to only type one or two characters.

I prefer the autocomplete method, because it enforces the human-readability of verbose command names, which, in turn, allows for additional features, like intellisense, which makes navigating large numbers of commands much easier. With one character switches, you only get the benefit of fewer keystrokes.
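For what it's worth, the "two sets of switches" approach costs very little to implement on POSIX-ish systems. A minimal sketch using getopt_long(); the option names mirror the earlier ping example but are illustrative, not the real ping flags:

Code:

#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>

int main(int argc, char **argv)
{
    int timeout = 10, count = 4, size = 56, opt;
    const char *iface = NULL;

    /* each verbose option maps onto a terse single-character switch */
    static const struct option longopts[] = {
        { "timeout",   required_argument, NULL, 'W' },
        { "interface", required_argument, NULL, 'I' },
        { "size",      required_argument, NULL, 's' },
        { "count",     required_argument, NULL, 'c' },
        { NULL, 0, NULL, 0 }
    };

    while ((opt = getopt_long(argc, argv, "W:I:s:c:", longopts, NULL)) != -1) {
        switch (opt) {
        case 'W': timeout = atoi(optarg); break;
        case 'I': iface   = optarg;       break;
        case 's': size    = atoi(optarg); break;
        case 'c': count   = atoi(optarg); break;
        default:
            fprintf(stderr, "usage: myping [--timeout N] [--interface IF] "
                            "[--size N] [--count N] host\n");
            return 1;
        }
    }
    if (optind < argc)
        printf("ping %s: timeout=%d iface=%s size=%d count=%d\n",
               argv[optind], timeout, iface ? iface : "default", size, count);
    return 0;
}
Both "myping --count 3 --timeout 5 172.168.24.1" and "myping -c3 -W5 172.168.24.1" then parse to the same thing, and the long names are exactly what a completion engine would enumerate.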
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: which do you think is better user experience?

Post by Brendan »

Hi,
Korona wrote:
Brendan wrote:Of course if you look at the list of partition type GUIDs on Wikipedia you'll notice that various OSs screw it up in different ways. However this does not mean that a new OS has to screw it up, and doesn't mean that it's not a useful hint even if it has been screwed up (e.g. Microsoft's "basic data partition" narrows the possibilities to either NTFS or one of the variations of FAT).
If you want your OS to be capable of reading foreign storage media you'll have to implement probing anyways. Why bother with the GPT field and use it in a way that is different from any other OS when doing that does not have any advantage?
In my opinion; an OS should not support any other OS's file system unless either:
  • the file system has no security/permissions (e.g. FAT, ISO9660); or
  • the OS honours the other OS's security/permission system, including the other OS's user authentication (e.g. the other OS's "/etc/passwd" file)
Essentially; if a user on one OS creates a file and configures that file's permissions as "only readable by me", then a different user on a different OS should not be able to read that file.

Note that this extends to different instances of the same OS. For example, if Fred installs your OS in one partition and makes himself "root user of your OS instance 1" and Jane installs your OS in a different partition and makes herself "root user of your OS instance 2" (where the computer is configured as "dual boot" such that either instance of your OS can be booted); then Fred should not have "unauthorised by your OS instance 2" access to files that belong to "your OS instance 2", and Jane should not have "unauthorised by your OS instance 1" access to files that belong to "your OS instance 1", even though both Fred and Jane are root users of their instance.

What this means in practical terms is that (for my OS) I refuse to ever support NTFS, ext2/3/4, ReiserFS, ZFS, etc; because providing support for these file systems is either unethical or impractical to do in an ethical way; and the only file systems I need to care about are things like FAT and ISO9660 (where there isn't any assumption of security to violate) and things like NFS/CIFS (where it is practical to support in an ethical way).

The only other consideration is the potential sharing of swap partitions between OSs (which can only work if you're guaranteed that the OS that "owns" the swap partition never leaves any data in the partition when it isn't running).
Korona wrote:
Brendan wrote:
Korona wrote:It would be easy to write a fs recognition udev rule in user space and load the required modules during run time.
It could be easy, but how many distros actually do it (and why should they have to)?
Linux tries not to stuff policy into the kernel; it delegates those things to distros. The location (e.g. path name) of kernel modules is seen as policy and thus handled by user space. I do think that this actually makes sense: In particular if you're writing a microkernel there is no other way to do it.
While it's common for micro-kernels; for monolithic kernels "no policy in the kernel" is just plain broken (and the Linux kernel does contain a huge amount of "policy"). It is not a case of "no policy in the kernel"; it is purely a case of "we're too incompetent to have effective standards".


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Sik
Member
Posts: 251
Joined: Wed Aug 17, 2016 4:55 am

Re: which do you think is better user experience?

Post by Sik »

Korona wrote:If you want your OS to be capable of reading foreign storage media you'll have to implement probing anyways. Why bother with the GPT field and use it in a way that is different from any other OS when doing that does not have any advantage?
Yeah, that was my first thought: it's utterly useless as soon as the values can be just about anything. The only real advantage would be to cache already-known partitions (e.g. if you have a removable partition you use regularly, the caching would be useful), and even then nothing says the partition hasn't been converted to another filesystem at some point...

Also, really, this is optimizing the wrong thing: mounting isn't a common operation, and if you're handling local drives, chances are it'll take less time than the user takes to react. The only case where such an optimization would be relevant is where probing could easily run into seconds, which would mean either non-local drives or some kind of drive that's awful at seeking. Otherwise there's no reason not to go directly to probing.
Brendan wrote:In my opinion; an OS should not support any other OS's file system unless either:
  • the file system has no security/permissions (e.g. FAT, ISO9660); or
  • the OS honours the other OS's security/permission system, including the other OS's user authentication (e.g. the other OS's "/etc/passwd" file)
Essentially; if a user on one OS creates a file and configures that file's permissions as "only readable by me", then a different user on a different OS should not be able to read that file.

Note that this extends to different instances of the same OS. For example, if Fred installs your OS in one partition and makes himself "root user of your OS instance 1" and Jane installs your OS in a different partition and makes herself "root user of your OS instance 2" (where the computer is configured as "dual boot" such that either instance of your OS can be booted); then Fred should not have "unauthorised by your OS instance 2" access to files that belong to "your OS instance 2", and Jane should not have "unauthorised by your OS instance 1" access to files that belong to "your OS instance 1", even though both Fred and Jane are root users of their instance.

What this means in practical terms is that (for my OS) I refuse to ever support NTFS, ext2/3/4, ReiserFS, ZFS, etc; because providing support for these file systems is either unethical or impractical to do in an ethical way; and the only file systems I need to care about are things like FAT and ISO9660 (where there isn't any assumption of security to violate) and things like NFS/CIFS (where it is practical to support in an ethical way).
How are you going to handle archive formats that do keep track of file owners and permissions? Because they're going to be normal files that normal programs can handle. The only workaround for that is to prevent programs from accessing files at all and to enforce all file parsing at the OS level. I recall you wanted to force conversion of existing image formats to your own, so I'm guessing you're likely to consider that route.

...that said, I really wish user IDs weren't just 16-bit numbers (often assigned from 1000 onwards). It makes it a pain when moving files around between different systems, since what's your file on your system could be somebody else's on another system. (Bonus points if both have the same users, just in a different order - enjoy the confusion.)
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: which do you think is better user experience?

Post by Brendan »

Hi,
Sik wrote:
Brendan wrote:In my opinion; an OS should not support any other OS's file system unless either:
  • the file system has no security/permissions (e.g. FAT, ISO9660); or
  • the OS honours the other OS's security/permission system, including the other OS's user authentication (e.g. the other OS's "/etc/passwd" file)
Essentially; if a user on one OS creates a file and configures that file's permissions as "only readable by me", then a different user on a different OS should not be able to read that file.

Note that this extends to different instances of the same OS. For example, if Fred installs your OS in one partition and makes himself "root user of your OS instance 1" and Jane installs your OS in a different partition and makes herself "root user of your OS instance 2" (where the computer is configured as "dual boot" such that either instance of your OS can be booted); then Fred should not have "unauthorised by your OS instance 2" access to files that belong to "your OS instance 2", and Jane should not have "unauthorised by your OS instance 1" access to files that belong to "your OS instance 1", even though both Fred and Jane are root users of their instance.

What this means in practical terms is that (for my OS) I refuse to ever support NTFS, ext2/3/4, ReiserFS, ZFS, etc; because providing support for these file systems is either unethical or impractical to do in an ethical way; and the only file systems I need to care about are things like FAT and ISO9660 (where there isn't any assumption of security to violate) and things like NFS/CIFS (where it is practical to support in an ethical way).
How are you going to handle archive formats that do keep track of user owners and permissions? Because they're going to be normal files that normal programs can handle. Only workaround to that is to prevent programs from accessing files at all and enforcing all file parsing at the OS level. I recall you wanted to force conversion of existing image formats to your own, so I'm guessing you're likey to consider that route.
There's no fundamental difference between:
  • archive files (e.g. zip, tar)
  • files containing a file system image (e.g. "/home/me/myCD.iso")
  • files containing disk images (e.g. "/home/me/bochs_partitioned_hard_disk.img")
  • partitioning schemes (e.g. the file "/dev/sda" that contains the files "/dev/sda1" and "/dev/sda2")
All of these can be treated as file systems (including mounting /dev/sda with the "GPT partitions file system", and including mounting a "myStuff.zip" file with the "PKZIP file system").

Everything I wrote previously (about permissions and ethical use of foreign file systems) applies to all of these cases.
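To make the "no fundamental difference" point concrete, here is a minimal sketch of a single driver interface that archives, images and partition tables could all implement; the names (fs_driver, blockdev, vnode) are illustrative, not any particular kernel's API:

Code:

#include <stddef.h>

struct blockdev;     /* anything that can serve raw reads: a disk, a
                        partition, or an ordinary file                 */
struct vnode;        /* a node in the mounted tree exposed to the VFS  */

struct fs_driver {
    const char *name;
    int  (*probe)(struct blockdev *dev);          /* recognise the data?  */
    struct vnode *(*mount)(struct blockdev *dev); /* build the root vnode */
};

/* The same table can then hold "real" file systems and container formats:
 *   { "fat",     fat_probe, fat_mount },
 *   { "iso9660", iso_probe, iso_mount },
 *   { "gpt",     gpt_probe, gpt_mount },   exposes sda1, sda2, ... as files
 *   { "zip",     zip_probe, zip_mount },   mounts myStuff.zip as a directory
 * and mounting is just "try each probe, then call the winner's mount()".   */
struct vnode *try_mount(struct fs_driver **drivers, size_t n, struct blockdev *dev)
{
    for (size_t i = 0; i < n; i++)
        if (drivers[i]->probe(dev))
            return drivers[i]->mount(dev);
    return NULL;
}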
Sik wrote:...that said, I really wish user IDs weren't just 16-bit numbers (often assigned from 1000 onwards). It makes it a pain when moving files around between different systems, since what's your file on your system could be somebody else's on another system. (Bonus points if both have the same users, just in a different order - enjoy the confusion.)
If you want to exchange files between different OSs (even just different instances of the same OS, and not just OSs with radically incompatible file system permissions), then you should strip/discard the original permissions (regardless of whether either OS forces you to do this or whether everything gets broken when you don't).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Sik
Member
Posts: 251
Joined: Wed Aug 17, 2016 4:55 am

Re: which do you think is better user experience?

Post by Sik »

Brendan wrote:There's no fundamental difference between:
  • archive files (e.g. zip, tar)
  • files containing a file system image (e.g. "/home/me/myCD.iso")
  • files containing disk images (e.g. "/home/me/bochs_partitioned_hard_disk.img")
  • partitioning schemes (e.g. the file "/dev/sda" that contains the files "/dev/sda1" and "/dev/sda2")
All of these can be treated as file systems (including mounting /dev/sda with the "GPT partitions file system", and including mounting a "myStuff.zip" file with the "PKZIP file system").

Everything I wrote previously (about permissions and ethical use of foreign file systems) applies to all of these cases.
The main difference is that archive files are normally extracted using an archiver instead of being mounted as a filesystem (even though the latter is technically feasible). It's more a difference of what software is normally used to handle each one.

Of course if you consider that's beyond the scope of the OS (since it's the fault of the applications) then that's a moot point.