True cross-platform development

Programming, for all ages and all languages.
nexos
Member
Posts: 1078
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: True cross-platform development

Post by nexos »

IMO, "true" cross platform development would be binary compatibility. This would require a common executable format, and then a runtime which would decide with OS was being ran on at runtime, and then it would wrap over its API to provide a common API. Such an undertaking would be long, hard, and wouldn't gain a huge amount either. That's why we have JIT languages.
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
vvaltchev
Member
Posts: 274
Joined: Fri May 11, 2018 6:51 am

Re: True cross-platform development

Post by vvaltchev »

nexos wrote:IMO, "true" cross-platform development would be binary compatibility. This would require a common executable format, and then a runtime which would decide which OS it was being run on at runtime, and then wrap over its API to provide a common API. Such an undertaking would be long, hard, and wouldn't gain a huge amount either. That's why we have JIT languages.
Yeah, totally agree. That's why for natively compiled languages we typically just have portable source that uses a given stable, portable API (e.g. libstdc++, Boost, Qt, etc.). When the source is compiled, it is linked with the right libraries, pre-compiled for each platform. That's simple, efficient, and reasonably convenient to do. Different binaries, but they behave the same by doing different things under the hood at runtime.
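
In its most basic form that model is just conditional compilation plus per-platform linking; a trivial illustration:

Code: Select all

/* One portable source file; built and linked separately per
 * platform against that platform's pre-compiled libraries. */
#include <stdio.h>

#ifdef _WIN32
#  define PLATFORM "Windows"
#else
#  define PLATFORM "POSIX"
#endif

int main(void)
{
    /* Same source everywhere; the per-platform work happens in
     * the toolchain and the libraries, not at runtime. */
    printf("Hello from %s\n", PLATFORM);
    return 0;
}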

What the OP intended is, clearly, something different, as you said. A single binary that will run everywhere, and that's pretty hard to achieve without some sort of runtime translation from one API to another. For example, originally Microsoft implemented WSL1 directly in their kernel so we could natively run Linux apps on it. But there were two problems with that:

1. The substantial overhead inside the kernel to make the Linux syscalls work (see the sketch after this list). Windows has a very different architecture from Linux, and incurring some overhead is inevitable while trying to behave like Linux. Even though the "emulation" was (and still is) inside the NT kernel, some things like I/O couldn't be made as fast in the Linux subsystem as in the Win32 one. The whole idea of supporting multiple subsystems is very cool, but it has too much overhead in practice.

2. The Linux interface evolves continuously, and keeping up with it is pretty expensive for the company.
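
To make the translation idea concrete, here is the general shape such an emulation layer takes; every name here is invented for illustration, none of it is the real WSL1 code:

Code: Select all

/* Illustrative only: a Linux-syscall-number dispatch table, the
 * rough shape of a WSL1-style translation layer. Real handlers
 * would re-express each call in terms of native NT primitives. */
#include <stdio.h>
#include <stddef.h>

typedef long (*syscall_handler)(long a0, long a1, long a2);

/* Stubs: a real layer would map these onto NT I/O calls. */
static long emu_read(long fd, long buf, long n)  { return -1; }
static long emu_write(long fd, long buf, long n) { return -1; }

/* Indexed by Linux x86-64 syscall number (0 = read, 1 = write). */
static const syscall_handler table[] = { emu_read, emu_write };

static long dispatch(long nr, long a0, long a1, long a2)
{
    if (nr < 0 || (size_t)nr >= sizeof table / sizeof table[0])
        return -38; /* -ENOSYS: the "unsupported syscall" problem */
    return table[nr](a0, a1, a2);
}

int main(void)
{
    printf("write -> %ld, unknown -> %ld\n",
           dispatch(1, 0, 0, 0), dispatch(99, 0, 0, 0));
    return 0;
}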

Probably mostly because of problem 2, Microsoft introduced WSL2, which is essentially a specially managed Linux VM. At the end of the day, virtualizing the whole Linux kernel turned out to be simpler and faster than "emulating" its interface directly.

Still, that's a little sad. The WSL1 idea was so cool. Unfortunately, it's already kind of deprecated and it will be dropped at some point.
Tilck, a Tiny Linux-Compatible Kernel: https://github.com/vvaltchev/tilck
nullplan
Member
Posts: 1769
Joined: Wed Aug 30, 2017 8:24 am

Re: True cross-platform development

Post by nullplan »

nexos wrote:IMO, "true" cross-platform development would be binary compatibility. This would require a common executable format, and then a runtime which would decide which OS it was being run on at runtime, and then wrap over its API to provide a common API. Such an undertaking would be long, hard, and wouldn't gain a huge amount either. That's why we have JIT languages.
Yeah, I'm still wondering why writing things in an interpreted language (like Python, Perl, C#, Java) is not what the OP is asking for. The Windows runtime knows how to make the code work on Windows, the Linux runtime does the same for Linux, and the developer writes their code once and it works everywhere. I fail to see how that model is improved by shipping all possible runtimes with the product. That's like shipping all possible graphics drivers with a game. Most of that code will go unused most of the time, so you might as well skip it.
vvaltchev wrote:Still, that's a little sad. The WSL1 idea was so cool. Unfortunately, it's already kind of deprecated and it will be dropped at some point.
Ah well, it was a nice try. Unfortunately, the irreconcilable differences in the end made it too incompatible. There was the problem with the different initial x87 control word between NT and Linux (I believe the default precision on NT is double, while on Linux it is double-extended), with no real solution forthcoming (no, just setting the CW from user space is not a solution, since the whole plan was to be able to use Linux software unchanged), and then there was the plethora of unsupported and apparently unsupportable system calls; SysV IPC comes to mind. Most importantly, however, Docker didn't work, and I don't really know why. But since containers are the current fad, that was a dealbreaker for business users. Oh, and the impossibility of running 32-bit software was a particular spanner in the works for me specifically, because I had received a precompiled program for Linux, and it was 32-bit, and WSL1 cannot support it.
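For the curious, the x87 precision default is easy to inspect; a small x86-only sketch (GCC/Clang inline asm; the exact 0x037F/0x027F default values below are from memory, treat them as assumptions):

Code: Select all

/* Reads the x87 control word; bits 8-9 are the precision-control
 * field (00 = single, 10 = double, 11 = double-extended). Linux
 * initializes the CW to 0x037F (extended); NT reportedly uses
 * 0x027F (double), which is the mismatch described above.
 * x86/x86-64 with GCC or Clang only. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t cw;
    __asm__ volatile ("fnstcw %0" : "=m"(cw));
    printf("x87 control word: 0x%04x, precision field: %u\n",
           cw, (cw >> 8) & 3);
    return 0;
}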
Carpe diem!
nexos
Member
Posts: 1078
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: True cross-platform development

Post by nexos »

vvaltchev wrote:What the OP intended is, clearly, something different, as you said. A single binary that will run everywhere, and that's pretty hard to achieve without some sort of runtime translation from one API to another. For example, originally Microsoft implemented WSL1 directly in their kernel so we could natively run Linux apps on it. But there were two problems with that:
That is pretty cool; it's a shame it didn't work out.
One thing I forgot to mention is different architectures. That is impossible without a JIT-compiled language. You could not make an x86 binary run on ARM without emulation.
nullplan wrote:Yeah, I'm still wondering why writing things in an interpreted language (like Python, Perl, C#, Java) is not what the OP is asking.
He cited performance problems, which is understandable. The thing is, that's why we have C :). Source compatibility is all that is necessary. Users are fine with one binary per architecture.
One app which goes beyond one binary per architecture is VirtualBox: they literally have one binary per Linux distro!
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
lmemsm
Posts: 18
Joined: Wed Jul 17, 2019 2:27 pm

Re: True cross-platform development

Post by lmemsm »

I don't think this is what you're looking for, but I thought it was an interesting approach to using C for cross-platform programming, building once and running on a variety of operating systems: https://github.com/jart/cosmopolitan
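
For flavor, the Cosmopolitan workflow is roughly the following; the exact build invocation depends on the release, so treat the command in the comment as an assumption rather than gospel:

Code: Select all

/* hello.c: plain portable C. Built with Cosmopolitan's wrapper
 * toolchain, e.g. something like:
 *
 *     cosmocc -o hello hello.c
 *
 * the result is a single "Actually Portable Executable" that
 * starts on Linux, Windows, macOS, and several BSDs. */
#include <stdio.h>

int main(void)
{
    printf("hello from one binary, many OSes\n");
    return 0;
}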
h0bby1
Member
Posts: 240
Joined: Wed Aug 21, 2013 7:08 am

Re: True cross-platform development

Post by h0bby1 »

I developed an operating-system-neutral binary format with dynamic linking and PIC, with a converter from ELF or DLL.

With the runtime, it allows linking .elf files with .dll files from any platform.

https://gitlab.com/h0bby1/micro-kernel- ... /mod_maker
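
Purely as an illustration of the general idea (this is not the actual mod_maker layout, just a guess at what an OS-neutral module header could contain):

Code: Select all

/* Hypothetical header for an OS-neutral, position-independent
 * module; every name and field here is invented for illustration. */
#include <stdint.h>

struct neutral_module_header {
    uint32_t magic;          /* identifies the format */
    uint32_t version;
    uint32_t entry_offset;   /* entry point, image-relative (PIC) */
    uint32_t import_count;   /* imports resolved by the runtime, */
    uint32_t import_offset;  /* whether they come from ELF or DLL */
    uint32_t reloc_count;    /* relocations applied at load time */
    uint32_t reloc_offset;
};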


Eventually the idea is also to be able to add more runtime information, with a system like COM on Windows, to have a standard binary call format, with potentially event sources, interrupt handlers, interfaces, etc. directly in the binary.

And things like vx32 can be used to analyze memory accesses and code paths with a controlled import API, to get something capability-based and secure in a portable binary format.
linguofreak
Member
Posts: 510
Joined: Wed Mar 09, 2011 3:55 am

Re: True cross-platform development

Post by linguofreak »

The big problem with trying to create a cross-platform environment like this is that, in the end, it's just one more platform:

https://xkcd.com/927/

And your platform will never be the best at everything. If it really, truly works well on top of every other platform, it will probably not even be the best at anything other than portability (because there will always be some overhead involved with the host platform). You can deal with the overhead of the host OS by making your environment able to run standalone, but then it's just one more OS (with the non-standalone versions effectively being compatibility layers for running code for your OS on other OSes). If you use bytecode or some other intermediate representation, there will be overhead associated with interpreting or JIT compiling the bytecode, or if you compile at install time, applications for your platform will take longer to install than on other platforms. You can deal with that by building hardware that interprets your bytecode natively, but now you've just got one more hardware platform.

So now, if you've written it well, your platform probably does a few things better than any other platform. If it does lots of things much better than other platforms, it might even become dominant. But there will probably be some use case that it doesn't serve well, and somebody will then write a new platform that handles those use cases well. Even if it is based on your platform, they may decide to leave out features of your platform that aren't relevant to the use-case they're serving (in order to improve performance), and may implement new features that you don't have. And, voila, suddenly you have two incompatible platforms again.

There ain't no such thing as a free lunch.