True cross-platform development

Programming, for all ages and all languages.
bzt
Member
Posts: 1584
Joined: Thu Oct 13, 2016 4:55 pm
Contact:

Re: True cross-platform development

Post by bzt »

SpyderTL wrote:To be clear, I want to write an application once, and run it on any OS (or even no OS), as long as the CPU matches the compile target for the executable.
I see. I was confused because your examples (like Java and WebAssembly) use machine-independent bytecode. Thanks for the clarification!
SpyderTL wrote:If Windows, Linux, OSX and GRUB could all load the same ELF file, then the application would just need to figure out which one actually loaded it, and it could call the appropriate syscalls for that environment. With some static libraries to abstract this away, the application wouldn't even necessarily be aware of the platform details.
Yes, exactly. You are absolutely correct about this.
SpyderTL wrote:Unfortunately, Windows can't load ELF files, and GRUB has some limitations on its ELF support. See here: https://stackoverflow.com/questions/253 ... 1#25492131

But the ELF format is probably the closest thing to a universally supported format, so it may just be a matter of writing a custom ELF loader for Windows, and working out an ELF format that works on all other platforms.

If I can somehow get my one executable loaded into memory on all platforms, without any interpreter, and without any Just-in-time compiler, I would consider that to be a success.
Now that I know what your goal is, I'd like to revise my former advice. I think if you're not up to fat binaries nor machine independent bytecode, then PE format is better for you.

Windows supports it, and you can't easily extend the Windows kernel with ELF support, so it makes sense to rely on the lowest common denominator.
For Linux, you can use a PE binfmt module as a base for your loader.
For GRUB, you'll have to create a PE support module (maybe one already exists, but I don't know of it). I'd recommend starting from aout.c.

As for the services, on Windows you could use a DLL instead of a loader. For Linux and GRUB, you have the source, so you can implement a run-time linker in the loader as you like.
SpyderTL wrote:Write once, compile once, run nearly anywhere, at native speeds.

To put it another way, .NET and Java both use native runtimes that are responsible both for loading the actual program and for re-compiling it into native code for the current platform. In my situation, the code is already in the correct format for the current platform; it just needs to be loaded into memory. So, similar to Java and .NET, you could create an installer for your loader, just like you "install" .NET or the JRE now. Then you could run any program that was compiled for your CPU, regardless of whether you had Windows, Linux, or MacOS, or just wanted to put the program on a boot floppy with GRUB. The program would work virtually the same in all four environments, either in console mode or in windowed mode.

It seems like this should be possible, if not entirely practical. But in either case, it's a good excuse for me to learn the PE and ELF file formats, if nothing else.
Yeah, I agree. It is doable, it is a good exercise, and I'm not entirely sure it wouldn't be practical. I can see valid use cases for it.

Cheers,
bzt
PeterX
Member
Posts: 590
Joined: Fri Nov 22, 2019 5:46 am

Re: True cross-platform development

Post by PeterX »

zaval wrote: to PeterX: expandability is not a virtue of being open source, it's a virtue of good design and being well documented. Candy demonstrated how one can add ELF loading support to Windows. Just having millions of lines of foreign code in front of your eyes is no guarantee you are on the easiest path to extending something in there. I'd rather deal with a well-structured system that describes how it can be extended. You got the point. ;)
Yeah, you are right about both: good design and being well documented! I assumed it wasn't possible with Windows, but I was wrong! (And I hope my answer wasn't of the kind "Don't do it", because nobody needs people discouraging developers.)

You guys named several interesting solutions.
bubach
Member
Posts: 1223
Joined: Sat Oct 23, 2004 11:00 pm
Location: Sweden
Contact:

Re: True cross-platform development

Post by bubach »

Some serious lurking and thread necromancy, I know... but here goes

Inferno OS (based on Plan 9) uses a custom bytecode and the Dis VM to execute platform-independent Limbo code (the bastard child / missing link in the C -> Go evolution). There's also a paper demonstrating a full JVM implementation on top of the Dis VM, reusing as much of the existing libs and OS methods as possible in the Java base class implementations.

αcτµαlly pδrταblε εxεcµταblε (blog post at https://justine.lol/ape.html) is an interesting take on portable apps, doing some terrifying things to the executable headers in order to get something you can execute directly in both a Unix shell and a Win32 console using the same binary.
"Simplicity is the ultimate sophistication."
http://bos.asmhackers.net/ - GitHub
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: True cross-platform development

Post by Ethin »

My thoughts are that this is going to be incredibly difficult to do. Windows has a lot of undocumented syscalls, and I don't think MacOS syscalls are documented either. Not to mention the binary loading problems.
nexos
Member
Posts: 1078
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: True cross-platform development

Post by nexos »

Ethin wrote:My thoughts are that this is going to be incredibly difficult to do. Windows has a lot of undocumented syscalls, and I don't think MacOS syscalls are documented either. Not to mention the binary loading problems.
MacOS syscalls are kind of documented, as the positive numbers are identical to FreeBSD's. MacOS keeps the undocumented part as negative numbers for some odd reason. (I'm not a Mac fan, in case you were wondering :) )
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
Octocontrabass
Member
Posts: 5513
Joined: Mon Mar 25, 2013 7:01 pm

Re: True cross-platform development

Post by Octocontrabass »

Ethin wrote:Windows has a lot of undocumented syscalls,
On Windows, you would want to use the documented APIs provided by kernel32.dll and friends instead of trying to use system calls.
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: True cross-platform development

Post by Schol-R-LEA »

There is an alternative to a bytecode for portable code generation, but it is still an intermediate code and is in some ways more complex: compiling the program to an annotated abstract syntax tree, basically stopping the compilation process partway and saving the syntax tree after parsing but before code generation. These 'slim binaries' are then completed by a JIT compiler into native code before being run.

This approach was developed in the 1970s, and was used in the late 1980s by Niklaus Wirth in the Oberon language and operating system.

This is also the approach I've been intending to use for my Thelema language, as it is a good fit for a Lisp-style language (which is basically a human-readable AST anyway).

The main advantage is that it reduces the amount of transformation away from the original code, so the result can be better optimized.

It has its flaws though, most notably that (as Brendan pointed out to me years ago) it is slower for generating the final native code than a bytecode (which is generally simpler to complete, if less easily optimized). It also tends to be larger than an equivalent bytecode representation.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: True cross-platform development

Post by Korona »

Why are ASTs easier to optimize? Considering that no state-of-the-art C++ compiler (neither GCC nor Clang) does optimization at the AST level, I seriously doubt that this is true. Modern compilers go through multiple layers of IR, but none of the optimization passes deals with ASTs.
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: True cross-platform development

Post by Schol-R-LEA »

Korona wrote:Why are ASTs easier to optimize? Considering that no state-of-the-art C++ compiler (neither GCC nor Clang) does optimization at the AST level, I seriously doubt that this is true. Modern compilers go through multiple layers of IR, but none of the optimization passes deals with ASTs.
It has less to do with the advantages of ASTs than with the limitations of a bytecode; because the bytecode compilation has some built-in assumptions, it doesn't provide as much detailed information about the overall structure of the source code, and thus is constrained in some of the optimizations it can make. At least this is my understanding of it. In essence, the AST is a more abstract level at which to perform some types of structural rearrangements, whereas the bytecode has lost some of the context information.

Also, some compilers do indeed optimize at the AST level (mostly because it is easier to do some kinds of dead code elimination and constant folding there), even if the majority of current ones don't.
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: True cross-platform development

Post by Korona »

That sounds more like a problem of bad bytecodes than an advantage of ASTs. Properly designed IRs do not suffer from a loss of context. For example, LLVM does almost all of its optimizations on SSA form (and stuff like DCE and constant folding is quite easy in SSA; you basically get it for free).

(If you have bytecodes like the Java or .NET ones in mind: it's true that these are not well suited for compilation. State-of-the-art JITs translate these bytecodes into another layer of IR that is more suitable for optimization.)
thewrongchristian
Member
Posts: 424
Joined: Tue Apr 03, 2018 2:44 am

Re: True cross-platform development

Post by thewrongchristian »

bzt wrote:
SpyderTL wrote: If I can somehow get my one executable loaded into memory on all platforms, without any interpreter, and without any Just-in-time compiler, I would consider that to be a success.
Now that I know what your goal is, I'd like to revise my former advice. I think if you're not up to fat binaries nor machine independent bytecode, then PE format is better for you.

Windows supports it, and you can't easily extend the Windows kernel with ELF support, so it makes sense to rely on the lowest common denominator.
I'd still have to question why?

If you want a portable application, then surely source portability is more useful? And for that, POSIX provides the best solution, and allows you to compile to bare metal (with an exokernel/POSIX shim), Linux, or Windows.

Hell, Windows can even directly support Linux ELF binaries already.

It's like the eternal question of portable makefiles. Why bother making your Makefile portable across different make versions, when you can just use a single portable make?

Similarly, why have a portable binary for multiple runtimes, when you can just have a portable runtime (Linux/POSIX) and a single binary?

Of course, if it's just a "because I want to" sort of use case, then fair enough.

I still think it'd be easier to use statically linked ELF/Linux as the lowest common denominator, and build a library into the binary that handles the system calls on bare hardware.

Then, when loading from:
  • Windows/Linux - The binary will run as expected, using the Linux/WSL runtime.
  • GRUB - The Multiboot header directs start to a built-in shim, which sets up the hardware to trap Linux syscalls and handle them as appropriate, then jumps to the regular ELF start symbol.
bzt
Member
Posts: 1584
Joined: Thu Oct 13, 2016 4:55 pm
Contact:

Re: True cross-platform development

Post by bzt »

thewrongchristian wrote:
bzt wrote:
SpyderTL wrote: If I can somehow get my one executable loaded into memory on all platforms, without any interpreter, and without any Just-in-time compiler, I would consider that to be a success.
Now that I know what your goal is, I'd like to revise my former advice. I think if you're not up to fat binaries nor machine independent bytecode, then PE format is better for you.

Windows supports it, and you can't easily extend the Windows kernel with ELF support, so it makes sense to rely on the lowest common denominator.
I'd still have to question why?

If you want a portable application, then surely source portability is more useful? And for that, POSIX provides the best solution, and allows you to compile to bare metal (with exokernel/POSIX shim), Linux, or Windows.
Only the OP can answer that. I'd go with source portability too. But admittedly such binary compatibility would have its benefits: easy package distribution, no need for a developer toolchain on the end user's machine, a single package for all systems, etc.
thewrongchristian wrote:Hell, Windows can even directly support Linux ELF binaries already.
Nope, not the Windows kernel. With WSL, you are effectively running a Linux kernel, therefore you'll need Linux syscalls and a POSIX libc, and that's not native. What the OP wants is a binary that is linked with the executing OS's library, and for Windows that would be Win32.

What the OP wants is basically what libc should have been if it were implemented as originally intended: an OS-specific run-time which hides the kernel's specifics and provides a common interface to the application no matter the OS.

Cheers,
bzt
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: True cross-platform development

Post by Ethin »

The massive problem with this idea, as tempting as it sounds, is the amount of code you'd need for the underlying universal runtime. You'd need to support all the forms of apps that Windows programs can come in -- Win32, UWP, and any others they introduce. You'd need a function translation layer for system calls. For running on bare metal you'd need an absolutely massive amount of code to create a sane runtime environment for applications to run in. I'd guesstimate that the minimal line count for this runtime would be no less than a few hundred thousand to a million, and that's just for the three OSes (the Windows API is notoriously huge and complicated); but we're talking about not just that but the MacOS one, Linux, all the BSDs, and so on. Is it a good idea? Sure. Could it be done? Possibly, if you hired programmers to do it and paid them a few thousand bucks an hour. Open source would definitely not be good enough for something like this, at least not within the next decade, unless you had a huge community around it. But I also might be over-guesstimating for all I know.
vvaltchev
Member
Posts: 274
Joined: Fri May 11, 2018 6:51 am

Re: True cross-platform development

Post by vvaltchev »

Ethin wrote:The massive problem with this idea, as tempting as it sounds, is the amount of code you'd need for the underlying universal runtime. You'd need to support all the forms of apps that Windows programs can come in -- Win32, UWP, and any others they introduce. You'd need a function translation layer for system calls. For running on bare metal you'd need an absolutely massive amount of code to create a sane runtime environment for applications to run in. I'd guesstimate that the minimal line count for this runtime would be no less than a few hundred thousand to a million, and that's just for the three OSes (the Windows API is notoriously huge and complicated); but we're talking about not just that but the MacOS one, Linux, all the BSDs, and so on. Is it a good idea? Sure. Could it be done? Possibly, if you hired programmers to do it and paid them a few thousand bucks an hour. Open source would definitely not be good enough for something like this, at least not within the next decade, unless you had a huge community around it. But I also might be over-guesstimating for all I know.
I agree with all you said.

Indeed, I can add that, if we're talking about portable software at the source level, it is possible in C++ to use huge libraries like Boost and Qt together and have a program using plenty of abstractions and features work the same way on Linux, Windows, Mac and probably other operating systems like FreeBSD as well. I've worked on software like that. And yes, we're talking about ~1 million LoC or more, and no portability on bare metal. Portability (at source level) on bare metal could be achieved either by drastically reducing the amount of features to abstract and still writing a ton of code, or by just bundling the app with a minimal, customized, ad-hoc bootable Linux distro.
Tilck, a Tiny Linux-Compatible Kernel: https://github.com/vvaltchev/tilck
User avatar
bzt
Member
Member
Posts: 1584
Joined: Thu Oct 13, 2016 4:55 pm
Contact:

Re: True cross-platform development

Post by bzt »

Ethin wrote:The massive problem with this idea, as tempting as it sounds, is the amount of code you'd need for the underlying universal runtime. You'd need to support all the forms of apps that Windows programs can come in -- Win32, UWP, and any others they introduce.
Nope, you didn't understand the concept. It's the other way around: that library should not provide these interfaces, but rather provide one common interface on top of them. It's pretty much what mingw32 did with glibc and libgcc, and NOT what Wine did.
Ethin wrote:For running on bare metal you'd need an absolutely massive amount of code to create a sane runtime environment for applications to run in.
Absolutely not. It's not uncommon to have a minimal libc on bare metal; many of us have done that already. While a fully POSIX-compatible libc is not small, it is still NOT a "massive amount of code". Take for example Solar's PDCLib, or uClibc, musl, etc.
Ethin wrote: I'd guesstimate that the minimal line count for this runtime would be no less than a few hundred thousand to a million, and that's just for the three OSes
You got that totally wrong: no runtime implementation is around a million SLoC, they are much, much less; not even Boost is that large. Again, you got it all upside down. The runtime does not provide ALL interfaces on ALL OSes, it only provides one common interface on top of one OS at a time. Just like libc or libstdc++.

Cheers,
bzt