True cross-platform development
One of the reasons I decided to create my own development "language" was that I wanted something that I could use to build systems and applications for virtually any platform. I have yet to find a compiler that will build, let's say, a Commodore 64 executable and an x86 MS-DOS executable using the same source code file.
For extremely simple programs, I don't see why this shouldn't be possible, technically speaking, although the effort required would certainly not be worth the benefit.
The most obvious choices would be C or Pascal. Of course, to use the exact same source code, you would also need your libraries to be identical, from the caller's side. For this, you'd probably want something like Java, or maybe Qt or GTK.
So, first off, what would be the best way using existing tools to build applications for extremely different platforms?
Then, assuming you had to write your own tools, would you use any existing language or library standards? And if not, then what would it take to make a usable language/library combination that could cover creating simple applications for the widest possible platform coverage using a single source file?
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Re: True cross-platform development
SpyderTL wrote:One of the reasons I decided to create my own development "language" was that I wanted something that I could use to build systems and applications for virtually any platform. I have yet to find a compiler that will build, let's say, a Commodore 64 executable and an x86 MS-DOS executable using the same source code file.

I don't really see how it would be possible, for a compiled language at least, to do this without massive duplication of library functions. Different architectures and subsystems mean that there isn't a single way to do things that is common to all platforms.
What I mean is that, for example, if you have a graphic function in the graph.h lib for putting a pixel on the screen, you would have to "overload" that function depending on the build target, since a C64 doesn't address graphics and the screen like an x86 PC does.
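A sketch of that per-target "overload" in C, selected at build time with the preprocessor. TARGET_C64 and TARGET_PC_VGA are made-up build flags; the addresses are the usual ones for each platform, and the fallback branch draws into a plain array so the sketch also builds and runs on a host machine:

```c
#include <assert.h>
#include <stdint.h>

/* Same put_pixel() call everywhere; the body differs per target. */
#if defined(TARGET_C64)
static uint8_t *const bitmap = (uint8_t *)0x2000;     /* hi-res bitmap base */
void put_pixel(unsigned x, unsigned y) {
    /* C64 hi-res bitmap: 40x25 cells of 8x8 pixels, one bit per pixel */
    bitmap[(x & ~7u) + (y & 7u) + (y >> 3) * 320u] |= 0x80u >> (x & 7u);
}
#elif defined(TARGET_PC_VGA)
static uint8_t *const vram = (uint8_t *)0xA0000;      /* VGA mode 13h */
void put_pixel(unsigned x, unsigned y) {
    vram[y * 320u + x] = 15;                          /* linear, 1 byte/pixel */
}
#else
static uint8_t vram[320u * 200u];                     /* host fallback buffer */
void put_pixel(unsigned x, unsigned y) {
    vram[y * 320u + x] = 15;
}
#endif
```

The caller's source stays identical; only the library body (and the build flag) changes per platform.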
Re: True cross-platform development
C code itself should be able to be compiled for most platforms anyway. It's when you start making assumptions about the system that you run into trouble. That isn't a problem with the compiler, it's a problem with your code. I could write a program to write characters to VGA memory that would work fine on x86. And I could compile it for ARM as well, but no characters would show up. I would just be blindly writing to memory.
To do what you're asking, you would have to design libraries to perform common tasks, and then reimplement them for each platform you want to support. That is in fact the point of the C standard library. You could write additional libraries for functions that the C library doesn't cover, and then reimplement those for each platform as well.
However, in kernel space there won't be any code to do common things like send commands to the keyboard, so the C library would depend on those. In essence, an operating system is the solution to your problem.
Best,
Nathan
Re: True cross-platform development
While you can express any logic as machine code on pretty much any architecture, there are some natural limiting factors when implementing something not so trivial or small:
- maximum code size
- maximum data size
- maximum stack size
- addressability of code as data and vice versa
- whether you can have separate address spaces or must swap in and out programs in their entirety
- CPU speed
- memory protection (segmentation, paging, etc)
Running a more or less decent C compiler on RetroBSD is a challenge. You can't run any fat executables nor any memory-greedy ones. So, no pcc, no lcc, no Bellard's TinyCC, forget about gcc or clang. While you could fake larger RAM by using the SD card and a MIPS CPU emulator, you'd slash your 80MHz by a factor of 50+. I've tried it. It's not much fun to run a compiler on a 1MHz computer. And Small-C is too much of a toy. So, right now the compiler is split into multiple stages to fit into the RAM: driver, preprocessor, compiler proper, assembler, linker. When the driver runs any of its subordinates, it gets swapped out to free all of the 96KB of user RAM for the new process.
The compiler proper (my Smaller C with a MIPS code generator) eats nearly all of the available 96KB. In order to fit more language and more declarations from the code being compiled into that space I had to do things like:
- use the simplest algorithms and the most compact data structures, favoring smaller memory footprint
- use static memory allocation (with a tiny bit of flexibility via recursion)
- throw out all of the standard C library (use custom fprintf(), fopen() and such, often as tiny wrappers around system calls)
- rework an internal data structure to minimize the memory occupied by declarations: I changed an array whose elements were pairs of ints into two parallel arrays, one of chars and one of ints (I couldn't simply shrink the element from 8 bytes to 5 because of alignment restrictions)
- compile the compiler using shorter (but limited) instructions of the MIPS16e ISA (somewhat like Thumb on ARM), which slows compilation a bit
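The array-split trick in the list above can be sketched in C, assuming a typical ABI where int is 4 bytes with 4-byte alignment. Note that merely changing the pair to char+int doesn't help, because padding brings the struct right back to 8 bytes; only two parallel arrays reach 5 bytes per element:

```c
#include <assert.h>
#include <stddef.h>

#define NDECLS 1000

/* Original layout: a pair of ints per declaration -- 8 bytes each. */
struct decl_pair { int kind; int value; };
struct decl_pair pairs[NDECLS];    /* 8000 bytes total */

/* Shrinking one member to char does not help: the int member forces
   the struct back up to 8 bytes via alignment padding. */
struct decl_packed { char kind; int value; };

/* The actual fix: two parallel arrays, 1 + 4 = 5 bytes per declaration. */
char kinds[NDECLS];                /* 1000 bytes */
int  values[NDECLS];               /* 4000 bytes -> 5000 total */
```

The saving is 3 bytes per declaration, a big deal when the whole process fits in 96KB.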
In theory, some of the compiler code could go into the Flash, but in practice it would mean updating the kernel with every compiler update or implementing an FS in a portion of the Flash.
Things will get worse if you get a more primitive CPU (e.g. the i8051), slower, with even less RAM.
And all of this assumes there actually is a file system with enough space on it!
Network and "multimedia" may have their own hardware issues, imposing further limitations on software, on what can be done (if at all), in what order, etc.
IOW, while cross-compiling itself is not a problem, dealing with platform limitations and oddities is, as you may need to restructure your code for it to work satisfactorily on a specific device.
Re: True cross-platform development
SpyderTL wrote:a compiler that will build, let's say, a Commodore 64 executable, and an x86 MS-DOS executable using the same source code file.

Possible, but very much pointless. Generating the code from the same Intermediate Representation is easy (relatively; see LLVM backends), but the hardware and APIs are very different.
Learn to read.
Re: True cross-platform development
I'm not a huge fan of the way that the C standard library is organized, but this is probably the closest thing to a common language/library combination that exists.
Can anyone think of a better (more portable) language/library combination?
If not, I'll probably try implementing an early C standard library for the C64 as a proof of concept, at some point.
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
dchapiesky
Re: True cross-platform development
SpyderTL wrote:Can anyone think of a better (more portable) language/library combination?

TCL and Java originated from a project to do just what you are talking about... their solution was a virtual machine... which in time became the JVM.
The VM becomes a Hardware Abstraction Layer (HAL)
Earlier attempts at a cross-platform holy grail also resulted in Forth interpreters and eventually JIT'ing interpreters.
Going the JVM route you would have to, as stated in other posts, create your own device drivers and either statically link them or develop a loading mechanism.
Going the Forth route, you build up new "verbs" and "nouns" from a lower-level vocabulary, which in turn extend the Forth language to have more complex actions. Device drivers would, in effect, become extensions of the language itself.
Finally, there was Smalltalk, which combined both of these concepts: a VM and an extensible, platform-agnostic language.
Fun stuff and I wish you luck on your quest.
cheers
Plagiarize. Plagiarize. Let not one line escape thine eyes...
Re: True cross-platform development
SpyderTL wrote:I'm not a huge fan of the way that the C standard library is organized, but this is probably the closest thing to a common language/library combination that exists.
Can anyone think of a better (more portable) language/library combination?

See, the C language (and its standard library) defines only what is available on any (micro?)computer, that is, basic ALU operations, memory, and a few things that can be done by using those (think strlen() and atoi()). The only thing that is there and that may be unavailable is streams (AKA files). And maybe some aspects of floating point arithmetic, which may not be directly supported by the CPU. In this regard C can be viewed as a portable assembler and I don't think you can get any lower, commoner or more basic. OTOH, if you want more stuff in the language or its library, you just can't have it commonly available everywhere. It's a given.
And like I said earlier, some things, even if available, may have issues when it comes to using them. Say, you're on a UEFI PC. If you don't have a video driver, you can't change the screen resolution after leaving UEFI. You can fake a larger or a smaller one, but to actually change the physical resolution you need something outside the language. Or, say, there's just a speaker that you can feed with ones and zeroes. You can play arbitrary polyphonic sound through it only if you use PWM. For that you need some CPU power. You may not have it. Or you may not have enough of it if you need to do something else in parallel. And some systems don't have an RTC (the ZX Spectrum didn't have it), so you need to make date and time management out of what you have.
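The speaker case can be made concrete: with a 1-bit output, an 8-bit sample is approximated by its duty cycle over a short period, which is why the CPU (or a timer) has to be fast enough to toggle the pin hundreds of thousands of times per second. A minimal sketch:

```c
#include <assert.h>

/* 1-bit PWM: over each 256-tick period, keep the speaker on for
   `sample` ticks out of 256, so the average output level (after the
   speaker's own low-pass filtering) tracks the 8-bit sample. */
int speaker_on(unsigned char sample, unsigned tick) {
    return (tick & 0xFFu) < sample;    /* 1 = drive the speaker high */
}
```

Averaged over a full period the output is sample/256 of full scale, but producing it costs one pin update per tick, CPU time that the quoted slow machines may simply not have to spare.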
SpyderTL wrote:If not, I'll probably try implementing an early C standard library for the C64 as a proof of concept, at some point.

What do you mean by that? We've got some open source implementations of it. Another one? What's the concept that you want to prove? That you can make a small one for the C64? Take a look at FUZIX. Maybe you could save some work by using its code or ideas.
Re: True cross-platform development
I guess I'm approaching the problem in a different way, then.
First of all, at some level, a Commodore 64 and a PC running Linux work roughly the same way. You turn on the machine, you get a prompt, and you can type in commands, and see the results on the screen. To me, that means that at least this much can be abstracted away to the point that you could write one "shell" application for both systems, and the OS and drivers would take care of all of the details beyond that.
I'm also making the assumption that there will be, by necessity, a high level of reflection, meaning that one application could determine what type of machine it was running on (graphical, text, headless, touch screen, etc.) and react accordingly.
So, that would mean that your application would need to contain several different user interfaces, in order to run on multiple platforms.
Alternatively, the interface could be chosen at build time, using #defines, or something similar. But I guess the idea is the same. Your source code would still need to include multiple user interfaces.
I guess the switching logic would need to happen at compile time, due to the extreme lack of resources on older platforms.
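Such a compile-time switch could be sketched as follows; UI_GRAPHICAL and UI_TEXT are made-up flags, and each target's build would define exactly one of them:

```c
#include <assert.h>
#include <string.h>

/* Build-time UI selection via the preprocessor; text is the fallback
   for targets that only have a character console. */
#if !defined(UI_GRAPHICAL)
#define UI_TEXT
#endif

const char *ui_kind(void) {
#if defined(UI_GRAPHICAL)
    return "graphical";   /* would set a video mode and draw widgets */
#else
    return "text";        /* plain prompt-and-print interface */
#endif
}
```

Both front ends still live in the one source file; the preprocessor just decides which one survives into each target's binary, so the old platforms never pay for code they can't use.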
So, is there any value in a programming language/library/compiler combination that was truly write once, run anywhere, if it meant that you may need to include up to 3 or 4 user interfaces in your application for complete portability?
I found this while looking for cross platform libraries:
https://haxe.org/documentation/introduc ... arget-apis
This is pretty close to what I'm looking for, but it obviously does not go back to 1980 era hardware. But it may provide a framework to build from.
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Re: True cross-platform development
Take a look at FUZIX. Maybe you could save some work by using its code or ideas.

That looks like it may be what I want, except maybe for the OS part. I see they have builds for the 6502, but I don't see anything specific to the C64. But I'll do some more digging...
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Re: True cross-platform development
To clarify, I'm really talking about two different "things".
One is that I'd like to have one language and one compiler that would cross-compile to virtually any platform, including things like 16-bit MS-DOS executables, 32-bit Windows executables, and Commodore 64 programs. I don't really care if the commands/functions are platform specific, but I do want to use a single compiler.
Second, I want to be able to write an application that will either run on or compile to consoles, ARM boards, Windows executables, Java classes, etc. This part would have to be pretty high level, and fairly "reflective". But for simple console in/out commands, and things like math functions, this shouldn't be too much of a problem.
I think I'm fairly close to both of these goals going down the path that I'm on. I was just double-checking to see if something already existed. As far as I can tell, it really doesn't.
GCC has several cross compilers for modern machines, but nothing nearly that old.
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Re: True cross-platform development
I found this presentation from CppCon a few weeks ago, where this guy used C++ to write a game for the C64. The trick is that he wrote a converter utility to translate the compiled x86 machine code into 6502 machine code for the C64.
I'm probably not going to do this, but I thought that it was interesting.
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Re: True cross-platform development
SpyderTL wrote:I'm not a huge fan of the way that the C standard library is organized, but this is probably the closest thing to a common language/library combination that exists.
Can anyone think of a better (more portable) language/library combination?
If not, I'll probably try implementing an early C standard library for the C64 as a proof of concept, at some point.

Already done, I think: http://www.cc65.org/
Re: True cross-platform development
Rather than derail geri's thread more than I already have, I've decided to resurrect this thread to further the topic of cross platform development.
First off, I did take a look at CC65, and I even wrote my own Hello World application. What was weird was that the text ended up getting printed to screen in inverted case ("hELLO WORLD"). The compiler also seems to put a lot of code in before your code to get the system into a more usable state, but whatever. It's an interesting option.
But back to the topic... While discussing geri's DawnOS release, I realized that there is actually another option to cross platform development. In addition to high level options, like a virtual machine, and lower level options, like C libraries, there is also the option of, let's say, a virtual CPU. In this case, an extremely simple virtual CPU that only has one instruction.
The difference between a VM and a VCPU is that in the latter case, only the CPU is virtualized, but the rest of the system is directly accessed using a nearly direct memory map. So, you can still access things like the PCI bus and IDE controllers, but do it using a single code base that contains "drivers" for all devices regardless of the platform.
So, my first question is, how would performance compare between a VM solution and a VCPU solution? I assume that the CPU emulation loop would slow things down by 5 to 10x just due to the size of the loop itself, and then the simplicity of the VCPU using a single instruction would slow things down by another 10x, due to the complexity of the code required to do simple operations. So let's just call it 100x slower than pure assembler. This sounds bad, but the flip side is that no other hardware would need to be virtualized. A typical VM must execute hundreds of instructions every time a virtual device is accessed to emulate the behavior of a real device.
So now I'm curious... my original plan was to implement a simple VM that could only execute methods on classes. No registers, no arithmetic, just call methods. But now I'm thinking that maybe I should start with a VM that can only execute one instruction instead, and build from there. Does anyone have any thoughts, either way?
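For reference, the usual shape of such a one-instruction machine is SUBLEQ ("subtract and branch if less than or equal to zero", the scheme DawnOS is built on): each instruction is three addresses a, b, c meaning mem[b] -= mem[a], then branch to c if the result is <= 0. A minimal interpreter, here using a negative branch target as halt:

```c
#include <assert.h>

typedef long word;

/* One-instruction VCPU in the SUBLEQ style. Only the CPU is emulated;
   devices would be reached through memory-mapped addresses in `mem`. */
void subleq_run(word *mem, word pc) {
    while (pc >= 0) {
        word a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
}
```

Even mem[y] += mem[x] takes two such instructions (negate x into a scratch cell, then subtract the scratch from y), which is where the extra constant-factor slowdown over a richer instruction set comes from.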
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Re: True cross-platform development
This CPU will be slow. A single instruction gives you only so many possibilities; you cannot, for example, do SIMD, DSP, OpenCL, or any multimedia processing fast enough, because the instruction is very narrow and multimedia processing requires wide instructions.
Learn to read.