
Studying C programming

Posted: Wed Aug 31, 2016 10:04 pm
by mac
So I've been going back and forth trying out different languages, but lately I've settled on C. No joke. I actually think C isn't that hard, except it can be a pain to have to correct minor syntax errors. I'm very passionate about it. I've even written one small program that scans a character and displays the value and its corresponding memory address. :o
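It looks something along these lines (simplified a little):

Code:
#include <stdio.h>

int main(void)
{
    char c;

    /* Read a single character from standard input */
    if (scanf("%c", &c) != 1)
        return 1;

    /* Print its value and the address of the variable holding it
     * (%p expects a void pointer, hence the cast) */
    printf("value: '%c' (%d), stored at %p\n", c, c, (void *)&c);
    return 0;
}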

I should tell you that even though I'm still feeling committed, I'm running into a little bump. I was using http://www.cprogramming.com to start off with, and in the later sections of the C tutorial, I start getting confused and unable to visualize what the code examples are doing.

Or should I just redirect this over to the kind people at Stack Overflow, even though I've heard they only answer very specific programming questions?

Re: Studying C programming

Posted: Wed Aug 31, 2016 11:30 pm
by AndrewBuckley
You can subscribe to the C_programming subreddit to get a feel for the language.
"I've even written one small program that scans a character and displays the value and its corresponding memory address."
Scanning a character is not something that has an address; the buffer you could store that value in does. Are you planning to write operating systems or just regular programs?

Re: Studying C programming

Posted: Thu Sep 01, 2016 6:52 am
by mac
Right now I'm only going to write normal applications to get a feel for things. But yes, I do want to write operating systems with it one day.

Re: Studying C programming

Posted: Sat Sep 03, 2016 3:17 pm
by ~
C isn't harder than PHP in itself.

The fact is that C programmers keep doing rare stuff with C; that's why it looks so strange and hard. But that can be fixed by formally studying the code for a while and learning all of the tricks, algorithms, protocols and file formats involved in the code.

I'm currently writing a set of executable file skeletons, from DOS to Windows, although I plan to write Linux, Palm and any other kind of executable file skeletons in the future as needed.

My reasoning behind this is that we can use NASM/YASM, pure Intel-syntax assembly, to compile Windows executables under pure DOS mode, even very complex applications under nothing more than pure DOS (FreeDOS), NASM/YASM, and a 386 machine, when they are written in assembly. We just need to code the file format of the executable and its resources in the form of standard assembly source code.

So, to better learn C, I plan to take some time to think about how I could use pure DJGPP to write a raw executable with GCC/GPP, LD and MAKE. My intention is to produce raw binaries with DJGPP that happen to contain a proper PE EXE skeleton for EXEs, DLLs and the like. Even Win9x/XP drivers could be written like this.
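Something like this is what I mean by a skeleton in C (a minimal sketch of the idea only; a real PE EXE needs the full set of headers and sections filled in):

Code:
#include <stdio.h>
#include <stdint.h>

/* The old DOS "MZ" header expressed as plain C data, the same way
 * an assembly skeleton would spell it out with db/dw directives.
 * Only the signature is filled in here; a real Win32 EXE would
 * also need e_lfanew at offset 0x3C pointing to the "PE\0\0"
 * headers, a COFF header, an optional header and sections. */
static const uint8_t mz_stub[64] = {
    'M', 'Z',  /* e_magic: every MZ/PE executable starts with this */
    /* ...the remaining DOS header fields, left zeroed here... */
};

int main(void)
{
    FILE *out = fopen("skeleton.bin", "wb");
    if (out == NULL)
        return 1;
    fwrite(mz_stub, 1, sizeof mz_stub, out);  /* raw binary output */
    fclose(out);
    return 0;
}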

With such a skeleton written in C that could be compiled with DJGPP 2.05, we could compile DOS and Windows 9x/XP EXEs using any GCC version, from DOS DJGPP to MinGW to Linux GCC, by compiling the whole EXE structure and code to a raw binary output.

Then learning C becomes easier, because now we can separate what has to do with the language itself, what has to do with compiling and linking the EXE itself, and what has to do with system, library and application functions, and we can add those manually and easily to our program, knowing that it will compile in any GCC. That alone will make C easier to use, since we will be able to take the code anywhere there's GCC and compile programs for a target OS no matter what OS we are actually running.

Re: Studying C programming

Posted: Sat Sep 03, 2016 3:26 pm
by max
Hello ~,

this part of your post is advice that is okay:
~ wrote:C isn't harder than PHP in itself.

The fact is that C programmers keep doing rare stuff with C; that's why it looks so strange and hard. But that can be fixed by formally studying the code for a while and learning all of the tricks, algorithms, protocols and file formats involved in the code.
And this part is bullshit:
~ wrote:I'm currently writing a set of executable file skeletons, from DOS to Windows, although I plan to write Linux, Palm and any other kind of executable file skeletons in the future as needed.

My reasoning behind this is that we can use NASM/YASM, pure Intel-syntax assembly, to compile Windows executables under pure DOS mode, even very complex applications under nothing more than pure DOS (FreeDOS), NASM/YASM, and a 386 machine, when they are written in assembly. We just need to code the file format of the executable and its resources in the form of standard assembly source code.

So, to better learn C, I plan to take some time to think about how I could use pure DJGPP to write a raw executable with GCC/GPP, LD and MAKE. My intention is to produce raw binaries with DJGPP that happen to contain a proper PE EXE skeleton for EXEs, DLLs and the like. Even Win9x/XP drivers could be written like this.

With such a skeleton written in C that could be compiled with DJGPP 2.05, we could compile DOS and Windows 9x/XP EXEs using any GCC version, from DOS DJGPP to MinGW to Linux GCC, by compiling the whole EXE structure and code to a raw binary output.

Then learning C becomes easier, because now we can separate what has to do with the language itself, what has to do with compiling and linking the EXE itself, and what has to do with system, library and application functions, and we can add those manually and easily to our program, knowing that it will compile in any GCC. That alone will make C easier to use, since we will be able to take the code anywhere there's GCC and compile programs for a target OS no matter what OS we are actually running.
Stop advertising your crap here.

Re: Studying C programming

Posted: Sat Sep 03, 2016 3:47 pm
by ~
What are you talking about? It will help newbies in general to handle executable file formats and compile very portable code, even producing it without having a native compiler for their OS.

Plus, I want to create programs for which I have full source code instead of having libraries do all the work and build the file format. I want to be able to create applications for any OS, even on the oldest OS and machine, even if I don't have as many libraries and tools there, and have code that is portable to compile across platforms without really needing a cross compiler.

Re: Studying C programming

Posted: Sat Sep 03, 2016 4:07 pm
by gerryg400
~, you say you will support complex applications. How will you support an application that needs threads, condvars, IPC or mmap? Or any of the other basic features of an operating system that DOS does not provide?

Re: Studying C programming

Posted: Sat Sep 03, 2016 5:03 pm
by ~
I'm talking about compiling under DOS or a custom OS, to build binaries for any OS. We need text-mode, source-only executable/library/driver skeletons. It can be done with pure assembly programs with a skeleton targeting another platform. It could be done with pure C code and probably simple linker scripts instead of a cross compiler, and then target any OS.

As for applications, if they use advanced algorithms they would need to link to those functions statically as a minimum, just like old games, which sometimes didn't even need an OS or much from the BIOS to boot and run. But being able to compile a raw binary from an executable's skeleton and produce the headers makes it easier to understand what to implement, what is actually contained in the executable, and how to load it in other OSes by writing a loading layer and library function implementations.


Why are we using binary-mode libraries and references instead of optimizing by using text-mode libraries/applications and producing binary-mode files?

Why are we using binary-mode-only executable header generation intermixed (inefficiently, at least at system-level programming) with text-mode-only sources, when their format looks trivial and very cheap (just like casual hand-made databases written before FoxPro for DOS) when converted to C or assembly code, but becomes very confusing if only the tool-chain ever produces them, making for a more ignorant programmer?

Programs are much easier to debug then when finally compiled.

Re: Studying C programming

Posted: Sat Sep 03, 2016 6:49 pm
by gerryg400
~ wrote:We need text-mode, source-only executable/library/driver skeletons. It can be done with pure assembly programs with a skeleton targeting another platform. It could be done with pure C code and probably simple linker scripts instead of a cross compiler, and then target any OS.
So you are trying to replace the concept of cross-compiling with some other thing. What on earth are "text-mode, source-only executable/library/driver skeletons"?
~ wrote:As for applications, if they use advanced algorithms they would need to link to those functions statically as a minimum, just like old games, which sometimes didn't even need an OS or much from the BIOS to boot and run.
An application that implements those things itself will necessarily take over the entire machine, meaning that no other application could run at the same time. This is no longer acceptable, and it is one of the (many) reasons that no one cares about DOS any more. It's plain stupid to implement this.
~ wrote:But being able to compile a raw binary from an executable's skeleton and produce the headers makes it easier to understand what to implement, what is actually contained in the executable, and how to load it in other OSes by writing a loading layer and library function implementations.
I have no idea what this means. Please help me understand. What is an executable skeleton? What are headers?

Overall, I am confused. If you simply said that you want to rewrite all existing software, I would understand what you are saying. Is that what you mean? I'm totally lost.

Re: Studying C programming

Posted: Sat Sep 03, 2016 8:11 pm
by Schol-R-LEA
@~: At this point you are probably thinking, "they just don't get it", correct? Well, that's the point: we don't understand what you are trying to say. We can't even judge the merit of the ideas themselves at this point, because all we've gotten from you so far is a frenetic, haphazard and disjointed set of links, exclamations, and declarations of intent, in a manner typical of a crackpot. Until you provide us with a coherent explanation of what you are trying to say, we have to assume that you are a crackpot, or worse, a troll.

Before going any further, I recommend that you take some time out from this to get your ideas down on paper (actual hand-written paper would work best for most people, though that's your call) in detail, not just as posts here in OS-Dev but in some sort of design document cum mission statement. Sit down, slow down, calm down, and piece your ideas together in detail, going from the most general (which probably needs to be considerably more general than you expect) to the specific (though not too specific, as that would just confuse people again).

I've been where you are, and I know how hard it is when you are excited about some idea, something that seems dead obvious to you, something that no one else seems to see and which looks like a genuine New Idea to you. It is at these times that you most need to be careful about how you explain yourself, and most need to listen to the objections and examples of prior art that skeptics will give back to you. Most good ideas aren't new, and most new ideas aren't good; that's why we apply skeptical reasoning to anything that gets proposed.

Remember, communication is both a bigger part, and a harder part, of software development than all the other parts combined. Explaining a program is often much more work than writing it.

Now, to get on with my reply:
~ wrote:I'm talking about compiling under DOS or a custom OS, to build binaries for any OS. We need text-mode, source-only executable/library/driver skeletons. It can be done with pure assembly programs with a skeleton targeting another platform. It could be done with pure C code and probably simple linker scripts instead of a cross compiler, and then target any OS.
I'm not clear on how you expect that a program written in the assembly language for, say, an x86-64 PC desktop (with the stock memory and peripheral buses, standardized chipset for such things as the keyboard controller and the Programmable Interrupt Timer, and other hardware typical of the type) and using a specific operating system's syscalls, would run on an ARM-based tablet with a unique memory subsystem, a radically different peripheral bus, a different way of handling ROM and boot-up (the 'BIOS' in most Android phones is a modified Linux kernel burned to flash memory, for example, with the startup routine being mostly just copying the volatile data constructions to RAM), a different system call model, and possibly a different standard library as well. The program would have to be translated in some way, either through some kind of compiler, or by way of a CPU simulator - the opcode sets are not even remotely similar, nor are the assembly language instruction sets that parallel them.

Mind you, I can see a way to build up a common set of macros that could be used to write the non-hardware-specific parts of an 'assembly program' as a common source text, provided that you have a single retargetable assembler with a sufficiently complex macro language that the assembly instructions can be buried completely within it; the pseudo-code (bytecode, in modern terminology) interpreter targeted by the SNOBOL4 compiler was written in this way back in the day, and I have considered implementing Thelema as a set of Assiah macros in a similar fashion (though I have pretty much ruled it out for a number of reasons). However, at that point you aren't writing assembly code any more, you are writing in a new language that is translated into different assembly languages as needed - in other words, it's become an interpreter or compiler of a sort. So what you seem to be really heading towards is more language design than anything.
~ wrote:As for applications, if they use advanced algorithms they would need to link to those functions statically as a minimum, just like old games, which sometimes didn't even need an OS or much from the BIOS to boot and run. But being able to compile a raw binary from an executable's skeleton and produce the headers makes it easier to understand what to implement, what is actually contained in the executable, and how to load it in other OSes by writing a loading layer and library function implementations.
Again, this idea of having a cloud or swarm of small, semi-independent executables makes me think of how most Forth systems work, with the lines between 'interpreter', 'compiler', and 'assembler' blurred beyond recognition. You might want to look at how a direct-threading interpreter works (not threading in the multitasking sense, it has to do with how Forth words are invoked) and see how it compares to your plans.
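To give you the flavor, the core of a direct-threading interpreter in C fits in a handful of lines (a sketch only, using the GCC/Clang computed-goto extension; the words here are made up, not taken from any real Forth):

Code:
#include <stdio.h>

int main(void)
{
    /* Each "word" is the address of the code implementing it, and
     * dispatch is nothing but a jump through the next pointer. */
    static void *program[] = { &&push_one, &&push_one, &&add,
                               &&print, &&halt };
    void **ip = program;          /* the instruction pointer */
    int stack[16], *sp = stack;   /* a tiny data stack */

    goto **ip++;                  /* run the first word */

push_one: *sp++ = 1;              goto **ip++;
add:      --sp; sp[-1] += *sp;    goto **ip++;
print:    printf("%d\n", sp[-1]); goto **ip++;
halt:     return 0;               /* prints 2, i.e. "1 1 + ." */
}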
~ wrote:Why are we using binary-mode libraries and references instead of optimizing by using text-mode libraries/applications and producing binary-mode files? Why are we using binary-mode-only executable header generation intermixed (inefficiently, at least at system-level programming) with text-mode-only sources, when their format looks trivial and very cheap (just like casual hand-made databases written before FoxPro for DOS) when converted to C or assembly code, but becomes very confusing if only the tool-chain ever produces them, making for a more ignorant programmer?
I can't see what you are asking here, or rather, I can see several different (possibly intertwingled) questions at once and I am not sure which of those you mean, if any. I'll post one of the possible/partial answers to the wiki RSN, as I have something I already wrote about it and just need to find it again.
~ wrote:Programs are much easier to debug then when finally compiled.
WUT.

I feel like I'm talking to Swampy and his 'gotos are easier to write and understand than the EVIIIL nested junk pushed by the perfect-perfects', and I despair of understanding either of you when I hear things like this.

Re: Studying C programming

Posted: Sat Sep 03, 2016 8:46 pm
by Schol-R-LEA
The partial answer I promised is now on the wiki at WhyNeverToPlaceFunctionsInIncludeHeaders. The title is a bit awkward, and the connection to the discussion may not be immediately obvious, but trust me, I think it is likely to be relevant.

You might also find Historical Notes on CISC and RISC illuminating fnord as well.

Re: Studying C programming

Posted: Sun Sep 04, 2016 3:02 pm
by ~
gerryg400 wrote:
~ wrote:We need text-mode, source-only executable/library/driver skeletons. It can be done with pure assembly programs with a skeleton targeting another platform. It could be done with pure C code and probably simple linker scripts instead of a cross compiler, and then target any OS.
So you are trying to replace the concept of cross-compiling with some other thing. What on earth are "text-mode, source-only executable/library/driver skeletons"?
~ wrote:As for applications, if they use advanced algorithms they would need to link to those functions statically as a minimum, just like old games, which sometimes didn't even need an OS or much from the BIOS to boot and run.
An application that implements those things itself will necessarily take over the entire machine, meaning that no other application could run at the same time. This is no longer acceptable, and it is one of the (many) reasons that no one cares about DOS any more. It's plain stupid to implement this.
~ wrote:But being able to compile a raw binary from an executable's skeleton and produce the headers makes it easier to understand what to implement, what is actually contained in the executable, and how to load it in other OSes by writing a loading layer and library function implementations.
I have no idea what this means. Please help me understand. What is an executable skeleton? What are headers?

Overall, I am confused. If you simply said that you want to rewrite all existing software, I would understand what you are saying. Is that what you mean? I'm totally lost.
It would be cross-compiling done by hand. We could hand-pick library functions like printf() and malloc() from easy-to-read sources implemented specifically for one or more platforms, taking them purely from the sources of those full library functions, instead of just including huge headers and then linking huge object files, a consequence of mixing binary-mode objects with text-mode source code.
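For instance, a hand-picked malloc() can be a dozen lines of portable C (a toy sketch, not production code, but fully source-only):

Code:
#include <stddef.h>

/* A bump allocator over a static arena. It never frees, but for
 * small tools it can replace the platform malloc() entirely, from
 * plain readable source with no binary library behind it. */
static unsigned char arena[64 * 1024];
static size_t arena_used;

void *my_malloc(size_t n)
{
    n = (n + 15) & ~(size_t)15;        /* keep 16-byte alignment */
    if (n > sizeof arena - arena_used)
        return NULL;                   /* arena exhausted */
    void *p = &arena[arena_used];
    arena_used += n;
    return p;
}

int main(void)
{
    char *s = my_malloc(100);
    return s == NULL;                  /* 0 on success */
}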

As for DOS, it's only doing what it should, that is, providing a user interface and kernel that contain only the absolute minimum so that they can run and load any other module. It's a perfect classic example of a microkernel, where it only contains what it needs to access mass storage and user input/output. It doesn't even alter the hardware and it still runs (it excels at using a standard way to run with simplicity). It doesn't even switch into Protected Mode (letting other modules do it, while 32-bit-capable Real Mode suffices for the current session).

Of course I want to rewrite all existing software. I know it will take a lot of time, and that if you want industry-grade results it will have to be rewritten by its authors to be "more reasonable", to run clean-up and quality control development cycles.

Wanting to rewrite all software has a practical reason too, and it's about understanding and then tutorializing every single existing trick. Then things will be much easier to implement with more knowledgeable people.

As for not monopolizing the system or not providing OS-independent libraries, I think that developing operating systems through websites like this is only inducing independence from any OS. In the future, we could just run a DOS-like shell or maybe just FreeDOS (yes, probably in Real Mode), and a lot of kernel libraries now made independent, like the full WinAPI and DPMI, and also a layer to load and implement a multitasking manager and a window manager (which could probably implement and execute natively-compiled HTML5/CSS, and from there derive any other windowing library).

In the future an OS could probably be best implemented as just a tiny hardware/user manager that is only capable of loading itself or other modules to boot up, without doing anything fancy or even manipulating hardware. It should only be interested in being able to run and let the user configure and load the rest of the system. FreeDOS is already like this, so it's a start as good as writing a boot sequence and a binary blob loader. Then it won't monopolize the machine, because the unprotected nature of the core (preferably 16-bit, so we can switch back and forth to 32- or 64-bit cleanly, but with default modules to enter those modes) will allow us to always load our own OS as simply as just writing it into memory and jumping into it.

And that's what I'm trying to do. A kernel that is so minimal and standard that it can just run and then load any other module as an optional component, and that is compilable to 16/32/64-bit with any toolchain. It will require some statically linked tiny partial file system drivers and other types of drivers, like for the screen, but they should be unloaded, overwritten and replaced by more capable drivers once it becomes initialized (after a few milliseconds or seconds to set up).

Re: Studying C programming

Posted: Sun Sep 04, 2016 4:16 pm
by Schol-R-LEA
I think you need to re-read the definition of a Microkernel, because MS-DOS isn't one. It is as monolithic as they come, really, with no formally-defined Kernel at all - the whole thing is a single executable except for the command processor (it can load certain drivers for specific hardware, but the ones the system actually uses are all hard-coded in), it runs in a single unprotected memory space, with only a single thread of execution and no separation of system and user program. That may be a small OS, but it is not a micro-kernel.

A micro-kernel has two basic properties, one of which is definitive to the modern use of the term, while the second was part of the original definition but now largely bypassed:

A micro-kernel is a kernel-based operating system that loads and runs most or all system drivers and services in user space, as independent processes separate from both the kernel and the user applications, and uses Message Passing as the basis for all Synchronization Primitives.

The key word here is 'processes' - an abstraction that Monotasking Systems like MS-DOS generally lack. When MS-DOS launches an application, the operating system itself stops running - the application is now the operating system, and MS-DOS is at best a library it can use for certain common forms of I/O; in many cases, it no longer even stays resident in memory, in which case even returning to the command line requires the OS to be reloaded. This is something that by definition never happens in a kernel-based design, micro or otherwise, regardless of memory protection or any other considerations.
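To make 'message passing' concrete, the interface such a kernel exposes tends to look something like this (hypothetical names and layout, prototypes only; no real kernel is being quoted here):

Code:
/* A small fixed-size message the kernel copies between address
 * spaces; no memory is shared between the tasks involved. */
typedef struct {
    int  sender;    /* task id of the sending process */
    int  type;      /* message kind, agreed between the two tasks */
    char data[56];  /* small payload, copied by the kernel */
} message_t;

/* In a synchronous design, msg_send() blocks until the receiver
 * picks the message up; that rendezvous is the primitive that
 * locks, servers and drivers are then built on top of. */
int msg_send(int task, const message_t *msg);
int msg_receive(message_t *out);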

One would be more justified in calling it a Megalithic Kernel, if anything, given the lack of memory separation, but even that doesn't quite fit since, again, there is no kernel.

To be honest, I think what you actually are thinking of is an Exokernel or a containerizing hypervisor, both of which fit your description better than a micro-kernel does - or MS-DOS does, for that matter.

Re: Studying C programming

Posted: Sun Sep 04, 2016 7:04 pm
by Octocontrabass
This topic should be split so the original poster can ignore ~'s worthless ramblings.
~ wrote:It would be cross-compiling done by hand. We could hand-pick library functions like printf() and malloc() from easy-to-read sources implemented specifically for one or more platforms, taking them purely from the sources of those full library functions, instead of just including huge headers and then linking huge object files, a consequence of mixing binary-mode objects with text-mode source code.
Why cross-compile by hand when a real cross-compiler is so much easier to use?

The size of the header has absolutely no bearing on the size of the final program.

There are already two better solutions to the problem of statically linking large object files. The first is using a compiler that can discard unused portions of the object file during linking. The second is dynamic linking.
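For example, the first solution in action (hypothetical file names; GCC and GNU ld assumed):

Code:
/* lib.c: two functions, only one of which the program calls. */
int used(void)   { return 42; }
int unused(void) { return -1; }

/* main.c */
int used(void);
int main(void) { return used(); }

/* Built as:
 *   gcc -ffunction-sections -c lib.c main.c
 *   gcc -Wl,--gc-sections -o demo main.o lib.o
 * the linker drops unused() from the final binary. The size of
 * whatever header declared these functions never mattered at all. */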
~ wrote:As for DOS, it's only doing what it should, that is, providing a user interface and kernel that contain only the absolute minimum so that they can run and load any other module. It's a perfect classic example of a microkernel, where it only contains what it needs to access mass storage and user input/output. It doesn't even alter the hardware and it still runs (it excels at using a standard way to run with simplicity). It doesn't even switch into Protected Mode (letting other modules do it, while 32-bit-capable Real Mode suffices for the current session).
Microkernels have thread management and inter-process communication. DOS doesn't have threads or inter-process communication. DOS is not a microkernel.
~ wrote:Of course I want to rewrite all existing software. I know it will take a lot of time, and that if you want industry-grade results it will have to be rewritten by its authors to be "more reasonable", to run clean-up and quality control development cycles.
I'm sure they'll be happy to do that, just as soon as you're able to pay them for their time.
~ wrote:Wanting to rewrite all software has a practical reason too, and it's about understanding and then tutorializing every single existing trick. Then things will be much easier to implement with more knowledgeable people.
If all of your tutorials are showing overcomplicated ways to accomplish something that can be done in a single line of code, like your tutorial on finding the command line length in DOS, I don't think anyone will be interested in your tutorials.
~ wrote:As for not monopolizing the system or not providing OS-independent libraries, I think that developing operating systems through websites like this is only inducing independence from any OS. In the future, we could just run a DOS-like shell or maybe just FreeDOS (yes, probably in Real Mode), and a lot of kernel libraries now made independent, like the full WinAPI and DPMI, and also a layer to load and implement a multitasking manager and a window manager (which could probably implement and execute natively-compiled HTML5/CSS, and from there derive any other windowing library).
WinAPI is dependent on multitasking, and multitasking is dependent on controlling the system resources in order to prevent software from accidentally (or intentionally) using resources that it doesn't have permissions to use.
~ wrote:In the future an OS could probably be best implemented as just a tiny hardware/user manager that is only capable of loading itself or other modules to boot up, without doing anything fancy or even manipulating hardware. It should only be interested in being able to run and let the user configure and load the rest of the system. FreeDOS is already like this, so it's a start as good as writing a boot sequence and a binary blob loader. Then it won't monopolize the machine, because the unprotected nature of the core (preferably 16-bit, so we can switch back and forth to 32- or 64-bit cleanly, but with default modules to enter those modes) will allow us to always load our own OS as simply as just writing it into memory and jumping into it.
This sounds an awful lot like a bootloader, not an operating system. In fact, GRUB and DOS have remarkably similar capabilities, although GRUB (at least GRUB2) is not dependent on having a BIOS or an x86 CPU.
~ wrote:And that's what I'm trying to do. A kernel that is so minimal and standard that it can just run and then load any other module as an optional component, and that is compilable to 16/32/64-bit with any toolchain. It will require some statically linked tiny partial file system drivers and other types of drivers, like for the screen, but they should be unloaded, overwritten and replaced by more capable drivers once it becomes initialized (after a few milliseconds or seconds to set up).
This sounds remarkably similar to my own goals. I want a kernel that is minimal, provides a standard interface to load and run optional modules, and can be compiled for 32-bit x86, 64-bit x86, PowerPC, and others using a standard toolchain. It might have some minimal drivers statically linked, but storage drivers will be loaded by the bootloader so the kernel can load the remaining drivers directly from the filesystem.

Re: Studying C programming

Posted: Mon Sep 05, 2016 1:23 am
by Kevin
~ wrote:Why are we using binary-mode libraries and references instead of optimizing by using text-mode libraries/applications and producing binary-mode files?
Is the keyword that you're looking for Link Time Optimisation (LTO)? Or what other kind of optimisation are you thinking of?
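For reference, with a modern GCC or Clang that's just a flag (hypothetical files a.c and b.c):

Code:
/* b.c */
int square(int x) { return x * x; }

/* a.c */
int square(int x);
int main(void) { return square(3); }

/* With LTO the compiler stores its intermediate representation in
 * the object files and optimizes across them at link time, so
 * square() can be inlined into main() despite the separate files:
 *   gcc -O2 -flto -c a.c b.c
 *   gcc -O2 -flto -o prog a.o b.o */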
~ wrote:Why are we using binary-mode-only executable header generation intermixed (inefficiently, at least at system-level programming) with text-mode-only sources, when their format looks trivial and very cheap (just like casual hand-made databases written before FoxPro for DOS) when converted to C or assembly code, but becomes very confusing if only the tool-chain ever produces them, making for a more ignorant programmer?
Essentially for the same reason why we write in a high-level language rather than assembly when we can, or at least use an assembler rather than a hex editor to create binaries. It's something that the machine is perfectly able to generate, which saves me time and avoids mistakes.

I very much prefer working on actually creative tasks rather than doing stupid legwork that the computer could be doing for me.