Re: Implementing pipes in a monotasking system
Posted: Sat Jan 15, 2011 1:09 am
Hi,
Jezze wrote:
    I agree that ucosty's idea is much more intuitive so I will go with that. I do have the possibility to add pre-allocated tempfiles that I can use as a ring buffer, so when thinking about it I pretty much have the solution available to me with only a few lines of code.

Any solution that involves switching tasks on any condition (including when a ring buffer becomes full) is multi-tasking.
Without multi-tasking, you could pre-allocate buffer/s that are "big enough" for anything (e.g. including "tar | gzip" on an archive that could be several TiB). All of the RAM in the machine won't be "big enough" for one such buffer, so you'd have to use disk space; but that would be a massive waste of disk space most of the time. To avoid pre-allocating something that is "big enough" you'd have to use dynamic allocation; and to avoid dynamic memory allocation (to comply with the stated requirements) you'd have to use dynamic disk allocation instead.
Basically, the only solution that fits the stated requirements (without introducing severe limitations) is temporary files. This doesn't mean you can't have small pre-allocated buffers in RAM (e.g. at least one sector each) and then append that data to a temporary file on disk when a buffer in RAM becomes full.
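A minimal sketch of that buffering scheme, assuming a 512-byte sector and using ordinary stdio temp files in place of the OS's own disk code (the names pipe_buf, pipe_write and pipe_flush are hypothetical, not from any real API):

```c
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 512                 /* one disk sector */

/* Output is staged in a small pre-allocated RAM buffer and appended
 * to the temporary file on disk only when the buffer fills (or on a
 * final explicit flush).  No dynamic memory allocation is needed. */
struct pipe_buf {
    FILE *backing;                   /* temporary file on disk */
    unsigned char data[BUF_SIZE];    /* pre-allocated RAM buffer */
    size_t used;                     /* bytes currently staged */
};

static void pipe_flush(struct pipe_buf *p)
{
    if (p->used > 0) {
        fwrite(p->data, 1, p->used, p->backing);
        p->used = 0;
    }
}

static void pipe_write(struct pipe_buf *p, const void *src, size_t len)
{
    const unsigned char *s = src;
    while (len > 0) {
        size_t room = BUF_SIZE - p->used;
        size_t n = len < room ? len : room;
        memcpy(p->data + p->used, s, n);
        p->used += n;
        s += n;
        len -= n;
        if (p->used == BUF_SIZE)     /* buffer full: append to disk */
            pipe_flush(p);
    }
}
```

All writes go through the one fixed-size buffer, so the only thing that grows is the temporary file itself.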
This ends up being what Bewing suggested:
bewing wrote:
    The fact that you are not using dynamic mem allocation puts you in a box. So, alternately, you could have a predetermined temp file on physical media, write the output of program 1 to the tempfile, rewind the tempfile, start program 2, and direct the temp file to the input.

However, consider "foo | bar | woot". You need at least 2 temporary files for "bar" - one for the current process' input and one for the current process' output.
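For a single pipe between two programs, the predetermined-temp-file idea could be sketched like this, assuming stdio's tmpfile()/rewind() stand in for the OS's own disk handling and that stage_fn is a hypothetical stand-in for loading and running one program:

```c
#include <stdio.h>

/* Each "program" is modelled as a function taking an input and an
 * output stream; in a real monotasking OS this would be loading and
 * running an executable instead. */
typedef void (*stage_fn)(FILE *in, FILE *out);

static void run_pipeline_2(stage_fn first, stage_fn second,
                           FILE *in, FILE *out)
{
    FILE *tmp = tmpfile();       /* the predetermined temp file */
    first(in, tmp);              /* program 1 writes its output */
    rewind(tmp);                 /* rewind the tempfile */
    second(tmp, out);            /* program 2 reads it as its input */
    fclose(tmp);
}
```

Note that each stage runs to completion before the next one starts, so no task switching is ever needed.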
For example, each process could get its input from "tmp/stdin" and write its output to "tmp/stdout", and when a process terminates you'd delete "tmp/stdin" (if present) and then rename "tmp/stdout" to "tmp/stdin" ready to be the next process' input. That way you can chain an arbitrary number of processes together (e.g. "first | second | third | ... | last").
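A minimal sketch of the delete-then-rename step, assuming the host's remove()/rename() stand in for the OS's own filesystem calls (the paths are parameters so the helper isn't tied to one naming scheme):

```c
#include <stdio.h>

/* After a process terminates: delete the old input file (if present)
 * and rename the output file so it becomes the next process' input.
 * In the scheme described above the paths would be "tmp/stdin" and
 * "tmp/stdout". */
static void finish_stage(const char *stdin_path, const char *stdout_path)
{
    remove(stdin_path);                /* delete old input, if present */
    rename(stdout_path, stdin_path);   /* output becomes the next input */
}
```

Because every stage only ever sees the same two fixed names, a chain of any length ("first | second | ... | last") needs no per-stage bookkeeping at all.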
Cheers,
Brendan