Hi,
Kevin wrote:Brendan wrote:Claiming that Quake is a text adventure simply because you can use buttons/keypresses to control "things" you see on the screen is obviously absurd (and was obvious sarcasm). For my IDE, like Quake, you'd be able to use buttons/keypresses to control "things" you see on the screen; and for my IDE, like Quake, it would be silly to claim that "using buttons/keypresses to control things you see on the screen" is the same as typing text.
You claimed that input and output were the same thing. So obviously when Quake gets text as input, it must also output only text and therefore is a text adventure. What you really were after, if I understand correctly, was to say that text and keypresses aren't the same; but claiming that Quake outputs key presses doesn't make it much better.
I didn't claim they are the same thing. What I said was:
Brendan wrote:Do you ever write software without looking at the source code you're creating? I don't think input and output should be considered separate things.
Unless you've got a photographic memory, never make mistakes, and only ever work solo, it's impossible to efficiently write software without viewing the source code. To write software efficiently, you need to be able to see what you're doing while you're doing it - a person can't do input and output separately. For example, you can't do input on Mondays and Wednesdays with your monitor turned off, and view output on Tuesdays and Thursdays with your keyboard and mouse unplugged.
When designing a user interface, it would be silly to assume the user will be doing input and output separately; silly to design "input" without considering how output will be done; and silly to design "output" without considering how input will be done. It's one feedback loop (screen -> eyes -> brain -> fingers -> keyboard -> screen -> eyes -> brain), not a set of unrelated things that can be designed in isolation.
Kevin wrote:For an unspecified picture with unspecified detail, the difference in file size (and the amount of visible artifacts) between an unspecified lossy format (when set to an unspecified amount of loss) and my format would be a little hard to estimate. When it comes to file sizes and only file sizes (e.g. ignoring image quality), I think PCX wins for compression - the PCX file header is smaller, so when you convert a "many mega-pixel" photo down to one huge pixel the total file size is less than it would've been in other formats.
The only sane way to compare would be to create a graph for each format, with "subjective image quality" on one axis and file size on the other axis; and then compare those graphs. This would still depend on what the images are (e.g. if there's alpha data then the subjective image quality for JPEG is going to be really bad) and how they're displayed (e.g. images might have the same "subjective image quality", until they're scaled up and rendered at strange angles).
Eh, yes. I was asking about lossless, but you're right that even there it depends on the image. I'm really just interested in how it works out in "typical" cases. For example, how big would one of the smilies in this forum get?
For one of the forum smilies it depends on how it's encoded. For a pair of textured triangles it'd be the same quality (e.g. almost always look like crap due to scaling/transforming) and the file size would be about 4 times as much (mostly due to headers, etc., not the texture data itself, as it's a very small image). For "no textures" (solid and gradient triangles only) it'd be far better quality after rendering, and the file size would be about twice as much as the original GIF (again, it's a very small image).
If the same smiley was a 100*100 GIF (more pixel data), then the original GIF would probably be 10 times what it is now, the "pair of textured triangles" would be a little larger than the "100*100" GIF, and the "no textures" version would be much smaller than the "100*100" GIF.
Of course nothing prevents the raw files from being compressed (e.g. with lossless compression like gzip). If someone attempts to open a compressed file as an image, then the VFS does automatic file format conversion (decompression).
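As a rough sketch of the kind of transparent decompression the VFS could do (the magic-number check and the function name are my own illustration, not a specification of the actual VFS):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # first two bytes of every gzip stream

def read_as_image(raw: bytes) -> bytes:
    """Return the uncompressed file contents, decompressing
    transparently if the file happens to be gzip-compressed -
    roughly what automatic format conversion in a VFS would do."""
    if raw[:2] == GZIP_MAGIC:
        return gzip.decompress(raw)
    return raw
```

The caller always sees uncompressed data, so the image decoder never needs to know whether the file on disk was compressed.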
Kevin wrote:And what about a random photo of a landscape somewhere on the net? So I'm just looking for a comparison with gif/jpg/png on a handful of examples.
I don't know - would the top half of the picture be shades of blue (that can be converted into "gradient triangles" very efficiently), or would the photo have tall trees in the foreground (e.g. lots of fine detail/tree leaves)?
How much do users care about file sizes? How much do users care about things like image quality (after rendering) and rendering speed?
Kevin wrote:If I understood right, you already put some effort into this format, so I'd expect you have a prototype that could be used to check it?
I put effort into a slightly different "triangles only, no textures" format; and while I do have a "triangles to bitmap data" renderer for that format, I never really perfected the opposite "bitmap data into triangles" conversion. I'm convinced I can solve the problems with the "bitmap data into triangles" conversion, but also convinced the conversion will always be expensive (in terms of CPU time). I'm also fairly sure that the very high number of triangles necessary to avoid textures would cause performance problems for things like GUI rendering (e.g. a desktop full of windows, with many pictures per window and millions of triangles per picture), and would make it virtually impossible to do real-time rendering in software while supporting other (unrelated) features.
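The GUI-rendering concern can be put into rough numbers; every figure below is an assumption for illustration only, not a measurement:

```python
# Back-of-envelope arithmetic for the "desktop full of windows" case.
# All counts are assumed for illustration.
windows = 10                       # windows on the desktop
pictures_per_window = 20           # icons, thumbnails, etc.
triangles_per_picture = 1_000_000  # "millions of triangles per picture"

triangles_per_redraw = windows * pictures_per_window * triangles_per_picture
print(triangles_per_redraw)  # 200000000 triangles per full redraw
```

Even at an optimistic 100 million triangles per second in software, a single full redraw would take about 2 seconds - nowhere near real time.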
Kevin wrote:Brendan wrote:So you're saying I should abandon all of my plans, just in case someone wants to port Linux to my OS(!)?
Neither am I saying that you should abandon anything nor am I talking specifically about your OS. But Linux qualifies as an example for a rather big project, so it should be an interesting case when we're talking about software projects in general.
I'm not talking about software projects in general - I'm talking about software projects for my OS (which is an OS motivated by a desire to abandon the existing "IT industry puke-fest", including "software projects in general").
Kevin wrote:If the class diagram for a project is too complex, then the project itself is too complex and needs to be split into several smaller projects (e.g. separate processes that cooperate via IPC).
So you're claiming that process = project?
For my system, there are only processes/projects. What you think of as "an application" is multiple processes/projects - some that are internal and some that are public (named services).
For example, a spreadsheet might have a "front-end" process (the user interface) and a "back-end" process for each spreadsheet you're using. The back-ends might use a "maths" named service for calculating values in cells, and a "spell check" service for checking spelling.
If you start the spreadsheet application and open 3 spreadsheets, then you might end up with 7 processes (a front-end, 3 back-ends, 2 maths services and one spell check service). It's intended as a distributed system - you might end up with the front-end running on the computer attached to your keyboard, 2 back-ends and a maths service running on a second computer, and one back-end and another maths service running on a third computer. The spell checker might be running on a fourth computer, and might be shared between your spreadsheet and some other user's word processor.
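The example above can be written out as data (the machine names are mine; the process counts come straight from the example):

```python
# One possible placement of the 7 processes from the spreadsheet
# example. Machine names are illustrative only.
placement = {
    "your computer (keyboard attached)": ["spreadsheet front-end"],
    "second computer": ["back-end 1", "back-end 2", "maths service"],
    "third computer": ["back-end 3", "maths service"],
    "fourth computer": ["spell check service"],
}

total_processes = sum(len(procs) for procs in placement.values())
# 1 front-end + 3 back-ends + 2 maths services + 1 spell checker = 7
```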
Kevin wrote:And wouldn't you just move the complexity out of the class diagram without really getting rid of it, so that you now need a diagram across multiple projects in order to understand the relationships between the projects?
No; you only need defined messaging protocols between processes/projects.
Could you write an FTP client without the class diagram (or source code) for an FTP server? Could you write an FTP server without the class diagram (or source code) for an FTP client? You only need a defined network protocol for these things.
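A defined messaging protocol can be as small as a header layout plus agreed request codes. As a minimal sketch (the opcode value and the little-endian layout are assumptions for illustration, not part of any real protocol here):

```python
import struct

# Hypothetical wire format: 4-byte opcode, 4-byte payload length,
# then the payload bytes. Both sides only need to agree on this.
HEADER = struct.Struct("<II")
OP_SPELL_CHECK = 1  # invented opcode for the spell check service

def pack_request(opcode: int, payload: bytes) -> bytes:
    return HEADER.pack(opcode, len(payload)) + payload

def unpack_request(data: bytes) -> tuple[int, bytes]:
    opcode, length = HEADER.unpack_from(data, 0)
    return opcode, data[HEADER.size:HEADER.size + length]
```

Neither side needs the other's class diagram or source code - only this agreed layout, just like an FTP client and server only share the network protocol.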
Cheers,
Brendan