pauldinhqd wrote:
This big-O is based on the number of byte-to-byte comparisons.
Am I getting this right?

No. It's based on the number of operations, while ignoring what type of operations they are and how expensive each type is. A "cmp byte [buffer],0" instruction might be one operation, and loading a sector from a floppy disk might also be one operation; so loading 10 sectors from floppy (something that probably takes 2 seconds) is "as fast as" doing 10 comparisons (which probably takes a few nanoseconds). See how idiotic that is?
When attempting to use "big O notation" for file systems, maybe it'd make more sense to break operations up according to type. For example, one file system might be "O(log n) seeks, O(log n) reads and O(log m) comparisons", and another file system might be "O(1) seeks, O(n) reads and O(m) comparisons". From this it's a lot easier to tell that the first file system is going to be better for something like SSD (where seek is virtually free) and that the second file system is going to be better for hard disk (where seek is expensive).
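For example (a minimal sketch, not from anyone's actual file system code, with made-up ballpark per-operation costs in microseconds), you could model those two hypothetical file systems like this:

Code:
#include <math.h>
#include <stdio.h>

/* Hypothetical per-operation costs in microseconds (made-up ballpark
 * numbers for illustration only, not benchmarks). */
struct device_costs {
    const char *name;
    double seek_us;
    double read_us;
    double cmp_us;
};

/* "File system A": O(log n) seeks, O(log n) reads, O(log m) comparisons. */
static double fs_a_cost(const struct device_costs *d, double n, double m)
{
    return log2(n) * d->seek_us + log2(n) * d->read_us + log2(m) * d->cmp_us;
}

/* "File system B": O(1) seeks, O(n) reads, O(m) comparisons. */
static double fs_b_cost(const struct device_costs *d, double n, double m)
{
    return d->seek_us + n * d->read_us + m * d->cmp_us;
}

int main(void)
{
    struct device_costs ssd  = { "SSD",       0.0,    2.0,  0.001 };
    struct device_costs disk = { "hard disk", 9000.0, 10.0, 0.001 };
    double n = 4096.0;  /* e.g. number of directory entries to search */
    double m = 32.0;    /* e.g. file name length in bytes */

    printf("%-10s  FS A: %9.1f us   FS B: %9.1f us\n",
           ssd.name,  fs_a_cost(&ssd, n, m),  fs_b_cost(&ssd, n, m));
    printf("%-10s  FS A: %9.1f us   FS B: %9.1f us\n",
           disk.name, fs_a_cost(&disk, n, m), fs_b_cost(&disk, n, m));
    return 0;
}

With those made-up numbers the first file system wins easily on the SSD while the second wins on the hard disk, which is exactly the kind of conclusion the per-type breakdown makes visible (and a single "O(whatever)" hides).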
To get even more accurate comparisons you could break the types of operations up even further - maybe "seeks to a random location" and "seeks to a sector that's likely to be close to where the disk heads are", and "reads with rotational latency costs" and "subsequent reads without rotational latency costs", etc.
If you attempt to do it in a correct/sane way, then you end up creating formulas that can be used to estimate the actual costs. For example, you could benchmark the cost of each type of operation for different devices and plug those costs into your formulas to generate something like an "estimated performance curve graph" that clearly shows how each file system being compared performs for each type of device.
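Continuing the same made-up numbers (assumptions, not benchmarks; in practice you'd measure each cost per device), a self-contained sketch that sweeps n and prints a crude cost table per device might look like:

Code:
#include <math.h>
#include <stdio.h>

/* Hypothetical benchmarked per-operation costs, in microseconds. */
struct device_costs { const char *name; double seek_us, read_us, cmp_us; };

int main(void)
{
    /* Made-up ballpark figures; in practice you'd measure these per device. */
    struct device_costs devs[] = {
        { "SSD",       0.0,    2.0,  0.001 },
        { "hard disk", 9000.0, 10.0, 0.001 },
    };
    double m = 32.0;  /* assumed average file name length in bytes */

    /* One "estimated performance curve" per device: cost vs. n for both
     * example file systems from above. */
    for (int d = 0; d < 2; d++) {
        printf("%s:\n       n    FS A (us)    FS B (us)\n", devs[d].name);
        for (double n = 64.0; n <= 65536.0; n *= 4.0) {
            double a = log2(n) * (devs[d].seek_us + devs[d].read_us)
                     + log2(m) * devs[d].cmp_us;
            double b = devs[d].seek_us + n * devs[d].read_us
                     + m * devs[d].cmp_us;
            printf("%8.0f %12.1f %12.1f\n", n, a, b);
        }
    }
    return 0;
}

Plot those columns against n and you've got something like the "estimated performance curve graph" for each device.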
If you attempt to do it in a correct/sane way, then you end up not using "big O notation" at all.
Cheers,
Brendan