Brendan wrote: It does make me wonder about code maintenance though. For example, can you split your code into separate functions, where functionA acquires lockA, calls functionB, then unlocks lockA; and where functionB acquires lockB and unlocks it? That way you could put all the code that uses lockA (and operates on the data protected by lockA) in the same place, and put all the code that uses lockB (and operates on the data protected by lockB) in the same place.
Not possible in this case -- normally that's what I would do. In this case, there is a graph of objects that can be modified in parallel by multiple threads, and each modification must be atomic with respect to the others. The problem is that many of these modifications touch many objects, not just one. What I've done is to design a locking protocol that establishes a ranking of all objects to ensure that they are always locked in a consistent order. Unlocking them in the reverse order is actually really easy... I was just wondering if it was necessary. The performance advantage is pretty clear.
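Roughly, the idea looks like this (just an illustrative sketch, not my actual code -- object_t, the rank field, and the helper names are placeholders): every object gets a fixed rank, the acquire path walks a rank-sorted array forwards, and the release path walks the same array backwards.

```c
#include <pthread.h>

typedef struct object {
    int             rank;   /* position in the global ranking, fixed at creation */
    pthread_mutex_t lock;
    /* ... data protected by 'lock' ... */
} object_t;

/* objs[] must already be sorted by ascending rank. */
static void lock_in_rank_order(object_t **objs, int n)
{
    for (int i = 0; i < n; i++)
        pthread_mutex_lock(&objs[i]->lock);
}

/* Release in the reverse of the acquisition order. */
static void unlock_in_reverse_order(object_t **objs, int n)
{
    for (int i = n - 1; i >= 0; i--)
        pthread_mutex_unlock(&objs[i]->lock);
}
```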
From a maintenance perspective, it will be ok, as long as someone understands why it's designed the way it is.

For example, imagine that functionA() wants to modify objects A, B, and C while functionB() wants to modify objects B, C, and D. Fortunately, by virtue of the spec I'm implementing, I know this ahead of time and can write functionA() so that it calls a "locking subsystem" like "lock( A_B_C, a )", where A_B_C indicates a particular pattern of access and a is the starting point, and functionB() does "lock( B_C_D, b )", and so on. This keeps all the locking/unlocking in one place, guarantees deadlock freedom, and relieves the rest of the implementation of worrying about concurrency (at least when touching the object graph -- singletons still need to be protected, but that's easy).
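In spirit, the locking subsystem is something like this (again only a sketch -- the access_pattern_t values, collect_objects(), and the 'next' link are made-up placeholders; how a pattern resolves to a set of objects is dictated by the spec and is domain-specific):

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct object {
    int             rank;
    pthread_mutex_t lock;
    struct object  *next;   /* stand-in for the real graph links */
} object_t;

typedef enum { ACCESS_A_B_C, ACCESS_B_C_D } access_pattern_t;

static int cmp_rank(const void *pa, const void *pb)
{
    const object_t *a = *(object_t *const *)pa;
    const object_t *b = *(object_t *const *)pb;
    return (a->rank > b->rank) - (a->rank < b->rank);
}

/* Resolve which objects the pattern touches, starting from 'start'.
 * This stub just follows 'next' links for three objects; the real
 * traversal depends on the pattern and the graph. */
static int collect_objects(access_pattern_t p, object_t *start, object_t **out)
{
    int n = 0;
    for (object_t *o = start; o != NULL && n < 3; o = o->next)
        out[n++] = o;
    return n;
}

/* Lock everything the pattern will modify, in ascending rank order. */
static int lock_pattern(access_pattern_t p, object_t *start, object_t **held)
{
    int n = collect_objects(p, start, held);
    qsort(held, n, sizeof held[0], cmp_rank);
    for (int i = 0; i < n; i++)
        pthread_mutex_lock(&held[i]->lock);
    return n;
}

static void unlock_pattern(object_t **held, int n)
{
    for (int i = n - 1; i >= 0; i--)   /* reverse order of acquisition */
        pthread_mutex_unlock(&held[i]->lock);
}
```

So functionA() boils down to "n = lock_pattern(ACCESS_A_B_C, a, held); ...modify A, B, C...; unlock_pattern(held, n);" and never has to think about lock ordering itself.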
In a nutshell, I'm implementing ad-hoc software transactional memory specific to the problem domain I'm working in, using locking instead of making "shadow copies" or anything like that. It's nasty and complicated, but also fun. What makes me giggle is the fact that I can guarantee deadlock freedom in my system, and I am 99% sure that existing components of the same type have nasty race conditions and deadlocks just waiting to pop up under heavy load.
Just goes to show that OS dev (especially memory management) can be good practice for the real world.
