Improve scoped spinlock implementations

Both ScopedAllMutexesLock and ScopedExpectedMutexesOnWeakRefAccessLock
are really simple spinlocks, but they were implemented in an
unorthodox way, with a CAS on unlock. I see no reason for that. Use
the standard (and faster) idiom instead.

The NanoSleep(100) waiting logic was probably suboptimal and definitely
misleading. I timed NanoSleep(100) on a Linux 4.4 host, and it takes
about 60 usecs, i.e. 60,000 nsecs, far more than the 100 nsecs the call
requests. By comparison, a no-op sched_yield takes about 1 usec. This
replaces it with waiting logic that should be generically usable. This
is no doubt overkill here, but the hope is that we can eventually reuse
it where it matters more.
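A generically usable back-off of the kind described above might look like the following sketch; the function name, phase boundaries, and sleep duration are illustrative assumptions, not the values in this change:

```cpp
#include <sched.h>
#include <time.h>
#include <cstdint>

// Sketch of tiered waiting logic: busy-spin first (cheapest when the
// lock holder runs on another core), then sched_yield() (~1 usec no-op
// on the host measured above), and only fall back to nanosleep() --
// whose real latency is tens of usecs even for tiny requests -- once
// the wait has clearly become long. Thresholds here are arbitrary.
inline void BackOff(uint32_t iteration) {
  if (iteration < 10) {
    // Pure spin; nothing to do.
  } else if (iteration < 20) {
    sched_yield();
  } else {
    // Requesting 1000 nsecs; the kernel will typically sleep longer.
    timespec ts = {0, 1000};
    nanosleep(&ts, nullptr);
  }
}
```

Callers would pass an incrementing iteration count from their spin loop, so short waits never pay the sleep-syscall cost.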

Test: Built and booted AOSP.

Change-Id: I6e47508ecb8d5e5d0b4f08c8e8f073ad7b1d192e
1 file changed