From 9e629adc4ae39b8ced5a0ff74d5a5049a44a7286 Mon Sep 17 00:00:00 2001 From: Hans Boehm Date: Mon, 14 Sep 2015 13:50:00 -0700 Subject: Update SMP documentation. The old version was seriously obsolete, primarily in that it ignored C11 and C++11 atomics and the programming model underlying them. As a result it paid way too much attention to hardware characteristics, which 0.001% of application programmers should really be aware of. And some of those hardware descriptions were also obsolete. This is a fairly complete rewrite. Bug: 18523857 Change-Id: Icc14a390f74486193486c2ba07a86b05611e7d3c --- docs/html/training/articles/smp.jd | 2216 ++++++++++++++---------------------- 1 file changed, 861 insertions(+), 1355 deletions(-) diff --git a/docs/html/training/articles/smp.jd b/docs/html/training/articles/smp.jd index 0b45987558d6..20d2ee064fcd 100644 --- a/docs/html/training/articles/smp.jd +++ b/docs/html/training/articles/smp.jd @@ -11,27 +11,12 @@ page.article=true
  • Theory
    1. Memory consistency models -
        -
      1. Processor consistency
      2. -
      3. CPU cache behavior
      4. -
      5. Observability
      6. -
      7. ARM’s weak ordering
      8. -
      -
    2. -
    3. Data memory barriers -
        -
      1. Store/store and load/load
      2. -
      3. Load/store and store/load
      4. -
      5. Barrier instructions
      6. -
      7. Address dependencies and causal consistency
      8. -
      9. Memory barrier summary
      10. -
    4. -
    5. Atomic operations +
    6. Data-race-free programming
        -
      1. Atomic essentials
      2. -
      3. Atomic + barrier pairing
      4. -
      5. Acquire and release
      6. +
      7. What's a "data race"?
      8. +
      9. Avoiding data races
      10. +
      11. When memory reordering becomes visible
    @@ -51,18 +36,21 @@ page.article=true
  • What to do -
      -
    1. General advice
    2. -
    3. Synchronization primitive guarantees
    4. -
    5. Upcoming changes to C/C++
    6. -
  • +
  • A little more about weak memory orders +
      +
    1. Non-racing accesses
    2. +
    3. Result is not relied upon for correctness
    4. +
    5. Atomically modified but unread data
    6. +
    7. Simple flag communication
    8. +
    9. Immutable fields
    10. +
    +
  • Closing Notes
  • Appendix
      -
    1. SMP failure example
    2. Implementing synchronization stores
    3. Further reading
    @@ -73,15 +61,10 @@ page.article=true

    Android 3.0 and later platform versions are optimized to support multiprocessor architectures. This document introduces issues that -can arise when writing code for symmetric multiprocessor systems in C, C++, and the Java +can arise when writing multithreaded code for symmetric multiprocessor systems in C, C++, and the Java programming language (hereafter referred to simply as “Java” for the sake of -brevity). It's intended as a primer for Android app developers, not as a complete -discussion on the subject. The focus is on the ARM CPU architecture.

    - -

    If you’re in a hurry, you can skip the Theory section -and go directly to Practice for best practices, but this -is not recommended.

    - +brevity). It's intended as a primer for Android app developers, not as a complete +discussion on the subject.

    Introduction

    @@ -89,35 +72,38 @@ is not recommended.

    which two or more identical CPU cores share access to main memory. Until a few years ago, all Android devices were UP (Uni-Processor).

    -

    Most — if not all — Android devices do have multiple CPUs, but generally one -of them is used to run applications while others manage various bits of device -hardware (for example, the radio). The CPUs may have different architectures, and the -programs running on them can’t use main memory to communicate with each +

Most — if not all — Android devices always had multiple CPUs, but +in the past only one of them was used to run applications while others managed various bits of device +hardware (for example, the radio). The CPUs may have had different architectures, and the +programs running on them couldn’t use main memory to communicate with each other.

    Most Android devices sold today are built around SMP designs, -making things a bit more complicated for software developers. The sorts of race -conditions you might encounter in a multi-threaded program are much worse on SMP -when two or more of your threads are running simultaneously on different cores. -What’s more, SMP on ARM is more challenging to work with than SMP on x86. Code -that has been thoroughly tested on x86 may break badly on ARM.

    +making things a bit more complicated for software developers. Race conditions +in a multi-threaded program may not cause visible problems on a uniprocessor, +but may fail regularly when two or more of your threads +are running simultaneously on different cores. +What’s more, code may be more or less prone to failures when run on different +processor architectures, or even on different implementations of the same +architecture. Code that has been thoroughly tested on x86 may break badly on ARM. +Code may start to fail when recompiled with a more modern compiler.

    The rest of this document will explain why, and tell you what you need to do to ensure that your code behaves correctly.

    -

    Theory

    +

    Memory consistency models: Why SMPs are a bit different

    This is a high-speed, glossy overview of a complex subject. Some areas will -be incomplete, but none of it should be misleading or wrong.

    +be incomplete, but none of it should be misleading or wrong. As you +will see in the next section, the details here are usually not important.

    See Further reading at the end of the document for pointers to more thorough treatments of the subject.

    -

    Memory consistency models

    -

    Memory consistency models, or often just “memory models”, describe the -guarantees the hardware architecture makes about memory accesses. For example, +guarantees the programming language or hardware architecture +makes about memory accesses. For example, if you write a value to address A, and then write a value to address B, the model might guarantee that every CPU core sees those writes happen in that order.

    @@ -129,23 +115,26 @@ Gharachorloo):

    +

    Let's assume temporarily that we have a very simple compiler or interpreter +that introduces no surprises: It translates +assignments in the source code to load and store instructions in exactly the +corresponding order, one instruction per access. We'll also assume for +simplicity that each thread executes on its own processor. +

    If you look at a bit of code and see that it does some reads and writes from memory, on a sequentially-consistent CPU architecture you know that the code will do those reads and writes in the expected order. It’s possible that the CPU is actually reordering instructions and delaying reads and writes, but there is no way for code running on the device to tell that the CPU is doing anything -other than execute instructions in a straightforward manner. (We’re ignoring -memory-mapped device driver I/O for the moment.)

    +other than execute instructions in a straightforward manner. (We’ll ignore +memory-mapped device driver I/O.)

    To illustrate these points it’s useful to consider small snippets of code, -commonly referred to as litmus tests. These are assumed to execute in -program order, that is, the order in which the instructions appear here is -the order in which the CPU will execute them. We don’t want to consider -instruction reordering performed by compilers just yet.

    +commonly referred to as litmus tests.

    Here’s a simple example, with code running on two threads:

    @@ -205,19 +194,80 @@ executing, the registers will be in one of the following states:

    the reads or the writes would have to happen out of order. On a sequentially-consistent machine, that can’t happen.

    -

    Most uni-processors, including x86 and ARM, are sequentially consistent. -Most SMP systems, including x86 and ARM, are not.

    - -

    Processor consistency

    - -

    x86 SMP provides processor consistency, which is slightly weaker than -sequential. While the architecture guarantees that loads are not reordered with -respect to other loads, and stores are not reordered with respect to other -stores, it does not guarantee that a store followed by a load will be observed -in the expected order.

    - -

    Consider the following example, which is a piece of Dekker’s Algorithm for -mutual exclusion:

    +

    Uni-processors, including x86 and ARM, are normally sequentially consistent. +Threads appear to execute in interleaved fashion, as the OS kernel switches +between them. Most SMP systems, including x86 and ARM, +are not sequentially consistent. For example, it is common for +hardware to buffer stores on their way to memory, so that they +don't immediately reach memory and become visible to other cores.

    + +

    The details vary substantially. For example, x86, though not sequentially +consistent, still guarantees that reg0 = 5 and reg1 = 0 remains impossible. +Stores are buffered, but their order is maintained. +ARM, on the other hand, does not. The order of buffered stores is not +maintained, and stores may not reach all other cores at the same time. +These differences are important to assembly programmers. +However, as we will see below, C, C++, or Java programmers can +and should program in a way that hides such architectural differences.

    + +

    So far, we've unrealistically assumed that it is only the hardware that +reorders instructions. In reality, the compiler also reorders instructions to +improve performance. In our example, the compiler might decide that some later +code in Thread 2 needed the value of reg1 before it needed reg0, and thus load +reg1 first. Or some prior code may already have loaded A, and the compiler +might decide to reuse that value instead of loading A again. In either case, +the loads to reg0 and reg1 might be reordered.

    + +

    Reordering accesses to different memory locations, +either in the hardware, or in the compiler, is +allowed, since it doesn't affect the execution of a single thread, and +it can significantly improve performance. As we will see, with a bit of care, +we can also prevent it from affecting the results of multithreaded programs.

    + +

    Since compilers can also reorder memory accesses, this problem is actually +not new to SMPs. Even on a uniprocessor, a compiler could reorder the loads to +reg0 and reg1 in our example, and Thread 1 could be scheduled between the +reordered instructions. But if our compiler happened to not reorder, we might +never observe this problem. On most ARM SMPs, even without compiler +reordering, the reordering will probably be seen, possibly after a very large +number of successful executions. Unless you're programming in assembly +language, SMPs generally just make it more likely you'll see problems that were +there all along.

    + +

    Data-race-free programming

    + +

    Fortunately, there is usually an easy way to avoid thinking about any of +these details. If you follow some straightforward rules, it's usually safe +to forget all of the preceding section except the "sequential consistency" part. +Unfortunately, the other complications may become visible if you +accidentally violate those rules. + +

    Modern programming languages encourage what's known as a "data-race-free" +programming style. So long as you promise not to introduce "data races", +and avoid a handful of constructs that tell the compiler otherwise, the compiler +and hardware promise to provide sequentially consistent results. This doesn't +really mean they avoid memory access reordering. It does mean that if you +follow the rules you won't be able to tell that memory accesses are being +reordered. It's a lot like telling you that sausage is a delicious and +appetizing food, so long as you promise not to visit the +sausage factory. Data races are what expose the ugly truth about memory +reordering.

    + +

    What's a "data race"?

    + +

    A data race occurs when at least two threads simultaneously access +the same ordinary data, and at least one of them modifies it. By "ordinary +data" we mean something that's not specifically a synchronization object +intended for thread communication. Mutexes, condition variables, Java +volatiles, or C++ atomic objects are not ordinary data, and their accesses +are allowed to race. In fact they are used to prevent data races on other +objects.
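For illustration, here is a minimal C++11 sketch of an actual data race (the variable and function names are ours, not from the article): two threads modify the same ordinary int with no synchronization at all.

#include <thread>

int counter = 0;                       // ordinary data, no synchronization

void hammer() {
    for (int i = 0; i < 100000; ++i)
        ++counter;                     // unsynchronized read-modify-write
}

int main() {
    std::thread t1(hammer);
    std::thread t2(hammer);            // both threads write counter concurrently: a data race
    t1.join();
    t2.join();
    // The final value of counter is unpredictable; the behavior is undefined.
    // Declaring counter as std::atomic<int>, or guarding it with a mutex,
    // removes the data race.
    return 0;
}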

    + +

    In order to determine whether two threads simultaneously access the same +memory location, we can ignore the memory-reordering discussion from above, and +assume sequential consistency. The following program doesn't have a data race +if A and B are ordinary boolean variables that are +initially false:

    @@ -225,168 +275,91 @@ mutual exclusion:

    - - + +
    Thread 2
    A = true
    -reg1 = B
    -if (reg1 == false)
    -    critical-stuff
    B = true
    -reg2 = A
    -if (reg2 == false)
    -    critical-stuff
    if (A) B = trueif (B) A = true
    -

    The idea is that thread 1 uses A to indicate that it’s busy, and thread 2 -uses B. Thread 1 sets A and then checks to see if B is set; if not, it can -safely assume that it has exclusive access to the critical section. Thread 2 -does something similar. (If a thread discovers that both A and B are set, a -turn-taking algorithm is used to ensure fairness.)

    - -

    On a sequentially-consistent machine, this works correctly. On x86 and ARM -SMP, the store to A and the load from B in thread 1 can be “observed” in a -different order by thread 2. If that happened, we could actually appear to -execute this sequence (where blank lines have been inserted to highlight the -apparent order of operations):

    - - - - - - - - - - - -
    Thread 1Thread 2
    reg1 = B
    -
    -
    -A = true
    -if (reg1 == false)
    -    critical-stuff

    -B = true
    -reg2 = A
    -
    -if (reg2 == false)
    -    critical-stuff
    - -

    This results in both reg1 and reg2 set to “false”, allowing the threads to -execute code in the critical section simultaneously. To understand how this can -happen, it’s useful to know a little about CPU caches.

    - -

    CPU cache behavior

    - -

    This is a substantial topic in and of itself. An extremely brief overview -follows. (The motivation for this material is to provide some basis for -understanding why SMP systems behave as they do.)

    - -

    Modern CPUs have one or more caches between the processor and main memory. -These are labeled L1, L2, and so on, with the higher numbers being successively -“farther” from the CPU. Cache memory adds size and cost to the hardware, and -increases power consumption, so the ARM CPUs used in Android devices typically -have small L1 caches and little or no L2/L3.

    - -

    Loading or storing a value into the L1 cache is very fast. Doing the same to -main memory can be 10-100x slower. The CPU will therefore try to operate out of -the cache as much as possible. The write policy of a cache determines when data -written to it is forwarded to main memory. A write-through cache will initiate -a write to memory immediately, while a write-back cache will wait until it runs -out of space and has to evict some entries. In either case, the CPU will -continue executing instructions past the one that did the store, possibly -executing dozens of them before the write is visible in main memory. (While the -write-through cache has a policy of immediately forwarding the data to main -memory, it only initiates the write. It does not have to wait -for it to finish.)

    - -

    The cache behavior becomes relevant to this discussion when each CPU core has -its own private cache. In a simple model, the caches have no way to interact -with each other directly. The values held by core #1’s cache are not shared -with or visible to core #2’s cache except as loads or stores from main memory. -The long latencies on memory accesses would make inter-thread interactions -sluggish, so it’s useful to define a way for the caches to share data. This -sharing is called cache coherency, and the coherency rules are defined -by the CPU architecture’s cache consistency model.

    - -

    With that in mind, let’s return to the Dekker example. When core 1 executes -“A = 1”, the value gets stored in core 1’s cache. When core 2 executes “if (A -== 0)”, it might read from main memory or it might read from core 2’s cache; -either way it won’t see the store performed by core 1. (“A” could be in core -2’s cache because of a previous load from “A”.)

    - -

    For the memory consistency model to be sequentially consistent, core 1 would -have to wait for all other cores to be aware of “A = 1” before it could execute -“if (B == 0)” (either through strict cache coherency rules, or by disabling the -caches entirely so everything operates out of main memory). This would impose a -performance penalty on every store operation. Relaxing the rules for the -ordering of stores followed by loads improves performance but imposes a burden -on software developers.

    - -

    The other guarantees made by the processor consistency model are less -expensive to make. For example, to ensure that memory writes are not observed -out of order, it just needs to ensure that the stores are published to other -cores in the same order that they were issued. It doesn’t need to wait for -store #1 to finish being published before it can start on store -#2, it just needs to ensure that it doesn’t finish publishing #2 before it -finishes publishing #1. This avoids a performance bubble.

    - -

    Relaxing the guarantees even further can provide additional opportunities for -CPU optimization, but creates more opportunities for code to behave in ways the -programmer didn’t expect.

    - -

    One additional note: CPU caches don’t operate on individual bytes. Data is -read or written as cache lines; for many ARM CPUs these are 32 bytes. If you -read data from a location in main memory, you will also be reading some adjacent -values. Writing data will cause the cache line to be read from memory and -updated. As a result, you can cause a value to be loaded into cache as a -side-effect of reading or writing something nearby, adding to the general aura -of mystery.

    - -

    Observability

    - -

    Before going further, it’s useful to define in a more rigorous fashion what -is meant by “observing” a load or store. Suppose core 1 executes “A = 1”. The -store is initiated when the CPU executes the instruction. At some -point later, possibly through cache coherence activity, the store is -observed by core 2. In a write-through cache it doesn’t really -complete until the store arrives in main memory, but the memory -consistency model doesn’t dictate when something completes, just when it can be -observed.

    - - -

    (In a kernel device driver that accesses memory-mapped I/O locations, it may -be very important to know when things actually complete. We’re not going to go -into that here.)

    - -

    Observability may be defined as follows:

    - - - - -

    A less formal way to describe it (where “you” and “I” are CPU cores) would be:

    - - +

    Since operations are not reordered, both conditions will evaluate to false, and +neither variable is ever updated. Thus there cannot be a data race. There is +no need to think about what might happen if the load from A +and store to B in +Thread 1 were somehow reordered. The compiler is not allowed to reorder Thread +1 by rewriting it as "B = true; if (!A) B = false". That would be +like making sausage in the middle of town in broad daylight. + +
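For readers who prefer code to tables, here is a plain C++11 rendering of the litmus test above (a sketch under the stated assumptions: A and B are ordinary global booleans, initially false):

#include <thread>

bool A = false, B = false;             // ordinary (non-atomic) data

void thread1() { if (A) B = true; }    // never stores, because A stays false
void thread2() { if (B) A = true; }    // never stores, because B stays false

int main() {
    std::thread t1(thread1), t2(thread2);
    t1.join(); t2.join();
    // Under sequential consistency neither condition can become true, so
    // neither variable is ever written and there is no data race. The
    // compiler may not rewrite thread1 as "B = true; if (!A) B = false;".
    return 0;
}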

    Data races are officially defined on basic built-in types like integers and +references or pointers. Assigning to an int while simultaneously +reading it in another thread is clearly a data race. But both the C++ +standard library and +the Java Collections libraries are written to allow you to also reason about +data races at the library level. They promise to not introduce data races +unless there are concurrent accesses to the same container, at least one of +which updates it. Updating a set<T> in one thread while +simultaneously reading it in another allows the library to introduce a +data race, and can thus be thought of informally as a "library-level data race". +Conversely, updating one set<T> in one thread, while reading +a different one in another, does not result in a data race, because the +library promises not to introduce a (low-level) data race in that case. + +
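As a concrete sketch of the container case (the names below are illustrative, not from the original text): accesses to the same std::set from two threads must be serialized, for example with a mutex, while accesses to two different sets need no synchronization.

#include <mutex>
#include <set>
#include <thread>

std::set<int> shared_set;              // accessed by two threads
std::mutex shared_set_mutex;           // guards shared_set
std::set<int> private_set;             // only ever touched by one thread

void writer() {
    std::lock_guard<std::mutex> guard(shared_set_mutex);
    shared_set.insert(42);             // same container: hold the lock
}

void reader() {
    std::lock_guard<std::mutex> guard(shared_set_mutex);
    bool found = shared_set.count(42) > 0;   // same container: hold the lock
    (void)found;
}

void independent() {
    private_set.insert(7);             // different container: no lock needed
}

int main() {
    std::thread t1(writer), t2(reader), t3(independent);
    t1.join(); t2.join(); t3.join();
    return 0;
}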

    Normally concurrent accesses to different fields in a data structure +cannot introduce a data race. However there is one important exception to +this rule: Contiguous sequences of bit-fields in C or C++ are treated as +a single "memory location". Accessing any bit-field in such a sequence +is treated as accessing all of them for purposes of determining the +existence of a data race. This reflects the inability of common hardware +to update individual bits without also reading and re-writing adjacent bits. +Java programmers have no analogous concerns.
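For example (an illustrative C++11 sketch; the struct and field names are made up), the first pair of threads below races even though each touches a different bit-field, while the second pair does not:

#include <thread>

struct PackedFlags {
    unsigned a : 1;                    // a and b are adjacent bit-fields:
    unsigned b : 1;                    // together they form one "memory location"
};

struct SeparateFlags {
    bool a;                            // ordinary members are
    bool b;                            // distinct memory locations
};

PackedFlags packed{};                  // zero-initialized
SeparateFlags separate{};

int main() {
    // Data race: storing to packed.a may read and rewrite the byte holding packed.b.
    std::thread t1([] { packed.a = 1; });
    std::thread t2([] { packed.b = 1; });

    // No data race: separate.a and separate.b are independent memory locations.
    std::thread t3([] { separate.a = true; });
    std::thread t4([] { separate.b = true; });

    t1.join(); t2.join(); t3.join(); t4.join();
    return 0;
}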

    + +

    Avoiding data races

    + +Modern programming languages provide a number of synchronization +mechanisms to avoid data races. The most basic tools are: -

    The notion of observing a write is intuitive; observing a read is a bit less -so (don’t worry, it grows on you).

    +
    +
    Locks or Mutexes
    + +
Mutexes (C++11 std::mutex, or pthread_mutex_t), or +synchronized blocks in Java can be used to ensure that certain +sections of code do not run concurrently with other sections of code accessing +the same data. We'll refer to these and other similar facilities generically +as "locks." Consistently acquiring a specific lock before accessing a shared +data structure and releasing it afterwards prevents data races when accessing +the data structure. It also ensures that updates and accesses are atomic, i.e. no +other update to the data structure can run in the middle. This is deservedly +by far the most common tool for preventing data races. The use of Java +synchronized blocks or C++ lock_guard +or unique_lock ensures that locks are properly released in the +event of an exception. Both tools are illustrated in the sketch below. +
    + +
    Volatile/atomic variables
    + +
Java provides volatile fields that support concurrent access +without introducing data races. Since 2011, C and C++ support +atomic variables and fields with similar semantics. These are +typically more difficult to use than locks, since they only ensure that +individual accesses to a single variable are atomic. (In C++ this normally +extends to simple read-modify-write operations, like increments. Java +requires special method calls for that.) +Unlike locks, volatile or atomic variables can't +be used directly to prevent other threads from interfering with longer code sequences.
    -

    With this in mind, we’re ready to talk about ARM.

    +
    -

    ARM's weak ordering

    +

    It's important to note that volatile has very different +meanings in C++ and Java. In C++, volatile does not prevent data +races, though older code often uses it as a workaround for the lack of +atomic objects. This is no longer recommended; in +C++, use atomic<T> for variables that can be concurrently +accessed by multiple threads. C++ volatile is meant for +device registers and the like.
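A brief C++11 sketch of both basic tools (the names here are invented for illustration; this code is not part of the original text). The lock keeps a two-field update atomic as a unit, while the atomic variable only makes each individual access to it atomic:

#include <atomic>
#include <mutex>
#include <thread>

struct Account { long balance = 0; long transactions = 0; };

Account account;                        // shared data structure
std::mutex account_mutex;               // always acquired before touching account
std::atomic<long> total_deposits(0);    // single shared variable, accessed atomically

void deposit(long amount) {
    {
        std::lock_guard<std::mutex> guard(account_mutex);
        account.balance += amount;       // the whole two-field update is protected;
        account.transactions += 1;       // no other thread can run in the middle,
    }                                    // and the lock is released even on an exception
    total_deposits.fetch_add(amount);    // atomic increment of one variable only
}

int main() {
    std::thread t1(deposit, 10), t2(deposit, 20);
    t1.join(); t2.join();
    return 0;
}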

    -

    ARM SMP provides weak memory consistency guarantees. It does not guarantee that -loads or stores are ordered with respect to each other.

    +

C/C++ atomic variables or Java volatile variables +can be used to prevent data races on other variables. If flag is +declared to have type atomic<bool> +or atomic_bool (C/C++) or volatile boolean (Java), +and is initially false, then the following snippet is data-race-free:

    @@ -394,51 +367,44 @@ loads or stores are ordered with respect to each other.

    - - + +
    Thread 2
    A = 41
    -B = 1 // “A is ready”
    loop_until (B == 1)
    -reg = A
    A = ...
    +  flag = true
    +
    while (!flag) {}
    +... = A
    +
    -

    Recall that all addresses are initially zero. The “loop_until” instruction -reads B repeatedly, looping until we read 1 from B. The idea here is that -thread 2 is waiting for thread 1 to update A. Thread 1 sets A, and then sets B -to 1 to indicate data availability.

    - -

    On x86 SMP, this is guaranteed to work. Thread 2 will observe the stores -made by thread 1 in program order, and thread 1 will observe thread 2’s loads in -program order.

    +

    Since Thread 2 waits for flag to be set, the access to +A in Thread 2 must happen after, and not concurrently with, the +assignment to A in Thread 1. Thus there is no data race on +A. The race on flag doesn't count as a data race, +since volatile/atomic accesses are not "ordinary memory accesses".
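Here is the same litmus test as compilable C++11 code (a sketch; as in earlier examples we assume A is an int and 41 is the value being published):

#include <atomic>
#include <thread>

int A = 0;                             // ordinary data being published
std::atomic<bool> flag(false);         // synchronization variable

void thread1() {
    A = 41;                            // ordinary store
    flag = true;                       // "A is ready" (sequentially consistent store)
}

void thread2() {
    while (!flag) {}                   // wait until Thread 1 publishes A
    int reg = A;                       // guaranteed to read 41; no data race on A
    (void)reg;
}

int main() {
    std::thread t1(thread1), t2(thread2);
    t1.join(); t2.join();
    return 0;
}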

    -

    On ARM SMP, the loads and stores can be observed in any order. It is -possible, after all the code has executed, for reg to hold 0. It’s also -possible for it to hold 41. Unless you explicitly define the ordering, you -don’t know how this will come out.

    +

    The implementation is required to prevent or hide memory reordering +sufficiently to make code like the preceding litmus test behave as expected. +This normally makes volatile/atomic memory accesses +substantially more expensive than ordinary accesses.

    -

    (For those with experience on other systems, ARM’s memory model is equivalent -to PowerPC in most respects.)

    +

    Although the preceding example is data-race-free, locks together with +Object.wait() in Java or condition variables in C/C++ usually +provide a better solution that does not involve waiting in a loop while +draining battery power.
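For completeness, a hedged sketch of that alternative in C++11, using a mutex and std::condition_variable instead of spinning (the names are ours):

#include <condition_variable>
#include <mutex>
#include <thread>

int A = 0;
bool ready = false;                    // protected by m
std::mutex m;
std::condition_variable cv;

void producer() {
    A = 41;
    {
        std::lock_guard<std::mutex> guard(m);
        ready = true;                  // publish under the lock
    }
    cv.notify_one();                   // wake the waiting thread
}

void consumer() {
    std::unique_lock<std::mutex> guard(m);
    cv.wait(guard, [] { return ready; });   // sleeps instead of burning CPU
    int reg = A;                       // ordered after the producer's store to A
    (void)reg;
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join(); t2.join();
    return 0;
}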

    +

    When memory reordering becomes visible

    -

    Data memory barriers

    - -

    Memory barriers provide a way for your code to tell the CPU that memory -access ordering matters. ARM/x86 uniprocessors offer sequential consistency, -and thus have no need for them. (The barrier instructions can be executed but -aren’t useful; in at least one case they’re hideously expensive, motivating -separate builds for SMP targets.)

    - -

    There are four basic situations to consider:

    +Data-race-free programming normally saves us from having to explicitly deal +with memory access reordering issues. However, there are several cases in +which reordering does become visible:
      -
    1. store followed by another store
    2. -
    3. load followed by another load
    4. -
    5. load followed by store
    6. -
    7. store followed by load
    8. -
    - -

    Store/store and load/load

    - -

    Recall our earlier example:

    +
  • If your program has a bug resulting in an unintentional data race, +compiler and hardware transformations can become visible, and the behavior +of your program may be surprising. For example, if we forgot to declare +flag volatile in the preceding example, Thread 2 may see an +uninitialized A. Or the compiler may decide that flag can't +possibly change during Thread 2's loop and transform the program to @@ -446,608 +412,47 @@ separate builds for SMP targets.)

    - - + +
    Thread 2
    A = 41
    -B = 1 // “A is ready”
    loop_until (B == 1)
    -reg = A
    A = ...
    +  flag = true
    +
    reg0 = flag; +while (!reg0) {}
    +... = A +
    - -

    Thread 1 needs to ensure that the store to A happens before the store to B. -This is a “store/store” situation. Similarly, thread 2 needs to ensure that the -load of B happens before the load of A; this is a load/load situation. As -mentioned earlier, the loads and stores can be observed in any order.

    - -
    -
    -

    Going back to the cache discussion, assume A and B are on separate cache -lines, with minimal cache coherency. If the store to A stays local but the -store to B is published, core 2 will see B=1 but won’t see the update to A. On -the other side, assume we read A earlier, or it lives on the same cache line as -something else we recently read. Core 2 spins until it sees the update to B, -then loads A from its local cache, where the value is still zero.

    -
    -
    - -

    We can fix it like this:

    - - - - - - - - - - -
    Thread 1Thread 2
    A = 41
    -store/store barrier
    -B = 1 // “A is ready”
    loop_until (B == 1)
    -load/load barrier
    -reg = A
    - -

    The store/store barrier guarantees that all observers will -observe the write to A before they observe the write to B. It makes no -guarantees about the ordering of loads in thread 1, but we don’t have any of -those, so that’s okay. The load/load barrier in thread 2 makes a similar -guarantee for the loads there.

    - -

    Since the store/store barrier guarantees that thread 2 observes the stores in -program order, why do we need the load/load barrier in thread 2? Because we -also need to guarantee that thread 1 observes the loads in program order.

    - -
    -
    -

    The store/store barrier could work by flushing all -dirty entries out of the local cache, ensuring that other cores see them before -they see any future stores. The load/load barrier could purge the local cache -completely and wait for any “in-flight” loads to finish, ensuring that future -loads are observed after previous loads. What the CPU actually does doesn’t -matter, so long as the appropriate guarantees are kept. If we use a barrier in -core 1 but not in core 2, core 2 could still be reading A from its local -cache.

    -
    -
    - -

    Because the architectures have different memory models, these barriers are -required on ARM SMP but not x86 SMP.

    - -

    Load/store and store/load

    - -

    The Dekker’s Algorithm fragment shown earlier illustrated the need for a -store/load barrier. Here’s an example where a load/store barrier is -required:

    - - - - - - - - - - -
    Thread 1Thread 2
    reg = A
    -B = 1 // “I have latched A”
    loop_until (B == 1)
    -A = 41 // update A
    - -

Thread 2 could observe thread 1’s store of B=1 before it observes thread 1’s -load from A, and as a result store A=41 before thread 1 has a chance to read A. -Inserting a load/store barrier in each thread solves the problem:

    - - - - - - - - - - -
    Thread 1Thread 2
    reg = A
    -load/store barrier
    -B = 1 // “I have latched A”
    loop_until (B == 1)
    -load/store barrier
    -A = 41 // update A
    - -
    -
    -

    A store to local cache may be observed before a load from main memory, -because accesses to main memory are so much slower. In this case, assume core -1’s cache has the cache line for B but not A. The load from A is initiated, and -while that’s in progress execution continues. The store to B happens in local -cache, and by some means becomes available to core 2 while the load from A is -still in progress. Thread 2 is able to exit the loop before it has observed -thread 1’s load from A.

    - -

    A thornier question is: do we need a barrier in thread 2? If the CPU doesn’t -perform speculative writes, and doesn’t execute instructions out of order, can -thread 2 store to A before thread 1’s read if thread 1 guarantees the load/store -ordering? (Answer: no.) What if there’s a third core watching A and B? -(Answer: now you need one, or you could observe B==0 / A==41 on the third core.) - It’s safest to insert barriers in both places and not worry about the -details.

    -
    -
    - -

    As mentioned earlier, store/load barriers are the only kind required on x86 -SMP.

    - -

    Barrier instructions

    - -

    Different CPUs provide different flavors of barrier instruction. For -example:

    - - - -

    “Full barrier” means all four categories are included.

    - -

    It is important to recognize that the only thing guaranteed by barrier -instructions is ordering. Do not treat them as cache coherency “sync points” or -synchronous “flush” instructions. The ARM “dmb” instruction has no direct -effect on other cores. This is important to understand when trying to figure -out where barrier instructions need to be issued.

    - - -

    Address dependencies and causal consistency

    - -

    (This is a slightly more advanced topic and can be skipped.) - -

    The ARM CPU provides one special case where a load/load barrier can be -avoided. Consider the following example from earlier, modified slightly:

    - - - - - - - - - - -
    Thread 1Thread 2
    [A+8] = 41
    -store/store barrier
    -B = 1 // “A is ready”
    loop:
    -    reg0 = B
    -    if (reg0 == 0) goto loop
    -reg1 = 8
    -reg2 = [A + reg1]
    - -

    This introduces a new notation. If “A” refers to a memory address, “A+n” -refers to a memory address offset by 8 bytes from A. If A is the base address -of an object or array, [A+8] could be a field in the object or an element in the -array.

    - -

    The “loop_until” seen in previous examples has been expanded to show the load -of B into reg0. reg1 is assigned the numeric value 8, and reg2 is loaded from -the address [A+reg1] (the same location that thread 1 is accessing).

    - -

    This will not behave correctly because the load from B could be observed -after the load from [A+reg1]. We can fix this with a load/load barrier after -the loop, but on ARM we can also just do this:

    - - - - - - - - - - -
    Thread 1Thread 2
    [A+8] = 41
    -store/store barrier
    -B = 1 // “A is ready”
    loop:
    -    reg0 = B
    -    if (reg0 == 0) goto loop
    -reg1 = 8 + (reg0 & 0)
    -reg2 = [A + reg1]
    - -

    What we’ve done here is change the assignment of reg1 from a constant (8) to -a value that depends on what we loaded from B. In this case, we do a bitwise -AND of the value with 0, which yields zero, which means reg1 still has the value -8. However, the ARM CPU believes that the load from [A+reg1] depends upon the -load from B, and will ensure that the two are observed in program order.

    - -

    This is called an address dependency. Address dependencies exist -when the value returned by a load is used to compute the address of a subsequent -load or store. It can let you avoid the need for an explicit barrier in certain -situations.

    - -

    ARM does not provide control dependency guarantees. To illustrate -this it’s necessary to dip into ARM code for a moment: (Barrier Litmus Tests and Cookbook).

    - -
    -LDR r1, [r0]
    -CMP r1, #55
    -LDRNE r2, [r3]
    -
    - -

    The loads from r0 and r3 may be observed out of order, even though the load -from r3 will not execute at all if [r0] doesn’t hold 55. Inserting AND r1, r1, -#0 and replacing the last instruction with LDRNE r2, [r3, r1] would ensure -proper ordering without an explicit barrier. (This is a prime example of why -you can’t think about consistency issues in terms of instruction execution. -Always think in terms of memory accesses.)

    - -

    While we’re hip-deep, it’s worth noting that ARM does not provide causal -consistency:

    - - - - - - - - - - - - -
    Thread 1Thread 2Thread 3
    A = 1loop_until (A == 1)
    -B = 1
    loop:
    -  reg0 = B
    -  if (reg0 == 0) goto loop
    -reg1 = reg0 & 0
    -reg2 = [A+reg1]
    - -

    Here, thread 1 sets A, signaling thread 2. Thread 2 sees that and sets B to -signal thread 3. Thread 3 sees it and loads from A, using an address dependency -to ensure that the load of B and the load of A are observed in program -order.

    - -

    It’s possible for reg2 to hold zero at the end of this. The fact that a -store in thread 1 causes something to happen in thread 2 which causes something -to happen in thread 3 does not mean that thread 3 will observe the stores in -that order. (Inserting a load/store barrier in thread 2 fixes this.)

    - -

    Memory barrier summary

    - -

    Barriers come in different flavors for different situations. While there can -be performance advantages to using exactly the right barrier type, there are -code maintenance risks in doing so — unless the person updating the code -fully understands it, they might introduce the wrong type of operation and cause -a mysterious breakage. Because of this, and because ARM doesn’t provide a wide -variety of barrier choices, many atomic primitives use full -barrier instructions when a barrier is required.

    - -

    The key thing to remember about barriers is that they define ordering. Don’t -think of them as a “flush” call that causes a series of actions to happen. -Instead, think of them as a dividing line in time for operations on the current -CPU core.

    - - -

    Atomic operations

    - -

    Atomic operations guarantee that an operation that requires a series of steps -always behaves as if it were a single operation. For example, consider a -non-atomic increment (“++A”) executed on the same variable by two threads -simultaneously:

    - - - - - - - - - - -
    Thread 1Thread 2
    reg = A
    -reg = reg + 1
    -A = reg
    reg = A
    -reg = reg + 1
    -A = reg
    - -

    If the threads execute concurrently from top to bottom, both threads will -load 0 from A, increment it to 1, and store it back, leaving a final result of -1. If we used an atomic increment operation, you would be guaranteed that the -final result will be 2.

    - -

    Atomic essentials

    - -

    The most fundamental operations — loading and storing 32-bit values -— are inherently atomic on ARM so long as the data is aligned on a 32-bit -boundary. For example:

    - - - - - - - - - - -
    Thread 1Thread 2
    reg = 0x00000000
    -A = reg
    reg = 0xffffffff
    -A = reg
    - -

    The CPU guarantees that A will hold 0x00000000 or 0xffffffff. It will never -hold 0x0000ffff or any other partial “mix” of bytes.

    - -
    -
    -

    The atomicity guarantee is lost if the data isn’t aligned. Misaligned data -could straddle a cache line, so other cores could see the halves update -independently. Consequently, the ARMv7 documentation declares that it provides -“single-copy atomicity” for all byte accesses, halfword accesses to -halfword-aligned locations, and word accesses to word-aligned locations. -Doubleword (64-bit) accesses are not atomic, unless the -location is doubleword-aligned and special load/store instructions are used. -This behavior is important to understand when multiple threads are performing -unsynchronized updates to packed structures or arrays of primitive types.

    -
    -
    - -

    There is no need for 32-bit “atomic read” or “atomic write” functions on ARM -or x86. Where one is provided for completeness, it just does a trivial load or -store.

    - -

    Operations that perform more complex actions on data in memory are -collectively known as read-modify-write (RMW) instructions, because -they load data, modify it in some way, and write it back. CPUs vary widely in -how these are implemented. ARM uses a technique called “Load Linked / Store -Conditional”, or LL/SC.

    - -
    -
    -

    A linked or locked load reads the data from memory as -usual, but also establishes a reservation, tagging the physical memory address. -The reservation is cleared when another core tries to write to that address. To -perform an LL/SC, the data is read with a reservation, modified, and then a -conditional store instruction is used to try to write the data back. If the -reservation is still in place, the store succeeds; if not, the store will fail. -Atomic functions based on LL/SC usually loop, retrying the entire -read-modify-write sequence until it completes without interruption.

    -
    -
    - -

    It’s worth noting that the read-modify-write operations would not work -correctly if they operated on stale data. If two cores perform an atomic -increment on the same address, and one of them is not able to see what the other -did because each core is reading and writing from local cache, the operation -won’t actually be atomic. The CPU’s cache coherency rules ensure that the -atomic RMW operations remain atomic in an SMP environment.

    - -

    This should not be construed to mean that atomic RMW operations use a memory -barrier. On ARM, atomics have no memory barrier semantics. While a series of -atomic RMW operations on a single address will be observed in program order by -other cores, there are no guarantees when it comes to the ordering of atomic and -non-atomic operations.

    - -

    It often makes sense to pair barriers and atomic operations together. The -next section describes this in more detail.

    - -

    Atomic + barrier pairing

    - -

    As usual, it’s useful to illuminate the discussion with an example. We’re -going to consider a basic mutual-exclusion primitive called a spin -lock. The idea is that a memory address (which we’ll call “lock”) -initially holds zero. When a thread wants to execute code in the critical -section, it sets the lock to 1, executes the critical code, and then changes it -back to zero when done. If another thread has already set the lock to 1, we sit -and spin until the lock changes back to zero.

    - -

    To make this work we use an atomic RMW primitive called -compare-and-swap. The function takes three arguments: the memory -address, the expected current value, and the new value. If the value currently -in memory matches what we expect, it is replaced with the new value, and the old -value is returned. If the current value is not what we expect, we don’t change -anything. A minor variation on this is called compare-and-set; instead -of returning the old value it returns a boolean indicating whether the swap -succeeded. For our needs either will work, but compare-and-set is slightly -simpler for examples, so we use it and just refer to it as “CAS”.

    - -

    The acquisition of the spin lock is written like this (using a C-like -language):

    - -
    do {
    -    success = atomic_cas(&lock, 0, 1)
    -} while (!success)
    -
    -full_memory_barrier()
    -
    -critical-section
    - -

    If no thread holds the lock, the lock value will be 0, and the CAS operation -will set it to 1 to indicate that we now have it. If another thread has it, the -lock value will be 1, and the CAS operation will fail because the expected -current value does not match the actual current value. We loop and retry. -(Note this loop is on top of whatever loop the LL/SC code might be doing inside -the atomic_cas function.)

    - -
    -
    -

    On SMP, a spin lock is a useful way to guard a small critical section. If we -know that another thread is going to execute a handful of instructions and then -release the lock, we can just burn a few cycles while we wait our turn. -However, if the other thread happens to be executing on the same core, we’re -just wasting time because the other thread can’t make progress until the OS -schedules it again (either by migrating it to a different core or by preempting -us). A proper spin lock implementation would optimistically spin a few times -and then fall back on an OS primitive (such as a Linux futex) that allows the -current thread to sleep while waiting for the other thread to finish up. On a -uniprocessor you never want to spin at all. For the sake of brevity we’re -ignoring all this.

    -
    -
    - -

    The memory barrier is necessary to ensure that other threads observe the -acquisition of the lock before they observe any loads or stores in the critical -section. Without that barrier, the memory accesses could be observed while the -lock is not held.

    - -

    The full_memory_barrier call here actually does -two independent operations. First, it issues the CPU’s full -barrier instruction. Second, it tells the compiler that it is not allowed to -reorder code around the barrier. That way, we know that the -atomic_cas call will be executed before anything in the critical -section. Without this compiler reorder barrier, the compiler has a -great deal of freedom in how it generates code, and the order of instructions in -the compiled code might be much different from the order in the source code.

    - -

    Of course, we also want to make sure that none of the memory accesses -performed in the critical section are observed after the lock is released. The -full version of the simple spin lock is:

    - -
    do {
    -    success = atomic_cas(&lock, 0, 1)   // acquire
    -} while (!success)
    -full_memory_barrier()
    -
    -critical-section
    -
    -full_memory_barrier()
    -atomic_store(&lock, 0)                  // release
    - -

    We perform our second CPU/compiler memory barrier immediately -before we release the lock, so that loads and stores in the -critical section are observed before the release of the lock.

    - -

    As mentioned earlier, the atomic_store operation is a simple -assignment on ARM and x86. Unlike the atomic RMW operations, we don’t guarantee -that other threads will see this value immediately. This isn’t a problem, -though, because we only need to keep the other threads out. The -other threads will stay out until they observe the store of 0. If it takes a -little while for them to observe it, the other threads will spin a little -longer, but we will still execute code correctly.

    - -

    It’s convenient to combine the atomic operation and the barrier call into a -single function. It also provides other advantages, which will become clear -shortly.

    - - -

    Acquire and release

    - -

    When acquiring the spinlock, we issue the atomic CAS and then the barrier. -When releasing the spinlock, we issue the barrier and then the atomic store. -This inspires a particular naming convention: operations followed by a barrier -are “acquiring” operations, while operations preceded by a barrier are -“releasing” operations. (It would be wise to install the spin lock example -firmly in mind, as the names are not otherwise intuitive.)

    - -

    Rewriting the spin lock example with this in mind:

    - -
    do {
    -    success = atomic_acquire_cas(&lock, 0, 1)
    -} while (!success)
    -
    -critical-section
    -
    -atomic_release_store(&lock, 0)
    - -

    This is a little more succinct and easier to read, but the real motivation -for doing this lies in a couple of optimizations we can now perform.
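In today's C++11 terms, the acquiring CAS and releasing store sketched above map onto a compare-exchange with memory_order_acquire and a store with memory_order_release. The following is a rough modern rendering, not code from the original article:

#include <atomic>

std::atomic<int> lock_word(0);          // 0 = free, 1 = held

void spin_lock() {
    int expected = 0;
    // Acquire the lock: retry the CAS until it succeeds, with acquire ordering
    // on success so the critical section cannot move above the acquisition.
    while (!lock_word.compare_exchange_weak(expected, 1,
                                            std::memory_order_acquire,
                                            std::memory_order_relaxed)) {
        expected = 0;                   // compare_exchange rewrites expected on failure
    }
}

void spin_unlock() {
    // Release the lock: a store with release ordering, so the critical section
    // cannot move below the release.
    lock_word.store(0, std::memory_order_release);
}

// Usage: spin_lock();  /* critical section */  spin_unlock();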

    - -

    First, consider atomic_release_store. We need to ensure that -the store of zero to the lock word is observed after any loads or stores in the -critical section above it. In other words, we need a load/store and store/store -barrier. In an earlier section we learned that these aren’t necessary on x86 -SMP -- only store/load barriers are required. The implementation of -atomic_release_store on x86 is therefore just a compiler reorder -barrier followed by a simple store. No CPU barrier is required.

    - -

    The second optimization mostly applies to the compiler (although some CPUs, -such as the Itanium, can take advantage of it as well). The basic principle is -that code can move across acquire and release barriers, but only in one -direction.

    - -

    Suppose we have a mix of locally-visible and globally-visible memory -accesses, with some miscellaneous computation as well:

    - -
    local1 = arg1 / 41
    -local2 = threadStruct->field2
    -threadStruct->field3 = local2
    -
    -do {
    -    success = atomic_acquire_cas(&lock, 0, 1)
    -} while (!success)
    -
    -local5 = globalStruct->field5
    -globalStruct->field6 = local5
    -
    -atomic_release_store(&lock, 0)
    - -

    Here we see two completely independent sets of operations. The first set -operates on a thread-local data structure, so we’re not concerned about clashes -with other threads. The second set operates on a global data structure, which -must be protected with a lock.

    - -

    A full compiler reorder barrier in the atomic ops will ensure that the -program order matches the source code order at the lock boundaries. However, -allowing the compiler to interleave instructions can improve performance. Loads -from memory can be slow, but the CPU can continue to execute instructions that -don’t require the result of that load while waiting for it to complete. The -code might execute more quickly if it were written like this instead:

    - -
    do {
    -    success = atomic_acquire_cas(&lock, 0, 1)
    -} while (!success)
    -
    -local2 = threadStruct->field2
    -local5 = globalStruct->field5
    -local1 = arg1 / 41
    -threadStruct->field3 = local2
    -globalStruct->field6 = local5
    -
    -atomic_release_store(&lock, 0)
    - -

    We issue both loads, do some unrelated computation, and then execute the -instructions that make use of the loads. If the integer division takes less -time than one of the loads, we essentially get it for free, since it happens -during a period where the CPU would have stalled waiting for a load to -complete.

    - -

    Note that all of the operations are now happening inside the -critical section. Since none of the “threadStruct” operations are visible -outside the current thread, nothing else can see them until we’re finished here, -so it doesn’t matter exactly when they happen.

    - -

    In general, it is always safe to move operations into a -critical section, but never safe to move operations out of a -critical section. Put another way, you can migrate code “downward” across an -acquire barrier, and “upward” across a release barrier. If the atomic ops used -a full barrier, this sort of migration would not be possible.

    - -

    Returning to an earlier point, we can state that on x86 all loads are -acquiring loads, and all stores are releasing stores. As a result:

    - - - -

    Hence, you only need store/load barriers on x86 SMP.

    - -

    Labeling atomic operations with “acquire” or “release” describes not only -whether the barrier is executed before or after the atomic operation, but also -how the compiler is allowed to reorder code.

    +When you debug, you may well see the loop continuing forever in spite of +the fact that flag is true.
  • + +
  • C++ provides facilities for explicitly relaxing +sequential consistency even if there are no races. Atomic operations +can take explicit memory_order_... arguments. Similarly, the +java.util.concurrent.atomic package provides a more restricted +set of similar facilities, notably lazySet(). And Java +programmers occasionally use intentional data races for similar effect. +All of these provide performance improvements at a large +cost in programming complexity. We discuss them only briefly +below; a short sketch appears after this list.
  • + +
  • Some C and C++ code is written in an older style, not entirely +consistent with current language standards, in which volatile +variables are used instead of atomic ones, and memory reordering +is explicitly prevented by inserting so-called fences or +barriers. This requires explicit reasoning about access +reordering and understanding of hardware memory models. A coding style +along these lines is still used in the Linux kernel. It should not +be used in new Android applications, and is also not further discussed here. +
  • +
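As a minimal sketch of the kind of facility meant by the memory_order_... bullet above (not from the original text): a statistics counter that no other data depends on can be updated with relaxed ordering.

#include <atomic>

std::atomic<long> events_seen(0);

void count_event() {
    // Relaxed ordering: the increment itself is atomic, but it is not ordered
    // with respect to surrounding memory accesses. This is appropriate only
    // when nothing else is published or consumed through this counter.
    events_seen.fetch_add(1, std::memory_order_relaxed);
}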

    Practice

    Debugging memory consistency problems can be very difficult. If a missing -memory barrier causes some code to read stale data, you may not be able to +lock, atomic or volatile declaration causes +some code to read stale data, you may not be able to figure out why by examining memory dumps with a debugger. By the time you can -issue a debugger query, the CPU cores will have all observed the full set of +issue a debugger query, the CPU cores may have all observed the full set of accesses, and the contents of memory and the CPU registers will appear to be in an “impossible” state.

    @@ -1059,51 +464,52 @@ feature.

    C/C++ and "volatile"

    -

    When writing single-threaded code, declaring a variable “volatile” can be -very useful. The compiler will not omit or reorder accesses to volatile -locations. Combine that with the sequential consistency provided by the -hardware, and you’re guaranteed that the loads and stores will appear to happen -in the expected order.

    +

    C and C++ volatile declarations are a very special purpose tool. +They prevent the compiler from reordering or removing volatile +accesses. This can be helpful for code accessing hardware device registers, +memory mapped to more than one location, or in connection with +setjmp. But C and C++ volatile, unlike Java +volatile, is not designed for thread communication.

    + +

In C and C++, accesses to volatile +data may be reordered with accesses to non-volatile data, and there are no +atomicity guarantees. Thus volatile can't be used for sharing data between +threads in portable code, even on a uniprocessor. C volatile usually does not +prevent access reordering by the hardware, so by itself it is even less useful in +multi-threaded SMP environments. This is the reason C11 and C++11 support +atomic objects. You should use those instead.

    -

    However, accesses to volatile storage may be reordered with non-volatile -accesses, so you have to be careful in multi-threaded uniprocessor environments -(explicit compiler reorder barriers may be required). There are no atomicity -guarantees, and no memory barrier provisions, so “volatile” doesn’t help you at -all in multi-threaded SMP environments. The C and C++ language standards are -being updated to address this with built-in atomic operations.

    +

A lot of older C and C++ code still abuses volatile for thread +communication. This often works correctly for data that fits +in a machine register, provided it is used either with explicit fences or in cases +in which memory ordering is not important. But it is not guaranteed to work +correctly with future compilers.
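A minimal sketch of the usual migration (the variable name is made up for this example):

#include <atomic>

// Older style (no longer recommended): thread communication through volatile.
//     volatile int data_ready = 0;
// Current style: a C11/C++11 atomic expresses the same intent portably.
std::atomic<int> data_ready(0);

void publish()  { data_ready.store(1); }         // sequentially consistent store
int  is_ready() { return data_ready.load(); }    // sequentially consistent load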

    -

    If you think you need to declare something “volatile”, that is a strong -indicator that you should be using one of the atomic operations instead.

    Examples

    -

    In most cases you’d be better off with a synchronization primitive (like a -pthread mutex) rather than an atomic operation, but we will employ the latter to -illustrate how they would be used in a practical situation.

    +

    In most cases you’d be better off with a lock (like a +pthread_mutex_t or C++11 std::mutex) rather than an +atomic operation, but we will employ the latter to illustrate how they would be +used in a practical situation.

    -

    For the sake of brevity we’re ignoring the effects of compiler optimizations -here — some of this code is broken even on uniprocessors — so for -all of these examples you must assume that the compiler generates -straightforward code (for example, compiled with gcc -O0). The fixes presented here do -solve both compiler-reordering and memory-access-ordering issues, but we’re only -going to discuss the latter.

    -
    MyThing* gGlobalThing = NULL;
    +
    MyThing* gGlobalThing = NULL;  // Wrong!  See below.
     
    -void initGlobalThing()    // runs in thread 1
    +void initGlobalThing()    // runs in Thread 1
     {
         MyStruct* thing = malloc(sizeof(*thing));
         memset(thing, 0, sizeof(*thing));
    -    thing->x = 5;
    -    thing->y = 10;
    +    thing->x = 5;
    +    thing->y = 10;
         /* initialization complete, publish */
         gGlobalThing = thing;
     }
     
    -void useGlobalThing()    // runs in thread 2
    +void useGlobalThing()    // runs in Thread 2
     {
         if (gGlobalThing != NULL) {
    -        int i = gGlobalThing->x;    // could be 5, 0, or uninitialized data
    +        int i = gGlobalThing->x;    // could be 5, 0, or uninitialized data
             ...
         }
     }
    @@ -1111,162 +517,81 @@ void useGlobalThing() // runs in thread 2

    The idea here is that we allocate a structure, initialize its fields, and at the very end we “publish” it by storing it in a global variable. At that point, any other thread can see it, but that’s fine since it’s fully initialized, -right? At least, it would be on x86 SMP or a uniprocessor (again, making the -erroneous assumption that the compiler outputs code exactly as we have it in the -source).

    +right?

    -

    Without a memory barrier, the store to gGlobalThing could be observed before -the fields are initialized on ARM. Another thread reading from thing->x could +

    The problem is that the store to gGlobalThing could be observed +before the fields are initialized, typically because either the compiler or the +processor reordered the stores to gGlobalThing and +thing->x. Another thread reading from thing->x could see 5, 0, or even uninitialized data.

    -

    This can be fixed by changing the last assignment to:

    - -
        atomic_release_store(&gGlobalThing, thing);
    - -

    That ensures that all other threads will observe the writes in the proper -order, but what about reads? In this case we should be okay on ARM, because the -address dependency rules will ensure that any loads from an offset of -gGlobalThing are observed after the load of -gGlobalThing. However, it’s unwise to rely on architectural -details, since it means your code will be very subtly unportable. The complete -fix also requires a barrier after the load:

    - -
        MyThing* thing = atomic_acquire_load(&gGlobalThing);
    -    int i = thing->x;
    - -

    Now we know the ordering will be correct. This may seem like an awkward way -to write code, and it is, but that’s the price you pay for accessing data -structures from multiple threads without using locks. Besides, address -dependencies won’t always save us:

    - -
    MyThing gGlobalThing;
    -
    -void initGlobalThing()    // runs in thread 1
    -{
    -    gGlobalThing.x = 5;
    -    gGlobalThing.y = 10;
    -    /* initialization complete */
    -    gGlobalThing.initialized = true;
    -}
    -
    -void useGlobalThing()    // runs in thread 2
    -{
    -    if (gGlobalThing.initialized) {
    -        int i = gGlobalThing.x;    // could be 5 or 0
    -    }
    -}
    - -

    Because there is no relationship between the initialized field and the -others, the reads and writes can be observed out of order. (Note global data is -initialized to zero by the OS, so it shouldn’t be possible to read “random” -uninitialized data.)

    - -

    We need to replace the store with:

    -
        atomic_release_store(&gGlobalThing.initialized, true);
    - -

    and replace the load with:

    -
        int initialized = atomic_acquire_load(&gGlobalThing.initialized);
    +

    The core problem here is a data race on gGlobalThing. +If Thread 1 calls initGlobalThing() while Thread 2 +calls useGlobalThing(), gGlobalThing can be +read while being written. -

    Another example of the same problem occurs when implementing -reference-counted data structures. The reference count itself will be -consistent so long as atomic increment and decrement operations are used, but -you can still run into trouble at the edges, for example:

    +

    This can be fixed by declaring gGlobalThing as +atomic. In C++11:

    -
    void RefCounted::release()
    -{
    -    int oldCount = atomic_dec(&mRefCount);
    -    if (oldCount == 1) {    // was decremented to zero
    -        recycleStorage();
    -    }
    -}
    +
    atomic<MyThing*> gGlobalThing(NULL);
    -void useSharedThing(RefCountedThing sharedThing) -{ - int localVar = sharedThing->x; - sharedThing->release(); - sharedThing = NULL; // can’t use this pointer any more - doStuff(localVar); // value of localVar might be wrong -}
    - -

    The release() call decrements the reference count using a -barrier-free atomic decrement operation. Because this is an atomic RMW -operation, we know that it will work correctly. If the reference count goes to -zero, we recycle the storage.

    - -

    The useSharedThing() function extracts what it needs from -sharedThing and then releases its copy. However, because we didn’t -use a memory barrier, and atomic and non-atomic operations can be reordered, -it’s possible for other threads to observe the read of -sharedThing->x after they observe the recycle -operation. It’s therefore possible for localVar to hold a value -from "recycled" memory, for example a new object created in the same -location by another thread after release() is called.

    - -

    This can be fixed by replacing the call to atomic_dec() with -atomic_release_dec(). The barrier ensures that the reads from -sharedThing are observed before we recycle the object.

    - -
    -
    -

    In most cases the above won’t actually fail, because the “recycle” function -is likely guarded by functions that themselves employ barriers (libc heap -free()/delete(), or an object pool guarded by a -mutex). If the recycle function used a lock-free algorithm implemented without -barriers, however, the above code could fail on ARM SMP.

    -
    -
    +

This ensures that the writes will become visible to other threads +in the proper order. It also rules out some other failure +modes that are otherwise allowed but unlikely to occur on real +Android hardware. For example, it ensures that we cannot see a +gGlobalThing pointer that has only been partially written.
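To make the fix concrete, here is a minimal sketch of the corrected example in C++11, assuming the struct is consistently named MyThing and using the default (sequentially consistent) atomic operations:

#include &lt;atomic&gt;
#include &lt;cstdlib&gt;
#include &lt;cstring&gt;

struct MyThing { int x; int y; };

std::atomic&lt;MyThing*&gt; gGlobalThing(nullptr);

void initGlobalThing()    // runs in Thread 1
{
    MyThing* thing = static_cast&lt;MyThing*&gt;(std::malloc(sizeof(*thing)));
    std::memset(thing, 0, sizeof(*thing));
    thing-&gt;x = 5;
    thing-&gt;y = 10;
    /* initialization complete, publish */
    gGlobalThing.store(thing);    // sequentially consistent store
}

void useGlobalThing()    // runs in Thread 2
{
    MyThing* thing = gGlobalThing.load();    // sequentially consistent load
    if (thing != nullptr) {
        int i = thing-&gt;x;    // now guaranteed to be 5
        (void)i;
    }
}

Loading gGlobalThing into a local pointer also means the atomic is read only once before it is dereferenced.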

    What not to do in Java

    We haven’t discussed some relevant Java language features, so we’ll take a quick look at those first.

    +

    Java technically does not require code to be data-race-free. And there +is a small amount of very-carefully-written Java code that works correctly +in the presence of data races. However, writing such code is extremely +tricky, and we discuss it only briefly below. To make matters +worse, the experts who specified the meaning of such code no longer believe the +specification is correct. (The specification is fine for data-race-free +code.) + +

    For now we will adhere to the data-race-free model, for which Java provides +essentially the same guarantees as C and C++. Again, the language provides +some primitives that explicitly relax sequential consistency, notably the +lazySet() and weakCompareAndSet() calls +in java.util.concurrent.atomic. +As with C and C++, we will ignore these for now. +

    Java's "synchronized" and "volatile" keywords

    The “synchronized” keyword provides the Java language’s in-built locking mechanism. Every object has an associated “monitor” that can be used to provide -mutually exclusive access.

    - -

    The implementation of the “synchronized” block has the same basic structure -as the spin lock example: it begins with an acquiring CAS, and ends with a -releasing store. This means that compilers and code optimizers are free to -migrate code into a “synchronized” block. One practical consequence: you must -not conclude that code inside a synchronized block happens -after the stuff above it or before the stuff below it in a function. Going -further, if a method has two synchronized blocks that lock the same object, and -there are no operations in the intervening code that are observable by another -thread, the compiler may perform “lock coarsening” and combine them into a -single block.

    - -

    The other relevant keyword is “volatile”. As defined in the specification -for Java 1.4 and earlier, a volatile declaration was about as weak as its C -counterpart. The spec for Java 1.5 was updated to provide stronger guarantees, -almost to the level of monitor synchronization.

    - -

    The effects of volatile accesses can be illustrated with an example. If -thread 1 writes to a volatile field, and thread 2 subsequently reads from that -same field, then thread 2 is guaranteed to see that write and all writes -previously made by thread 1. More generally, the writes made by -any thread up to the point where it writes the field will be -visible to thead 2 when it does the read. In effect, writing to a volatile is -like a monitor release, and reading from a volatile is like a monitor -acquire.

    - -

    Non-volatile accesses may be reorded with respect to volatile accesses in the -usual ways, for example the compiler could move a non-volatile load or store “above” a -volatile store, but couldn’t move it “below”. Volatile accesses may not be -reordered with respect to each other. The VM takes care of issuing the -appropriate memory barriers.

    - -

    It should be mentioned that, while loads and stores of object references and -most primitive types are atomic, long and double -fields are not accessed atomically unless they are marked as volatile. -Multi-threaded updates to non-volatile 64-bit fields are problematic even on -uniprocessors.

    +mutually exclusive access. If two threads try to "synchronize" on the +same object, one of them will wait until the other completes.

    + +

    As we mentioned above, Java's volatile T is the analog of +C++11's atomic<T>. Concurrent accesses to +volatile fields are allowed, and don't result in data races. +Ignoring lazySet() et al. and data races, it is the Java VM's job to +make sure that the result still appears sequentially consistent. + +

    In particular, if thread 1 writes to a volatile field, and +thread 2 subsequently reads from that same field and sees the newly written +value, then thread 2 is also guaranteed to see all writes previously made by +thread 1. In terms of memory effect, writing to +a volatile is analogous to a monitor release, and +reading from a volatile is like a monitor acquire.

    + +

There is one notable difference from C++'s atomic: +If we write volatile int x; +in Java, then x++ is the same as x = x + 1; it +performs an atomic load, increments the result, and then performs an atomic +store. Unlike C++, the increment as a whole is not atomic. +Atomic increment operations are instead provided by +the classes in java.util.concurrent.atomic.

    Examples

    -

Here’s a simple, incorrect implementation of a monotonic counter: (Java theory and practice: Managing volatility).

    @@ -1294,23 +619,29 @@ threads, and we want to be sure that every thread sees the current count when

    If two threads execute in incr() simultaneously, one of the updates could be lost. To make the increment atomic, we need to declare -incr() “synchronized”. With this change, the code will run -correctly in multi-threaded uniprocessor environments.

    +incr() “synchronized”.

    -

    It’s still broken on SMP, however. Different threads might see different -results from get(), because we’re reading the value with an ordinary load. We -can correct the problem by declaring get() to be synchronized. -With this change, the code is obviously correct.

    +

It’s still broken, however, especially on SMP. There is still a data race, +in that get() can access mValue concurrently with +incr(). Under Java rules, the get() call can +appear to be reordered with respect to other code. For example, if we read two +counters in a row, the results might appear to be inconsistent +because the get() calls were reordered, either by the hardware or the +compiler. We can correct the problem by declaring get() to be +synchronized. With this change, the code is obviously correct.

    Unfortunately, we’ve introduced the possibility of lock contention, which could hamper performance. Instead of declaring get() to be synchronized, we could declare mValue with “volatile”. (Note -incr() must still use synchronize.) Now we know that -the volatile write to mValue will be visible to any subsequent volatile read of -mValue. incr() will be slightly slower, but +incr() must still use synchronize since +mValue++ is otherwise not a single atomic operation.) +This also avoids all data races, so sequential consistency is preserved. +incr() will be somewhat slower, since it incurs both monitor entry/exit +overhead, and the overhead associated with a volatile store, but get() will be faster, so even in the absence of contention this is -a win if reads outnumber writes. (See also {@link -java.util.concurrent.atomic.AtomicInteger}.)

    +a win if reads greatly outnumber writes. (See also {@link +java.util.concurrent.atomic.AtomicInteger} for a way to completely +remove the synchronized block.)

    Here’s another example, similar in form to the earlier C examples:

    @@ -1335,19 +666,21 @@ class MyClass { } }
    -

    This has the same problem as the C code, namely that the assignment +

    This has the same problem as the C code, namely that there is +a data race on sGoodies. Thus the assignment sGoodies = goods might be observed before the initialization of the fields in goods. If you declare sGoodies with the -volatile keyword, you can think about the loads as if they were -atomic_acquire_load() calls, and the stores as if they were -atomic_release_store() calls.

    +volatile keyword, sequential consistency is restored, and things will work +as expected. -

    (Note that only the sGoodies reference itself is volatile. The -accesses to the fields inside it are not. The statement z = +

    Note that only the sGoodies reference itself is volatile. The +accesses to the fields inside it are not. Once sGoodies is +volatile, and memory ordering is properly preserved, the fields +cannot be concurrently accessed. The statement z = sGoodies.x will perform a volatile load of MyClass.sGoodies followed by a non-volatile load of sGoodies.x. If you make a local -reference MyGoodies localGoods = sGoodies, z = -localGoods.x will not perform any volatile loads.)

    +reference MyGoodies localGoods = sGoodies, then a subsequent z = +localGoods.x will not perform any volatile loads.

    A more common idiom in Java programming is the infamous “double-checked locking”:

    @@ -1375,41 +708,35 @@ synchronize the object creation. However, we don’t want to pay the overhead f the “synchronized” block on every call, so we only do that part if helper is currently null.

    -

    This doesn’t work correctly on uniprocessor systems, unless you’re using a -traditional Java source compiler and an interpreter-only VM. Once you add fancy -code optimizers and JIT compilers it breaks down. See the “‘Double Checked -Locking is Broken’ Declaration” link in the appendix for more details, or Item -71 (“Use lazy initialization judiciously”) in Josh Bloch’s Effective Java, -2nd Edition..

    +

This has a data race on the helper field. It can be +set concurrently with the helper == null check in another thread.

    -

    Running this on an SMP system introduces an additional way to fail. Consider +

    To see how this can fail, consider the same code rewritten slightly, as if it were compiled into a C-like language (I’ve added a couple of integer fields to represent Helper’s constructor activity):

    if (helper == null) {
    -    // acquire monitor using spinlock
    -    while (atomic_acquire_cas(&this.lock, 0, 1) != success)
    -        ;
    -    if (helper == null) {
    -        newHelper = malloc(sizeof(Helper));
    -        newHelper->x = 5;
    -        newHelper->y = 10;
    -        helper = newHelper;
    +    synchronized() {
    +        if (helper == null) {
    +            newHelper = malloc(sizeof(Helper));
    +            newHelper->x = 5;
    +            newHelper->y = 10;
    +            helper = newHelper;
    +        }
         }
    -    atomic_release_store(&this.lock, 0);
    +    return helper;
     }
    -

    Now the problem should be obvious: the store to helper is -happening before the memory barrier, which means another thread could observe -the non-null value of helper before the stores to the -x/y fields.

    - -

    You could try to ensure that the store to helper happens after -the atomic_release_store() on this.lock by rearranging -the code, but that won’t help, because it’s okay to migrate code upward — -the compiler could move the assignment back above the -atomic_release_store() to its original position.

    +

There is nothing to prevent either the hardware or the compiler +from reordering the store to helper with those to the +x/y fields. Another thread could find +helper non-null but its fields not yet set and ready to use. +For more details and other failure modes, see the “‘Double Checked +Locking is Broken’ Declaration” link in the appendix, or Item +71 (“Use lazy initialization judiciously”) in Josh Bloch’s Effective Java, +2nd Edition.

    There are two ways to fix this:

      @@ -1420,125 +747,378 @@ in Example J-3 will work correctly on Java 1.5 and later. (You may want to take a minute to convince yourself that this is true.)
    -

    This next example illustrates two important issues when using volatile:

    +

    Here is another illustration of volatile behavior:

    class MyClass {
         int data1, data2;
         volatile int vol1, vol2;
     
    -    void setValues() {    // runs in thread 1
    +    void setValues() {    // runs in Thread 1
             data1 = 1;
             vol1 = 2;
             data2 = 3;
         }
     
    -    void useValues1() {    // runs in thread 2
    +    void useValues() {    // runs in Thread 2
             if (vol1 == 2) {
                 int l1 = data1;    // okay
                 int l2 = data2;    // wrong
             }
         }
    -    void useValues2() {    // runs in thread 2
    -        int dummy = vol2;
    -        int l1 = data1;    // wrong
    -        int l2 = data2;    // wrong
    -    }
    +} -

    Looking at useValues1(), if thread 2 hasn’t yet observed the +

    Looking at useValues(), if Thread 2 hasn’t yet observed the update to vol1, then it can’t know if data1 or data2 has been set yet. Once it sees the update to -vol1, it knows that the change to data1 is also -visible, because that was made before vol1 was changed. However, +vol1, it knows that data1 can be safely accessed +and correctly read without introducing a data race. However, it can’t make any assumptions about data2, because that store was performed after the volatile store.

    -

    The code in useValues2() uses a second volatile field, -vol2, in an attempt to force the VM to generate a memory barrier. -This doesn’t generally work. To establish a proper “happens-before” -relationship, both threads need to be interacting with the same volatile field. -You’d have to know that vol2 was set after data1/data2 -in thread 1. (The fact that this doesn’t work is probably obvious from looking -at the code; the caution here is against trying to cleverly “cause” a memory -barrier instead of creating an ordered series of accesses.)

    +

    Note that volatile cannot be used to prevent reordering +of other memory accesses that race with each other. It is not guaranteed to +generate a machine memory fence instruction. It can be used to prevent +data races by executing code only when another thread has satisfied a +certain condition.

    What to do

    -

    General advice

    - -

    In C/C++, use the pthread operations, like mutexes and -semaphores. These include the proper memory barriers, providing correct and -efficient behavior on all Android platform versions. Be sure to use them -correctly, for example be wary of signaling a condition variable without holding the -corresponding mutex.

    - -

    It's best to avoid using atomic functions directly. Locking and -unlocking a pthread mutex require a single atomic operation each if there’s no +

In C/C++, prefer C++11 +synchronization classes, such as std::mutex. If those are not available, use +the corresponding pthread operations. +These include the proper memory fences, providing correct (sequentially consistent +unless otherwise specified) +and efficient behavior on all Android platform versions. Be sure to use them +correctly. For example, remember that condition variable waits may spuriously +return without being signaled, and should thus appear in a loop.
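As a sketch of that last point, the usual C++11 pattern looks something like this (the queue, mutex, and condition variable names are invented for the example); the wait sits in a loop so that a spurious wakeup simply re-checks the condition:

#include &lt;condition_variable&gt;
#include &lt;mutex&gt;
#include &lt;queue&gt;

std::mutex mtx;
std::condition_variable cv;
std::queue&lt;int&gt; work;

void producer(int item) {
    {
        std::lock_guard&lt;std::mutex&gt; lg(mtx);
        work.push(item);
    }
    cv.notify_one();
}

int consumer() {
    std::unique_lock&lt;std::mutex&gt; lock(mtx);
    while (work.empty()) {      // loop guards against spurious wakeups
        cv.wait(lock);
    }
    int item = work.front();
    work.pop();
    return item;
}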

    + +

    It's best to avoid using atomic functions directly, unless the data structure +you are implementing is extremely simple, like a counter. Locking and +unlocking a pthread mutex require a single atomic operation each, +and often cost less than a single cache miss, if there’s no contention, so you’re not going to save much by replacing mutex calls with -atomic ops. If you need a lock-free design, you must fully understand the -concepts in this entire document before you begin (or, better yet, find an -existing code library that is known to be correct on SMP ARM).

    - -

    Be extremely circumspect with "volatile” in C/C++. It often indicates a -concurrency problem waiting to happen.

    - -

    In Java, the best answer is usually to use an appropriate utility class from +atomic ops. Lock-free designs for non-trivial data structures require +much more care to ensure that higher level operations on the data structure +appear atomic (as a whole, not just their explicitly atomic pieces).

    + +

If you do use atomic operations, relaxing ordering with +memory_order... or lazySet() may provide performance +advantages, but requires deeper understanding than we have conveyed so far. +A large fraction of existing code using +these is discovered to have bugs after the fact. Avoid these if possible. +If your use case doesn't exactly fit one of those in the next section, +make sure you are either an expert or have consulted one.

    Avoid using volatile for thread communication in C/C++.

    + +

    In Java, concurrency problems are often best solved by +using an appropriate utility class from the {@link java.util.concurrent} package. The code is well written and well tested on SMP.

    -

    Perhaps the safest thing you can do is make your class immutable. Objects -from classes like String and Integer hold data that cannot be changed once the -class is created, avoiding all synchronization issues. The book Effective +

    Perhaps the safest thing you can do is make your objects immutable. Objects +from classes like Java's String and Integer hold data that cannot be changed once an +object is created, avoiding all potential for data races on those objects. +The book Effective Java, 2nd Ed. has specific instructions in “Item 15: Minimize Mutability”. Note in -particular the importance of declaring fields “final" (Bloch).

    -

    If neither of these options is viable, the Java “synchronized” statement -should be used to guard any field that can be accessed by more than one thread. -If mutexes won’t work for your situation, you should declare shared fields -“volatile”, but you must take great care to understand the interactions between -threads. The volatile declaration won’t save you from common concurrent -programming mistakes, but it will help you avoid the mysterious failures -associated with optimizing compilers and SMP mishaps.

    +

    Even if an object is immutable, remember that communicating it to another +thread without any kind of synchronization is a data race. This can occasionally +be acceptable in Java (see below), but requires great care, and is likely to result in +brittle code. If it's not extremely performance critical, add a +volatile declaration. In C++, communicating a pointer or +reference to an immutable object without proper synchronization, +like any data race, is a bug. +In this case, it is reasonably likely to result in intermittent crashes since, +for example, the receiving thread may see an uninitialized method table +pointer due to store reordering.

    + +

    If neither an existing library class, nor an immutable class is +appropriate, the Java synchronized statement or C++ +lock_guard / unique_lock should be used to guard +accesses to any field that can be accessed by more than one thread. If mutexes won’t +work for your situation, you should declare shared fields +volatile or atomic, but you must take great care to +understand the interactions between threads. These declarations won’t +save you from common concurrent programming mistakes, but they will help you +avoid the mysterious failures associated with optimizing compilers and SMP +mishaps.
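For instance, a field shared between threads might be guarded like this in C++ (a small sketch; the class and its members are invented for the example):

#include &lt;mutex&gt;

class SharedCounter {
  public:
    void add(int n) {
        std::lock_guard&lt;std::mutex&gt; lg(mtx_);
        value_ += n;
    }
    int get() {
        std::lock_guard&lt;std::mutex&gt; lg(mtx_);   // readers take the lock too
        return value_;
    }
  private:
    std::mutex mtx_;
    int value_ = 0;              // only ever touched with mtx_ held
};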

    + +

You should avoid +"publishing" a reference to an object, i.e. making it available to other +threads, in its constructor. This is less critical in C++ or if you stick to +our "no data races" advice in Java. But it's always good advice, and becomes +critical if your Java code is +run in other contexts in which the Java security model matters, and untrusted +code may introduce a data race by accessing that "leaked" object reference. +It's also critical if you choose to ignore our warnings and use some of the techniques +in the next section. +See (Safe Construction Techniques in Java) for +details.

    + +

    A little more about weak memory orders

    + +

C++11 and later provide explicit mechanisms for relaxing the sequential +consistency guarantees for data-race-free programs. Explicit +memory_order_relaxed, memory_order_acquire (loads +only), and memory_order_release (stores only) arguments for atomic +operations each provide strictly weaker guarantees than the default, typically +implicit, memory_order_seq_cst. memory_order_acq_rel +provides both memory_order_acquire and +memory_order_release guarantees for atomic read-modify-write +operations. memory_order_consume is not yet sufficiently +well specified or implemented to be useful, and should be ignored for now.

    + +

The lazySet methods in java.util.concurrent.atomic +are similar to C++ memory_order_release stores. Java's +ordinary variables are sometimes used as a replacement for +memory_order_relaxed accesses, though they are actually +even weaker. Unlike C++, there is no real mechanism for unordered +accesses to variables that are declared as volatile.

    + +

    You should generally avoid these unless there are pressing performance reasons to +use them. On weakly ordered machine architectures like ARM, using them will +commonly save on the order of a few dozen machine cycles for each atomic operation. +On x86, the performance win is limited to stores, and likely to be less +noticeable. +Somewhat counter-intuitively, the benefit may decrease with larger core counts, +as the memory system becomes more of a limiting factor.

    + +

    The full semantics of weakly ordered atomics are complicated. +In general they require +precise understanding of the language rules, which we will +not go into here. For example: + +

      +
    • The compiler or hardware can move memory_order_relaxed +accesses into (but not out of) a critical section bounded by a lock +acquisition and release. This means that two +memory_order_relaxed stores may become visible out of order, +even if they are separated by a critical section. +
    • An ordinary Java variable, when abused as a shared counter, may appear +to another thread to decrease, even though it is only incremented by a single +other thread. But this is not true for C++ atomic +memory_order_relaxed. +
    -

    The Java Memory Model guarantees that assignments to final fields are visible -to all threads once the constructor has finished — this is what ensures -proper synchronization of fields in immutable classes. This guarantee does not -hold if a partially-constructed object is allowed to become visible to other -threads. It is necessary to follow safe construction practices.(Safe -Construction Techniques in Java).

    +With that as a warning, +here we give a small number of idioms that seem to cover many of the use +cases for weakly ordered atomics. Many of these are applicable only to C++: + +

    Non-racing accesses

    +

It is fairly common that a variable is atomic because it is sometimes +read concurrently with a write, but not all accesses have this issue. +For example, a variable +may need to be atomic because it is read outside a critical section, but all +updates are protected by a lock. In that case, a read that happens to be +protected by the same lock +cannot race, since there cannot be concurrent writes. In such a case, the +non-racing access (the load in this case) can be annotated with +memory_order_relaxed without changing the correctness of C++ code. +The lock implementation already enforces the required memory ordering +with respect to access by other threads, and memory_order_relaxed +specifies that essentially no additional ordering constraints need to be +enforced for the atomic access.
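A sketch of this situation, with an invented counter named progress: every writer holds the lock, so the load performed while holding that lock cannot race and may be relaxed, while the load performed outside the lock keeps the default ordering:

#include &lt;atomic&gt;
#include &lt;mutex&gt;

std::mutex mtx;
std::atomic&lt;int&gt; progress(0);    // atomic because it is also read outside the lock

void advance() {                  // every update holds mtx
    std::lock_guard&lt;std::mutex&gt; lg(mtx);
    // This load cannot race with a write: all writes also hold mtx.
    int cur = progress.load(std::memory_order_relaxed);
    progress.store(cur + 1);      // default (seq_cst) store; it is read concurrently
}

int sample() {                    // may run in any thread, without the lock
    return progress.load();       // racing read: keep the default ordering
}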

    + +

    There is no real analog to this in Java.

    + +

    Result is not relied upon for correctness

    + +

    When we use a racing load only to generate a hint, it's generally also OK +to not enforce any memory ordering for the load. If the value is +not reliable, we also can't reliably use the result to infer anything about +other variables. Thus it's OK +if memory ordering is not guaranteed, and the load is +supplied with a memory_order_relaxed argument.

    + +

    A common +instance of this is the use of C++ compare_exchange +to atomically replace x by f(x). +The initial load of x to compute f(x) +does not need to be reliable. If we get it wrong, the +compare_exchange will fail and we will retry. +It is fine for the initial load of x to use +a memory_order_relaxed argument; only memory ordering +for the actual compare_exchange matters.
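A sketch of that idiom, with a placeholder function f standing in for whatever update is being applied:

#include &lt;atomic&gt;

static int f(int v) { return v * 2 + 1; }   // placeholder for the real update

void atomically_apply_f(std::atomic&lt;int&gt;&amp; x) {
    // The initial load is only a guess; if it is stale, compare_exchange_weak
    // fails, refreshes old_val with the value actually seen, and we retry.
    int old_val = x.load(std::memory_order_relaxed);
    while (!x.compare_exchange_weak(old_val, f(old_val))) {
        // old_val now holds the current value; loop and try again
    }
}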

    + +

    Atomically modified but unread data

    + +

    Occasionally data is modified in parallel by multiple threads, but +not examined until the parallel computation is complete. A good +example of this is a counter that is atomically incremented (e.g. +using fetch_add() in C++ or +atomic_fetch_add_explicit() +in C) by multiple threads in parallel, but the result of these calls +is always ignored. The resulting value is only read at the end, +after all updates are complete.

    + +

In this case, there is no way to tell whether accesses to this data +were reordered, and hence C++ code may use a memory_order_relaxed +argument.
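A sketch of such a counter (the thread count and iteration count are arbitrary): the increments use memory_order_relaxed, and the only read happens after all the updating threads have been joined:

#include &lt;atomic&gt;
#include &lt;thread&gt;
#include &lt;vector&gt;

std::atomic&lt;long&gt; hits(0);

void worker() {
    for (int i = 0; i &lt; 1000000; ++i) {
        // The return value is ignored and nothing else is inferred from the
        // counter here, so relaxed ordering is sufficient.
        hits.fetch_add(1, std::memory_order_relaxed);
    }
}

int main() {
    std::vector&lt;std::thread&gt; workers;
    for (int i = 0; i &lt; 4; ++i) workers.emplace_back(worker);
    for (auto&amp; t : workers) t.join();
    // join() synchronizes with each worker, so this final read does not race.
    long total = hits.load(std::memory_order_relaxed);
    return total == 4 * 1000000L ? 0 : 1;
}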

    + +

    Simple event counters are a common example of this. Since it is +so common, it is worth making some observations about this case:

    -

    Synchronization primitive guarantees

    +
      +
    • Use of memory_order_relaxed improves performance, +but may not address the most important performance issue: Every update +requires exclusive access to the cache line holding the counter. This +results in a cache miss every time a new thread accesses the counter. +If updates are frequent and alternate between threads, it is much faster +to avoid updating the shared counter every time by, +for example, using thread-local counters and summing them at the end. +
    • This technique is combinable with the previous section: It is possible to +concurrently read approximate and unreliable values while they are being updated, +with all operations using memory_order_relaxed. +But it is important to treat the resulting values as completely unreliable. +Just because the count appears to have been incremented once does not +mean another thread can be counted on to have reached the point +at which the increment has been performed. The increment may instead have +been reordered with earlier code. (As for the similar case we mentioned +earlier, C++ does guarantee that a second load of such a counter will not +return a value less than an earlier load in the same thread. Unless of +course the counter overflowed.) +
    • It is common to find code that tries to compute approximate +counter values by performing individual atomic (or not) reads and writes, but +not making the increment as a whole atomic. The usual argument is that +this is "close enough" for performance counters or the like. +It's typically not. +When updates are sufficiently frequent (a case +you probably care about), a large fraction of the counts are typically +lost. On a quad core device, more than half the counts may commonly be lost. +(Easy exercise: construct a two thread scenario in which the counter is +updated a million times, but the final counter value is one.) +
    -

    The pthread library and VM make a couple of useful guarantees: all accesses -previously performed by a thread that creates a new thread are observable by -that new thread as soon as it starts, and all accesses performed by a thread -that is exiting are observable when a join() on that thread -returns. This means you don’t need any additional synchronization when -preparing data for a new thread or examining the results of a joined thread.

    +

    Simple Flag communication

    -

    Whether or not these guarantees apply to interactions with pooled threads -depends on the thread pool implementation.

    +

A memory_order_release store (or read-modify-write operation) +ensures that if subsequently a memory_order_acquire load +(or read-modify-write operation) reads the written value, then it will +also observe any stores (ordinary or atomic) that preceded the +memory_order_release store. Conversely, any loads +preceding the memory_order_release will not observe any +stores that followed the memory_order_acquire load. +Unlike memory_order_relaxed, this allows such atomic operations +to be used to communicate the progress of one thread to another.

    -

    In C/C++, the pthread library guarantees that any accesses made by a thread -before it unlocks a mutex will be observable by another thread after it locks -that same mutex. It also guarantees that any accesses made before calling -signal() or broadcast() on a condition variable will -be observable by the woken thread.

    +

    For example, we can rewrite the double-checked locking example +from above in C++ as

    -

    Java language threads and monitors make similar guarantees for the comparable -operations.

    +
    +class MyClass {
    +  private:
    +    atomic<Helper*> helper {nullptr};
    +    mutex mtx;
    +
    +  public:
    +    Helper* getHelper() {
    +      Helper* myHelper = helper.load(memory_order_acquire);
    +      if (myHelper == nullptr) {
    +        lock_guard<mutex> lg(mtx);
    +        myHelper = helper.load(memory_order_relaxed);
    +        if (myHelper == nullptr) {
    +          myHelper = new Helper();
    +          helper.store(myHelper, memory_order_release);
    +        }
    +      }
    +      return myHelper;
    +    }
    +};
    +
    -

    Upcoming changes to C/C++

    +

    The acquire load and release store ensure that if we see a non-null +helper, then we will also see its fields correctly initialized. +We've also incorporated the prior observation that non-racing loads +can use memory_order_relaxed.

    -

    The C and C++ language standards are evolving to include a sophisticated -collection of atomic operations. A full matrix of calls for common data types -is defined, with selectable memory barrier semantics (choose from relaxed, -consume, acquire, release, acq_rel, seq_cst).

    +

    A Java programmer could conceivably represent helper as a +java.util.concurrent.atomic.AtomicReference<Helper> +and use lazySet() as the release store. The load +operations would continue to use plain get() calls.

    -

    See the Further Reading section for pointers to the -specifications.

    +

    In both cases, our performance tweaking concentrated on the initialization +path, which is unlikely to be performance critical. +A more readable compromise might be:

    +
    +    Helper* getHelper() {
    +      Helper* myHelper = helper.load(memory_order_acquire);
    +      if (myHelper != nullptr) {
    +        return myHelper;
    +      }
    +      lock_guard<mutex> lg(mtx);
    +      if (helper == nullptr) {
    +        helper = new Helper();
    +      }
    +      return helper;
    +    }
    +
    + +

    This provides the same fast path, but resorts to default, +sequentially-consistent, operations on the non-performance-critical slow +path.

    + +

    Even here, helper.load(memory_order_acquire) is +likely to generate the same code on current Android-supported +architectures as a plain (sequentially-consistent) reference to +helper. Really the most beneficial optimization here +may be the introduction of myHelper to eliminate a +second load, though a future compiler might do that automatically.

    + +

    Acquire/release ordering does not prevent stores from getting visibly +delayed, and does not ensure that stores become visible to other threads +in a consistent order. As a result, it does not support a tricky, +but fairly common coding pattern exemplified by Dekker's mutual exclusion +algorithm: All threads first set a flag indicating that they want to do +something; if a thread t then notices that no other thread is +trying to do something, it can safely proceed, knowing that there +will be no interference. No other thread will be +able to proceed, since t's flag is still set. This fails +if the flag is accessed using acquire/release ordering, since that doesn't +prevent making a thread's flag visible to others late, after they have +erroneously proceeded. Default memory_order_seq_cst +does prevent it.
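A stripped-down sketch of that flag pattern, with one invented flag per thread: using the default memory_order_seq_cst operations, at most one of the two threads can observe the other's flag as unset, whereas with acquire/release ordering both loads could return false and both threads could erroneously proceed:

#include &lt;atomic&gt;

std::atomic&lt;bool&gt; flag1(false), flag2(false);

void thread1() {
    flag1.store(true);            // default: memory_order_seq_cst
    if (!flag2.load()) {
        // Safe to proceed: thread 2 has not announced itself yet, and it
        // will see flag1 == true before it can proceed.
    }
    flag1.store(false);
}

void thread2() {
    flag2.store(true);
    if (!flag1.load()) {
        // Safe to proceed, for the same reason.
    }
    flag2.store(false);
}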

    + +

    Immutable fields

    + +

If an object field is initialized on first use and then never changed, +it may be possible to initialize and subsequently read it using weakly +ordered accesses. In C++, it could be declared as atomic +and accessed using memory_order_relaxed, or in Java, it +could be declared without volatile and accessed without +special measures. This requires that all of the following hold +(a short sketch follows the list below):

    + +
      +
    • It should be possible to tell from the value of the field itself +whether it has already been initialized. To access the field, +the fast path test-and-return value should read the field only once. +In Java the latter is essential. Even if the field tests as initialized, +a second load may read the earlier uninitialized value. In C++ +the "read once" rule is merely good practice. +
    • Both initialization and subsequent loads must be atomic, +in that partial updates should not be visible. For Java, the field +should not be a long or double. For C++, +an atomic assignment is required; constructing it in place will not work, since +construction of an atomic is not atomic. +
    • Repeated initializations must be safe, since multiple threads +may read the uninitialized value concurrently. In C++, this generally +follows from the "trivially copyable" requirement imposed for all +atomic types; types with nested owned pointers would require +deallocation in the +copy constructor, and would not be trivially copyable. For Java, +certain reference types are acceptable: +
    • Java references are limited to immutable types containing only final +fields. The constructor of the immutable type should not publish +a reference to the object. In this case the Java final field rules +ensure that if a reader sees the reference, it will also see the +initialized final fields. C++ has no analog to these rules and +pointers to owned objects are unacceptable for this reason as well (in +addition to violating the "trivially copyable" requirements). +
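Here is a small C++ sketch of the idiom described above; the field name, the sentinel value 0 meaning "not yet initialized", and the placeholder computation are assumptions of the example:

#include &lt;atomic&gt;

class Widget {
  public:
    int flags() {
        // Read the field exactly once on the fast path.
        int f = cachedFlags_.load(std::memory_order_relaxed);
        if (f == 0) {                       // 0 means "not yet initialized"
            f = computeFlags();             // deterministic and never returns 0
            // Several threads may race here; they all store the same value,
            // so repeated initialization is harmless.
            cachedFlags_.store(f, std::memory_order_relaxed);
        }
        return f;
    }

  private:
    static int computeFlags() { return 0x2a; }  // placeholder computation
    std::atomic&lt;int&gt; cachedFlags_{0};
};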

    Closing Notes

    @@ -1547,10 +1127,18 @@ manage more than a shallow gouge. This is a very broad and deep topic. Some areas for further exploration: