Replace memory barriers to better reflect Java needs.
Replaces barriers that enforce ordering of one access type
(e.g. load) with respect to another (e.g. store) with more general
ones that better reflect both Java requirements and actual hardware
barrier/fence instructions. The old code was inconsistent and
unclear about which barriers implied which others. Sometimes
multiple barriers were generated and then eliminated;
sometimes it was assumed that certain barriers implied others.
The new barriers closely parallel those in C++11, though, for now,
we use something closer to the old naming.
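
As a rough guide (illustrative only, not part of this change), the new
barrier kinds correspond to C++11 fences approximately as follows; the
helper names below are hypothetical:

  #include <atomic>

  // kLoadAny: prior loads are ordered before all subsequent accesses.
  inline void LoadAnyBarrier()  { std::atomic_thread_fence(std::memory_order_acquire); }

  // kAnyStore: all prior accesses are ordered before subsequent stores.
  inline void AnyStoreBarrier() { std::atomic_thread_fence(std::memory_order_release); }

  // kAnyAny: full barrier; the only kind that also orders stores before
  // later loads (the old kStoreLoad case).
  inline void AnyAnyBarrier()   { std::atomic_thread_fence(std::memory_order_seq_cst); }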
Bug: 14685856
Change-Id: Ie1c80afe3470057fc6f2b693a9831dfe83add831
diff --git a/compiler/dex/quick/arm64/utility_arm64.cc b/compiler/dex/quick/arm64/utility_arm64.cc
index 22a4ec4..097fcdc 100644
--- a/compiler/dex/quick/arm64/utility_arm64.cc
+++ b/compiler/dex/quick/arm64/utility_arm64.cc
@@ -1145,10 +1145,8 @@
LIR* load = LoadBaseDispBody(r_base, displacement, r_dest, size);
if (UNLIKELY(is_volatile == kVolatile)) {
- // Without context sensitive analysis, we must issue the most conservative barriers.
- // In this case, either a load or store may follow so we issue both barriers.
- GenMemBarrier(kLoadLoad);
- GenMemBarrier(kLoadStore);
+ // TODO: This should generate an acquire load instead of the barrier.
+ GenMemBarrier(kLoadAny);
}
return load;
@@ -1232,9 +1230,10 @@
LIR* Arm64Mir2Lir::StoreBaseDisp(RegStorage r_base, int displacement, RegStorage r_src,
OpSize size, VolatileKind is_volatile) {
+ // TODO: This should generate a release store and no barriers.
if (UNLIKELY(is_volatile == kVolatile)) {
- // There might have been a store before this volatile one so insert StoreStore barrier.
- GenMemBarrier(kStoreStore);
+ // Ensure that prior accesses become visible to other threads first.
+ GenMemBarrier(kAnyStore);
}
// StoreBaseDisp() will emit correct insn for atomic store on arm64
@@ -1243,8 +1242,9 @@
LIR* store = StoreBaseDispBody(r_base, displacement, r_src, size);
if (UNLIKELY(is_volatile == kVolatile)) {
- // A load might follow the volatile store so insert a StoreLoad barrier.
- GenMemBarrier(kStoreLoad);
+ // Preserve order with respect to any subsequent volatile loads.
+ // We need StoreLoad, but that generally requires the most expensive barrier.
+ GenMemBarrier(kAnyAny);
}
return store;
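
For illustration only (the struct and method names below are hypothetical,
not ART code), the barrier placement these hunks now generate corresponds
roughly to the following C++11 fence pattern for a Java volatile field:

  #include <atomic>
  #include <cstdint>

  struct VolatileCell {
    std::atomic<int32_t> value;

    int32_t VolatileLoad() {
      int32_t result = value.load(std::memory_order_relaxed);  // the load itself
      std::atomic_thread_fence(std::memory_order_acquire);     // kLoadAny after the load
      return result;
    }

    void VolatileStore(int32_t v) {
      std::atomic_thread_fence(std::memory_order_release);     // kAnyStore before the store
      value.store(v, std::memory_order_relaxed);                // the store itself
      std::atomic_thread_fence(std::memory_order_seq_cst);      // kAnyAny after the store
    }
  };

The TODOs above note the eventual goal of emitting an acquire load and a
release store directly (ldar/stlr on arm64), which would make the explicit
fences unnecessary.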