Fix uninitialized register in read barrier

If a null pointer exception is raised, the read barrier code returns to
the fast path and skips the initialization of the IP0 register. IP0 is
used later in a placeholder instruction whose only purpose is to create
a dependency on it, preventing load instructions from being reordered.
This instruction expects the upper 32 bits of IP0 to be 0; if they are
not, the base register of the next load instruction is corrupted, which
can crash the process or load data from a random address.
Test 1004-checker-volatile-ref-load fails if the upper bits of IP0
contain anything other than 0.

This fix clears the IP0 register in the NPE code path.

Test: 1004-checker-volatile-ref-load
Test: test-art-target

Change-Id: Ibb32459070cb589815edff9bc822c6a1ea8b57d2
diff --git a/compiler/optimizing/code_generator_arm64.cc b/compiler/optimizing/code_generator_arm64.cc
index 5a5d36d..079c440 100644
--- a/compiler/optimizing/code_generator_arm64.cc
+++ b/compiler/optimizing/code_generator_arm64.cc
@@ -7083,6 +7083,7 @@
                                      vixl::aarch64::MemOperand& lock_word,
                                      vixl::aarch64::Label* slow_path,
                                      vixl::aarch64::Label* throw_npe = nullptr) {
+  vixl::aarch64::Label throw_npe_cont;
   // Load the lock word containing the rb_state.
   __ Ldr(ip0.W(), lock_word);
   // Given the numeric representation, it's enough to check the low bit of the rb_state.
@@ -7094,7 +7095,7 @@
       "Field and array LDR offsets must be the same to reuse the same code.");
   // To throw NPE, we return to the fast path; the artificial dependence below does not matter.
   if (throw_npe != nullptr) {
-    __ Bind(throw_npe);
+    __ Bind(&throw_npe_cont);
   }
   // Adjust the return address back to the LDR (1 instruction; 2 for heap poisoning).
   static_assert(BAKER_MARK_INTROSPECTION_FIELD_LDR_OFFSET == (kPoisonHeapReferences ? -8 : -4),
@@ -7106,6 +7107,12 @@
   // a memory barrier (which would be more expensive).
   __ Add(base_reg, base_reg, Operand(ip0, LSR, 32));
   __ Br(lr);          // And return back to the function.
+  if (throw_npe != nullptr) {
+    // Clear IP0 before returning to the fast path.
+    __ Bind(throw_npe);
+    __ Mov(ip0.X(), xzr);
+    __ B(&throw_npe_cont);
+  }
   // Note: The fake dependency is unnecessary for the slow path.
 }