Compile-time tuning: assembly phase

Reworking the assembly phase yielded less compile-time gain than I'd
hoped, but it's still worthwhile: expect roughly a 2% improvement from
the assembly rework alone.  On the other hand, some applications should
see huge gains thanks to better detection of large machine-generated
init methods.  Thinkfree shows a 25% improvement.

The major assembly change was to thread the LIR nodes that require
fixup into a fixup chain.  Only those nodes are processed during the
final assembly pass(es).  This doesn't help methods that assemble in a
single pass, but it does speed up larger methods that require multiple
assembly passes.
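
The idea can be sketched as follows.  This is a minimal illustration, not
the real ART structures: the `LIR` struct, `ThreadFixups`, and
`CountFixupNodes` are hypothetical stand-ins showing how chaining only the
fixup nodes lets later passes skip everything else.

```cpp
#include <cassert>
#include <vector>

// Hypothetical, simplified LIR node: only the fields needed to show the
// fixup-chain technique described above.
struct LIR {
  int offset = 0;
  bool needs_fixup = false;   // set during the first (full) pass
  LIR* next_fixup = nullptr;  // singly linked fixup chain
};

// During the first full assembly pass, thread every node that needs a
// pc-relative fixup onto a chain, preserving instruction order.
LIR* ThreadFixups(std::vector<LIR>& insns) {
  LIR* head = nullptr;
  LIR** tail = &head;
  for (LIR& lir : insns) {
    if (lir.needs_fixup) {
      *tail = &lir;
      tail = &lir.next_fixup;
    }
  }
  return head;
}

// Follow-on passes walk only the chained nodes, not the whole LIR list.
int CountFixupNodes(const LIR* head) {
  int n = 0;
  for (const LIR* p = head; p != nullptr; p = p->next_fixup) ++n;
  return n;
}
```

For a method assembled in one pass the chain is never walked, which is why
the win shows up only on multi-pass methods.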

Also replaced the block_map_ basic block lookup table (which contained
space for a BasicBlock* for each dex instruction unit) with a block id
map, cutting its space requirements in half in a 32-bit pointer
environment.

Changes:
  o Reduce the size of the LIR struct by 12.5% (one of the big memory users)
  o Repurpose the use/def portion of the LIR after optimization is complete
  o Encode instruction bits into the LIR
  o Thread LIR nodes requiring pc fixup
  o Change follow-on assembly passes to consider only fixup LIRs
  o Switch on pc-rel fixup kind
  o Fast path for small methods: single-pass assembly
  o Avoid using cb[n]z for null checks (the displacement range is almost
    always exceeded)
  o Improve detection of large initialization methods
  o Rework def/use flag setup
  o Remove a sequential search from FindBlock using a lookup table of 16-bit
    block ids rather than full block pointers
  o Eliminate pcRelFixup and use the fixup kind instead
  o Add a check for 16-bit overflow on dex offsets
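
The "repurpose the use/def portion" item is visible in the diff below as
the move from `lir->use_mask` to `lir->u.m.use_mask`.  A rough sketch of
the shape (field names approximate the real struct; sizes are
illustrative): the masks live in a union, and once optimization is done
the same storage can be reused for other data, with a flag guarding
against stale access, as the `DCHECK(!lir->flags.use_def_invalid)` in the
diff does.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the repurposed LIR storage.
struct LIR {
  struct {
    bool use_def_invalid = false;  // set once the masks are repurposed
  } flags;
  union {
    struct {
      uint64_t use_mask;  // valid only while use_def_invalid is false
      uint64_t def_mask;
    } m;
    uint8_t repurposed[16];  // same bytes, reused after optimization
  } u;
};
```

Overlaying the fields this way is one of the pieces behind the 12.5%
LIR-size reduction: the struct carries one region that changes meaning
over the compile, instead of separate fields for each phase.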

Change-Id: I4c6615f83fed46f84629ad6cfe4237205a9562b4
diff --git a/compiler/dex/quick/x86/target_x86.cc b/compiler/dex/quick/x86/target_x86.cc
index 94dd759..f080830 100644
--- a/compiler/dex/quick/x86/target_x86.cc
+++ b/compiler/dex/quick/x86/target_x86.cc
@@ -132,37 +132,36 @@
   return 0ULL;
 }
 
-void X86Mir2Lir::SetupTargetResourceMasks(LIR* lir) {
+void X86Mir2Lir::SetupTargetResourceMasks(LIR* lir, uint64_t flags) {
   DCHECK_EQ(cu_->instruction_set, kX86);
+  DCHECK(!lir->flags.use_def_invalid);
 
   // X86-specific resource map setup here.
-  uint64_t flags = X86Mir2Lir::EncodingMap[lir->opcode].flags;
-
   if (flags & REG_USE_SP) {
-    lir->use_mask |= ENCODE_X86_REG_SP;
+    lir->u.m.use_mask |= ENCODE_X86_REG_SP;
   }
 
   if (flags & REG_DEF_SP) {
-    lir->def_mask |= ENCODE_X86_REG_SP;
+    lir->u.m.def_mask |= ENCODE_X86_REG_SP;
   }
 
   if (flags & REG_DEFA) {
-    SetupRegMask(&lir->def_mask, rAX);
+    SetupRegMask(&lir->u.m.def_mask, rAX);
   }
 
   if (flags & REG_DEFD) {
-    SetupRegMask(&lir->def_mask, rDX);
+    SetupRegMask(&lir->u.m.def_mask, rDX);
   }
   if (flags & REG_USEA) {
-    SetupRegMask(&lir->use_mask, rAX);
+    SetupRegMask(&lir->u.m.use_mask, rAX);
   }
 
   if (flags & REG_USEC) {
-    SetupRegMask(&lir->use_mask, rCX);
+    SetupRegMask(&lir->u.m.use_mask, rCX);
   }
 
   if (flags & REG_USED) {
-    SetupRegMask(&lir->use_mask, rDX);
+    SetupRegMask(&lir->u.m.use_mask, rDX);
   }
 }
 
@@ -275,8 +274,8 @@
     }
     /* Memory bits */
     if (x86LIR && (mask & ENCODE_DALVIK_REG)) {
-      sprintf(buf + strlen(buf), "dr%d%s", x86LIR->alias_info & 0xffff,
-              (x86LIR->alias_info & 0x80000000) ? "(+1)" : "");
+      sprintf(buf + strlen(buf), "dr%d%s", DECODE_ALIAS_INFO_REG(x86LIR->flags.alias_info),
+              (DECODE_ALIAS_INFO_WIDE(x86LIR->flags.alias_info)) ? "(+1)" : "");
     }
     if (mask & ENCODE_LITERAL) {
       strcat(buf, "lit ");