Compile-time tuning: assembly phase
Not as much compile-time gain from reworking the assembly phase as I'd
hoped, but still worthwhile: the assembly rework should yield a ~2%
improvement. Separately, some applications should see much larger
gains thanks to better detection of large machine-generated init
methods. Thinkfree shows a 25% improvement.
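The init-method detection leans on simple branch counting (see the
CountBranch()/GetBranchCount() additions in the diff below). As a
minimal sketch of the kind of heuristic this enables - the thresholds
and the helper name here are invented for illustration, not the tuned
ART values:

    // Hypothetical heuristic: a huge, nearly branch-free method is
    // almost certainly a machine-generated initializer, so the
    // expensive optimization passes can be skipped for it.
    bool LooksLikeLargeInitMethod(int num_dalvik_insns,
                                  int backward_branches,
                                  int forward_branches) {
      const int kHugeMethodInsns = 3000;  // assumed size threshold
      const int kBranchScale = 20;        // assumed: <5% branch density
      int branches = backward_branches + forward_branches;
      return (num_dalvik_insns > kHugeMethodInsns) &&
             (branches * kBranchScale < num_dalvik_insns);
    }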
The major assembly change was to thread the LIR nodes that require
fixup into a fixup chain; only those nodes are processed during the
final assembly pass(es). This doesn't help methods that assemble in a
single pass, but it does speed up the larger methods that require
multiple assembly passes.
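A minimal sketch of the chain idea, assuming a dedicated next pointer
in the LIR node (field and function names are illustrative, not the
actual ART ones):

    struct LIR {
      LIR* next;        // full instruction list
      LIR* fixup_next;  // assumed: links only the fixup candidates
      int fixup_kind;   // kFixupNone when encoding is pc-independent
      // ... opcode, operands, offset, encoded bits ...
    };

    // Follow-on passes walk only the short fixup chain instead of
    // re-scanning every LIR in the method.
    void RetryAssembly(LIR* fixup_head) {
      for (LIR* lir = fixup_head; lir != NULL; lir = lir->fixup_next) {
        // Re-encode lir against the updated offsets.  If an
        // instruction grows, later offsets shift and another pass
        // over the chain is scheduled.
      }
    }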
Also replaced the block_map_ basic block lookup table (which reserved
space for a BasicBlock* per dex instruction code unit) with a map of
16-bit block ids, halving its space requirement in a 32-bit pointer
environment.
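A sketch of the lookup after the change, assuming a sentinel id for
code units that start no block and a block_list_ array that maps an id
back to its BasicBlock* (the map name matches the diff below; the rest
is illustrative):

    static const uint16_t kNullBlockId = 0xffff;  // assumed sentinel

    BasicBlock* MIRGraph::FindBlock(unsigned int code_offset) {
      if (code_offset >= current_code_item_->insns_size_in_code_units_) {
        return NULL;
      }
      uint16_t block_id = dex_pc_to_block_map_.Get(code_offset);
      return (block_id == kNullBlockId) ? NULL : block_list_.Get(block_id);
    }

Two bytes per code unit instead of four (with 32-bit pointers) halves
the table, and the direct index also removes the old sequential search
noted in the change list below.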
Changes:
o Reduce size of LIR struct by 12.5% (one of the big memory users)
o Repurpose the use/def portion of the LIR once optimization is complete.
o Encode instruction bits to LIR
o Thread LIR nodes requiring pc fixup
o Change follow-on assembly passes to only consider fixup LIRs
o Switch on pc-rel fixup kind
o Fast-path for small methods - single pass assembly
o Avoid using cb[n]z for null checks (displacement almost always
  exceeds its range; see the sketch after this list).
o Improve detection of large initialization methods.
o Rework def/use flag setup.
o Remove a sequential search from FindBlock by using a lookup table of
  16-bit block ids rather than full block pointers.
o Eliminate pcRelFixup and use fixup kind instead.
o Add check for 16-bit overflow on dex offset.
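On the cb[n]z point: in Thumb-2, cb[n]z can only branch forward by
0-126 bytes and can only test r0-r7, while a null check's slow path
normally sits at the end of the method. A hypothetical predicate
(name and signature invented) shows why starting with cb[n]z would
just force another assembly pass:

    // Can a Thumb-2 cb[n]z encode this null-check branch?  The limits
    // are architectural: forward-only, 0..126 byte displacement, low
    // registers only.
    bool CbzFitsNullCheck(int reg, int displacement_bytes) {
      return (reg <= 7) &&
             (displacement_bytes >= 0) &&
             (displacement_bytes <= 126);
    }

Since this is nearly always false for a null check, emitting the
two-instruction cmp/beq form up front avoids the retry.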
Change-Id: I4c6615f83fed46f84629ad6cfe4237205a9562b4
diff --git a/compiler/dex/mir_graph.h b/compiler/dex/mir_graph.h
index 9d4ab98..5d01489 100644
--- a/compiler/dex/mir_graph.h
+++ b/compiler/dex/mir_graph.h
@@ -569,6 +569,26 @@
return IsBackedge(branch_bb, branch_bb->taken) || IsBackedge(branch_bb, branch_bb->fall_through);
}
+ void CountBranch(int target_offset) {
+ if (target_offset <= current_offset_) {
+ backward_branches_++;
+ } else {
+ forward_branches_++;
+ }
+ }
+
+ int GetBranchCount() {
+ return backward_branches_ + forward_branches_;
+ }
+
+ bool IsPseudoMirOp(Instruction::Code opcode) {
+ return static_cast<int>(opcode) >= static_cast<int>(kMirOpFirst);
+ }
+
+ bool IsPseudoMirOp(int opcode) {
+ return opcode >= static_cast<int>(kMirOpFirst);
+ }
+
void BasicBlockCombine();
void CodeLayout();
void DumpCheckStats();
@@ -725,15 +745,14 @@
ArenaBitVector* try_block_addr_;
BasicBlock* entry_block_;
BasicBlock* exit_block_;
- BasicBlock* cur_block_;
int num_blocks_;
const DexFile::CodeItem* current_code_item_;
- GrowableArray<BasicBlock*> block_map_; // FindBlock lookup cache.
+ GrowableArray<uint16_t> dex_pc_to_block_map_; // FindBlock lookup cache.
std::vector<DexCompilationUnit*> m_units_; // List of methods included in this graph
typedef std::pair<int, int> MIRLocation; // Insert point, (m_unit_ index, offset)
std::vector<MIRLocation> method_stack_; // Include stack
int current_method_;
- int current_offset_;
+ int current_offset_; // Dex offset in code units
int def_count_; // Used to estimate size of ssa name storage.
int* opcode_count_; // Dex opcode coverage stats.
int num_ssa_regs_; // Number of names following SSA transformation.
@@ -743,6 +762,8 @@
Checkstats* checkstats_;
SpecialCaseHandler special_case_;
ArenaAllocator* arena_;
+ int backward_branches_;
+ int forward_branches_;
};
} // namespace art