| | | |
|---|---|---|
| author | 2019-05-26 00:10:25 +0100 | |
| committer | 2019-05-26 23:47:47 +0000 | |
| commit | e42a4b95eed312e6f7019645f4c66b2d77254433 (patch) | |
| tree | dd150dd4651180c5fbba3a4fd90f8ca8a3f14e9d /compiler/optimizing/stack_map_stream.cc | |
| parent | 67ba872df798271d2960be27c7f1e813259feabc (diff) | |
Optimize stack maps: add fast path for no inline info.
Consumers of CodeInfo can skip significant chunks of work
if they can quickly determine that a method has no inlining.
Store this fact as a flag bit at the start of code info.
This changes binary format and adds <0.1% to oat size.
I added the extra flag field as the simplest solution for now,
although I would like to use it for more things in the future.
(e.g. store the special cases of empty/deduped tables in it)
This improves app startup by 0.4% (maps,speed).
PMD on golem seems to get around 15% faster.
Bug: 133257467
Test: ./art/test.py -b --host --64
Change-Id: Ia498a31bafc74b51cc95b8c70cf1da4b0e3d894e
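The flag-first encoding described in the message can be sketched as below. This is a simplified, byte-oriented ULEB128 illustration, not ART's actual bit-level `BitMemoryWriter`; the function name `EncodeHeader` and the field choice are hypothetical, but it shows the key idea: the flags varint is written first, so a reader can inspect it without decoding the rest of the header.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch (assumed names, not ART's real API): encode small
// header fields as ULEB128 varints, with the new flags field first so
// consumers can test it cheaply before decoding anything else.
std::vector<uint8_t> EncodeHeader(uint32_t flags, uint32_t packed_frame_size) {
  std::vector<uint8_t> out;
  auto write_varint = [&out](uint32_t v) {
    // ULEB128: 7 payload bits per byte, high bit marks continuation.
    while (v >= 0x80) {
      out.push_back(static_cast<uint8_t>((v & 0x7f) | 0x80));
      v >>= 7;
    }
    out.push_back(static_cast<uint8_t>(v));
  };
  write_varint(flags);              // new leading flags field
  write_varint(packed_frame_size);  // existing fields follow as before
  return out;
}
```

Since most methods have no inlining, the flags varint is usually a single zero byte, which is consistent with the reported <0.1% oat-size growth.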
Diffstat (limited to 'compiler/optimizing/stack_map_stream.cc')
| -rw-r--r-- | compiler/optimizing/stack_map_stream.cc | 2 |
|---|---|---|

1 file changed, 2 insertions, 0 deletions
```diff
diff --git a/compiler/optimizing/stack_map_stream.cc b/compiler/optimizing/stack_map_stream.cc
index 8c3664312d..e21e21cdf3 100644
--- a/compiler/optimizing/stack_map_stream.cc
+++ b/compiler/optimizing/stack_map_stream.cc
@@ -184,6 +184,7 @@ void StackMapStream::BeginInlineInfoEntry(ArtMethod* method,
   in_inline_info_ = true;
   DCHECK_EQ(expected_num_dex_registers_, current_dex_registers_.size());

+  flags_ |= CodeInfo::kHasInlineInfo;
   expected_num_dex_registers_ += num_dex_registers;

   BitTableBuilder<InlineInfo>::Entry entry;
@@ -305,6 +306,7 @@ ScopedArenaVector<uint8_t> StackMapStream::Encode() {
   ScopedArenaVector<uint8_t> buffer(allocator_->Adapter(kArenaAllocStackMapStream));
   BitMemoryWriter<ScopedArenaVector<uint8_t>> out(&buffer);

+  out.WriteVarint(flags_);
   out.WriteVarint(packed_frame_size_);
   out.WriteVarint(core_spill_mask_);
   out.WriteVarint(fp_spill_mask_);
```
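On the consumer side, the fast path enabled by this patch can be sketched as follows. The names `ReadVarint`, `HasInlineInfo`, and the `kHasInlineInfo` bit value are illustrative assumptions (ART's `CodeInfo` decoder is bit-level and more involved); the point is that only the leading flags varint needs to be read to decide whether any inline-info decoding work is necessary.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical flag bit mirroring CodeInfo::kHasInlineInfo from the patch.
constexpr uint32_t kHasInlineInfo = 1u;

// Decode one ULEB128 varint starting at *pos, advancing *pos past it.
uint32_t ReadVarint(const uint8_t* data, size_t* pos) {
  uint32_t value = 0;
  int shift = 0;
  uint8_t byte;
  do {
    byte = data[(*pos)++];
    value |= static_cast<uint32_t>(byte & 0x7f) << shift;
    shift += 7;
  } while (byte & 0x80);
  return value;
}

// Fast path: look at the first varint only; if the flag is clear, the
// caller can skip all inline-info tables without decoding them.
bool HasInlineInfo(const std::vector<uint8_t>& code_info) {
  size_t pos = 0;
  return (ReadVarint(code_info.data(), &pos) & kHasInlineInfo) != 0;
}
```

Because the check touches a single byte in the common no-inlining case, it is cheap enough to run on every stack walk, which matches the reported 0.4% app-startup improvement.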