Use ShouldDeoptimizeFlag to check if method exit hooks are needed
When we want to execute a particular method in the switch interpreter, we
update the entry point of the method and need to deoptimize any
invocations of that method that are currently on the stack. All future
invocations will use the updated entry point, so we only need to check
for a deopt in the invocations that are already on the stack. In the
current implementation, we check a global flag to decide whether to call
method exit hooks (which in turn check if a deopt of the caller is
necessary), which means we call method exit hooks more often than
necessary. This CL instead uses a bit in the ShouldDeoptimizeFlag and
sets that bit for all invocations of the method that are currently on
the stack.
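The stack-walk step above can be sketched as follows. This is a minimal model, not the actual ART implementation: the real runtime walks frames with a StackVisitor over art::ArtMethod, and the `Frame` struct, `kCheckCallerForDeopt` value, and function name here are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical minimal model of a stack frame with a per-frame deopt-flag
// stack slot; the real ART stack walk does not use this toy structure.
struct Frame {
  const void* method;              // identity of the invoked method
  uint8_t should_deoptimize_flag;  // per-frame ShouldDeoptimizeFlag slot
};

// Illustrative bit value; the actual bit layout lives in the ART sources.
constexpr uint8_t kCheckCallerForDeopt = 1u << 2;

// When a method's entry point changes, mark every existing invocation of it
// on the stack so that its exit sequence checks whether a deopt is needed.
void MarkFramesForDeoptCheck(std::vector<Frame>& stack, const void* method) {
  for (Frame& frame : stack) {
    if (frame.method == method) {
      frame.should_deoptimize_flag |= kCheckCallerForDeopt;
    }
  }
}
```

Future invocations never go through this walk; they pick up the new entry point directly, which is why only the frames already on the stack need the bit.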
We still have to call method exit hooks for future invocations when
method exit listeners are installed. So the JITed code is now updated to
call method exit hooks if the stack slot indicates a deopt check is
necessary or if method exit listeners are installed.
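The condition the JITed exit sequence now evaluates can be sketched in plain C++ (the emitted code below does this with Ldr/Cbnz and a Ldrb of the listener flag). The function name and the idea that any non-zero flag value is treated as "take the slow path" follow the diff's comment; the helper itself is an illustrative assumption, not a runtime API.

```cpp
#include <cstdint>

// Sketch of the check the generated method-exit sequence performs.
// Any non-zero per-frame flag means some deopt-related action is pending,
// so branching to the slow path on non-zero is safe even though only the
// caller-deopt bit strictly matters here.
bool NeedsMethodExitHook(uint8_t should_deoptimize_flag,
                         bool have_method_exit_listeners) {
  return should_deoptimize_flag != 0 || have_method_exit_listeners;
}
```

Checking the whole byte for non-zero rather than testing a single bit keeps the fast path to one load and one compare-and-branch.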
This improves the performance of the golem benchmarks by close to 8x,
bringing it close to what it was before adding a breakpoint.
Bug: 253232638
Test: art/test.py
Change-Id: Ic70a568c3099bc9df8d72f423b33b4f148209de9
diff --git a/compiler/optimizing/code_generator_arm64.cc b/compiler/optimizing/code_generator_arm64.cc
index 4092858..7fb3b24 100644
--- a/compiler/optimizing/code_generator_arm64.cc
+++ b/compiler/optimizing/code_generator_arm64.cc
@@ -1171,9 +1171,19 @@
new (codegen_->GetScopedAllocator()) MethodEntryExitHooksSlowPathARM64(instruction);
codegen_->AddSlowPath(slow_path);
+ if (instruction->IsMethodExitHook()) {
+ // Check whether the caller needs a deoptimization. Strictly speaking, it
+ // would be sufficient to check the CheckCallerForDeopt bit, but it is faster
+ // to check for any non-zero value. The kCHA bit is unused in debuggable
+ // runtimes because CHA optimization is disabled there, and the other bit is
+ // set when this method itself requires a deoptimization due to redefinition.
+ // So it is safe to just check for a non-zero value here.
+ __ Ldr(value, MemOperand(sp, codegen_->GetStackOffsetOfShouldDeoptimizeFlag()));
+ __ Cbnz(value, slow_path->GetEntryLabel());
+ }
+
uint64_t address = reinterpret_cast64<uint64_t>(Runtime::Current()->GetInstrumentation());
MemberOffset offset = instruction->IsMethodExitHook() ?
- instrumentation::Instrumentation::NeedsExitHooksOffset() :
+ instrumentation::Instrumentation::HaveMethodExitListenersOffset() :
instrumentation::Instrumentation::HaveMethodEntryListenersOffset();
__ Mov(temp, address + offset.Int32Value());
__ Ldrb(value, MemOperand(temp, 0));