Selective deoptimization.

Update the instrumentation to allow selective deoptimization.

Separate instrumentation listener registration from stubs configuration. A
listener is now responsible for configuring the appropriate stubs.
- The method tracing listener installs instrumentation entry/exit stubs or
the interpreter, depending on the accuracy of the events we want (controlled
by kDeoptimizeForAccurateMethodEntryExitListeners), as sketched below.
- The debugger registers itself as an instrumentation listener but does not
modify method entrypoints. It only does so on demand, when deoptimizing one
method or all methods.
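
For illustration, the tracing side now reduces to the following helper (taken
from the instrumentation.cc hunk below):

  void Instrumentation::EnableMethodTracing() {
    // Accurate entry/exit events require the interpreter; otherwise the
    // faster entry/exit stubs are enough.
    bool require_interpreter = kDeoptimizeForAccurateMethodEntryExitListeners;
    ConfigureStubs(!require_interpreter, require_interpreter);
  }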

Selective deoptimization is used for breakpoints only. When a breakpoint is
requested, the debugger deoptimizes the method by setting its entrypoint to
the interpreter stub. Since several breakpoints can be set on the same method,
we deoptimize it only once. When the last breakpoint on a method is removed,
we reoptimize it by restoring its original entrypoints.
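
In simplified form (locking and JDWP plumbing elided; RequestDeoptimization is
a hypothetical stand-in for queuing a MethodInstrumentationRequest), setting a
breakpoint looks like:

  // Sketch: deoptimize only on the first breakpoint set in a method.
  bool need_deoptimization = true;
  for (const Breakpoint& breakpoint : gBreakpoints) {
    if (breakpoint.method == m) {
      need_deoptimization = false;  // An earlier breakpoint already deoptimized it.
      break;
    }
  }
  gBreakpoints.push_back(Breakpoint(m, dex_pc));
  if (need_deoptimization) {
    RequestDeoptimization(m);  // Hypothetical helper: queues the request.
  }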

Full deoptimization is used for method entry, method exit and single-step
events. When one of these events is requested, we force everything to run with
the interpreter (except native and proxy methods). When the last of these
events is removed, we restore all method entrypoints except those of methods
that are currently deoptimized.
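
On the JDWP side, a counter of events requiring full deoptimization ensures we
only transition on the first and last such event (from the jdwp_event.cc hunk
below):

  if (NeedsFullDeoptimization(pEvent->eventKind)) {
    if (full_deoptimization_requests_ == 0) {
      // First event needing full deoptimization: queue the request.
      Dbg::EnableFullDeoptimization();
    }
    ++full_deoptimization_requests_;
  }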

Deoptimizing a method requires that all mutator threads be suspended so we can
walk each thread's stack and ensure no code is executing while we modify
method entrypoints. Suspending all threads requires that we hold no lock.
In the debugger, we deoptimize/undeoptimize when the JDWP event list changes
(for instance, when adding or removing a breakpoint). During the update, we
must hold the JDWP event list lock, which means we cannot suspend all threads
at that point.
To deal with these constraints, we maintain a queue of deoptimization
requests. When an event needs selective/full deoptimization/undeoptimization,
we save its request in the queue. Once we release the JDWP event list lock, we
suspend all threads, process the queue and finally resume all threads. This is
done in Dbg::ManageDeoptimization, sketched below. Note: threads already
suspended before this point remain suspended, so we don't "break" debugger
suspensions.
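
Slightly simplified (the thread-state bookkeeping is elided), the flow is:

  void Dbg::ManageDeoptimization() {
    Thread* const self = Thread::Current();
    {
      // Avoid a suspend/resume cycle when there is no pending request.
      MutexLock mu(self, *Locks::deoptimization_lock_);
      if (gDeoptimizationRequests.empty()) {
        return;
      }
    }
    self->TransitionFromRunnableToSuspended(kWaitingForDeoptimization);
    Runtime::Current()->GetThreadList()->SuspendAll();
    // Process the queue while holding the mutator lock exclusively.
    ProcessDeoptimizationRequests();
    Runtime::Current()->GetThreadList()->ResumeAll();
    self->TransitionFromSuspendedToRunnable();
  }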

When we deoptimize one method or every method, we need to walk each thread's
stack to install the instrumentation exit PC as the return PC and save
information in an instrumentation stack frame. Since we can now deoptimize
multiple times during the execution of an application, we must preserve
existing instrumentation frames (the result of previous deoptimizations). This
requires pushing new instrumentation frames before the existing ones so we
don't corrupt the instrumentation stack while walking it.
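
Concretely, in InstrumentationInstallStack a new frame is inserted before the
frames left by a previous deoptimization (from the instrumentation.cc hunk
below):

  // Insert the frame before the old ones so we do not corrupt the
  // instrumentation stack: the existing frames stay at the end, in
  // stack order.
  auto it = instrumentation_stack_->end() - existing_instrumentation_frames_count_;
  instrumentation_stack_->insert(it, instrumentation_frame);
  SetReturnPc(instrumentation_exit_pc_);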

Bug: 11538162
Change-Id: I477142df17edf2dab8ac5d879daacc5c08a67c39
diff --git a/runtime/debugger.cc b/runtime/debugger.cc
index bcf7267..4ea1366 100644
--- a/runtime/debugger.cc
+++ b/runtime/debugger.cc
@@ -176,14 +176,27 @@
 static size_t gAllocRecordHead GUARDED_BY(gAllocTrackerLock) = 0;
 static size_t gAllocRecordCount GUARDED_BY(gAllocTrackerLock) = 0;
 
-// Breakpoints and single-stepping.
+// Deoptimization support.
+struct MethodInstrumentationRequest {
+  bool deoptimize;
+
+  // Method for selective deoptimization. NULL means full deoptimization.
+  mirror::ArtMethod* method;
+
+  MethodInstrumentationRequest(bool deoptimize, mirror::ArtMethod* method)
+    : deoptimize(deoptimize), method(method) {}
+};
+// TODO: we need to visit associated methods as roots.
+static std::vector<MethodInstrumentationRequest> gDeoptimizationRequests GUARDED_BY(Locks::deoptimization_lock_);
+
+// Breakpoints.
 static std::vector<Breakpoint> gBreakpoints GUARDED_BY(Locks::breakpoint_lock_);
 
 static bool IsBreakpoint(const mirror::ArtMethod* m, uint32_t dex_pc)
     LOCKS_EXCLUDED(Locks::breakpoint_lock_)
     SHARED_LOCKS_REQUIRED(Locks::mutator_lock_) {
   MutexLock mu(Thread::Current(), *Locks::breakpoint_lock_);
-  for (size_t i = 0; i < gBreakpoints.size(); ++i) {
+  for (size_t i = 0, e = gBreakpoints.size(); i < e; ++i) {
     if (gBreakpoints[i].method == m && gBreakpoints[i].dex_pc == dex_pc) {
       VLOG(jdwp) << "Hit breakpoint #" << i << ": " << gBreakpoints[i];
       return true;
@@ -520,11 +533,17 @@
     CHECK_EQ(gBreakpoints.size(), 0U);
   }
 
+  {
+    MutexLock mu(Thread::Current(), *Locks::deoptimization_lock_);
+    CHECK_EQ(gDeoptimizationRequests.size(), 0U);
+  }
+
   Runtime* runtime = Runtime::Current();
   runtime->GetThreadList()->SuspendAll();
   Thread* self = Thread::Current();
   ThreadState old_state = self->SetStateUnsafe(kRunnable);
   CHECK_NE(old_state, kRunnable);
+  runtime->GetInstrumentation()->EnableDeoptimization();
   runtime->GetInstrumentation()->AddListener(&gDebugInstrumentationListener,
                                              instrumentation::Instrumentation::kMethodEntered |
                                              instrumentation::Instrumentation::kMethodExited |
@@ -549,6 +568,14 @@
   runtime->GetThreadList()->SuspendAll();
   Thread* self = Thread::Current();
   ThreadState old_state = self->SetStateUnsafe(kRunnable);
+  {
+    // Since we're going to disable deoptimization, we clear the deoptimization requests queue.
+    // This prevents us from having a pending deoptimization request when the debugger attaches
+    // to us again before any event has been requested.
+    MutexLock mu(Thread::Current(), *Locks::deoptimization_lock_);
+    gDeoptimizationRequests.clear();
+  }
+  runtime->GetInstrumentation()->DisableDeoptimization();
   runtime->GetInstrumentation()->RemoveListener(&gDebugInstrumentationListener,
                                                 instrumentation::Instrumentation::kMethodEntered |
                                                 instrumentation::Instrumentation::kMethodExited |
@@ -1691,6 +1718,7 @@
     case kWaitingForDebuggerSend:
     case kWaitingForDebuggerSuspension:
     case kWaitingForDebuggerToAttach:
+    case kWaitingForDeoptimization:
     case kWaitingForGcToComplete:
     case kWaitingForCheckPointsToRun:
     case kWaitingForJniOnLoad:
@@ -2384,22 +2412,129 @@
   }
 }
 
+static void ProcessDeoptimizationRequests()
+    LOCKS_EXCLUDED(Locks::deoptimization_lock_)
+    EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_) {
+  Locks::mutator_lock_->AssertExclusiveHeld(Thread::Current());
+  MutexLock mu(Thread::Current(), *Locks::deoptimization_lock_);
+  instrumentation::Instrumentation* instrumentation = Runtime::Current()->GetInstrumentation();
+  for (const MethodInstrumentationRequest& request : gDeoptimizationRequests) {
+    mirror::ArtMethod* const method = request.method;
+    if (method != nullptr) {
+      // Selective deoptimization.
+      if (request.deoptimize) {
+        VLOG(jdwp) << "Deoptimize method " << PrettyMethod(method);
+        instrumentation->Deoptimize(method);
+      } else {
+        VLOG(jdwp) << "Undeoptimize method " << PrettyMethod(method);
+        instrumentation->Undeoptimize(method);
+      }
+    } else {
+      // Full deoptimization.
+      if (request.deoptimize) {
+        VLOG(jdwp) << "Deoptimize the world";
+        instrumentation->DeoptimizeEverything();
+      } else {
+        VLOG(jdwp) << "Undeoptimize the world";
+        instrumentation->UndeoptimizeEverything();
+      }
+    }
+  }
+  gDeoptimizationRequests.clear();
+}
+
+// Process deoptimization requests after suspending all mutator threads.
+void Dbg::ManageDeoptimization() {
+  Thread* const self = Thread::Current();
+  {
+    // Avoid suspend/resume if there is no pending request.
+    MutexLock mu(self, *Locks::deoptimization_lock_);
+    if (gDeoptimizationRequests.empty()) {
+      return;
+    }
+  }
+  CHECK_EQ(self->GetState(), kRunnable);
+  self->TransitionFromRunnableToSuspended(kWaitingForDeoptimization);
+  // We need to suspend mutator threads first.
+  Runtime* const runtime = Runtime::Current();
+  runtime->GetThreadList()->SuspendAll();
+  const ThreadState old_state = self->SetStateUnsafe(kRunnable);
+  ProcessDeoptimizationRequests();
+  CHECK_EQ(self->SetStateUnsafe(old_state), kRunnable);
+  runtime->GetThreadList()->ResumeAll();
+  self->TransitionFromSuspendedToRunnable();
+}
+
+// Enable full deoptimization.
+void Dbg::EnableFullDeoptimization() {
+  MutexLock mu(Thread::Current(), *Locks::deoptimization_lock_);
+  VLOG(jdwp) << "Request full deoptimization";
+  gDeoptimizationRequests.push_back(MethodInstrumentationRequest(true, nullptr));
+}
+
+// Disable full deoptimization.
+void Dbg::DisableFullDeoptimization() {
+  MutexLock mu(Thread::Current(), *Locks::deoptimization_lock_);
+  VLOG(jdwp) << "Request full undeoptimization";
+  gDeoptimizationRequests.push_back(MethodInstrumentationRequest(false, nullptr));
+}
+
 void Dbg::WatchLocation(const JDWP::JdwpLocation* location) {
-  MutexLock mu(Thread::Current(), *Locks::breakpoint_lock_);
+  bool need_deoptimization = true;
   mirror::ArtMethod* m = FromMethodId(location->method_id);
-  gBreakpoints.push_back(Breakpoint(m, location->dex_pc));
-  VLOG(jdwp) << "Set breakpoint #" << (gBreakpoints.size() - 1) << ": " << gBreakpoints[gBreakpoints.size() - 1];
+  {
+    MutexLock mu(Thread::Current(), *Locks::breakpoint_lock_);
+
+    // If there is no breakpoint on this method yet, we need to deoptimize it.
+    for (const Breakpoint& breakpoint : gBreakpoints) {
+      if (breakpoint.method == m) {
+        // We already set a breakpoint on this method, hence we deoptimized it.
+        DCHECK(Runtime::Current()->GetInstrumentation()->IsDeoptimized(m));
+        need_deoptimization = false;
+        break;
+      }
+    }
+
+    gBreakpoints.push_back(Breakpoint(m, location->dex_pc));
+    VLOG(jdwp) << "Set breakpoint #" << (gBreakpoints.size() - 1) << ": " << gBreakpoints[gBreakpoints.size() - 1];
+  }
+
+  if (need_deoptimization) {
+    // Request its deoptimization. This will be done after updating the JDWP event list.
+    MutexLock mu(Thread::Current(), *Locks::deoptimization_lock_);
+    gDeoptimizationRequests.push_back(MethodInstrumentationRequest(true, m));
+    VLOG(jdwp) << "Request deoptimization of " << PrettyMethod(m);
+  }
 }
 
 void Dbg::UnwatchLocation(const JDWP::JdwpLocation* location) {
-  MutexLock mu(Thread::Current(), *Locks::breakpoint_lock_);
+  bool can_undeoptimize = true;
   mirror::ArtMethod* m = FromMethodId(location->method_id);
-  for (size_t i = 0; i < gBreakpoints.size(); ++i) {
-    if (gBreakpoints[i].method == m && gBreakpoints[i].dex_pc == location->dex_pc) {
-      VLOG(jdwp) << "Removed breakpoint #" << i << ": " << gBreakpoints[i];
-      gBreakpoints.erase(gBreakpoints.begin() + i);
-      return;
+  DCHECK(Runtime::Current()->GetInstrumentation()->IsDeoptimized(m));
+  {
+    MutexLock mu(Thread::Current(), *Locks::breakpoint_lock_);
+    for (size_t i = 0, e = gBreakpoints.size(); i < e; ++i) {
+      if (gBreakpoints[i].method == m && gBreakpoints[i].dex_pc == location->dex_pc) {
+        VLOG(jdwp) << "Removed breakpoint #" << i << ": " << gBreakpoints[i];
+        gBreakpoints.erase(gBreakpoints.begin() + i);
+        break;
+      }
     }
+
+    // If there is no breakpoint on this method, we can undeoptimize it.
+    for (const Breakpoint& breakpoint : gBreakpoints) {
+      if (breakpoint.method == m) {
+        can_undeoptimize = false;
+        break;
+      }
+    }
+  }
+
+  if (can_undeoptimize) {
+    // Request its undeoptimization. This will be done after updating the JDWP event list.
+    MutexLock mu(Thread::Current(), *Locks::deoptimization_lock_);
+    gDeoptimizationRequests.push_back(MethodInstrumentationRequest(false, m));
+    VLOG(jdwp) << "Request undeoptimization of " << PrettyMethod(m);
   }
 }
 
diff --git a/runtime/debugger.h b/runtime/debugger.h
index acbb2c6..a3f8b9c 100644
--- a/runtime/debugger.h
+++ b/runtime/debugger.h
@@ -137,8 +137,9 @@
    * when the debugger attaches.
    */
   static void Connected();
-  static void GoActive() LOCKS_EXCLUDED(Locks::breakpoint_lock_, Locks::mutator_lock_);
-  static void Disconnected() LOCKS_EXCLUDED(Locks::mutator_lock_);
+  static void GoActive()
+      LOCKS_EXCLUDED(Locks::breakpoint_lock_, Locks::deoptimization_lock_, Locks::mutator_lock_);
+  static void Disconnected() LOCKS_EXCLUDED(Locks::deoptimization_lock_, Locks::mutator_lock_);
   static void Disposed();
 
   // Returns true if we're actually debugging with a real debugger, false if it's
@@ -385,12 +386,29 @@
       LOCKS_EXCLUDED(Locks::breakpoint_lock_)
       SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
 
+  // Full deoptimization control. Only used for method entry/exit and single-stepping.
+  static void EnableFullDeoptimization()
+      LOCKS_EXCLUDED(Locks::deoptimization_lock_)
+      SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
+  static void DisableFullDeoptimization()
+      EXCLUSIVE_LOCKS_REQUIRED(event_list_lock_)
+      SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
+
+  // Manage deoptimization after updating the JDWP event list. The actual deoptimization is done
+  // while all mutator threads are suspended.
+  static void ManageDeoptimization()
+      LOCKS_EXCLUDED(Locks::deoptimization_lock_)
+      SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
+
+  // Breakpoints.
   static void WatchLocation(const JDWP::JdwpLocation* pLoc)
-      LOCKS_EXCLUDED(Locks::breakpoint_lock_)
+      LOCKS_EXCLUDED(Locks::breakpoint_lock_, Locks::deoptimization_lock_)
       SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
   static void UnwatchLocation(const JDWP::JdwpLocation* pLoc)
-      LOCKS_EXCLUDED(Locks::breakpoint_lock_)
+      LOCKS_EXCLUDED(Locks::breakpoint_lock_, Locks::deoptimization_lock_)
       SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
+
+  // Single-stepping.
   static JDWP::JdwpError ConfigureStep(JDWP::ObjectId thread_id, JDWP::JdwpStepSize size,
                                        JDWP::JdwpStepDepth depth)
       SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
diff --git a/runtime/instrumentation.cc b/runtime/instrumentation.cc
index 710d9dd..0b11543 100644
--- a/runtime/instrumentation.cc
+++ b/runtime/instrumentation.cc
@@ -23,6 +23,7 @@
 #include "class_linker.h"
 #include "debugger.h"
 #include "dex_file-inl.h"
+#include "interpreter/interpreter.h"
 #include "mirror/art_method-inl.h"
 #include "mirror/class-inl.h"
 #include "mirror/dex_cache.h"
@@ -58,76 +59,87 @@
 }
 
 bool Instrumentation::InstallStubsForClass(mirror::Class* klass) {
-  bool uninstall = !entry_exit_stubs_installed_ && !interpreter_stubs_installed_;
-  ClassLinker* class_linker = Runtime::Current()->GetClassLinker();
-  bool is_initialized = klass->IsInitialized();
-  for (size_t i = 0; i < klass->NumDirectMethods(); i++) {
-    mirror::ArtMethod* method = klass->GetDirectMethod(i);
-    if (!method->IsAbstract() && !method->IsProxyMethod()) {
-      const void* new_code;
-      if (uninstall) {
-        if (forced_interpret_only_ && !method->IsNative()) {
-          new_code = GetCompiledCodeToInterpreterBridge();
-        } else if (is_initialized || !method->IsStatic() || method->IsConstructor()) {
-          new_code = class_linker->GetOatCodeFor(method);
-        } else {
-          new_code = GetResolutionTrampoline(class_linker);
-        }
-      } else {  // !uninstall
-        if (!interpreter_stubs_installed_ || method->IsNative()) {
-          // Do not overwrite resolution trampoline. When the trampoline initializes the method's
-          // class, all its static methods' code will be set to the instrumentation entry point.
-          // For more details, see ClassLinker::FixupStaticTrampolines.
-          if (is_initialized || !method->IsStatic() || method->IsConstructor()) {
-            new_code = GetQuickInstrumentationEntryPoint();
-          } else {
-            new_code = GetResolutionTrampoline(class_linker);
-          }
-        } else {
-          new_code = GetCompiledCodeToInterpreterBridge();
-        }
-      }
-      method->SetEntryPointFromCompiledCode(new_code);
-    }
+  for (size_t i = 0, e = klass->NumDirectMethods(); i < e; i++) {
+    InstallStubsForMethod(klass->GetDirectMethod(i));
   }
-  for (size_t i = 0; i < klass->NumVirtualMethods(); i++) {
-    mirror::ArtMethod* method = klass->GetVirtualMethod(i);
-    if (!method->IsAbstract() && !method->IsProxyMethod()) {
-      const void* new_code;
-      if (uninstall) {
-        if (forced_interpret_only_ && !method->IsNative()) {
-          new_code = GetCompiledCodeToInterpreterBridge();
-        } else {
-          new_code = class_linker->GetOatCodeFor(method);
-        }
-      } else {  // !uninstall
-        if (!interpreter_stubs_installed_ || method->IsNative()) {
-          new_code = GetQuickInstrumentationEntryPoint();
-        } else {
-          new_code = GetCompiledCodeToInterpreterBridge();
-        }
-      }
-      method->SetEntryPointFromCompiledCode(new_code);
-    }
+  for (size_t i = 0, e = klass->NumVirtualMethods(); i < e; i++) {
+    InstallStubsForMethod(klass->GetVirtualMethod(i));
   }
   return true;
 }
 
+static void UpdateEntrypoints(mirror::ArtMethod* method, const void* code) {
+  method->SetEntryPointFromCompiledCode(code);
+  if (!method->IsResolutionMethod()) {
+    if (code == GetCompiledCodeToInterpreterBridge()) {
+      method->SetEntryPointFromInterpreter(art::interpreter::artInterpreterToInterpreterBridge);
+    } else {
+      method->SetEntryPointFromInterpreter(art::artInterpreterToCompiledCodeBridge);
+    }
+  }
+}
+
+void Instrumentation::InstallStubsForMethod(mirror::ArtMethod* method) {
+  if (method->IsAbstract() || method->IsProxyMethod()) {
+    // Do not change stubs for these methods.
+    return;
+  }
+  const void* new_code;
+  bool uninstall = !entry_exit_stubs_installed_ && !interpreter_stubs_installed_;
+  ClassLinker* class_linker = Runtime::Current()->GetClassLinker();
+  bool is_class_initialized = method->GetDeclaringClass()->IsInitialized();
+  if (uninstall) {
+    if ((forced_interpret_only_ || IsDeoptimized(method)) && !method->IsNative()) {
+      new_code = GetCompiledCodeToInterpreterBridge();
+    } else if (is_class_initialized || !method->IsStatic() || method->IsConstructor()) {
+      new_code = class_linker->GetOatCodeFor(method);
+    } else {
+      new_code = GetResolutionTrampoline(class_linker);
+    }
+  } else {  // !uninstall
+    if ((interpreter_stubs_installed_ || IsDeoptimized(method)) && !method->IsNative()) {
+      new_code = GetCompiledCodeToInterpreterBridge();
+    } else {
+      // Do not overwrite resolution trampoline. When the trampoline initializes the method's
+      // class, all its static methods' code will be set to the instrumentation entry point.
+      // For more details, see ClassLinker::FixupStaticTrampolines.
+      if (is_class_initialized || !method->IsStatic() || method->IsConstructor()) {
+        // Do not overwrite the interpreter bridge, to avoid posting method entry/exit events twice.
+        new_code = class_linker->GetOatCodeFor(method);
+        if (entry_exit_stubs_installed_ && new_code != GetCompiledCodeToInterpreterBridge()) {
+          new_code = GetQuickInstrumentationEntryPoint();
+        }
+      } else {
+        new_code = GetResolutionTrampoline(class_linker);
+      }
+    }
+  }
+  UpdateEntrypoints(method, new_code);
+}
+
 // Places the instrumentation exit pc as the return PC for every quick frame. This also allows
 // deoptimization of quick frames to interpreter frames.
+// Since we may already have done this previously, we need to push new instrumentation frames
+// before the existing ones.
 static void InstrumentationInstallStack(Thread* thread, void* arg)
     SHARED_LOCKS_REQUIRED(Locks::mutator_lock_) {
   struct InstallStackVisitor : public StackVisitor {
-    InstallStackVisitor(Thread* thread, Context* context, uintptr_t instrumentation_exit_pc)
+    InstallStackVisitor(Thread* thread, Context* context, uintptr_t instrumentation_exit_pc,
+                        bool is_deoptimization_enabled)
         : StackVisitor(thread, context),  instrumentation_stack_(thread->GetInstrumentationStack()),
-          instrumentation_exit_pc_(instrumentation_exit_pc), last_return_pc_(0) {}
+          existing_instrumentation_frames_count_(instrumentation_stack_->size()),
+          instrumentation_exit_pc_(instrumentation_exit_pc),
+          is_deoptimization_enabled_(is_deoptimization_enabled),
+          reached_existing_instrumentation_frames_(false), instrumentation_stack_depth_(0),
+          last_return_pc_(0) {
+    }
 
     virtual bool VisitFrame() SHARED_LOCKS_REQUIRED(Locks::mutator_lock_) {
       mirror::ArtMethod* m = GetMethod();
       if (GetCurrentQuickFrame() == NULL) {
         if (kVerboseInstrumentation) {
           LOG(INFO) << "  Ignoring a shadow frame. Frame " << GetFrameId()
-              << " Method=" << PrettyMethod(m);
+                    << " Method=" << PrettyMethod(m);
         }
         return true;  // Ignore shadow frames.
       }
@@ -149,22 +161,45 @@
         LOG(INFO) << "  Installing exit stub in " << DescribeLocation();
       }
       uintptr_t return_pc = GetReturnPc();
-      CHECK_NE(return_pc, instrumentation_exit_pc_);
-      CHECK_NE(return_pc, 0U);
-      InstrumentationStackFrame instrumentation_frame(GetThisObject(), m, return_pc, GetFrameId(),
-                                                      false);
-      if (kVerboseInstrumentation) {
-        LOG(INFO) << "Pushing frame " << instrumentation_frame.Dump();
+      if (return_pc == instrumentation_exit_pc_) {
+        // We've reached a frame which has already been installed with instrumentation exit stub.
+        // We should have already installed instrumentation on previous frames.
+        reached_existing_instrumentation_frames_ = true;
+
+        CHECK_LT(instrumentation_stack_depth_, instrumentation_stack_->size());
+        const InstrumentationStackFrame& frame = instrumentation_stack_->at(instrumentation_stack_depth_);
+        CHECK_EQ(m, frame.method_) << "Expected " << PrettyMethod(m)
+                                   << ", Found " << PrettyMethod(frame.method_);
+        return_pc = frame.return_pc_;
+        if (kVerboseInstrumentation) {
+          LOG(INFO) << "Ignoring already instrumented " << frame.Dump();
+        }
+      } else {
+        CHECK_NE(return_pc, 0U);
+        CHECK(!reached_existing_instrumentation_frames_);
+        InstrumentationStackFrame instrumentation_frame(GetThisObject(), m, return_pc, GetFrameId(),
+                                                        false);
+        if (kVerboseInstrumentation) {
+          LOG(INFO) << "Pushing frame " << instrumentation_frame.Dump();
+        }
+
+        // Insert frame before old ones so we do not corrupt the instrumentation stack.
+        auto it = instrumentation_stack_->end() - existing_instrumentation_frames_count_;
+        instrumentation_stack_->insert(it, instrumentation_frame);
+        SetReturnPc(instrumentation_exit_pc_);
       }
-      instrumentation_stack_->push_back(instrumentation_frame);
       dex_pcs_.push_back(m->ToDexPc(last_return_pc_));
-      SetReturnPc(instrumentation_exit_pc_);
       last_return_pc_ = return_pc;
+      ++instrumentation_stack_depth_;
       return true;  // Continue.
     }
     std::deque<InstrumentationStackFrame>* const instrumentation_stack_;
+    const size_t existing_instrumentation_frames_count_;
     std::vector<uint32_t> dex_pcs_;
     const uintptr_t instrumentation_exit_pc_;
+    const bool is_deoptimization_enabled_;
+    bool reached_existing_instrumentation_frames_;
+    size_t instrumentation_stack_depth_;
     uintptr_t last_return_pc_;
   };
   if (kVerboseInstrumentation) {
@@ -172,21 +207,27 @@
     thread->GetThreadName(thread_name);
     LOG(INFO) << "Installing exit stubs in " << thread_name;
   }
+
+  Instrumentation* instrumentation = reinterpret_cast<Instrumentation*>(arg);
   UniquePtr<Context> context(Context::Create());
   uintptr_t instrumentation_exit_pc = GetQuickInstrumentationExitPc();
-  InstallStackVisitor visitor(thread, context.get(), instrumentation_exit_pc);
+  InstallStackVisitor visitor(thread, context.get(), instrumentation_exit_pc,
+                              instrumentation->IsDeoptimizationEnabled());
   visitor.WalkStack(true);
+  CHECK_EQ(visitor.dex_pcs_.size(), thread->GetInstrumentationStack()->size());
 
-  // Create method enter events for all methods current on the thread's stack.
-  Instrumentation* instrumentation = reinterpret_cast<Instrumentation*>(arg);
-  typedef std::deque<InstrumentationStackFrame>::const_reverse_iterator It;
-  for (It it = thread->GetInstrumentationStack()->rbegin(),
-       end = thread->GetInstrumentationStack()->rend(); it != end; ++it) {
-    mirror::Object* this_object = (*it).this_object_;
-    mirror::ArtMethod* method = (*it).method_;
-    uint32_t dex_pc = visitor.dex_pcs_.back();
-    visitor.dex_pcs_.pop_back();
-    instrumentation->MethodEnterEvent(thread, this_object, method, dex_pc);
+  if (!instrumentation->IsDeoptimizationEnabled()) {
+    // Create method enter events for all methods currently on the thread's stack. We only do this
+    // if no debugger is attached, to avoid posting events twice.
+    typedef std::deque<InstrumentationStackFrame>::const_reverse_iterator It;
+    for (It it = thread->GetInstrumentationStack()->rbegin(),
+        end = thread->GetInstrumentationStack()->rend(); it != end; ++it) {
+      mirror::Object* this_object = (*it).this_object_;
+      mirror::ArtMethod* method = (*it).method_;
+      uint32_t dex_pc = visitor.dex_pcs_.back();
+      visitor.dex_pcs_.pop_back();
+      instrumentation->MethodEnterEvent(thread, this_object, method, dex_pc);
+    }
   }
   thread->VerifyStack();
 }
@@ -233,9 +274,12 @@
             CHECK(m == instrumentation_frame.method_) << PrettyMethod(m);
           }
           SetReturnPc(instrumentation_frame.return_pc_);
-          // Create the method exit events. As the methods didn't really exit the result is 0.
-          instrumentation_->MethodExitEvent(thread_, instrumentation_frame.this_object_, m,
-                                            GetDexPc(), JValue());
+          if (!instrumentation_->IsDeoptimizationEnabled()) {
+            // Create the method exit events. As the methods didn't really exit, the result is 0.
+            // We only do this if no debugger is attached, to avoid posting events twice.
+            instrumentation_->MethodExitEvent(thread_, instrumentation_frame.this_object_, m,
+                                              GetDexPc(), JValue());
+          }
           frames_removed_++;
           removed_stub = true;
           break;
@@ -274,18 +318,12 @@
 
 void Instrumentation::AddListener(InstrumentationListener* listener, uint32_t events) {
   Locks::mutator_lock_->AssertExclusiveHeld(Thread::Current());
-  bool require_entry_exit_stubs = false;
-  bool require_interpreter = false;
   if ((events & kMethodEntered) != 0) {
     method_entry_listeners_.push_back(listener);
-    require_interpreter = kDeoptimizeForAccurateMethodEntryExitListeners;
-    require_entry_exit_stubs = !kDeoptimizeForAccurateMethodEntryExitListeners;
     have_method_entry_listeners_ = true;
   }
   if ((events & kMethodExited) != 0) {
     method_exit_listeners_.push_back(listener);
-    require_interpreter = kDeoptimizeForAccurateMethodEntryExitListeners;
-    require_entry_exit_stubs = !kDeoptimizeForAccurateMethodEntryExitListeners;
     have_method_exit_listeners_ = true;
   }
   if ((events & kMethodUnwind) != 0) {
@@ -294,21 +332,17 @@
   }
   if ((events & kDexPcMoved) != 0) {
     dex_pc_listeners_.push_back(listener);
-    require_interpreter = true;
     have_dex_pc_listeners_ = true;
   }
   if ((events & kExceptionCaught) != 0) {
     exception_caught_listeners_.push_back(listener);
     have_exception_caught_listeners_ = true;
   }
-  ConfigureStubs(require_entry_exit_stubs, require_interpreter);
   UpdateInterpreterHandlerTable();
 }
 
 void Instrumentation::RemoveListener(InstrumentationListener* listener, uint32_t events) {
   Locks::mutator_lock_->AssertExclusiveHeld(Thread::Current());
-  bool require_entry_exit_stubs = false;
-  bool require_interpreter = false;
 
   if ((events & kMethodEntered) != 0) {
     bool contains = std::find(method_entry_listeners_.begin(), method_entry_listeners_.end(),
@@ -317,10 +351,6 @@
       method_entry_listeners_.remove(listener);
     }
     have_method_entry_listeners_ = method_entry_listeners_.size() > 0;
-    require_entry_exit_stubs |= have_method_entry_listeners_ &&
-        !kDeoptimizeForAccurateMethodEntryExitListeners;
-    require_interpreter = have_method_entry_listeners_ &&
-        kDeoptimizeForAccurateMethodEntryExitListeners;
   }
   if ((events & kMethodExited) != 0) {
     bool contains = std::find(method_exit_listeners_.begin(), method_exit_listeners_.end(),
@@ -329,10 +359,6 @@
       method_exit_listeners_.remove(listener);
     }
     have_method_exit_listeners_ = method_exit_listeners_.size() > 0;
-    require_entry_exit_stubs |= have_method_exit_listeners_ &&
-        !kDeoptimizeForAccurateMethodEntryExitListeners;
-    require_interpreter = have_method_exit_listeners_ &&
-        kDeoptimizeForAccurateMethodEntryExitListeners;
   }
   if ((events & kMethodUnwind) != 0) {
     method_unwind_listeners_.remove(listener);
@@ -344,13 +370,11 @@
       dex_pc_listeners_.remove(listener);
     }
     have_dex_pc_listeners_ = dex_pc_listeners_.size() > 0;
-    require_interpreter |= have_dex_pc_listeners_;
   }
   if ((events & kExceptionCaught) != 0) {
     exception_caught_listeners_.remove(listener);
     have_exception_caught_listeners_ = exception_caught_listeners_.size() > 0;
   }
-  ConfigureStubs(require_entry_exit_stubs, require_interpreter);
   UpdateInterpreterHandlerTable();
 }
 
@@ -394,9 +418,12 @@
     interpreter_stubs_installed_ = false;
     entry_exit_stubs_installed_ = false;
     runtime->GetClassLinker()->VisitClasses(InstallStubsClassVisitor, this);
-    instrumentation_stubs_installed_ = false;
-    MutexLock mu(self, *Locks::thread_list_lock_);
-    Runtime::Current()->GetThreadList()->ForEach(InstrumentationRestoreStack, this);
+    // Restore the stack only if no method is currently deoptimized.
+    if (deoptimized_methods_.empty()) {
+      instrumentation_stubs_installed_ = false;
+      MutexLock mu(self, *Locks::thread_list_lock_);
+      Runtime::Current()->GetThreadList()->ForEach(InstrumentationRestoreStack, this);
+    }
   }
 }
 
@@ -444,22 +471,115 @@
 }
 
 void Instrumentation::UpdateMethodsCode(mirror::ArtMethod* method, const void* code) const {
+  const void* new_code;
   if (LIKELY(!instrumentation_stubs_installed_)) {
-    method->SetEntryPointFromCompiledCode(code);
+    new_code = code;
   } else {
-    if (!interpreter_stubs_installed_ || method->IsNative()) {
-      // Do not overwrite resolution trampoline. When the trampoline initializes the method's
-      // class, all its static methods' code will be set to the instrumentation entry point.
-      // For more details, see ClassLinker::FixupStaticTrampolines.
-      if (code == GetResolutionTrampoline(Runtime::Current()->GetClassLinker())) {
-        method->SetEntryPointFromCompiledCode(code);
-      } else {
-        method->SetEntryPointFromCompiledCode(GetQuickInstrumentationEntryPoint());
-      }
+    if ((interpreter_stubs_installed_ || IsDeoptimized(method)) && !method->IsNative()) {
+      new_code = GetCompiledCodeToInterpreterBridge();
+    } else if (code == GetResolutionTrampoline(Runtime::Current()->GetClassLinker()) ||
+               code == GetCompiledCodeToInterpreterBridge()) {
+      new_code = code;
+    } else if (entry_exit_stubs_installed_) {
+      new_code = GetQuickInstrumentationEntryPoint();
     } else {
-      method->SetEntryPointFromCompiledCode(GetCompiledCodeToInterpreterBridge());
+      new_code = code;
     }
   }
+  UpdateEntrypoints(method, new_code);
+}
+
+void Instrumentation::Deoptimize(mirror::ArtMethod* method) {
+  CHECK(!method->IsNative());
+  CHECK(!method->IsProxyMethod());
+  CHECK(!method->IsAbstract());
+
+  std::pair<std::set<mirror::ArtMethod*>::iterator, bool> pair = deoptimized_methods_.insert(method);
+  bool already_deoptimized = !pair.second;
+  CHECK(!already_deoptimized) << "Method " << PrettyMethod(method) << " is already deoptimized";
+
+  if (!interpreter_stubs_installed_) {
+    UpdateEntrypoints(method, GetCompiledCodeToInterpreterBridge());
+
+    // Install the instrumentation exit stub and instrumentation frames. We may already have
+    // installed these previously, in which case this only covers the newly created frames.
+    instrumentation_stubs_installed_ = true;
+    MutexLock mu(Thread::Current(), *Locks::thread_list_lock_);
+    Runtime::Current()->GetThreadList()->ForEach(InstrumentationInstallStack, this);
+  }
+}
+
+void Instrumentation::Undeoptimize(mirror::ArtMethod* method) {
+  CHECK(!method->IsNative());
+  CHECK(!method->IsProxyMethod());
+  CHECK(!method->IsAbstract());
+
+  auto it = deoptimized_methods_.find(method);
+  CHECK(it != deoptimized_methods_.end()) << "Method " << PrettyMethod(method) << " is not deoptimized";
+  deoptimized_methods_.erase(it);
+
+  // Restore code and possibly stack only if we did not deoptimize everything.
+  if (!interpreter_stubs_installed_) {
+    // Restore its code or resolution trampoline.
+    ClassLinker* class_linker = Runtime::Current()->GetClassLinker();
+    if (method->IsStatic() && !method->IsConstructor() && !method->GetDeclaringClass()->IsInitialized()) {
+      UpdateEntrypoints(method, GetResolutionTrampoline(class_linker));
+    } else {
+      UpdateEntrypoints(method, class_linker->GetOatCodeFor(method));
+    }
+
+    // If there is no deoptimized method left, we can restore the stack of each thread.
+    if (deoptimized_methods_.empty()) {
+      MutexLock mu(Thread::Current(), *Locks::thread_list_lock_);
+      Runtime::Current()->GetThreadList()->ForEach(InstrumentationRestoreStack, this);
+      instrumentation_stubs_installed_ = false;
+    }
+  }
+}
+
+bool Instrumentation::IsDeoptimized(mirror::ArtMethod* method) const {
+  DCHECK(method != nullptr);
+  return deoptimized_methods_.count(method) != 0;
+}
+
+void Instrumentation::EnableDeoptimization() {
+  CHECK(deoptimized_methods_.empty());
+}
+
+void Instrumentation::DisableDeoptimization() {
+  // If we deoptimized everything, undo it.
+  if (interpreter_stubs_installed_) {
+    UndeoptimizeEverything();
+  }
+  // Undeoptimize selectively deoptimized methods.
+  while (!deoptimized_methods_.empty()) {
+    auto it_begin = deoptimized_methods_.begin();
+    Undeoptimize(*it_begin);
+  }
+  CHECK(deoptimized_methods_.empty());
+}
+
+bool Instrumentation::IsDeoptimizationEnabled() const {
+  return interpreter_stubs_installed_ || !deoptimized_methods_.empty();
+}
+
+void Instrumentation::DeoptimizeEverything() {
+  CHECK(!interpreter_stubs_installed_);
+  ConfigureStubs(false, true);
+}
+
+void Instrumentation::UndeoptimizeEverything() {
+  CHECK(interpreter_stubs_installed_);
+  ConfigureStubs(false, false);
+}
+
+void Instrumentation::EnableMethodTracing() {
+  bool require_interpreter = kDeoptimizeForAccurateMethodEntryExitListeners;
+  ConfigureStubs(!require_interpreter, require_interpreter);
+}
+
+void Instrumentation::DisableMethodTracing() {
+  ConfigureStubs(false, false);
 }
 
 const void* Instrumentation::GetQuickCodeFor(const mirror::ArtMethod* method) const {
@@ -596,20 +716,19 @@
   mirror::Object* this_object = instrumentation_frame.this_object_;
   MethodExitEvent(self, this_object, instrumentation_frame.method_, dex_pc, return_value);
 
-  bool deoptimize = false;
-  if (interpreter_stubs_installed_) {
-    // Deoptimize unless we're returning to an upcall.
-    NthCallerVisitor visitor(self, 1, true);
-    visitor.WalkStack(true);
-    deoptimize = visitor.caller != NULL;
-    if (deoptimize && kVerboseInstrumentation) {
-      LOG(INFO) << "Deoptimizing into " << PrettyMethod(visitor.caller);
-    }
+  // Deoptimize if the caller needs to continue execution in the interpreter. Do nothing if we get
+  // back to an upcall.
+  NthCallerVisitor visitor(self, 1, true);
+  visitor.WalkStack(true);
+  bool deoptimize = (visitor.caller != NULL) &&
+                    (interpreter_stubs_installed_ || IsDeoptimized(visitor.caller));
+  if (deoptimize && kVerboseInstrumentation) {
+    LOG(INFO) << "Deoptimizing into " << PrettyMethod(visitor.caller);
   }
   if (deoptimize) {
     if (kVerboseInstrumentation) {
       LOG(INFO) << "Deoptimizing from " << PrettyMethod(method)
-          << " result is " << std::hex << return_value.GetJ();
+                << " result is " << std::hex << return_value.GetJ();
     }
     self->SetDeoptimizationReturnValue(return_value);
     return static_cast<uint64_t>(GetQuickDeoptimizationEntryPoint()) |
diff --git a/runtime/instrumentation.h b/runtime/instrumentation.h
index 72a646e..41b545d 100644
--- a/runtime/instrumentation.h
+++ b/runtime/instrumentation.h
@@ -22,6 +22,7 @@
 #include "locks.h"
 
 #include <stdint.h>
+#include <set>
 #include <list>
 
 namespace art {
@@ -120,6 +121,47 @@
       EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_)
       LOCKS_EXCLUDED(Locks::thread_list_lock_, Locks::classlinker_classes_lock_);
 
+  // Deoptimization.
+  void EnableDeoptimization() EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_);
+  void DisableDeoptimization() EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_);
+  bool IsDeoptimizationEnabled() const SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
+
+  // Executes everything with the interpreter.
+  void DeoptimizeEverything()
+      EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_)
+      LOCKS_EXCLUDED(Locks::thread_list_lock_, Locks::classlinker_classes_lock_);
+
+  // Executes everything with compiled code (or the interpreter if there is no code).
+  void UndeoptimizeEverything()
+      EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_)
+      LOCKS_EXCLUDED(Locks::thread_list_lock_, Locks::classlinker_classes_lock_);
+
+  // Deoptimize a method by forcing its execution with the interpreter. Note that a static
+  // method (other than a class initializer) whose entrypoint is the resolution trampoline will
+  // be deoptimized only once its declaring class is initialized.
+  void Deoptimize(mirror::ArtMethod* method)
+      LOCKS_EXCLUDED(Locks::thread_list_lock_)
+      EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_);
+
+  // Undeoptimize the method by restoring its entrypoints. Note that a static method (other than
+  // a class initializer) whose entrypoint is the resolution trampoline will be updated only once
+  // its declaring class is initialized.
+  void Undeoptimize(mirror::ArtMethod* method)
+      LOCKS_EXCLUDED(Locks::thread_list_lock_)
+      EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_);
+
+  bool IsDeoptimized(mirror::ArtMethod* method) const;
+
+  // Enable method tracing by installing instrumentation entry/exit stubs or the interpreter,
+  // depending on kDeoptimizeForAccurateMethodEntryExitListeners.
+  void EnableMethodTracing()
+      EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_)
+      LOCKS_EXCLUDED(Locks::thread_list_lock_, Locks::classlinker_classes_lock_);
+
+  // Disable method tracing by uninstalling instrumentation entry/exit stubs.
+  void DisableMethodTracing()
+      EXCLUSIVE_LOCKS_REQUIRED(Locks::mutator_lock_)
+      LOCKS_EXCLUDED(Locks::thread_list_lock_, Locks::classlinker_classes_lock_);
+
   InterpreterHandlerTable GetInterpreterHandlerTable() const {
     return interpreter_handler_table_;
   }
@@ -129,7 +171,8 @@
   void ResetQuickAllocEntryPoints();
 
   // Update the code of a method respecting any installed stubs.
-  void UpdateMethodsCode(mirror::ArtMethod* method, const void* code) const;
+  void UpdateMethodsCode(mirror::ArtMethod* method, const void* code) const
+      SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
 
   // Get the quick code for the given method. More efficient than asking the class linker as it
   // will short-cut to GetCode if instrumentation and static method resolution stubs aren't
@@ -232,6 +275,9 @@
   // Call back for configure stubs.
   bool InstallStubsForClass(mirror::Class* klass) SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
 
+  void InstallStubsForMethod(mirror::ArtMethod* method)
+      SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
+
  private:
   // Does the job of installing or removing instrumentation code within methods.
   void ConfigureStubs(bool require_entry_exit_stubs, bool require_interpreter)
@@ -294,6 +340,11 @@
   std::list<InstrumentationListener*> dex_pc_listeners_ GUARDED_BY(Locks::mutator_lock_);
   std::list<InstrumentationListener*> exception_caught_listeners_ GUARDED_BY(Locks::mutator_lock_);
 
+  // The set of methods being deoptimized (by the debugger), which must be executed with the
+  // interpreter only.
+  // TODO: we need to visit these methods as roots.
+  std::set<mirror::ArtMethod*> deoptimized_methods_;
+
   // Current interpreter handler table. This is updated each time the thread state flags are
   // modified.
   InterpreterHandlerTable interpreter_handler_table_;
@@ -317,9 +368,9 @@
 
   mirror::Object* this_object_;
   mirror::ArtMethod* method_;
-  const uintptr_t return_pc_;
-  const size_t frame_id_;
-  const bool interpreter_entry_;
+  uintptr_t return_pc_;
+  size_t frame_id_;
+  bool interpreter_entry_;
 };
 
 }  // namespace instrumentation
diff --git a/runtime/jdwp/jdwp.h b/runtime/jdwp/jdwp.h
index fd78bf2..ebc844e 100644
--- a/runtime/jdwp/jdwp.h
+++ b/runtime/jdwp/jdwp.h
@@ -328,9 +328,11 @@
   AtomicInteger event_serial_;
 
   // Linked list of events requested by the debugger (breakpoints, class prep, etc).
-  Mutex event_list_lock_;
+  Mutex event_list_lock_ DEFAULT_MUTEX_ACQUIRED_AFTER;
   JdwpEvent* event_list_ GUARDED_BY(event_list_lock_);
-  int event_list_size_ GUARDED_BY(event_list_lock_);  // Number of elements in event_list_.
+  size_t event_list_size_ GUARDED_BY(event_list_lock_);  // Number of elements in event_list_.
+  size_t full_deoptimization_requests_ GUARDED_BY(event_list_lock_);  // Number of events requiring
+                                                                      // full deoptimization.
 
   // Used to synchronize suspension of the event thread (to avoid receiving "resume"
   // events before the thread has finished suspending itself).
diff --git a/runtime/jdwp/jdwp_event.cc b/runtime/jdwp/jdwp_event.cc
index b05b49d..4aa7f13 100644
--- a/runtime/jdwp/jdwp_event.cc
+++ b/runtime/jdwp/jdwp_event.cc
@@ -135,6 +135,18 @@
   }
 }
 
+static bool NeedsFullDeoptimization(JdwpEventKind eventKind) {
+  switch (eventKind) {
+    case EK_METHOD_ENTRY:
+    case EK_METHOD_EXIT:
+    case EK_METHOD_EXIT_WITH_RETURN_VALUE:
+    case EK_SINGLE_STEP:
+      return true;
+    default:
+      return false;
+  }
+}
+
 /*
  * Add an event to the list.  Ordering is not important.
  *
@@ -170,16 +182,31 @@
     }
   }
 
-  /*
-   * Add to list.
-   */
-  MutexLock mu(Thread::Current(), event_list_lock_);
-  if (event_list_ != NULL) {
-    pEvent->next = event_list_;
-    event_list_->prev = pEvent;
+  {
+    /*
+     * Add to list.
+     */
+    MutexLock mu(Thread::Current(), event_list_lock_);
+    if (event_list_ != NULL) {
+      pEvent->next = event_list_;
+      event_list_->prev = pEvent;
+    }
+    event_list_ = pEvent;
+    ++event_list_size_;
+
+    /*
+     * Do we need to enable full deoptimization?
+     */
+    if (NeedsFullDeoptimization(pEvent->eventKind)) {
+      if (full_deoptimization_requests_ == 0) {
+        // This is the first event that needs full deoptimization: enable it.
+        Dbg::EnableFullDeoptimization();
+      }
+      ++full_deoptimization_requests_;
+    }
   }
-  event_list_ = pEvent;
-  ++event_list_size_;
+
+  Dbg::ManageDeoptimization();
 
   return ERR_NONE;
 }
@@ -225,6 +252,17 @@
 
   --event_list_size_;
   CHECK(event_list_size_ != 0 || event_list_ == NULL);
+
+  /*
+   * Can we disable full deoptimization?
+   */
+  if (NeedsFullDeoptimization(pEvent->eventKind)) {
+    --full_deoptimization_requests_;
+    if (full_deoptimization_requests_ == 0) {
+      // We no longer need full deoptimization.
+      Dbg::DisableFullDeoptimization();
+    }
+  }
 }
 
 /*
@@ -235,20 +273,25 @@
  * explicitly remove one-off single-step events.)
  */
 void JdwpState::UnregisterEventById(uint32_t requestId) {
-  MutexLock mu(Thread::Current(), event_list_lock_);
+  bool found = false;
+  {
+    MutexLock mu(Thread::Current(), event_list_lock_);
 
-  JdwpEvent* pEvent = event_list_;
-  while (pEvent != NULL) {
-    if (pEvent->requestId == requestId) {
-      UnregisterEvent(pEvent);
-      EventFree(pEvent);
-      return;      /* there can be only one with a given ID */
+    for (JdwpEvent* pEvent = event_list_; pEvent != nullptr; pEvent = pEvent->next) {
+      if (pEvent->requestId == requestId) {
+        found = true;
+        UnregisterEvent(pEvent);
+        EventFree(pEvent);
+        break;      /* there can be only one with a given ID */
+      }
     }
-
-    pEvent = pEvent->next;
   }
 
-  // ALOGD("Odd: no match when removing event reqId=0x%04x", requestId);
+  if (found) {
+    Dbg::ManageDeoptimization();
+  } else {
+    LOG(DEBUG) << StringPrintf("Odd: no match when removing event reqId=0x%04x", requestId);
+  }
 }
 
 /*
@@ -692,6 +735,8 @@
     expandBufAdd8BE(pReq, threadId);
   }
 
+  Dbg::ManageDeoptimization();
+
   /* send request and possibly suspend ourselves */
   SendRequestAndPossiblySuspend(pReq, suspend_policy, threadId);
 
@@ -753,14 +798,12 @@
     return false;
   }
 
-  JdwpEvent** match_list = NULL;
   int match_count = 0;
   ExpandBuf* pReq = NULL;
   JdwpSuspendPolicy suspend_policy = SP_NONE;
-
   {
     MutexLock mu(Thread::Current(), event_list_lock_);
-    match_list = AllocMatchList(event_list_size_);
+    JdwpEvent** match_list = AllocMatchList(event_list_size_);
     if ((eventFlags & Dbg::kBreakpoint) != 0) {
       FindMatchingEvents(EK_BREAKPOINT, &basket, match_list, &match_count);
     }
@@ -800,6 +843,8 @@
     CleanupMatchList(match_list, match_count);
   }
 
+  Dbg::ManageDeoptimization();
+
   SendRequestAndPossiblySuspend(pReq, suspend_policy, basket.threadId);
   return match_count != 0;
 }
@@ -859,6 +904,8 @@
     CleanupMatchList(match_list, match_count);
   }
 
+  Dbg::ManageDeoptimization();
+
   SendRequestAndPossiblySuspend(pReq, suspend_policy, basket.threadId);
 
   return match_count != 0;
@@ -912,13 +959,12 @@
     return false;
   }
 
-  JdwpEvent** match_list = NULL;
   int match_count = 0;
   ExpandBuf* pReq = NULL;
   JdwpSuspendPolicy suspend_policy = SP_NONE;
   {
     MutexLock mu(Thread::Current(), event_list_lock_);
-    match_list = AllocMatchList(event_list_size_);
+    JdwpEvent** match_list = AllocMatchList(event_list_size_);
     FindMatchingEvents(EK_EXCEPTION, &basket, match_list, &match_count);
     if (match_count != 0) {
       VLOG(jdwp) << "EVENT: " << match_list[0]->eventKind << "(" << match_count << " total)"
@@ -954,6 +1000,8 @@
     CleanupMatchList(match_list, match_count);
   }
 
+  Dbg::ManageDeoptimization();
+
   SendRequestAndPossiblySuspend(pReq, suspend_policy, basket.threadId);
 
   return match_count != 0;
@@ -1024,6 +1072,8 @@
     CleanupMatchList(match_list, match_count);
   }
 
+  Dbg::ManageDeoptimization();
+
   SendRequestAndPossiblySuspend(pReq, suspend_policy, basket.threadId);
 
   return match_count != 0;
diff --git a/runtime/jdwp/jdwp_main.cc b/runtime/jdwp/jdwp_main.cc
index 93deee5..127ebfa 100644
--- a/runtime/jdwp/jdwp_main.cc
+++ b/runtime/jdwp/jdwp_main.cc
@@ -214,6 +214,7 @@
       event_list_lock_("JDWP event list lock", kJdwpEventListLock),
       event_list_(NULL),
       event_list_size_(0),
+      full_deoptimization_requests_(0),
       event_thread_lock_("JDWP event thread lock"),
       event_thread_cond_("JDWP event thread condition variable", event_thread_lock_),
       event_thread_id_(0),
diff --git a/runtime/locks.cc b/runtime/locks.cc
index 5b462a1..d08206a 100644
--- a/runtime/locks.cc
+++ b/runtime/locks.cc
@@ -22,6 +22,7 @@
 
 Mutex* Locks::abort_lock_ = NULL;
 Mutex* Locks::breakpoint_lock_ = NULL;
+Mutex* Locks::deoptimization_lock_ = NULL;
 ReaderWriterMutex* Locks::classlinker_classes_lock_ = NULL;
 ReaderWriterMutex* Locks::heap_bitmap_lock_ = NULL;
 Mutex* Locks::logging_lock_ = NULL;
@@ -38,6 +39,7 @@
     // Already initialized.
     DCHECK(abort_lock_ != NULL);
     DCHECK(breakpoint_lock_ != NULL);
+    DCHECK(deoptimization_lock_ != NULL);
     DCHECK(classlinker_classes_lock_ != NULL);
     DCHECK(heap_bitmap_lock_ != NULL);
     DCHECK(logging_lock_ != NULL);
@@ -53,6 +55,8 @@
 
     DCHECK(breakpoint_lock_ == NULL);
     breakpoint_lock_ = new Mutex("breakpoint lock", kBreakpointLock);
+    DCHECK(deoptimization_lock_ == NULL);
+    deoptimization_lock_ = new Mutex("deoptimization lock", kDeoptimizationLock);
     DCHECK(classlinker_classes_lock_ == NULL);
     classlinker_classes_lock_ = new ReaderWriterMutex("ClassLinker classes lock",
                                                       kClassLinkerClassesLock);
diff --git a/runtime/locks.h b/runtime/locks.h
index 341319c..9164be6 100644
--- a/runtime/locks.h
+++ b/runtime/locks.h
@@ -53,6 +53,7 @@
   kBreakpointLock,
   kThreadListLock,
   kBreakpointInvokeLock,
+  kDeoptimizationLock,
   kTraceLock,
   kProfilerLock,
   kJdwpEventListLock,
@@ -143,11 +144,14 @@
   // attaching and detaching.
   static Mutex* thread_list_lock_ ACQUIRED_AFTER(runtime_shutdown_lock_);
 
-  // Guards breakpoints and single-stepping.
+  // Guards breakpoints.
   static Mutex* breakpoint_lock_ ACQUIRED_AFTER(thread_list_lock_);
 
+  // Guards deoptimization requests.
+  static Mutex* deoptimization_lock_ ACQUIRED_AFTER(breakpoint_lock_);
+
   // Guards trace requests.
-  static Mutex* trace_lock_ ACQUIRED_AFTER(breakpoint_lock_);
+  static Mutex* trace_lock_ ACQUIRED_AFTER(deoptimization_lock_);
 
   // Guards profile objects.
   static Mutex* profiler_lock_ ACQUIRED_AFTER(trace_lock_);
diff --git a/runtime/native/java_lang_Thread.cc b/runtime/native/java_lang_Thread.cc
index 5b34cfb..011e165 100644
--- a/runtime/native/java_lang_Thread.cc
+++ b/runtime/native/java_lang_Thread.cc
@@ -80,6 +80,7 @@
     case kWaitingForDebuggerToAttach:     return kJavaWaiting;
     case kWaitingInMainDebuggerLoop:      return kJavaWaiting;
     case kWaitingForDebuggerSuspension:   return kJavaWaiting;
+    case kWaitingForDeoptimization:       return kJavaWaiting;
     case kWaitingForJniOnLoad:            return kJavaWaiting;
     case kWaitingForSignalCatcherOutput:  return kJavaWaiting;
     case kWaitingInMainSignalCatcherLoop: return kJavaWaiting;
diff --git a/runtime/stack.cc b/runtime/stack.cc
index 4e3fb4a..e583ced 100644
--- a/runtime/stack.cc
+++ b/runtime/stack.cc
@@ -323,10 +323,10 @@
             } else if (instrumentation_frame.interpreter_entry_) {
               mirror::ArtMethod* callee = Runtime::Current()->GetCalleeSaveMethod(Runtime::kRefsAndArgs);
               CHECK_EQ(GetMethod(), callee) << "Expected: " << PrettyMethod(callee) << " Found: "
-                  << PrettyMethod(GetMethod());
+                                            << PrettyMethod(GetMethod());
             } else if (instrumentation_frame.method_ != GetMethod()) {
               LOG(FATAL)  << "Expected: " << PrettyMethod(instrumentation_frame.method_)
-                << " Found: " << PrettyMethod(GetMethod());
+                          << " Found: " << PrettyMethod(GetMethod());
             }
             if (num_frames_ != 0) {
               // Check agreement of frame Ids only if num_frames_ is computed to avoid infinite
diff --git a/runtime/thread_state.h b/runtime/thread_state.h
index 7615c41..57bf4f1 100644
--- a/runtime/thread_state.h
+++ b/runtime/thread_state.h
@@ -37,6 +37,7 @@
   kWaitingForJniOnLoad,             // WAITING        TS_WAIT      waiting for execution of dlopen and JNI on load code
   kWaitingForSignalCatcherOutput,   // WAITING        TS_WAIT      waiting for signal catcher IO to complete
   kWaitingInMainSignalCatcherLoop,  // WAITING        TS_WAIT      blocking/reading/processing signals
+  kWaitingForDeoptimization,        // WAITING        TS_WAIT      waiting for deoptimization suspend all
   kStarting,                        // NEW            TS_WAIT      native thread started, not yet ready to run managed code
   kNative,                          // RUNNABLE       TS_RUNNING   running in a JNI native method
   kSuspended,                       // RUNNABLE       TS_RUNNING   suspended by GC or debugger
diff --git a/runtime/trace.cc b/runtime/trace.cc
index da2c80a..5d053b6 100644
--- a/runtime/trace.cc
+++ b/runtime/trace.cc
@@ -383,6 +383,7 @@
                                                    instrumentation::Instrumentation::kMethodEntered |
                                                    instrumentation::Instrumentation::kMethodExited |
                                                    instrumentation::Instrumentation::kMethodUnwind);
+        runtime->GetInstrumentation()->EnableMethodTracing();
       }
     }
   }
@@ -412,6 +413,7 @@
       MutexLock mu(Thread::Current(), *Locks::thread_list_lock_);
       runtime->GetThreadList()->ForEach(ClearThreadStackTraceAndClockBase, NULL);
     } else {
+      runtime->GetInstrumentation()->DisableMethodTracing();
       runtime->GetInstrumentation()->RemoveListener(the_trace,
                                                     instrumentation::Instrumentation::kMethodEntered |
                                                     instrumentation::Instrumentation::kMethodExited |