Fix criteria for deciding whether next GC should be minor
A comparison of GC throughput (in freed bytes per second) between minor
GCs and full GCs is used to decide whether the next GC should be minor
or full. To handle the corner case in which minor GC throughput never
falls below full GC throughput, bytes_allocated is capped so that it
may not exceed target_footprint. For concurrent GCs this cap should
instead be concurrent_start_bytes, as that is the point at which a GC
cycle is triggered.
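
For illustration, a minimal standalone sketch of the criterion. This is
not the ART code: GcStats and NextGcType are made-up names, the numbers
are arbitrary, and the real logic lives in Heap::GrowForUtilization.

#include <cstdint>
#include <iostream>

enum class GcType { kMinor, kFull };

struct GcStats {
  double minor_throughput;          // freed bytes/s of the last minor GC
  double full_mean_throughput;      // mean freed bytes/s across full GCs
  uint64_t bytes_allocated;
  uint64_t target_footprint;        // allocation cap, non-concurrent case
  uint64_t concurrent_start_bytes;  // concurrent GC trigger point
  bool concurrent_gc;
};

GcType NextGcType(const GcStats& s) {
  // A concurrent cycle is triggered at concurrent_start_bytes, so that
  // is the boundary to compare against; otherwise use target_footprint.
  const uint64_t cap =
      s.concurrent_gc ? s.concurrent_start_bytes : s.target_footprint;
  if (s.minor_throughput >= s.full_mean_throughput &&
      s.bytes_allocated <= cap) {
    return GcType::kMinor;
  }
  return GcType::kFull;
}

int main() {
  const GcStats s{3.0e8, 2.5e8, 60ULL << 20, 64ULL << 20, 56ULL << 20, true};
  // Minor throughput wins, but allocation already passed the concurrent
  // trigger point, so the next GC is full.
  std::cout << (NextGcType(s) == GcType::kMinor ? "minor" : "full") << '\n';
}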
Test: art/test/testrunner/testrunner.py --target
Test: Golem benchmarks to confirm performance isn't affected
Bug: 123662955
Change-Id: I94afd04f3fcac86d6f9cec6a1af407c5be599b26
diff --git a/runtime/gc/heap.cc b/runtime/gc/heap.cc
index d72003c..5473b52 100644
--- a/runtime/gc/heap.cc
+++ b/runtime/gc/heap.cc
@@ -2772,13 +2772,6 @@
} else {
LOG(FATAL) << "Invalid current allocator " << current_allocator_;
}
- if (IsGcConcurrent()) {
- // Disable concurrent GC check so that we don't have spammy JNI requests.
- // This gets recalculated in GrowForUtilization. It is important that it is disabled /
- // calculated in the same thread so that there aren't any races that can cause it to become
- // permanantly disabled. b/17942071
- concurrent_start_bytes_ = std::numeric_limits<size_t>::max();
- }
CHECK(collector != nullptr)
<< "Could not find garbage collector with collector_type="
@@ -3662,14 +3655,15 @@
// If the throughput of the current sticky GC >= throughput of the non sticky collector, then
// do another sticky collection next.
- // We also check that the bytes allocated aren't over the footprint limit in order to prevent a
+ // We also check that the bytes allocated aren't over target_footprint (or over
+ // concurrent_start_bytes in the case of concurrent GCs) in order to prevent a
// pathological case where dead objects which aren't reclaimed by sticky could get accumulated
// if the sticky GC throughput always remained >= the full/partial throughput.
size_t target_footprint = target_footprint_.load(std::memory_order_relaxed);
if (current_gc_iteration_.GetEstimatedThroughput() * sticky_gc_throughput_adjustment >=
non_sticky_collector->GetEstimatedMeanThroughput() &&
non_sticky_collector->NumberOfIterations() > 0 &&
- bytes_allocated <= target_footprint) {
+ bytes_allocated <= (IsGcConcurrent() ? concurrent_start_bytes_ : target_footprint)) {
next_gc_type_ = collector::kGcTypeSticky;
} else {
next_gc_type_ = non_sticky_gc_type;
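
For context, a toy simulation (made-up values, not ART code) of why the
cap matters for concurrent GCs: between concurrent_start_bytes and
target_footprint the old check could still pick a sticky GC even though
the concurrent trigger point had already been passed.

#include <cstdint>
#include <iostream>

int main() {
  const uint64_t target_footprint = 64ULL << 20;        // 64 MiB
  const uint64_t concurrent_start_bytes = 56ULL << 20;  // 56 MiB
  // Assume the sticky GC reclaims little, so bytes_allocated creeps up
  // while its throughput still beats the full collector's mean.
  for (uint64_t allocated = 52ULL << 20; allocated <= target_footprint;
       allocated += 4ULL << 20) {
    const bool old_cap_ok = allocated <= target_footprint;
    const bool new_cap_ok = allocated <= concurrent_start_bytes;
    std::cout << "allocated=" << (allocated >> 20) << "MiB"
              << "  old check -> " << (old_cap_ok ? "sticky" : "full")
              << "  new check -> " << (new_cap_ok ? "sticky" : "full")
              << '\n';
  }
  // Between 56 and 64 MiB the old check keeps choosing sticky even
  // though the concurrent cycle trigger point has already been passed.
}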