| field | value |
|---|---|
| author | 2019-05-13 12:38:54 -0700 |
| committer | 2019-05-15 04:40:29 +0000 |
| commit | a253c2d27bb95f120a27dc3fa8a66184a15a7442 (patch) |
| tree | 0923928a83836ae8cc4e6538039c58517849910f /runtime/gc/allocator/rosalloc.cc |
| parent | 092f7993336961434d6d3d30908c1ca4429e3d05 (diff) |
Bytes_moved accounting fix and accounting cleanup
Bytes_moved should be incremented by the number of bytes the allocation
actually consumes in the space the object is allocated in, not by the
number of bytes it would take in region space.
Various minor cleanups for code that I found hard to read while
attempting to track this down.
Remove a CHECK that held only because of the incorrect accounting; with
the accounting fixed, it now causes TreeHugger test failures.
Bug: 79921586
Test: Build and boot AOSP.
Change-Id: Iab75d271eb5b9812a127e708cf6b567d0c4c16f1
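The rule the message describes is that the moved-bytes counter must be charged with the allocation size in the *destination* space, which can differ from the region-space size when spaces round allocations differently. Below is a minimal, self-contained C++ sketch of that rule, not ART's code; `Space`, `AllocationSize`, and the alignment values are hypothetical.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdio>

// Hypothetical space with its own allocation granularity; real spaces can
// also differ in headers and rounding, which is why the two sizes disagree.
struct Space {
  size_t alignment;
  // Bytes an allocation of `size` actually consumes in this space.
  size_t AllocationSize(size_t size) const {
    return (size + alignment - 1) / alignment * alignment;
  }
};

int main() {
  Space region_space{16};      // evacuation source granularity (assumed)
  Space non_moving_space{8};   // destination space granularity (assumed)
  std::atomic<size_t> bytes_moved{0};

  const size_t obj_size = 20;
  // Buggy accounting: always charges the region-space size (32 bytes here).
  const size_t wrong = region_space.AllocationSize(obj_size);
  // Fixed accounting: charge the size in the space the object was
  // actually allocated in (24 bytes here).
  bytes_moved.fetch_add(non_moving_space.AllocationSize(obj_size),
                        std::memory_order_relaxed);

  std::printf("region-space size (wrong charge): %zu\n", wrong);
  std::printf("bytes_moved (dest-space charge):  %zu\n", bytes_moved.load());
  return 0;
}
```

Under the buggy rule the counter always agrees with region-space totals, which is presumably why a CHECK comparing the two "held only because of the incorrect accounting" and starts failing once the charge is corrected.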
Diffstat (limited to 'runtime/gc/allocator/rosalloc.cc')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | runtime/gc/allocator/rosalloc.cc | 2 |

1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/runtime/gc/allocator/rosalloc.cc b/runtime/gc/allocator/rosalloc.cc
index 87863656c6..f1572cd6ee 100644
--- a/runtime/gc/allocator/rosalloc.cc
+++ b/runtime/gc/allocator/rosalloc.cc
@@ -1000,7 +1000,7 @@ void RosAlloc::Run::InspectAllSlots(void (*handler)(void* start, void* end, size
 // If true, read the page map entries in BulkFree() without using the
 // lock for better performance, assuming that the existence of an
 // allocated chunk/pointer being freed in BulkFree() guarantees that
-// the page map entry won't change. Disabled for now.
+// the page map entry won't change.
 static constexpr bool kReadPageMapEntryWithoutLockInBulkFree = true;
 
 size_t RosAlloc::BulkFree(Thread* self, void** ptrs, size_t num_ptrs) {
```
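The comment being trimmed encodes a lock-elision invariant: a pointer passed to BulkFree() refers to a live allocation, so no other thread can concurrently free or remap its page, and its page map entry is therefore stable enough to read without the lock. Here is a small sketch of that pattern under those stated assumptions; it is not RosAlloc's implementation, and `BulkFreeSketch`, `PageMapKind`, `PageIndex`, `FreeByKind`, and the 4096-byte page size are invented for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <vector>

// Hypothetical classification of a page, read from the page map.
enum class PageMapKind : uint8_t { kFree, kRun, kLargeObject };

class BulkFreeSketch {
 public:
  explicit BulkFreeSketch(size_t num_pages)
      : page_map_(num_pages, PageMapKind::kRun) {}

  // Mirrors the role of kReadPageMapEntryWithoutLockInBulkFree.
  static constexpr bool kReadPageMapEntryWithoutLock = true;

  // Frees `ptrs` in bulk; the page map entry classifies each pointer so it
  // can be routed to the matching free path.
  size_t BulkFree(void* base, void** ptrs, size_t num_ptrs) {
    size_t freed_bytes = 0;
    for (size_t i = 0; i < num_ptrs; ++i) {
      const size_t idx = PageIndex(base, ptrs[i]);
      PageMapKind kind;
      if (kReadPageMapEntryWithoutLock) {
        // Safe only because the caller guarantees ptrs[i] is still
        // allocated: no one else may flip this entry underneath us.
        kind = page_map_[idx];
      } else {
        // Conservative path: serialize the read against map mutations.
        std::lock_guard<std::mutex> guard(lock_);
        kind = page_map_[idx];
      }
      freed_bytes += FreeByKind(ptrs[i], kind);
    }
    return freed_bytes;
  }

 private:
  static size_t PageIndex(void* base, void* ptr) {
    return (static_cast<char*>(ptr) - static_cast<char*>(base)) / 4096;
  }
  size_t FreeByKind(void* /*ptr*/, PageMapKind kind) {
    // Route to the matching free path and report bytes released;
    // the numbers are placeholders.
    switch (kind) {
      case PageMapKind::kRun:         return 64;    // slot in a run
      case PageMapKind::kLargeObject: return 4096;  // whole page(s)
      case PageMapKind::kFree:        return 0;     // double free: a bug
    }
    return 0;
  }

  std::mutex lock_;
  std::vector<PageMapKind> page_map_;
};

int main() {
  char base[4096 * 4];
  BulkFreeSketch alloc(4);
  void* ptrs[] = {base, base + 4096};
  std::printf("freed %zu bytes\n", alloc.BulkFree(base, ptrs, 2));
  return 0;
}
```

Keeping the switch as a `static constexpr bool` matches the diff's approach: flipping it to `false` restores the conservative locked read without touching any call sites, and the change above simply drops the stale "Disabled for now." wording since the constant is already `true`.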