Age | Commit message | Author |
|
Bug: 260881207
Test: presubmit
Test: abtd app_compat_drm
Test: abtd app_compat_top_100
Test: abtd app_compat_banking
Change-Id: I34de0d083ec0bb476bb39cc31a2f64d15c80fe7b
|
|
At certain places we were using `relaxed` memory order, which allows
the preceding stores of to-space references in the object to be
reordered after the state change.
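A minimal sketch of the ordering requirement (hypothetical types and names, not ART's actual code): the store that publishes the state change needs release semantics so the earlier to-space reference stores cannot be reordered after it, and the reader pairs it with an acquire load. In real code `gc_thread` and `mutator_thread` would run concurrently.

```cpp
#include <atomic>

struct Object {
  std::atomic<Object*> field{nullptr};  // holds a to-space reference
  std::atomic<int> state{0};            // 0 = not forwarded, 1 = forwarded
};

Object to_space_obj;
Object obj;

void gc_thread() {
  obj.field.store(&to_space_obj, std::memory_order_relaxed);
  // A relaxed store here would let the state change become visible
  // before the field store above; release forbids that reordering.
  obj.state.store(1, std::memory_order_release);
}

Object* mutator_thread() {
  if (obj.state.load(std::memory_order_acquire) == 1) {
    // Acquire/release pairing guarantees the to-space reference is seen.
    return obj.field.load(std::memory_order_relaxed);
  }
  return nullptr;
}
```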
Bug: 302845084
Test: art/test/testrunner/testrunner.py
Change-Id: Ibbf27c8fa9eda2bf9635c69668b3750139178a30
|
|
This reverts commit 07cbc5ba4f117ea74faecffe14ffc0ce8aa7ee0e.
PS1 is identical to aosp/1933038.
PS2 applies a small fix for the CMS build.
Original commit message:
Remove the "preserve references" machinery. Instead let the reference
processor inform GetReferent about its state, so that it can leverage
knowledge about whether reachable memory has in fact been completely
marked.
Restructure the ReferenceProcessor interface by adding Setup to
ensure that ReferenceProcessor fields are properly set up before
we disable the normal fast path through GetReferent.
For the CC collector, forward essentially all SoftReferences as
part of normal marking, so we don't stall weak reference access
for those.
Note briefly in the log if we encounter References that are only
reachable from finalizers.
SS and MS collectors are only minimally updated to keep them working.
We now block in GetReferent only for the hopefully very brief period of
marking objects that were initially missed as a result of a
mutator-collector race. This should eliminate multi-millisecond delays
here. For 2043-reference-pauses from aosp/1952438, it reduces blocking
from over 100 msecs to under 1 on host. This is mostly due to the
better SoftReference treatment; 100 msec pauses in GetReferent()
were never near-typical.
We iteratively mark through SoftReferences now. Previously we could
mistakenly clear SoftReferences discovered while marking from the top
level ones. (Lokesh pointed this out.) To make this work, we change
ForwardSoftReferences to actually remove References from the queue,
as the comment always said it did.
This also somewhat prepares us for a much less complete solution for
pauses to access WeakGlobalRefs or other "system weaks".
This fixes a memory ordering issue for the per-thread weak reference
access flags used with the CC collector. I think the issue is still
there for the CMS collector. That requires further discussion.
Bug: 190867430
Bug: 189738006
Bug: 211784084
Test: Build and boot aosp & Treehugger; aosp/195243
Change-Id: I9aa38da6c87555302243bd6c7d460747277ba8e7
|
|
This reverts commit 0ab5b6d2afbdd71a18f8fb9b1fcf39e54cfd55a5.
Reason for revert: Breaks CMS builds
Change-Id: Ib3dfcc90ac5b7259c7f718a0373b48acc2ba10b2
|
|
Remove the "preserve references" machinery. Instead let the reference
processor inform GetReferent about its state, so that it can leverage
knowledge about whether reachable memory has in fact been completely
marked.
Restructure the ReferenceProcessor interface by adding Setup to
ensure that ReferenceProcessor fields are properly set up before
we disable the normal fast path through GetReferent.
For the CC collector, forward essentially all SoftReferences as
part of normal marking, so we don't stall weak reference access
for those.
Note briefly in the log if we encounter References that are only
reachable from finalizers.
SS and MS collectors are only minimally updated to keep them working.
We now block in GetReferent only for the hopefully very brief period of
marking objects that were initially missed as a result of a
mutator-collector race. This should eliminate multi-millisecond delays
here. For 2043-reference-pauses from aosp/1952438, it reduces blocking
from over 100 msecs to under 1 on host. This is mostly due to the
better SoftReference treatment; 100 msec pauses in GetReferent()
were never near-typical.
We iteratively mark through SoftReferences now. Previously we could
mistakenly clear SoftReferences discovered while marking from the top
level ones. (Lokesh pointed this out.) To make this work, we change
ForwardSoftReferences to actually remove References from the queue,
as the comment always said it did.
This also somewhat prepares us for a much less complete solution for
pauses to access WeakGlobalRefs or other "system weaks".
This fixes a memory ordering issue for the per-thread weak reference
access flags used with the CC collector. I think the issue is still
there for the CMS collector. That requires further discussion.
Bug: 190867430
Bug: 189738006
Bug: 211784084
Test: Build and boot aosp & Treehugger; aosp/195243
Change-Id: I02f12ac481db4c4e400d253662a7a126318d4bec
|
|
Test: m
Bug: 177048505
Change-Id: Ifb16927455b98996c61f0b6370bae9a114bf8018
|
|
Separate the marking piece of EnqueueFinalizerReferences.
Report the number of finalizable objects.
Similarly report the number of SoftReferences we encounter
and the amount of time we spend marking as a result.
Add trace information and possibly log entry when we block
dereferencing a WeakReference or the like.
Do the same for JNI WeakGlobals, with some code restructuring
to enable that.
Delete one of the two nested and almost entirely redundant
ProcessReferences ATrace tags, thus reducing the space needed
to display HeapTaskDaemon back to what it was.
Bug: 189738006
Test: Boot sc-dev and look at trace
Change-Id: I198db632d957bcb9353ab945cedc92aa733963f0
|
|
Replace "ObjPtr<.> const" with "const ObjPtr<.>".
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Bug: 31113334
Change-Id: I5a1c080bc88b091e15ee9eb0bb1ef6f6f290701c
|
|
When only annotating lock requirements, use locks.h.
Bug: 119869270
Test: mmma art
Change-Id: I1608b03254712feff0072ebad012c3af0cc3dda4
|
|
Handles runtime.
Bug: 116054210
Test: WITH_TIDY=1 mmma art
Change-Id: Ibc0d5086809d647f0ce4df5452eb84442d27ecf0
|
|
The read barrier state recorded in object's lockword used to be a
three-state value (white/gray/black), which was turned into a
two-state value (white/gray), where the "black" state was conceptually
merged into the "white" state. This change renames the "white" state
as "non-gray" and adjusts corresponding comments.
Test: art/test.py
Change-Id: I2a17ed15651bdbbe99270c1b81b4d78a1c2c132b
|
|
Pull out more dependencies through forward declarations.
Test: m test-art-host
Change-Id: I7d86726928937f788b956ec9eac91532d66d57ae
|
|
Fix a to-space invariant check failure in EnqueueFinalizerReferences.
Reference processing is problematic and useless during a transaction
because it's not easy to roll back what reference processing does and
there are no daemon threads running (in the unstarted runtime). To
avoid issues, always mark reference referents.
Add a do_atomic_update parameter to MarkHeapReference.
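A hypothetical sketch of what such a do_atomic_update parameter could mean (the types and signature here are illustrative, not ART's actual MarkHeapReference): when mutators may write the slot concurrently, the update must go through a CAS so a racing write is not silently overwritten; a plain store is safe only inside a pause.

```cpp
#include <atomic>

struct Obj { int id; };

// Returns true if *slot was updated from `expected` to `marked`.
bool MarkHeapReference(std::atomic<Obj*>* slot, Obj* expected, Obj* marked,
                       bool do_atomic_update) {
  if (do_atomic_update) {
    // CAS: fails harmlessly if a mutator already changed the slot.
    return slot->compare_exchange_strong(expected, marked);
  }
  // Safe only when no one else can write the slot (e.g. during a pause).
  slot->store(marked, std::memory_order_relaxed);
  return true;
}
```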
Bug: 35417063
Test: test-art-host with CC/CMS/SS.
Change-Id: If32eba8fca19ef86e5d13f7925d179c8aecb9e27
|
|
Rename IsMarkedHeapReference to IsNullOrMarkedHeapReference.
Move the null check from the caller of IsMarkedHeapReference into
IsNullOrMarkedHeapReference.
Make sure that the referent is only loaded once between the null
check and the IsMarked call.
Use a CAS in ConcurrentCopying::IsNullOrMarkedHeapReference when
called from DelayReferenceReferent.
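The single-load rule above can be sketched as follows (hypothetical types, not ART's actual code): read the referent slot once into a local, then null-check and consult the mark state on that same value. Re-reading the slot between the two steps would race with a concurrent GC update.

```cpp
#include <atomic>

struct Obj { bool marked; };

bool IsNullOrMarkedHeapReference(std::atomic<Obj*>* slot) {
  // Single load: every later check uses this one observed value.
  Obj* referent = slot->load(std::memory_order_relaxed);
  if (referent == nullptr) {
    return true;  // null referent: nothing left to do
  }
  return referent->marked;  // decided on the same value we null-checked
}
```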
Bug: 33389022
Test: test-art-host without and with CC.
Change-Id: I20edab4dac2a4bb02dbb72af0f09de77b55ac08e
|
|
- Rename GetReadBarrierPointer to GetReadBarrierState.
- Change its return type to uint32_t.
- Fix the runtime fake address dependency for arm/arm64 using inline
asm.
- Drop ReadBarrier::black_ptr_ and some brooks code.
Bug: 12687968
Test: test-art with CC, Ritz EAAC, libartd boot on N9.
Change-Id: I595970db825db5be2e98ee1fcbd7696d5501af55
|
|
Bug: 31113334
Test: test-art-host
Change-Id: I2c7c3dfd88ebf12a0de271436f8a7781f997e061
|
|
Removed read barrier from IsUnprocessed, DequeuePendingReference,
EnqueueReference, and a few other places.
Hard to tell if GC time goes down.
EAAC:
Before GC slow path count: 254857
After GC slow path count: 1005
Bug: 30162165
Bug: 12687968
Test: test-art-host, volantis boot with CC
Change-Id: Ic2add3a9b1e1d7561b0b167f2218b10f8dbff76c
|
|
We used to set marked-through non-moving objects to black to
distinguish between an unmarked object and a marked-through
object (both would otherwise be white). This was to avoid a rare
case where a marked-through (white) object would be incorrectly set to
gray for a second time (and left gray) after it's marked
through (white/unmarked -> gray/marked -> white/marked-through ->
gray/incorrect). If an object is left gray, the invariant would be
broken that all objects are white when GC isn't running. Also, we
needed to have an extra pass over non-moving objects to change them
from black to white after the marking phase.
To avoid the need for the black color, we use a 'false gray' stack to
detect such rare cases and register affected objects on it and change
the objects to white at the end of the marking phase. This saves some
GC time because we can avoid the gray-to-black CAS per non-moving
object as well as the extra pass over non-moving objects.
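A minimal sketch of the false-gray-stack idea (hypothetical names, not ART's actual code): rather than keeping a third (black) color, objects that may have been incorrectly re-grayed are recorded on a stack and whitened in a single pass at the end of marking, restoring the all-white invariant without a per-object gray-to-black CAS.

```cpp
#include <vector>

enum Color { kWhite, kGray };

struct Obj { Color color = kWhite; };

std::vector<Obj*> false_gray_stack;

void RecordFalseGray(Obj* obj) {
  // Registered during marking when the rare re-gray race is detected.
  false_gray_stack.push_back(obj);
}

void FlushFalseGrayStack() {
  // End of the marking phase: restore the invariant that all
  // objects are white when GC isn't running.
  for (Obj* obj : false_gray_stack) {
    obj->color = kWhite;
  }
  false_gray_stack.clear();
}
```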
Ritzperf EAAC (N6):
Avg GC time: 232 -> 183 ms (-21%)
Total GC time: 15.3 -> 14.1 s (-7.7%)
Bug: 12687968
Change-Id: Idb29c3dcb745b094bcf6abc4db646dac9cbd1f71
|
|
Also clean up a misleading comment in a reference queue test.
Bug: 24404957
Change-Id: Ieea4788039ecef73cba1871fb480a439bf65b499
|
|
Change-Id: I6c1f9cd6da7b2c21714175455e61479273d3669f
|
|
Following up on CL 170735.
It's possible that the referent may potentially be cleared which would
cause a check failure. Avoid that.
Bug: 12687968
Bug: 23896462
Change-Id: I8ccc5936b61ceacf250624681e65307f23ce0405
|
|
In DequeuePendingReference, acknowledge a black/white Reference object
in the queue if its referent was marked right after it was enqueued.
Bug: 12687968
Bug: 23896462
Change-Id: I33c802e04e1688a54a70ad3935628e3853c46e44
|
|
Change-Id: Ia08034a4e5931c4fcb329c3bd3c4b1f301135735
|
|
This enables the standard object header to be used with the
Baker-style read barrier.
Bug: 19355854
Bug: 12687968
Change-Id: Ie552b6e1dfe30e96cb1d0895bd0dff25f9d7d015
|
|
Also fixed some lines that were too long, and a few other minor
details.
Change-Id: I6efba5fb6e03eb5d0a300fddb2a75bf8e2f175cb
|
|
Bug: 12687968
Change-Id: I62f70274d47df6d6cab714df95c518b750ce3105
|
|
Also cleaned up reference queue.
TODO: Add tests for missing functionality.
Bug: 10808403
Change-Id: I182f9cb69022fe542ea9e53d4c6d35cff90af332
|
|
Called from FinalizerReference.enqueueSentinelReference to prevent
a race where the GC updates pendingNext of the sentinel reference
before enqueueSentinelReference.
Bug: 17462553
(cherry picked from commit 3256166df40981f1f1997a5f00303712277c963f)
Change-Id: I7ad2fd250c2715d1aeb919bd548ef9aab24f30a2
|
|
The mark compact collector performs a 4-phase collection, doing a normal
full mark_sweep, calculating forwarding addresses of objects in the
from space, updating references of objects in the from space, and
moving the objects in the from space.
Support is disabled by default since it needs to have non-movable
classes and field arrays. Performance is around 50% as fast.
The main advantage that this has over semispace is that the worst
case memory usage is 50%, since we only need one space instead of two.
TODO: Make field arrays and classes movable. This causes complications
since Object::VisitReferences relies on them: if we update the
fields of an object, but another object later uses this object to
figure out which fields are reference fields, it doesn't work.
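The forwarding-address phase described above can be sketched as a prefix sum over live object sizes (hypothetical types, not ART's actual code): sliding compaction assigns each live object a new address equal to the total size of the live objects before it.

```cpp
#include <cstddef>
#include <vector>

struct Obj {
  std::size_t size;
  bool live;
  std::size_t forwarding;  // new address after compaction
};

void ComputeForwardingAddresses(std::vector<Obj>& space) {
  std::size_t next = 0;  // next free address in the compacted space
  for (Obj& obj : space) {
    if (obj.live) {
      obj.forwarding = next;  // slide down over the dead objects
      next += obj.size;
    }
  }
}
```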
Bug: 14059466
Change-Id: I661ed3b71ad4dde124ef80312c95696b4a5665a1
|
|
Removes several SetReferents for updating moved referents. Cleaned
up other aspects of the code.
Change-Id: Ibcb4d713fadea617efee7e936352ddf77ff4c370
|
|
Immediately return for references that are marked before reference
processing without blocking. Soft references are kept in the queue until
the reference processor stops preserving, after which all marked
references are removed. Finalizer references will still block on get().
Bug: 15471830
Change-Id: I588fcaef40b79ed7c95a4aa7f4fc2e17ee0c288f
|
|
Otherwise, GC's reference processing would turn all referents alive
via read barriers, which is incorrect.
Bug: 12687968
Change-Id: I1463365981d55fa74a7bb207dd4a16aeec007f8b
|
|
Concurrent reference processing currently works by going into native
code from java.lang.ref.Reference.get(). From there, we have a fast
path if the references aren't being processed which returns the
referent without needing to access any locks. In the slow path we
block until reference processing is complete. It may be possible to
improve the slow path if the referent is blackened.
TODO: Investigate doing the fast path in java code by using racy reads
of a static volatile boolean. This will work as long as there are no
suspend points in between the boolean read and the referent read.
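The fast/slow path split might look roughly like this (a hypothetical sketch with invented names, not ART's actual implementation): a racy read of a slow-path flag returns the referent lock-free when reference processing isn't running, and blocks on a condition variable otherwise.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>

struct Obj {};

std::atomic<bool> slow_path_enabled{false};
std::mutex lock;
std::condition_variable cond;
Obj* referent = nullptr;

Obj* GetReferent() {
  if (!slow_path_enabled.load(std::memory_order_acquire)) {
    return referent;  // fast path: no locks taken
  }
  // Slow path: block until reference processing is complete.
  std::unique_lock<std::mutex> mu(lock);
  cond.wait(mu, [] { return !slow_path_enabled.load(); });
  return referent;
}

void DisableSlowPath() {  // called by the GC when processing ends
  {
    std::lock_guard<std::mutex> mu(lock);
    slow_path_enabled.store(false, std::memory_order_release);
  }
  cond.notify_all();
}
```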
Bug: 14381653
Change-Id: I1546b55be4691fe4ff4aa6d857b234cce7187d87
|
|
Added two new files: mirror/reference.h and mirror/reference-inl.h.
Change-Id: Ibe3ff6379aef7096ff130594535b7f7c0b7dabce
|
|
Removes the class initialization blacklist and uses transactions to detect and
revert class initializations attempting to invoke native methods. This only
concerns class initialization happening at compilation time when generating an
image (like boot.art for the system).
In transactional mode, we log every object's field assignment and array update.
Therefore we're able to abort a transaction to restore the values of fields and
arrays as they were before the transaction started. We also log changes to the
intern string table so we can restore its state prior to transaction start.
Since transactional mode only happens at compilation time, we don't need to log
all these changes at runtime. In order to reduce the overhead of testing if
transactional mode is on/off, we templatize interfaces of mirror::Object and
mirror::Array, respectively responsible for setting a field and setting an
array element.
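A minimal sketch of the templatization idea (hypothetical names and a simplified log, not ART's actual interfaces): the transaction flag is a compile-time template parameter, so the logging branch is folded away entirely in the runtime (non-transactional) instantiation.

```cpp
#include <vector>

struct WriteRecord { int* field; int old_value; };
std::vector<WriteRecord> transaction_log;

template <bool kTransactionActive>
void SetField(int* field, int new_value) {
  if (kTransactionActive) {  // compile-time constant: dead code when false
    transaction_log.push_back({field, *field});  // log the old value
  }
  *field = new_value;
}

void RollBack() {
  // Restore logged values in reverse order, undoing the transaction.
  for (auto it = transaction_log.rbegin(); it != transaction_log.rend(); ++it) {
    *it->field = it->old_value;
  }
  transaction_log.clear();
}
```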
For various reasons, we skip some specific fields from transaction:
- Object's class and array's length must remain unchanged so garbage collector
can compute object's size.
- Immutable fields only set during class loading: lists of fields, methods,
dex caches, vtables, ... as all classes have been loaded and verified before a
transaction occurs.
- Object's monitor, for performance reasons.
Before generating the image, we browse the heap to collect objects that need to
be written into it. Since the heap may still hold references to unreachable
objects due to aborted transactions, we trigger one collection at the end of
the class preinitialization phase.
Since the transaction is held by the runtime and all compilation threads share
the same runtime, we need to ensure only one compilation thread has exclusive
access to the runtime. To work around this issue, we force class initialization
phase to run with only one thread. Note this is only done when generating image
so application compilation is not impacted. This issue will be addressed in a
separate CL.
Bug: 9676614
Change-Id: I221910a9183a5ba6c2b99a277f5a5a68bc69b5f9
|
|
Enables us to pass the root type and thread id to hprof.
Bug: 12680863
Change-Id: I6a0f1f9e3aa8f9b4033d695818ae7ca3460d67cb
|
|
Modify mirror objects so that references between them use an ObjectReference
value type rather than an Object* so that functionality to compress larger
references can be captured in the ObjectReference implementation.
ObjectReferences are 32bit and all other aspects of object layout remain as
they are currently.
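The idea can be sketched as follows (a hypothetical illustration, not ART's actual ObjectReference; identity truncation is only valid for a heap below 4GB): fields store a 32-bit value rather than a raw Object*, so any compression scheme stays inside the reference type instead of leaking into object-layout code.

```cpp
#include <cstdint>

struct Object {};

class ObjectReference {
 public:
  static ObjectReference FromPtr(Object* ptr) {
    // Compression placeholder: truncate, assuming a sub-4GB heap.
    return ObjectReference(
        static_cast<uint32_t>(reinterpret_cast<uintptr_t>(ptr)));
  }
  Object* AsPtr() const {
    return reinterpret_cast<Object*>(static_cast<uintptr_t>(reference_));
  }

 private:
  explicit ObjectReference(uint32_t ref) : reference_(ref) {}
  uint32_t reference_;  // 32-bit slot regardless of pointer width
};
```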
Expand fields in objects holding pointers so they can hold 64bit pointers. It's
expected the size of these will come down by improving where we hold compiler
meta-data.
Stub out x86_64 architecture specific runtime implementation.
Modify OutputStream so that reads and writes are of unsigned quantities.
Make the use of portable or quick code more explicit.
Templatize AtomicInteger to support more than just int32_t as a type.
Add missing, and fix issues relating to, missing annotalysis information on the
mutator lock.
Refactor and share implementations for array copy between System and uses
elsewhere in the runtime.
Fix numerous 64bit build issues.
Change-Id: I1a5694c251a42c9eff71084dfdd4b51fff716822
|
|
Refactored the reference queue processing to reside in the heap code.
This removes significant code duplication in the semispace and
marksweep garbage collectors.
Changed the soft reference behaviour to preserve all soft references
unless the GC requires them to be cleared to avoid an out of memory
error. It may be worth investigating a better heuristic in the
future to preserve soft references by LRU order.
Change-Id: I1f3ff5bd4b3c5149271f4bb4fc94ba199e2f9bc2
|