|
In http://r.android.com/2952876 we changed the RTP iterator to
HInstructionIterator, but we want the HandleChanges one.
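For reference, the intended shape of the loop (a sketch, not runnable
on its own; `block` stands in for the HBasicBlock being visited):

  // HInstructionIteratorHandleChanges tolerates the current instruction
  // being removed or replaced while iterating, which RTP may do; the
  // plain HInstructionIterator does not.
  for (HInstructionIteratorHandleChanges it(block->GetInstructions());
       !it.Done();
       it.Advance()) {
    HInstruction* instruction = it.Current();
    // ... visit `instruction`; the visitor may replace it ...
  }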
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: I1e9e9cc84d45aa34c24a805f16798e86fd123fc3
|
|
By adding extra bookkeeping, we can speed up the ValueSet reuse
logic of GVN. A block's ValueSet can be reused once it will not be
consulted again, which happens when the last block it dominates, or
its last successor, has been visited, since we visit blocks in RPO.
This lets us keep a list of free sets ready for reuse and skip
iterating through all visited blocks. The benefit grows with the
size of the graph.
Based on local compiles, this speeds up GVN by 12-27% and overall
compilation time by 0.6-1.6%.
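A minimal sketch of the free-list idea, using generic containers and
hypothetical names instead of the real ValueSet machinery:

  #include <cstddef>
  #include <unordered_set>
  #include <vector>

  // A block's value set plus the number of blocks that may still read
  // it (its dominated blocks / successors not yet visited in RPO).
  struct BlockSets {
    std::unordered_set<size_t>* value_set = nullptr;
    size_t remaining_readers = 0;
  };

  // Sets known to be dead, ready for reuse.
  std::vector<std::unordered_set<size_t>*> free_sets;

  // Called whenever a reader of `owner`'s set is visited in RPO.
  void MarkReaderVisited(BlockSets& owner) {
    if (--owner.remaining_readers == 0) {
      // No later block will consult this set again: recycle it.
      free_sets.push_back(owner.value_set);
    }
  }

  // Pop a reusable set in O(1) instead of scanning all visited blocks.
  std::unordered_set<size_t>* AcquireSet() {
    if (!free_sets.empty()) {
      std::unordered_set<size_t>* reused = free_sets.back();
      free_sets.pop_back();
      reused->clear();
      return reused;
    }
    return new std::unordered_set<size_t>();
  }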
Bug: 393108375
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: I3731e67796a2055907b795656146aaea4f448614
|
|
Bug: http://b/395138850
Test: atest art-run-test-530-checker-loops4
Change-Id: Id7fb236cc4cbc7982de1a8e79f94814f9daf2bd1
|
|
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: I469be66bc4f5efa9a70c5d86b9c04627cc9c5672
|
|
We can skip two ifs, as they are implied by the previous ifs.
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: Ia6088887832117791b82b07a2c31d2f9b8bf8b58
|
|
By using a vector indexed by instruction id, we can speed up LSA by
34-66%, which in turn gives a ~0.5-1.8% compile-time improvement.
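A rough sketch of the data-structure change (hypothetical names; the
real code keys off ART's instruction ids and arena containers):

  #include <cstddef>
  #include <vector>

  struct Instruction {
    size_t id;  // dense id assigned by the graph
  };

  class HeapLocationCache {
   public:
    static constexpr size_t kNotFound = static_cast<size_t>(-1);

    explicit HeapLocationCache(size_t num_instructions)
        : index_by_id_(num_instructions, kNotFound) {}

    void Record(const Instruction& insn, size_t heap_location_index) {
      index_by_id_[insn.id] = heap_location_index;
    }

    // O(1) vector indexing instead of searching by instruction.
    size_t Lookup(const Instruction& insn) const {
      return index_by_id_[insn.id];
    }

   private:
    std::vector<size_t> index_by_id_;
  };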
Bug: 393108375
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: If67b10e9a3a6093452489da7c40d08e23f080874
|
|
This statistic has been obsoleted by
https://android-review.googlesource.com/2808063 .
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Bug: 316617683
Change-Id: Ice880753d0d4acaad53da15aa01937bd0548836a
|
|
Do not count diamond removal towards the generated `HSelect`
instructions statistic; introduce a separate statistic for this case.
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: Ic397d21bf0e7ffec266be9536446646442a6320e
|
|
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Bug: 298176183
Change-Id: Ib86953a17a5cd3db09f6107782f2ba6c0fecb07d
|
|
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: I375edc2db3979e7080c0b9fe784fd2b5e2cfb4e4
|
|
Skip calling DeleteAllImpureWhich for side effects for which
MayDependOn will always return false, which happened 65-75% of the
time. In fact, SideEffects::None() was passed in ~50% of the calls
to Kill.
Based on local compiles, this CL improves GVN runtime by ~15% and
overall dex2oat runtime by ~1%.
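A simplified sketch of the early-out; the structs below are stand-ins
for ART's SideEffects and the GVN ValueSet, not the actual classes:

  #include <cstdint>

  struct SideEffectsSketch {
    uint64_t flags = 0;
    bool DoesNothing() const { return flags == 0; }
    bool MayDependOn(const SideEffectsSketch& other) const {
      return (flags & other.flags) != 0;  // simplified aliasing test
    }
  };

  struct ValueSetSketch {
    void Kill(const SideEffectsSketch& side_effects) {
      if (side_effects.DoesNothing()) {
        // MayDependOn() would return false for every stored entry, so
        // skip the DeleteAllImpureWhich() traversal entirely.
        return;
      }
      // ... DeleteAllImpureWhich(...): walk the set, drop aliased
      // entries ...
    }
  };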
Bug: 393108375
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: Ib5cdb33c9caa5f2cfffbc1a650dabbda185a3c6d
|
|
Bug: 392802982
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: I5fefd6939e1d8434f1d7ad913ced582fefed1b30
|
|
Bug: 392802982
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Test: m test-art-host-gtest
Change-Id: Ic4f0f515dcf7e8b54e5dedff5ff59c2d2e4ebd8a
|
|
Test: m test-art-host-test
Test: testrunner.py --host --optimizing --interpreter
Change-Id: I8fd0cfa02ed3242c84143a4a99a76a4fec95a4ee
|
|
Bug: 392802982
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Test: m test-art-host-gtest
Change-Id: I6e40215a5b1b18223c5f17e9e0ac70e05515fa94
|
|
Special cases considered:
* Frame entry (hardcoded to be 0) or block entry.
* Native debuggable + slow paths, which is the only case where we
use the instruction's dex_pc.
Test: m test-art-host-gtest
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: Ic5e0a6b5106395b891a9a45ea48da39dfb44a0a5
|
|
Currently handles 50% of methods, by supporting methods that:
- don't branch,
- don't have try/catch,
- don't use float/double,
- use <= 8 registers,
- only use the const/invokes/iget/iput/const-string/new-instance/
  checkcast opcodes.
Compilation cost is about 10 times lower than with the current
baseline compiler.
Performance of generated code: the jit-baseline-cc configuration
shows no impact on go/lem.
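A rough sketch of the eligibility filter implied by the list above
(hypothetical names; the real checks live in the fast baseline
compiler itself):

  #include <cstdint>
  #include <vector>

  enum class Op {
    kConst, kInvoke, kIGet, kIPut, kConstString, kNewInstance,
    kCheckCast, kOther
  };

  struct MethodSummary {
    bool has_branches = false;
    bool has_try_catch = false;
    bool uses_float_or_double = false;
    uint32_t num_registers = 0;
    std::vector<Op> opcodes;
  };

  bool IsEligibleForFastBaseline(const MethodSummary& method) {
    if (method.has_branches || method.has_try_catch) return false;
    if (method.uses_float_or_double) return false;
    if (method.num_registers > 8) return false;
    for (Op op : method.opcodes) {
      switch (op) {
        case Op::kConst:
        case Op::kInvoke:
        case Op::kIGet:
        case Op::kIPut:
        case Op::kConstString:
        case Op::kNewInstance:
        case Op::kCheckCast:
          break;
        default:
          return false;  // unsupported opcode: use the regular path
      }
    }
    return true;
  }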
Test: test.py
Change-Id: I8c99b8a8a7552c09c900c5c3e831e8897aef73e5
|
|
... to `HControlFlowSimplifier` because "control flow" is
the correct technical term.
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: I2607ac699fa33c3e7ca7f54364e1e8497148412b
|
|
Test: art/test/testrunner/testrunner.py --target --64
Change-Id: I987b29b7d78e2913eb728003fbc85bdfe1fbceff
|
|
Test: test.py
Change-Id: I48a9da7111d61762ab04d53c7efd689aad08f71b
|
|
... as long as they have identical inputs at the relevant
indexes.
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: I40168de61d2dfe1143786a2e4a27549cc54b0451
|
|
And do some other gtest cleanup.
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: I9d2c3241e5cd9f96722284c4654b8b2fd446b104
|
|
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: I32e0ef7956cabb436a1759934e24a1c0f4b7ea2d
|
|
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Bug: 358519867
Change-Id: I5b49f27c09582dc42eba8b6650a7032fad0ff14d
|
|
Test: test.py
Change-Id: I3a065dd5582269792032df0c6446c3c4b6cd72be
|
|
Initialize the value properly instead of relying on an optimization
to do it. This helps when investigating performance with
optimizations disabled in the baseline compiler.
Test: test.py
Change-Id: If3c5d7cd85dd905a10d29081f571c78baae2888c
|
|
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: I500faee42b02dbc72474e30fa2a3c0388ae86674
|
|
... to `HCodeFlowSimplifier` in preparation for adding
more optimizations to this pass.
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: Icb05c3455d93a7b939f82ced9b08165e533bb21a
|
|
Test: test.py
Change-Id: Ib97fca637a8866a41a4389b150c6000d9fb6d99b
|
|
Mark `H{Unary,Binary}Operation::CanBeMoved()` as `final`
and remove unnecessary overrides.
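Illustration of the C++ mechanism at play (simplified class names,
not the actual ART hierarchy):

  struct InstructionBase {
    virtual ~InstructionBase() = default;
    virtual bool CanBeMoved() const { return false; }
  };

  struct BinaryOperationBase : InstructionBase {
    // `final`: subclasses can no longer override, so their identical
    // overrides can simply be removed.
    bool CanBeMoved() const final { return true; }
  };

  struct AddLike : BinaryOperationBase {
    // No CanBeMoved() override needed (or allowed) here anymore.
  };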
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: Ic375513e942b18c3244017f19a020e5bad7901cf
|
|
The interpreter handles such invoke-virtual instructions as if they
were native methods and executes MethodHandle_invokeExact, which
throws UOE.
Bug: 383057088
Test: ./art/test/testrunner/testrunner.py --host --64 -b --optimizing
Test: ./art/test/testrunner/testrunner.py --host --64 -b --interpreter
Change-Id: I4dd4dea46eddd2b26e5866c80548b530a603d9c9
|
|
It was added back in 2020 (r.android.com/1206763) but never used.
Test: art/test/testrunner/testrunner.py --host --64 --optimizing -b
Change-Id: Iace09f956bd520732588b3623722b74f6559a1fe
|
|
This is a minor cleanup after
https://android-review.googlesource.com/3048514 .
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing --speed-profile
Bug: 38313278
Change-Id: Ic47063231fd48656b612ede3ec100ceb8a379050
|
|
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: I1658a61953aaf109d68a2df7a534c3972679c291
|
|
Bug: 297147201
Test: art/test/testrunner/testrunner.py --target --64
Change-Id: I69dc8d2ccd635410ff26b46f1ec1de5b0e1654a8
|
|
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: I89d00710a4492fcca02dd12879702730c25779b3
|
|
Rename the 64-bit `movd()` assembler functions to `movq()`
and remove the `is64bit` argument from the other `movd()`
functions, making them always emit 32-bit MOVD.
Change `CodeGeneratorX86_64::Exchange32(XmmRegister, int)` to use
the 32-bit `MOVD`: its existing `movd()` call is left unchanged and
now resolves to the 32-bit version.
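A sketch of the resulting interface shape (stand-in register types;
signatures approximate, not copied from the assembler header):

  struct XmmRegister {};
  struct CpuRegister {};

  struct X86_64AssemblerSketch {
    // Before: void movd(XmmRegister dst, CpuRegister src, bool is64bit);
    void movd(XmmRegister dst, CpuRegister src);  // always 32-bit MOVD
    void movq(XmmRegister dst, CpuRegister src);  // was the 64-bit movd()
  };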
Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Change-Id: I3d8cdaf2c097ebc6044b09d20139aeb20ab00e11
|
|
CL aosp/3370562 refactored method resolution but accidentally removed an
ICCE check.
Test: 733-icce
Bug: 381631627
Change-Id: Ic9ad2f13053c0c81844ee09ed12a99a9fe9b8b99
|
|
Bug: 297147201
Test: art/test/testrunner/testrunner.py --target --64
Change-Id: Icb348125739250dd6e961f578ac7e3b5ecd111a5
|
|
This reverts commit fad4678f3ae48d84b7ed1c842b03a023e4f2cb37.
Bug: 380651440
Bug: 297147201
Test: atest CtsLibcoreOjTestCases on a redfin device flashed
Test: with ab/12108082: test crashes w/o WriteBarrier line
Test: and passes w/ it.
Change-Id: Ibdfc090e3c2b693c1bb3b160a64c9f94448e18ec
|
|
Bug: 297147201
Test: art/test/testrunner/testrunner.py --target --64
Test: ART_HEAP_POISONING=1 art/test/testrunner/testrunner.py --target --64
Change-Id: I9b391356b45b1ab7b539f86c7fe5b733681e9106
|
|
DeleteDeadEmptyBlock does SetGraph(nullptr) as the last step.
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: I8a155d22ff62d55bd07c1f3733e8891c2f9fe3de
|
|
Bug: 297147201
Test: ./art/test/testrunner/testrunner.py -b --host --64 -t 2277
Test: ART_HEAP_POISONING=1 ./art/test/testrunner/testrunner.py -b --host --64 -t 2277
Change-Id: Iad4a86faf84c834a44a2b622fc4eaab7752c2cba
|
|
Introduce ResolveMethodId and ResolveMethodWithChecks to make the
intended checks explicit at the call site. This also simplifies the
implementation of ResolveMethodWithChecks.
Also avoid creating handles in ResolveField when the dex cache already
contains the field.
Test: test.py
Change-Id: Ie722c6d7ecadf7c6dbd780f0fc58dfae89140a01
|
|
Bug: 297147201
Test: ./art/test/testrunner/testrunner.py -b --host --64
Test: ./art/test/testrunner/testrunner.py -b --jvm
Change-Id: I2c07ae919921363e8e419ec7296cd24696e7f3b5
|
|
This reverts commit a687066b7043dbc1be8f85001eeb0f341cd25885.
Reason for revert: Probably caused b/380651440
Change-Id: I249aef9b6e9687d1d191c31034a2c9d02e4ea23b
|
|
Based on a comment in r.android.com/3365219.
Bug: 368984521
Test: art/test/testrunner/testrunner.py -b --host --64 --optimizing
Change-Id: Ic8b945ba418df380158456bfb295edfd1dc0ad01
|
|
To do so, update:
* TryReplaceStringBuilderAppend,
* code paths that previously handled InvokeVirtual and now handle
  InvokeStaticOrDirect,
* checker tests.
Bug: 369206455
Test: art/test/testrunner/testrunner.py --host --64 -b --optimizing
Change-Id: I4d40980e416f3130d3c344c5f07b7b331deb5c97
|
|
This resulted in a crash for debug builds, and in incorrect
optimizations for release builds.
It requires a variable to be used both in the loop we are unrolling
and in a catch phi that is not part of that loop.
We could potentially fully unroll such a loop, but this requires a
refactor of how SuperblockCloner works. This CL makes it so that we
won't optimize such loops, which should be rare enough not to cause
any visible regressions.
Test: art/test/testrunner/testrunner.py --host --64 --optimizing -b
Bug: 368984521
Fixes: 368984521
Change-Id: I39f88586358573bbe76d9369d4ec67a942fd3eec
|
|
Test: art/test/testrunner/testrunner.py --host --64 --optimizing -b
Change-Id: I75b9b12a27cb2db7de9d2f884e516c1e3570da84
|