| field | value |
|---|---|
| author | 2015-11-20 15:08:11 +0000 |
| committer | 2015-11-20 16:18:39 +0000 |
| commit | f9d741e32c6f1629ce70eefc68d3363fa1cfd696 (patch) |
| tree | 409005e5b1d01d2830c20421f8466125e110d6af /compiler/utils/arm/assembler_thumb2.cc |
| parent | beb709a2607a00b5df33f0235f22ccdd876cee22 (diff) |
Optimizing/ARM: Improve long shifts by 1.
Implement long Shl(x,1) as LSLS+ADC, Shr(x,1) as ASR+RRX, and UShr(x,1) as LSR+RRX.
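A minimal C++ model of these carry-based two-instruction sequences may help; `lo`/`hi` stand for the 32-bit register halves of a long, the function names are hypothetical, and the real change of course emits Thumb2 instructions from codegen rather than running C++:

```cpp
#include <cstdint>

// Shl(x,1) as LSLS+ADC:
//   LSLS lo, lo, #1   ; C := old bit 31 of lo, lo := lo << 1
//   ADC  hi, hi, hi   ; hi := hi + hi + C == (hi << 1) | C
uint64_t shl1(uint32_t lo, uint32_t hi) {
  uint32_t c = lo >> 31;
  lo <<= 1;
  hi = (hi << 1) | c;
  return (uint64_t{hi} << 32) | lo;
}

// Shr(x,1) as ASR+RRX (the ASR must set flags so RRX sees the carry):
//   ASRS hi, hi, #1   ; C := old bit 0 of hi, hi shifts in its sign bit
//   RRX  lo, lo       ; lo := (C << 31) | (lo >> 1)
uint64_t shr1(uint32_t lo, uint32_t hi) {
  uint32_t c = hi & 1;
  hi = static_cast<uint32_t>(static_cast<int32_t>(hi) >> 1);
  lo = (c << 31) | (lo >> 1);
  return (uint64_t{hi} << 32) | lo;
}

// UShr(x,1) as LSR+RRX: as Shr(x,1), but hi shifts in a zero.
uint64_t ushr1(uint32_t lo, uint32_t hi) {
  uint32_t c = hi & 1;
  hi >>= 1;
  lo = (c << 31) | (lo >> 1);
  return (uint64_t{hi} << 32) | lo;
}
```

The point of special-casing a shift by exactly 1 is that the carry flag can ferry the single crossing bit between the two registers, avoiding the longer general-shift sequence.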
Remove the simplification substituting Shl(x,1) with ADD(x,x), as it interferes with some other optimizations instead of helping them. Since it didn't help 64-bit architectures anyway, codegen is the correct place for it. This is now implemented for ARM and x86, so only mips32 remains to be improved.
Change-Id: Idd14f23292198b2260189e1497ca5411b21743b3
Diffstat (limited to 'compiler/utils/arm/assembler_thumb2.cc')
| mode | path | lines |
|---|---|---|
| -rw-r--r-- | compiler/utils/arm/assembler_thumb2.cc | 2 |

1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/compiler/utils/arm/assembler_thumb2.cc b/compiler/utils/arm/assembler_thumb2.cc
index 297cc54e29..584a597701 100644
--- a/compiler/utils/arm/assembler_thumb2.cc
+++ b/compiler/utils/arm/assembler_thumb2.cc
@@ -3220,7 +3220,7 @@ void Thumb2Assembler::Ror(Register rd, Register rm, uint32_t shift_imm,
 void Thumb2Assembler::Rrx(Register rd, Register rm, Condition cond,
                           SetCc set_cc) {
   CheckCondition(cond);
-  EmitShift(rd, rm, RRX, rm, cond, set_cc);
+  EmitShift(rd, rm, RRX, 0, cond, set_cc);
 }
```
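The fix itself is worth a note: RRX takes no shift amount (in the Thumb2 encoding it is a ROR-type shift with a zero immediate), so the old code's reuse of `rm` as the `uint32_t` amount argument looks like a slip that only starts to matter once Rrx() is actually exercised by the new shift sequences. A minimal sketch of RRX's data operation, in the same hypothetical C++ modeling style as above:

```cpp
#include <cstdint>

// RRX (rotate right with extend): rotate right by one through the
// carry flag. There is no amount operand to encode.
uint32_t rrx(uint32_t rm, uint32_t carry_in) {  // carry_in: C flag, 0 or 1
  return (carry_in << 31) | (rm >> 1);
}
```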