path: root/tcg/ppc
Age  Commit message  Author
2015-10-19  tcg/ppc: Prefer mask over andi.  (Richard Henderson)
Prefer the instruction that isn't required to modify cr0. Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-19  tcg/ppc: Revise goto_tb implementation  (Richard Henderson)
Restrict the size of code_gen_buffer to 2GB on ppc64, which lets us assert that everything is reachable with addis+addi from tb_ret_addr. This lets us use a max of 4 insns for goto_tb instead of 7. Emit the indirect branch portion of goto_tb up front, which means we only have to update two insns to update any link. With a 64-bit store, we can update the link atomically, which may be required in future. Signed-off-by: Richard Henderson <rth@twiddle.net>
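The two patched instructions being updated by a single 64-bit store is the key to the atomicity mentioned above. Below is a minimal C sketch of that idea, assuming an 8-byte-aligned pair of 32-bit instruction words; the function name and calling details are illustrative, not QEMU's actual code.

```c
#include <stdint.h>

/*
 * Illustrative sketch only: rewrite two consecutive 32-bit instruction
 * words (e.g. the addis+addi pair of a goto_tb) with one aligned 64-bit
 * store, so another thread executing the code never observes a
 * half-updated pair.  code_ptr is assumed to be 8-byte aligned; a real
 * backend would also flush the instruction cache for these 8 bytes.
 */
static void patch_insn_pair(uint32_t *code_ptr, uint32_t insn1, uint32_t insn2)
{
    uint64_t pair;

    /* Host byte order decides which word ends up at the lower address. */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    pair = ((uint64_t)insn1 << 32) | insn2;
#else
    pair = ((uint64_t)insn2 << 32) | insn1;
#endif

    /* A single aligned 64-bit store is atomic on ppc64. */
    __atomic_store_n((uint64_t *)code_ptr, pair, __ATOMIC_RELAXED);
}
```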
2015-10-19  tcg/ppc: Adjust exit_tb for change in prologue placement  (Richard Henderson)
Moving the prologue to the beginning of the code_gen_buffer reverses the direction of the "return" branch, so the logic needs to change to match. Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24  linux-user: remove useless macros GUEST_BASE and RESERVED_VA  (Laurent Vivier)
As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base and the macros GUEST_BASE and RESERVED_VA become useless: replace them by their values. Reviewed-by: Alexander Graf <agraf@suse.de> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24  linux-user: remove --enable-guest-base/--disable-guest-base  (Laurent Vivier)
All tcg host architectures now support the guest base, and as there is no real performance loss, it can always be enabled. Guest base use can still be disabled at run time by setting the guest base to 0. CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY); it should be replaced by CONFIG_USER_ONLY in the non-CONFIG_USER_ONLY parts, but as some other parts already use !CONFIG_SOFTMMU, I have chosen to use !CONFIG_SOFTMMU instead. Reviewed-by: Alexander Graf <agraf@suse.de> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24  tcg/ppc: Improve unaligned load/store handling on 64-bit backend  (Benjamin Herrenschmidt)
Currently, we take the slow path for any unaligned access in the backend, because we effectively preserve the bottom address bits below the alignment requirement when comparing with the TLB entry, so any non-zero bit there causes the compare to fail. For the same number of instructions, we can instead add the access size - 1 to the address and keep clearing all the bottom bits. That means that normal unaligned accesses no longer fall back (the HW handles them fine). Only when crossing a page boundary will we end up with a mismatch, because we then point to the next page, which cannot possibly be in that same TLB entry. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Message-Id: <1437455978.5809.2.camel@kernel.crashing.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
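A small C model of the fast-path comparison described above may help; the page size and names below are assumptions for illustration, not the backend's actual code. With 4 KiB pages, a 4-byte load at 0x1ffa still resolves to page 0x1000 and matches the cached tag, while one at 0x1ffe resolves to page 0x2000 and takes the slow path.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_BITS 12                                  /* assumed 4 KiB pages */
#define PAGE_MASK (~(((uint64_t)1 << PAGE_BITS) - 1))

/*
 * Model of the check: add (size - 1) to the address, then clear all the
 * in-page bits.  An unaligned access that stays within one page still
 * yields the cached page number; only an access spilling into the next
 * page produces a different page and falls back to the slow path.
 */
static bool tlb_fast_path_hits(uint64_t addr, unsigned size, uint64_t tlb_tag)
{
    uint64_t page = (addr + (size - 1)) & PAGE_MASK;
    return page == tlb_tag;
}
```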
2015-08-24  tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32  (Richard Henderson)
Rather than allow arbitrary shift+trunc, only concern ourselves with low and high parts. This is all that was being used anyway. Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24  tcg: implement real ext_i32_i64 and extu_i32_i64 ops  (Aurelien Jarno)
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a 32-bit value is always converted to a 64-bit value and not propagated through the register allocator or the optimizer. Cc: Andrzej Zaborowski <balrogg@gmail.com> Cc: Alexander Graf <agraf@suse.de> Cc: Blue Swirl <blauwirbel@gmail.com> Cc: Stefan Weil <sw@weilnetz.de> Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
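In C terms, the two ops correspond to sign extension and zero extension of the low 32 bits. A minimal illustration of the intended semantics (not the TCG implementation itself):

```c
#include <stdint.h>

/* ext_i32_i64: sign-extend a 32-bit value to 64 bits. */
static int64_t ext_i32_i64_example(int32_t v)
{
    return (int64_t)v;      /* 0x80000000 -> 0xffffffff80000000 */
}

/* extu_i32_i64: zero-extend a 32-bit value to 64 bits. */
static uint64_t extu_i32_i64_example(uint32_t v)
{
    return (uint64_t)v;     /* 0x80000000 -> 0x0000000080000000 */
}
```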
2015-08-24  tcg: rename trunc_shr_i32 into trunc_shr_i64_i32  (Aurelien Jarno)
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32, and the name in the README doesn't match the name offered to the frontends. Always use the long name to make it clear it is a size changing op. Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09  tcg: Mask TCGMemOp appropriately for indexing  (Richard Henderson)
The addition of MO_AMASK means that places that used inverted masks need to be changed to use positive masks, and places that failed to mask the intended bits need updating. Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com> Tested-by: Yongbok Kim <yongbok.kim@imgtec.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-03  tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS  (Paolo Bonzini)
This will be used to size the TLB when more than 8 MMU modes are used by the target. Limitations come from the limited size of the immediate fields (which sometimes, as in the case of AArch64, extend to instructions that shift the immediate). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
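The sketch below shows the kind of sizing constraint implied: the whole TLB, across all MMU modes, has to stay within the displacement the backend's immediates can reach. The macro names and numbers are assumptions chosen for illustration, not QEMU's actual definitions.

```c
/* Assumed values, chosen so the check below holds exactly. */
#define TLB_DISPLACEMENT_BITS 16    /* reach of the backend's immediates   */
#define NB_MMU_MODES           8    /* assumed number of target MMU modes  */
#define CPU_TLB_BITS           8    /* 256 entries per mode (assumed)      */
#define CPU_TLB_ENTRY_SIZE    32    /* bytes per TLB entry (assumed)       */

_Static_assert((unsigned long)NB_MMU_MODES * (1u << CPU_TLB_BITS) *
               CPU_TLB_ENTRY_SIZE <= (1ul << TLB_DISPLACEMENT_BITS),
               "TLB larger than the displacement the backend can address");
```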
2015-05-14  tcg: Push merged memop+mmu_idx parameter to softmmu routines  (Richard Henderson)
The extra information is not yet used but it is now available. This requires minor changes through all of the tcg backends. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14  tcg: Merge memop and mmu_idx parameters to qemu_ld/st  (Richard Henderson)
At the tcg opcode level, not at the tcg-op.h generator level. This requires minor changes through all of the tcg backends, but none of the cpu translators. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
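A hedged sketch of what carrying the two values as one operand can look like; the field widths and helper names here are assumptions for illustration rather than QEMU's exact encoding.

```c
#include <stdint.h>

/* Pack a memory-op descriptor and an MMU index into a single operand.
 * The 4-bit mmu_idx field is an assumption made for this example. */
static inline uint32_t make_memop_idx_sketch(unsigned memop, unsigned mmu_idx)
{
    return (memop << 4) | (mmu_idx & 0xf);
}

static inline unsigned get_memop_sketch(uint32_t oi)  { return oi >> 4;  }
static inline unsigned get_mmuidx_sketch(uint32_t oi) { return oi & 0xf; }
```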
2015-03-13  tcg: Change generator-side labels to a pointer  (Richard Henderson)
This is less about improved type checking than enabling a subsequent change to the representation of labels. Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com> Cc: Andrzej Zaborowski <balrogg@gmail.com> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Blue Swirl <blauwirbel@gmail.com> Cc: Stefan Weil <sw@weilnetz.de> Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-29  tcg/ppc: Fix support for 64-bit PPC MacOSX hosts  (Peter Maydell)
Add back in the support for 64-bit PPC MacOSX hosts that was broken in the recent merge of the 32-bit and 64-bit TCG backends. Reported-by: Andreas Färber <andreas.faerber@web.de> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Tested-by: Andreas Färber <andreas.faerber@web.de>
2014-06-27  tcg/ppc: Fix failure in tcg_out_mem_long  (Richard Henderson)
With rt != r0 on loads, we use rt for scratch. If we need an index register different from base, we can't use rt, but r0 is usable. Signed-off-by: Richard Henderson <rth@twiddle.net> Message-id: 1403843160-30332-1-git-send-email-rth@twiddle.net Tested-by: Cédric Le Goater <clg@fr.ibm.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-06-23  tcg-ppc: Use the return address as a base pointer  (Richard Henderson)
This can significantly reduce code size for the generation of (some) 64-bit constants, with the side effect that we know for a fact that exit_tb can use the register to good effect. Tested-by: Tom Musta <tommusta@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23  tcg-ppc: Merge cache-utils into the backend  (Richard Henderson)
As a "utility", it only supported ppc, and in a way that other tcg backends provided directly in tcg-target.h. Removing this disparity is easier now that the two ppc backends are merged. Tested-by: Tom Musta <tommusta@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23  tcg-ppc: Rename the tcg/ppc64 backend  (Richard Henderson)
The other tcg backends that support 32- and 64-bit modes use the 32-bit name for the port. Follow suit. Tested-by: Tom Musta <tommusta@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23  tcg-ppc: Remove the backend  (Richard Henderson)
Vectoring the 32-bit build to the ppc64 directory. Tested-by: Tom Musta <tommusta@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23  tcg-ppc: Use uintptr_t in ppc_tb_set_jmp_target  (Richard Henderson)
Tested-by: Tom Musta <tommusta@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-04  tcg: Remove TCG_TARGET_HAS_new_ldst  (Richard Henderson)
Since all backends have been converted, remove the compatibility code. Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12  tcg: Remove unreachable code in tcg_out_op and op_defs  (Richard Henderson)
The INDEX_op_call case has just been obsoleted; the mov and movi cases have not been reachable for years. Attempt to document this both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT. Because of the TCG_OPF_NOT_PRESENT change, this must be done for all targets in a single commit. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12  tcg-ppc: Split out tcg_out_call  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12  tcg-ppc: Define TCG_TARGET_INSN_UNIT_SIZE  (Richard Henderson)
And use tcg pointer differencing functions as appropriate. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18  tcg: Use HOST_WORDS_BIGENDIAN  (Richard Henderson)
Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18  tcg: Relax requirement for mulu2_i32 on 32-bit hosts  (Richard Henderson)
Instead require either mulu2_i32 or muluh_i32. The code in tcg-op.h already supports looking for both. This looks like a previously incomplete conversion. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18  tcg: Add TCGType parameter to tcg_target_const_match  (Richard Henderson)
Most 64-bit targets need to be able to ignore the high bits of a TCG_TYPE_I32 value. Suggested-by: Stuart Brady <sdb@zubnet.me.uk> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18  tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool  (Stefan Weil)
Static code analyzers complain about signed bitfields with only a single bit. is_ld is used as a boolean value, so make it bool. ppc64 already used bool for the 2nd argument is_ld of the local function add_qemu_ldst_label. Modify all other TCG targets to follow this example. Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Richard Henderson <rth@twiddle.net>
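The warning exists because a one-bit signed bitfield can only hold 0 and -1, so a comparison against 1 never succeeds; switching the field to bool avoids the trap. A minimal, self-contained demonstration (the plain-int bitfield behaviour shown is what common compilers such as GCC do):

```c
#include <stdbool.h>
#include <stdio.h>

struct before { int  is_ld : 1; };  /* 1-bit signed field: holds 0 or -1 */
struct after  { bool is_ld;     };  /* what the commit converts to       */

int main(void)
{
    struct before b = { .is_ld = 1 };   /* stored value becomes -1 */
    struct after  a = { .is_ld = true };

    printf("%d\n", b.is_ld == 1);       /* prints 0: the compare never holds */
    printf("%d\n", a.is_ld == true);    /* prints 1: behaves as intended     */
    return 0;
}
```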
2013-10-12  tcg-ppc: Support new ldst opcodes  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-ppc: Convert to le/be ldst helpers  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-ppc: Use TCGMemOp within qemu_ldst routines  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10  tcg: Add qemu_ld_st_i32/64  (Richard Henderson)
Step two in the transition, adding the new ldst opcodes. Keep the old opcodes around until all backends support the new opcodes. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10  tcg: Add tcg-be-ldst.h  (Richard Henderson)
Move TCGLabelQemuLdst and related stuff out of tcg.h. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25  tcg-ppc: Fix and cleanup tcg_out_tlb_check  (Richard Henderson)
The fix: sparc has so many mmu modes that the last one overflowed the 16-bit signed offset we assumed would fit; handle this, and check the new assumption at compile time. Load the tlb addend earlier for the fast path. Remove the explicit address + addend and make use of index addressing. Adjust constraints for qemu_ld64 such that we don't clobber the address register or tlb addend before loading both values. Signed-off-by: Richard Henderson <rth@twiddle.net>
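A hedged illustration of the kind of compile-time check mentioned above: the offset of the last MMU mode's TLB data must still fit in a 16-bit signed load displacement. All constants are assumptions chosen for the example, not QEMU's actual layout.

```c
/* Assumed layout parameters for illustration only. */
#define NB_MMU_MODES          4     /* assumed number of MMU modes     */
#define CPU_TLB_SIZE        256     /* assumed entries per MMU mode    */
#define CPU_TLB_ENTRY_SIZE   32     /* assumed bytes per TLB entry     */

#define LAST_MODE_TLB_OFFSET \
    ((NB_MMU_MODES - 1) * CPU_TLB_SIZE * CPU_TLB_ENTRY_SIZE)

_Static_assert(LAST_MODE_TLB_OFFSET <= 0x7fff,
               "TLB offset exceeds a 16-bit signed displacement; "
               "the high part must be loaded separately");
```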
2013-09-25  tcg-ppc: Use conditional branch and link to slow path  (Richard Henderson)
Saves one insn per slow path. Note that we can no longer use a tail call into the store helper. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25  tcg-ppc: Cleanup tcg_out_qemu_ld/st_slow_path  (Richard Henderson)
Coding style fixes. Use TCGReg enumeration values instead of raw numbers. Don't needlessly pull the whole TCGLabelQemuLdst struct into local variables. Less conditional compilation. No functional changes. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25  tcg-ppc: Avoid code for nop move  (Richard Henderson)
While these are rare in code that's been through the optimizer, they're not uncommon within the tcg backend. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25  tcg-ppc: use new return-argument ld/st helpers  (Paolo Bonzini)
These use a 32-bit load-of-immediate to save a mflr+addi+mtlr sequence. Tested with a Windows 98 guest (pretty much the most recent thing I could run on my PPC machine) and kvm-unit-tests's sieve.flat. The speed up for sieve.flat is as high as 10% for qemu-system-i386, 25% (no kidding) for qemu-system-x86_64 on my PowerBook G4. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25  tcg-ppc: fix qemu_ld/qemu_st for AIX ABI  (Paolo Bonzini)
For the AIX ABI, the function pointer and small area pointer need to be loaded in the trampoline; the trampoline itself is then called with a normal BL instruction. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02  exec: Split softmmu_defs.h  (Richard Henderson)
The _cmmu helpers can be moved to exec-all.h. The helpers that are used from TCG will shortly need access to tcg_target_long so move their declarations into tcg.h. This requires minor include adjustments to all TCG backends. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02  tcg: Change tcg_out_ld/st offset to intptr_t  (Richard Henderson)
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02  tcg: Change relocation offsets to intptr_t  (Richard Henderson)
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02  tcg: Change tcg_qemu_tb_exec return to uintptr_t  (Richard Henderson)
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02  tcg: Add muluh and mulsh opcodes  (Richard Henderson)
Use them in places where mulu2 and muls2 are used. Optimize mulx2 with dead low part to mulxh. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
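The new ops return only the high half of a double-width product, which is exactly what a mulu2/muls2 with a dead low part reduces to. A small C statement of the semantics for 64-bit operands, using the 128-bit integer extension available in GCC and Clang:

```c
#include <stdint.h>

/* High 64 bits of an unsigned 64x64 -> 128-bit multiply (muluh_i64). */
static uint64_t muluh64(uint64_t a, uint64_t b)
{
    return (uint64_t)(((unsigned __int128)a * b) >> 64);
}

/* High 64 bits of a signed 64x64 -> 128-bit multiply (mulsh_i64);
 * relies on arithmetic right shift of a signed __int128 (GCC/Clang). */
static int64_t mulsh64(int64_t a, int64_t b)
{
    return (int64_t)(((__int128)a * b) >> 64);
}
```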
2013-07-09  tcg-ppc: Don't implement rem  (Richard Henderson)
Reviewed-by: Andreas Färber <afaerber@suse.de> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-07-09  tcg: Split rem requirement from div requirement  (Richard Henderson)
There are several hosts with only a "div" insn. Remainder is computed manually from the quotient and inputs. We can do this generically. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
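The generic derivation is the usual identity rem = a - (a / b) * b, which needs only the quotient plus a multiply and a subtract. A minimal sketch (assumes b != 0 and no INT64_MIN / -1 overflow):

```c
#include <stdint.h>

/* Synthesize the remainder from the quotient, as generic code can do
 * for hosts that only provide a divide instruction. */
static int64_t rem_from_div(int64_t a, int64_t b)
{
    int64_t q = a / b;      /* host "div" instruction           */
    return a - q * b;       /* mul + sub reconstruct the rest   */
}
```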
2013-02-23  tcg: Add signed multiword multiplication operations  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-17  tcg/ppc: Fix build of tcg_qemu_tb_exec()  (Andreas Färber)
Commit 0b0d3320db74cde233ee7855ad32a9c121d20eb4 (TCG: Final globals clean-up) moved code_gen_prologue but forgot to update ppc code. This broke the build on 32-bit ppc. ppc64 is unaffected. Cc: Evgeny Voevodin <evgenyvoevodin@gmail.com> Cc: Blue Swirl <blauwirbel@gmail.com> Signed-off-by: Andreas Färber <andreas.faerber@web.de> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-19  exec: move include files to include/exec/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>