path: root/tcg
Age  Commit message  Author
2014-02-21  tcg/i386: Fix build for systems without working cpuid.h (MacOSX, Win32)  [Peter Maydell]
Win32 doesn't have a cpuid.h, and MacOSX may have one but without the __cpuid() function we use, which means that commit 9d2eec20 broke the build for those platforms. Fix this by tightening up our configure cpuid.h check to test that the functions we need are present, and adding some missing #ifdef guards in tcg/i386/tcg-target.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/i386: Use SHLX/SHRX/SARX instructions  [Richard Henderson]
These three-operand shift instructions do not require the shift count to be placed into ECX. This reduces the number of mov insns required, with the mere addition of a new register constraint. Don't attempt to get rid of the matching constraint, as that's impossible to manipulate with just a new constraint. In addition, constant shifts still need the matching constraint.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/i386: Use ANDN instruction  [Richard Henderson]
Note that the optimizer cannot simplify ANDC X,Y,C to AND X,Y,~C so we must handle constants in the implementation of andc.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
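For reference, the identity the backend relies on when it does fold the constant: ANDC with an immediate C produces the same result as AND with ~C. A minimal standalone C sketch (not QEMU source; the values are made up):

    /* Standalone illustration: andc with a constant second operand is
     * equivalent to and with the complemented constant, which is what a
     * backend can emit when it sees the constant case. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t andc(uint64_t y, uint64_t z) { return y & ~z; }

    int main(void)
    {
        const uint64_t y = 0x123456789abcdef0ull;
        const uint64_t c = 0x00000000ffff0000ull;   /* hypothetical immediate */

        assert(andc(y, c) == (y & ~c));              /* same as AND with ~C */
        printf("andc(y, C) = %#llx\n", (unsigned long long)andc(y, c));
        return 0;
    }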
2014-02-17  tcg/i386: Add tcg_out_vex_modrm  [Richard Henderson]
Prepare for emitting BMI insns which require VEX encoding.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/i386: Move TCG_CT_CONST_* to tcg-target.c  [Richard Henderson]
These are not needed by users of tcg-target.h. No need to recompile when we adjust them.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: Add more identity simplifications  [Richard Henderson]
Recognize 0 operand to andc, and -1 operands to and, orc, eqv.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
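These identities are plain bit arithmetic; a standalone C check, assuming the usual TCG definitions andc(x,y) = x & ~y, orc(x,y) = x | ~y, eqv(x,y) = ~(x ^ y):

    /* Standalone check of the identity simplifications:
     *   andc(x, 0) == x      and(x, -1) == x
     *   orc(x, -1) == x      eqv(x, -1) == x   */
    #include <assert.h>
    #include <stdint.h>

    static uint64_t andc(uint64_t x, uint64_t y) { return x & ~y; }
    static uint64_t orc (uint64_t x, uint64_t y) { return x | ~y; }
    static uint64_t eqv (uint64_t x, uint64_t y) { return ~(x ^ y); }

    int main(void)
    {
        const uint64_t x = 0xfedcba9876543210ull;

        assert(andc(x, 0) == x);
        assert((x & UINT64_MAX) == x);       /* and with -1 */
        assert(orc(x, UINT64_MAX) == x);     /* orc with -1 */
        assert(eqv(x, UINT64_MAX) == x);     /* eqv with -1 */
        return 0;
    }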
2014-02-17  tcg/optimize: Optimize ANDC X,Y,Y to MOV X,0  [Richard Henderson]
Like we already do for SUB and XOR.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: Simplify some logical ops to NOT  [Richard Henderson]
Given, of course, an appropriate constant. These could be generated from the "canonical" operation for inversion on the guest, or via other optimizations.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
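Examples of constants that collapse a two-operand logical op to NOT, again assuming the usual definitions (a standalone sketch, not the optimizer's code):

    /* Constants that reduce a two-operand logical op to NOT:
     *   xor(x, -1) == ~x      eqv(x, 0) == ~x
     *   andc(-1, x) == ~x     orc(0, x) == ~x   */
    #include <assert.h>
    #include <stdint.h>

    static uint64_t andc(uint64_t x, uint64_t y) { return x & ~y; }
    static uint64_t orc (uint64_t x, uint64_t y) { return x | ~y; }
    static uint64_t eqv (uint64_t x, uint64_t y) { return ~(x ^ y); }

    int main(void)
    {
        const uint64_t x = 0x0123456789abcdefull;

        assert((x ^ UINT64_MAX) == ~x);
        assert(eqv(x, 0) == ~x);
        assert(andc(UINT64_MAX, x) == ~x);
        assert(orc(0, x) == ~x);
        return 0;
    }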
2014-02-17  tcg/optimize: Handle known-zeros masks for ANDC  [Richard Henderson]
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: add known-zero bits computation for load ops  [Aurelien Jarno]
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: improve known-zero bits for 32-bit ops  [Aurelien Jarno]
The shl_i32 op might set some of the unused high 32 bits of the mask. Fix that by clearing the unused high 32 bits for all 32-bit ops except load/store, which operate on tl values.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
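A standalone sketch of the arithmetic being fixed (illustrative only, not the optimizer's code): a "bits that may be set" mask for a 32-bit op, computed in a 64-bit variable, can spill into the high half and must be truncated back to 32 bits.

    /* Standalone sketch: computing a mask for shl_i32 in a 64-bit variable
     * overflows into the high half; a 32-bit op must truncate the mask. */
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t mask = 0xffffffffu;   /* all 32 low bits may be set */
        int shift = 4;

        uint64_t raw   = mask << shift;        /* 0xffffffff0: spills past bit 31 */
        uint64_t fixed = raw & 0xffffffffu;    /* truncate for a 32-bit op */

        assert(raw  != (uint32_t)raw);         /* stale high bits are present */
        assert(fixed == 0xfffffff0u);          /* correct 32-bit mask */
        return 0;
    }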
2014-02-17  tcg/optimize: fix known-zero bits optimization  [Aurelien Jarno]
Known-zero bits optimization is a great idea that helps to generate more optimized code. However the current implementation only works in very few cases, as the computed mask is not saved. Fix this so that it really works.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: fix known-zero bits for right shift ops  [Aurelien Jarno]
32-bit versions of the sar and shr ops should not propagate known-zero bits from the unused high 32 bits. For sar it could even lead to wrong code being generated.
Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
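A standalone sketch of why 32-bit semantics matter here, assuming the usual arithmetic right shift for signed values (GCC/Clang); the names are illustrative, not the optimizer's:

    /* Standalone sketch: a known-bits mask for sar_i32 must be computed with
     * 32-bit semantics.  Doing the shift in 64 bits replicates bit 63 instead
     * of bit 31, so the copies of the 32-bit sign bit are lost. */
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t mask32 = 0x80000000u;   /* bit 31 (the sign bit) may be set */
        int shift = 4;

        /* Correct: 32-bit arithmetic shift replicates bit 31. */
        uint32_t good = (uint32_t)((int32_t)mask32 >> shift);

        /* Wrong: 64-bit shift of the zero-extended mask treats bit 31 as an
         * ordinary bit. */
        uint64_t bad = (uint64_t)((int64_t)(uint64_t)mask32 >> shift);

        assert(good == 0xf8000000u);
        assert((uint32_t)bad == 0x08000000u);
        return 0;
    }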
2014-02-17  tcg-arm: The shift count of op_rotl_i32 is in args[2] not args[1].  [Huw Davies]
It's this that should be subtracted from 0x20 when converting to a right rotate.
Cc: qemu-stable@nongnu.org
Signed-off-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
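The conversion itself is simple modular arithmetic; a standalone sketch with illustrative helper names:

    /* Standalone sketch: a left rotate by n is a right rotate by (0x20 - n),
     * so it is the rotate count that must be subtracted from 0x20. */
    #include <assert.h>
    #include <stdint.h>

    static uint32_t rotr32(uint32_t x, unsigned n)
    {
        n &= 31;
        return n ? (x >> n) | (x << (32 - n)) : x;
    }

    static uint32_t rotl32(uint32_t x, unsigned n)
    {
        n &= 31;
        return n ? (x << n) | (x >> (32 - n)) : x;
    }

    int main(void)
    {
        uint32_t x = 0x12345678u;
        for (unsigned n = 0; n < 32; n++) {
            assert(rotl32(x, n) == rotr32(x, (0x20 - n) & 31));
        }
        return 0;
    }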
2014-02-15  TCG: Fix 32-bit host allocation typo  [Richard Henderson]
The second half register of a 64-bit temp on a 32-bit host was allocated with the wrong base_type. The base_type of the second half register is never checked, but for consistency it should be the same as the first half.
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-08  tcg: Add TCGV_UNUSED_PTR, TCGV_IS_UNUSED_PTR, TCGV_EQUAL_PTR  [Peter Maydell]
We have macros for marking TCGv values as unused, checking if they are unused and comparing them to each other. However these only exist for TCGv_i32 and TCGv_i64; add them for TCGv_ptr as well.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2014-02-01  tcg/s390: Remove sigill_handler  [Richard Henderson]
Commit c9baa30f42a87f61627391698f63fa4d1566d9d8 failed to delete all of the relevant code, leading to Werrors about unused symbols.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-01-30  Merge remote-tracking branch 'rth/tcg-movbe' into staging  [Peter Maydell]
* rth/tcg-movbe:
  tcg/i386: cleanup useless #ifdef
  tcg/i386: use movbe instruction in qemu_ldst routines
  tcg/i386: add support for three-byte opcodes
  tcg/i386: remove hardcoded P_REXW value
  disas/i386.c: disassemble movbe instruction
Message-id: 1390692772-15282-1-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-01-30  TCG: Fix I64-on-32bit-host temporaries  [Alexander Graf]
We have cache pools of temporaries that we can reuse later when they've already been allocated before. These cache pools differentiate between the target TCG variable types they contain, so we have one pool for I32 and one pool for I64 variables. On a 32-bit system we can't work with 64-bit registers though, so instead we spawn two I32 temporaries for every I64 temporary we create. All caching works the same way as on a real 64-bit system: we create a cache entry in the 64-bit array for the first i32 index. However, when we free such a temporary we free it to the pool of its type (which is always i32 on 32-bit systems) rather than its base_type (which is i64 or i32 depending on the variable). This means we put a temporary that is of base_type == i64 into the i32 preallocated temporary pool. Eventually, this results in failures like this on 32-bit hosts:
qemu-system-ppc64: tcg/tcg.c:515: tcg_temp_new_internal: Assertion `ts->base_type == type' failed.
This patch makes the free routine use the base_type instead for the free case, so it's consistent with the temporary allocation. It fixes the above failure for me.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1390146811-59936-1-git-send-email-agraf@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
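A simplified standalone model of the bug described above (hypothetical structures and names, not QEMU's data structures):

    /* Simplified model: temps are cached in per-type free pools.  On a
     * 32-bit host an i64 temp is backed by i32 registers, so freeing it by
     * register type (i32) instead of base_type (i64) puts it in the wrong
     * pool, and a later i32 allocation gets a temp whose base_type is i64. */
    #include <assert.h>
    #include <stdio.h>

    enum type { TYPE_I32, TYPE_I64, TYPE_COUNT };

    struct temp {
        enum type type;        /* register type: always I32 on a 32-bit host */
        enum type base_type;   /* the type the translator asked for */
        struct temp *next;
    };

    static struct temp *free_pool[TYPE_COUNT];

    static void temp_free(struct temp *ts, int use_base_type)
    {
        enum type key = use_base_type ? ts->base_type : ts->type;
        ts->next = free_pool[key];
        free_pool[key] = ts;
    }

    int main(void)
    {
        struct temp i64_temp = { TYPE_I32, TYPE_I64, NULL };

        temp_free(&i64_temp, 0);                       /* buggy: keyed by type */
        struct temp *reused = free_pool[TYPE_I32];     /* next i32 allocation */
        assert(reused && reused->base_type != TYPE_I32);
        printf("freed to the i32 pool, but base_type is i64\n");

        free_pool[TYPE_I32] = NULL;
        temp_free(&i64_temp, 1);                       /* fixed: keyed by base_type */
        assert(free_pool[TYPE_I32] == NULL && free_pool[TYPE_I64] == &i64_temp);
        return 0;
    }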
2014-01-25  tcg/i386: cleanup useless #ifdef  [Aurelien Jarno]
TCG_TARGET_HAS_movcond_i32 is always defined to 1 in tcg-target.h, so remove the corresponding #ifdef #endif sequence, left from a previous refactoring.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25  tcg/i386: use movbe instruction in qemu_ldst routines  [Aurelien Jarno]
The movbe instruction has been added on some Intel Atom CPUs and on recent Intel Haswell CPUs. It allows a value to be loaded/stored and byte-swapped at the same time. This patch detects the availability of this instruction and, when available, uses it in the qemu load/store routines in place of load/store + bswap. Note that for 16-bit unsigned loads, movbe + movzw is basically the same as movzw + bswap, so the patch doesn't touch this case.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[RTH: Reduced the number of conditionals using "movop".]
Signed-off-by: Richard Henderson <rth@twiddle.net>
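For reference, the operation movbe fuses, sketched with GCC/Clang's __builtin_bswap32 on an assumed little-endian host (illustrative helper name):

    /* Standalone sketch: a big-endian 32-bit guest load on a little-endian
     * host is a plain load followed by a byte swap; movbe performs both in
     * one instruction. */
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    static uint32_t ldl_be(const void *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof(v));      /* plain (host-endian) load ... */
        return __builtin_bswap32(v);   /* ... plus bswap: what movbe fuses */
    }

    int main(void)
    {
        const uint8_t buf[4] = { 0x12, 0x34, 0x56, 0x78 };
        assert(ldl_be(buf) == 0x12345678u);   /* on a little-endian host */
        return 0;
    }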
2014-01-25  tcg/i386: add support for three-byte opcodes  [Aurelien Jarno]
Add support for three-byte opcodes, starting with the 0x0f 0x38 prefix. Use P_EXT38 as the new constant, and shift all other constants so that P_EXT and P_EXT38 have neighbouring values.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[RTH: Changed the name from P_EXT2 to P_EXT38.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25  tcg/i386: remove hardcoded P_REXW value  [Aurelien Jarno]
P_REXW is defined as a constant at the beginning of i386/tcg-target.c, but the corresponding bit is later used in a hardcoded way, which defeats the purpose of a constant. Fix that by using a conditional expression operator instead of a shift. On x86 this actually makes the code slightly smaller, as GCC in practice generates (opc >> 8) & 8 instead of (opc & 0x800) >> 8, so the constants are smaller to load.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
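The equivalence can be checked in isolation; a standalone sketch, taking the 0x800 flag value and the REX.W value 8 from the description above:

    /* Standalone sketch: extracting the REX.W bit (value 8) from an opcode
     * flag word.  With a conditional expression the compiler is free to pick
     * the cheaper form, e.g. (opc >> 8) & 8 rather than (opc & 0x800) >> 8. */
    #include <assert.h>

    #define P_REXW 0x800   /* flag bit value as implied by the commit message */

    static int rexw_shift(int opc) { return (opc & P_REXW) >> 8; }
    static int rexw_cond (int opc) { return (opc & P_REXW) ? 8 : 0; }

    int main(void)
    {
        for (int opc = 0; opc < 0x2000; opc++) {
            assert(rexw_shift(opc) == rexw_cond(opc));
        }
        return 0;
    }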
2013-12-21  tcg/i386: fix a comment  [Aurelien Jarno]
The comments apply to 8-bit stores, not 8-byte stores.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2013-12-10  tcg: Use bitmaps for free temporaries  [Richard Henderson]
We previously allocated 32 bits per temp for the next_free_temp entry. We now allocate 4 bits per temp across the 4 bitmaps. Using a linked list meant that if a translator is tweaked, resulting in temps being freed in a different order, that would have follow-on effects throughout the TB. Always allocating the lowest free temp means that follow-on effects are minimized, which can make it easier to diff output when debugging the translators.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
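A standalone sketch of the "always allocate the lowest free temp" policy with a single bitmap word (illustrative names; uses GCC/Clang's __builtin_ctzll):

    /* Standalone sketch: find the lowest set bit in the free mask, clear it
     * on allocation, set it again on free.  One 64-entry word for brevity. */
    #include <assert.h>
    #include <stdint.h>

    static uint64_t free_temps = ~0ull;   /* all 64 temps start out free */

    static int temp_alloc(void)
    {
        assert(free_temps != 0);
        int idx = __builtin_ctzll(free_temps);   /* lowest free index */
        free_temps &= ~(1ull << idx);
        return idx;
    }

    static void temp_free(int idx)
    {
        free_temps |= 1ull << idx;
    }

    int main(void)
    {
        int a = temp_alloc();        /* index 0 */
        int b = temp_alloc();        /* index 1 */
        temp_free(a);
        assert(temp_alloc() == a);   /* lowest free index is reused first */
        temp_free(b);
        return 0;
    }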
2013-11-30  tcg-s390: Use qemu_getauxval in query_facilities  [Richard Henderson]
No need to set up a SIGILL signal handler for detection anymore. Remove a ton of sanity checks that must be true, given that we're requiring a 64-bit build (the note about 31-bit KVM is satisfied by configuring with TCI).
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-30  tcg-arm: Use qemu_getauxval  [Richard Henderson]
Allow host detection on Linux systems without glibc 2.16 or later.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-30  tcg-ppc64: Use qemu_getauxval  [Richard Henderson]
Allow host detection on Linux systems without glibc 2.16 or later.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Introduce tcg_opc_bswap64_i  [Richard Henderson]
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Introduce tcg_opc_ext_i  [Richard Henderson]
Being able to "extend" from 64-bits (with a mov) simplifies a few places where the conditional breaks the train of thought.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Introduce tcg_opc_movi_a  [Richard Henderson]
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Introduce tcg_opc_mov_a  [Richard Henderson]
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Use A3 form of logical operations  [Richard Henderson]
We can and/or/xor/andcm small constants, saving one cycle.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Use SUB_A3 and ADDS_A4 for subtraction  [Richard Henderson]
We can subtract from more small constants than just 0 with one insn, and we can add the negative for most small constants.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Use ADDS for small addition  [Richard Henderson]
Avoids a wasted cycle loading up small constants. Simplify the code by assuming the tcg optimizer is going to work, and don't expect the first operand of the add to be constant.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Avoid unnecessary stop bit in tcg_out_alu  [Richard Henderson]
When performing an operation with two input registers, we'd leave the stop bit (and thus an extra cycle) that's only needed when one or the other input is a constant.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Move AREG0 to R32  [Richard Henderson]
Since the move away from the global areg0, we're no longer globally reserving areg0, which means our use of R7 clobbers a call-saved register. Shift areg0 into the windowed registers; indeed, choose the incoming parameter register in which it arrives. This requires moving the register holding the return address elsewhere. Choose R33 for tidiness.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Simplify brcond  [Richard Henderson]
There was a misconception that a stop bit is required between a compare and the branch that uses the predicate set by the compare. This led to the use of an extra bundle in which to perform the compare. The extra bundle left room for constants to be loaded for use with the compare insn. If we pack the compare and the branch together in the same bundle, then there's no longer any room for non-zero constants, at which point we can eliminate half the function by not handling them.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Handle constant calls  [Richard Henderson]
Using only indirect calls results in 3 bundles (one to load the descriptor address), and 4 stop bits. By looking through the descriptor to the constants, we can perform the call with 2 bundles and only 1 stop bit.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Use shortcuts for nop insns  [Richard Henderson]
There's no need to go through the full opcode-to-insn function call to generate nops. This makes the source a bit more readable.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18  tcg-ia64: Use TCGMemOp within qemu_ldst routines  [Richard Henderson]
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-ppc64: Support new ldst opcodes  [Richard Henderson]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-ppc: Support new ldst opcodes  [Richard Henderson]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-ppc64: Convert to le/be ldst helpers  [Richard Henderson]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-ppc: Convert to le/be ldst helpers  [Richard Henderson]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-ppc64: Use TCGMemOp within qemu_ldst routines  [Richard Henderson]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-ppc: Use TCGMemOp within qemu_ldst routines  [Richard Henderson]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-arm: Improve GUEST_BASE qemu_ld/st  [Richard Henderson]
If we pull the code to emit the actual load/store into a subroutine, we can share the reg+reg addressing mode code between softmmu and usermode. This lets us load GUEST_BASE into a temporary register rather than attempting to add it piece-wise to the address, which in turn lets us use movw+movt for armv7 rather than (up to) 4 adds. Code size for pre-armv7 stays the same.
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-arm: Convert to new ldst opcodes  [Richard Henderson]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12  tcg-arm: Tidy variable naming convention in qemu_ld/st  [Richard Henderson]
s/addr_reg2/addrhi/
s/addr_reg/addrlo/
s/data_reg2/datahi/
s/data_reg/datalo/
Signed-off-by: Richard Henderson <rth@twiddle.net>