2014-01-17  target-arm: A64: Add SIMD shift by immediate  (Alex Bennée)  [a64-system-sysregs]
This implements a subset of the AdvSIMD shift operations (namely all the non-saturating, non-narrowing ones). The actual shift generation code is common to the scalar and vector cases, but is wrapped with either vector element iteration or the fp reg access. The rounding operations need to take special care to correctly reflect the result of adding rounding bits on high bits, as the intermediates do not truncate.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

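For illustration, here is a minimal, self-contained sketch of the rounding concern described above, assuming an unsigned rounding shift right (URSHR-style) on a 64-bit element; the helper name and structure are illustrative, not QEMU's generated code:

    #include <stdint.h>

    /* Unsigned rounding shift right of a 64-bit element, 1 <= shift <= 64.
     * The intermediate src + round_bit is conceptually 65 bits wide, so the
     * carry out of bit 63 must be folded back in rather than truncated. */
    static uint64_t urshr64(uint64_t src, unsigned shift)
    {
        if (shift == 64) {
            return src >> 63;            /* only the rounding carry survives */
        }
        uint64_t round = 1ULL << (shift - 1);
        uint64_t sum = src + round;
        uint64_t carry = sum < src;      /* carry out of bit 63 */
        return (sum >> shift) | (carry << (64 - shift));
    }
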
2014-01-17  target-arm: A64: Add simple SIMD 3-same floating point ops  (Peter Maydell)
Implement a simple subset of the SIMD 3-same floating point operations. This includes a common helper function used for both scalar and vector ops; FABD is the only currently implemented shared op.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2014-01-17  target-arm: A64: Add integer ops from SIMD 3-same group  (Peter Maydell)
Add some of the integer operations in the SIMD 3-same group: specifically, the comparisons, addition and subtraction.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2014-01-17  target-arm: A64: Add logic ops from SIMD 3 same group  (Alex Bennée)
Add support for the logical operations (ORR, AND, BIC, ORN, EOR, BSL, BIT and BIF) from the SIMD 3 register same group (C3.6.16).
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2014-01-17  target-arm: A64: Add top level decode for SIMD 3-same group  (Peter Maydell)
Add top level decode for the A64 SIMD three regs same group (C3.6.16), splitting it into the pairwise, logical, float and integer subgroups.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2014-01-17  target-arm: A64: Add SIMD scalar 3 same add, sub and compare ops  (Peter Maydell)
Implement the add, sub and compare ops from the SIMD "scalar three same" group.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2014-01-14  target-arm: A64: Add SIMD three-different ABDL instructions  (Peter Maydell)
Implement the absolute-difference instructions in the SIMD three-different group: SABAL, SABAL2, UABAL, UABAL2, SABDL, SABDL2, UABDL, UABDL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

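As a rough per-element sketch (helper names are made up, not QEMU's), the unsigned 16-bit variants boil down to widening the absolute difference and, for the accumulating forms, adding it to the destination element:

    #include <stdint.h>

    /* UABDL: widen the absolute difference of two 16-bit elements. */
    static uint32_t uabd16(uint16_t a, uint16_t b)
    {
        return a > b ? (uint32_t)(a - b) : (uint32_t)(b - a);
    }

    /* UABAL: accumulate the widened absolute difference. */
    static uint32_t uabal16(uint32_t acc, uint16_t a, uint16_t b)
    {
        return acc + uabd16(a, b);
    }
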
2014-01-14  target-arm: A64: Add SIMD three-different multiply accumulate insns  (Peter Maydell)
Add support for the multiply-accumulate instructions from the SIMD three-different instructions group (C3.6.15):
* skeleton decode of unallocated encodings and split of the group into its three sub-parts
* framework for handling the 64x64->128 widening subpart
* implementation of the multiply-accumulate instructions SMLAL, SMLAL2, UMLAL, UMLAL2, SMLSL, SMLSL2, UMLSL, UMLSL2, UMULL, UMULL2, SMULL, SMULL2
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

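Per element, the widening multiply-accumulate is straightforward; this sketch (with an illustrative name) shows the signed 32-to-64-bit case, where the product is computed at the wider width before accumulating so nothing is truncated:

    #include <stdint.h>

    /* SMLAL on 32-bit source elements: 64-bit accumulator += a * b,
     * with the multiply done at 64 bits. */
    static int64_t smlal32(int64_t acc, int32_t a, int32_t b)
    {
        return acc + (int64_t)a * b;
    }
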
2014-01-14  dataplane: fix shadowed return value  (Stefan Hajnoczi)
Propagate the error return value from get_indirect().
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2014-01-14  target-arm: A64: Add SIMD scalar copy instructions  (Peter Maydell)
Add support for the SIMD scalar copy instruction group (C3.6.7), which consists of the single instruction DUP (element, scalar).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

2014-01-14  target-arm: A64: Add SIMD modified immediate group  (Alex Bennée)
This patch adds support for the AdvSIMD modified immediate group (C3.6.6) with all its suboperations (movi, orr, fmov, mvni, bic).
Signed-off-by: Alexander Graf <agraf@suse.de>
[AJB: new decode struct, minor bug fixes, optimisation]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

2014-01-14  target-arm: A64: Add SIMD copy operations  (Alex Bennée)
This adds support for all the AdvSIMD vector copy operations (ARM ARM C3.6.5).
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

2014-01-14  target-arm: A64: Add SIMD across-lanes instructions  (Michael Matz)
Add support for the SIMD "across lanes" instruction group (C3.6.4).
Signed-off-by: Michael Matz <matz@suse.de>
[PMM: Updated to current codebase, added fp min/max ops, added unallocated encoding checks]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

2014-01-14  target-arm: A64: Add SIMD ZIP/UZP/TRN  (Michael Matz)
Add support for the SIMD ZIP/UZP/TRN instruction group (C3.6.3).
Signed-off-by: Michael Matz <matz@suse.de>
[PMM: use new do_vec_get/set etc functions and generally update to new codebase standards; refactor to pull per-element loop outside switch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

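For reference, the per-element loop for ZIP1 on byte elements looks roughly like the following sketch, written against plain arrays rather than QEMU's register-slice accessors (illustrative only):

    #include <stdint.h>
    #include <string.h>

    /* ZIP1 on byte elements: interleave the low halves of the two sources. */
    static void zip1_bytes(uint8_t rd[16], const uint8_t rn[16],
                           const uint8_t rm[16])
    {
        uint8_t tmp[16];
        for (int i = 0; i < 8; i++) {
            tmp[2 * i]     = rn[i];   /* even lanes from the first source */
            tmp[2 * i + 1] = rm[i];   /* odd lanes from the second source */
        }
        memcpy(rd, tmp, sizeof(tmp)); /* write only after reading both sources */
    }
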
2014-01-14  target-arm: A64: Add SIMD TBL/TBX  (Michael Matz)
Add support for the SIMD TBL/TBX instructions (group C3.6.2).
Signed-off-by: Michael Matz <matz@suse.de>
[PMM: rewritten to do more of the decode in translate-a64.c, and to do only one 64 bit pass at a time in the helper]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

2014-01-14  target-arm: A64: Add SIMD EXT  (Peter Maydell)
Add support for the SIMD EXT instruction (the only one in its group, C3.6.1).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

2014-01-14  target-arm: A64: Add decode skeleton for SIMD data processing insns  (Alex Bennée)
Add decode skeleton and function placeholders for all the SIMD data processing instructions. Due to the complexity of this part of the table, the normal extract-and-switch approach gets very messy very quickly, so we use a simple data-driven pattern-and-mask approach.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

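A minimal sketch of the data-driven approach: a table of pattern/mask pairs is scanned until an entry matches the instruction word, then its handler is called. The entries, encodings and handler names below are made up for illustration; the real table lives in translate-a64.c.

    #include <stdint.h>

    typedef void DisasFn(uint32_t insn);

    typedef struct {
        uint32_t pattern;   /* fixed bits the insn must match ... */
        uint32_t mask;      /* ... under this mask                */
        DisasFn *disas_fn;  /* handler for this subgroup          */
    } DecodeTableEntry;

    static void disas_simd_three_reg_same(uint32_t insn) { (void)insn; /* ... */ }
    static void disas_simd_ext(uint32_t insn)            { (void)insn; /* ... */ }

    static const DecodeTableEntry simd_decode_table[] = {
        /* pattern,    mask,       handler */
        { 0x0e200400, 0x9f200400, disas_simd_three_reg_same },
        { 0x2e000000, 0xbf208400, disas_simd_ext },
        { 0x00000000, 0x00000000, NULL } /* terminator */
    };

    static void disas_data_proc_simd(uint32_t insn)
    {
        for (const DecodeTableEntry *t = simd_decode_table; t->disas_fn; t++) {
            if ((insn & t->mask) == t->pattern) {
                t->disas_fn(insn);
                return;
            }
        }
        /* no match: unallocated encoding */
    }
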
2014-01-14  target-arm: A64: Add SIMD ld/st single  (Peter Maydell)
Implement the SIMD ld/st single structure instructions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

2014-01-14  target-arm: A64: Add SIMD ld/st multiple  (Alex Bennée)
This adds support for the SIMD load/store multiple category of instructions. It also brings in a couple of helper functions for manipulating sections of the SIMD registers:
* do_vec_get - fetch a value from a slice of a vector register
* do_vec_set - set a slice of a vector register
which use vec_reg_offset for consistent, endian-aware processing of offsets. There are also additional helpers:
* do_vec_ld - load a value into SIMD
* do_vec_st - store a value from SIMD
which load or store a slice of a vector register to memory. These don't zero extend like the fp variants.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

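A simplified, standalone sketch of the endian-aware offset calculation behind vec_reg_offset, assuming the 128-bit register is stored as two host-order 64-bit halves with the low half first (the real helper additionally accounts for where the register file lives in the CPU state structure):

    /* Byte offset of element `element` (width 1 << size_log2 bytes) within
     * a 128-bit register stored as two host-endian uint64_t halves.  On a
     * big-endian host the position inside each 64-bit half is mirrored,
     * while the order of the two halves is unchanged. */
    static int vec_elem_offset(int element, int size_log2)
    {
        int offs = element << size_log2;
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        offs ^= 8 - (1 << size_log2);   /* mirror within the 64-bit half */
    #endif
        return offs;
    }
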
2014-01-14  Merge remote branch 'luiz/queue/qmp' into qmpq  (Edgar E. Iglesias)
* luiz/queue/qmp:
  migration: qmp_migrate(): keep working after syntax error
  qerror: Remove assert_no_error()
  qemu-option: Remove qemu_opts_create_nofail
  target-i386: Remove assert_no_error usage
  hw: Remove assert_no_error usages
  qdev: Delete dead code
  error: Add error_abort
  monitor: add object-add (QMP) and object_add (HMP) command
  monitor: add object-del (QMP) and object_del (HMP) command
  qom: catch errors in object_property_add_child
  qom: fix leak for objects created with -object
  rng: initialize file descriptor to -1
  qemu-monitor: HMP cpu-add wrapper
  vl: add missing transition debug->finish_migrate
Message-Id: 1389045795-18706-1-git-send-email-lcapitulino@redhat.com
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

2014-01-14  Microblaze: Convert Microblaze-pic handling to GPIOs  (Alistair Francis)
This patch uses inbound GPIO lines (IRQ and FIR) for interrupts instead of the old pic_cpu method, which doesn't correspond to real hardware. It creates the CPU's inbound IRQ and FIR GPIO lines and updates the Microblaze boards to use this new method.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Suggested-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

2014-01-14  target-arm: Switch ARMCPUInfo arrays to use terminator entries  (Peter Maydell)
Switch the ARMCPUInfo arrays in cpu.c and cpu64.c to use a terminator entry rather than looping based on ARRAY_SIZE. The latter causes compile warnings on some versions of gcc if the configure options happen to result in an empty array.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

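The pattern being switched to looks like this minimal sketch (the field names and example CPU entry are illustrative, not the actual cpu.c contents):

    #include <stdio.h>

    typedef struct {
        const char *name;          /* NULL marks the end of the array */
        void (*initfn)(void);
    } CPUInfoEntry;

    static void example_initfn(void) { }

    static const CPUInfoEntry example_cpus[] = {
        { .name = "example-cpu", .initfn = example_initfn },
        { .name = NULL }           /* terminator entry */
    };

    static void register_cpu_types(void)
    {
        /* Loop until the terminator instead of using ARRAY_SIZE, which can
         * trigger warnings when the array is conditionally empty. */
        for (const CPUInfoEntry *info = example_cpus; info->name; info++) {
            printf("registering %s\n", info->name);
            info->initfn();
        }
    }
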
2014-01-13  Merge remote-tracking branch 'quintela/tags/migration/20140113' into staging  (Anthony Liguori)
migration.next for 20140113

# gpg: Signature made Mon 13 Jan 2014 09:38:27 AM PST using RSA key ID 5872D723
# gpg: Can't check signature: public key not found

* quintela/tags/migration/20140113: (49 commits)
  migration: synchronize memory bitmap 64 bits at a time
  ram: split function that synchronizes a range
  memory: synchronize kvm bitmap using bitmap operations
  memory: move bitmap synchronization to its own function
  kvm: refactor start address calculation
  kvm: use cpu_physical_memory_* api directly for tracking dirty pages
  memory: unfold memory_region_test_and_clear()
  memory: split cpu_physical_memory_* functions into their own include
  memory: cpu_physical_memory_set_dirty_tracking() should return void
  memory: make cpu_physical_memory_reset_dirty() take a length parameter
  memory: s/dirty/clean/ in cpu_physical_memory_is_dirty()
  memory: cpu_physical_memory_clear_dirty_range() now uses bitmap operations
  memory: cpu_physical_memory_set_dirty_range() now uses bitmap operations
  memory: use find_next_bit() to find dirty bits
  memory: s/mask/clear/ cpu_physical_memory_mask_dirty_range
  memory: cpu_physical_memory_get_dirty() is used as returning a bool
  memory: make cpu_physical_memory_get_dirty() the main function
  memory: unfold cpu_physical_memory_set_dirty_flag()
  memory: unfold cpu_physical_memory_set_dirty() in its only user
  memory: unfold cpu_physical_memory_clear_dirty_flag() in its only user
  ...
Message-id: 1389634834-24181-1-git-send-email-quintela@redhat.com
Signed-off-by: Anthony Liguori <aliguori@amazon.com>

2014-01-13  migration: synchronize memory bitmap 64 bits at a time  (Juan Quintela)
We use the old code if the bitmaps are not aligned.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

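The aligned fast path amounts to something like the sketch below, with made-up names: whole unsigned longs from the source (KVM) bitmap are OR-ed into the destination (migration) bitmap, and newly dirtied pages are counted along the way.

    #include <stddef.h>
    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Assumes both bitmaps cover `nbits` bits and are long-aligned. */
    static size_t sync_dirty_words(unsigned long *dest,
                                   const unsigned long *src, size_t nbits)
    {
        size_t num_dirty = 0;
        size_t nlongs = (nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;

        for (size_t i = 0; i < nlongs; i++) {
            if (src[i]) {
                num_dirty += __builtin_popcountl(src[i] & ~dest[i]);
                dest[i] |= src[i];
            }
        }
        return num_dirty;
    }
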
2014-01-13  ram: split function that synchronizes a range  (Juan Quintela)
This function is the only bit where we care about speed.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: synchronize kvm bitmap using bitmap operations  (Juan Quintela)
If the bitmaps are properly aligned, use bitmap operations. If they are not, just use the old bit-at-a-time code.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: move bitmap synchronization to its own function  (Juan Quintela)
We want to keep all the functions that directly handle the dirty bitmap close together; we will change them later.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  kvm: refactor start address calculation  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  kvm: use cpu_physical_memory_* api directly for tracking dirty pages  (Juan Quintela)
Performance is important in this function, and we want to optimize it even further.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: unfold memory_region_test_and_clear()  (Juan Quintela)
We are going to update the bitmap directly.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: split cpu_physical_memory_* functions into their own include  (Juan Quintela)
All the functions that use ram_addr_t should be here.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: cpu_physical_memory_set_dirty_tracking() should return void  (Juan Quintela)
The result was always 0 and not used anywhere. While at it, use a bool type for the parameter.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: make cpu_physical_memory_reset_dirty() take a length parameter  (Juan Quintela)
We have an end parameter in all the callers, and this makes the function coherent with the rest of the cpu_physical_memory_* functions, which also take a length parameter. While at it, move the start/end calculation into tlb_reset_dirty_range_all(), as we don't need it here anymore.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: s/dirty/clean/ in cpu_physical_memory_is_dirty()  (Juan Quintela)
All uses except one really want the other meaning.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: cpu_physical_memory_clear_dirty_range() now uses bitmap operations  (Juan Quintela)
We were clearing a range of bits, so use bitmap_clear().
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: cpu_physical_memory_set_dirty_range() now uses bitmap operations  (Juan Quintela)
We were setting a range of bits, so use bitmap_set(). Note: xen has always been wrong, and should have used start instead of addr from the beginning.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: use find_next_bit() to find dirty bits  (Juan Quintela)
This operation is way faster than doing it bit by bit.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

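The speedup comes from skipping all-clear words instead of testing every bit. A self-contained re-implementation of the pattern (not QEMU's util/bitops code) looks like this:

    #include <stddef.h>
    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Return the index of the first set bit >= start, or size if none. */
    static size_t find_next_bit(const unsigned long *map, size_t size,
                                size_t start)
    {
        size_t nlongs = (size + BITS_PER_LONG - 1) / BITS_PER_LONG;
        size_t i = start / BITS_PER_LONG;
        unsigned long word;

        if (start >= size) {
            return size;
        }
        word = map[i] & (~0UL << (start % BITS_PER_LONG));
        while (!word) {                      /* skip over all-clear words */
            if (++i >= nlongs) {
                return size;
            }
            word = map[i];
        }
        i = i * BITS_PER_LONG + __builtin_ctzl(word);
        return i < size ? i : size;
    }

    /* Visit every dirty page in [0, npages). */
    static void walk_dirty_pages(const unsigned long *dirty, size_t npages)
    {
        for (size_t page = find_next_bit(dirty, npages, 0);
             page < npages;
             page = find_next_bit(dirty, npages, page + 1)) {
            /* ... handle dirty page `page` ... */
        }
    }
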
2014-01-13  memory: s/mask/clear/ cpu_physical_memory_mask_dirty_range  (Juan Quintela)
Now all functions use the same wording as the bitops/bitmap operations.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: cpu_physical_memory_get_dirty() is used as returning a bool  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: make cpu_physical_memory_get_dirty() the main function  (Juan Quintela)
And make cpu_physical_memory_get_dirty_flag() use it; it used to be the other way around.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: unfold cpu_physical_memory_set_dirty_flag()  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: unfold cpu_physical_memory_set_dirty() in its only user  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: unfold cpu_physical_memory_clear_dirty_flag() in its only user  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: split dirty bitmap into three  (Juan Quintela)
After all the previous patches, splitting the bitmap is straightforward. Note: for some reason, I have to move the DIRTY_MEMORY_* definitions to the beginning of memory.h to make compilation work.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

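Conceptually, the post-split layout keeps one bitmap per dirty-memory client rather than flag bits packed per page. A sketch under that assumption (the DIRTY_MEMORY_* names are the constants referred to above; the exact values, struct and helper are illustrative):

    #include <stdbool.h>
    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    enum {
        DIRTY_MEMORY_VGA       = 0,
        DIRTY_MEMORY_CODE      = 1,
        DIRTY_MEMORY_MIGRATION = 2,
        DIRTY_MEMORY_NUM       = 3,   /* number of dirty-memory clients */
    };

    typedef struct {
        /* One bitmap per client instead of flag bits per page. */
        unsigned long *dirty_memory[DIRTY_MEMORY_NUM];
    } DirtyBitmaps;

    static bool is_page_dirty(const DirtyBitmaps *d, unsigned long page,
                              unsigned client)
    {
        const unsigned long *map = d->dirty_memory[client];
        return map[page / BITS_PER_LONG] & (1UL << (page % BITS_PER_LONG));
    }
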
2014-01-13  bitmap: Add bitmap_zero_extend operation  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

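The operation amounts to growing the allocation and making sure every newly added bit reads as zero. A standalone sketch of those semantics, using plain realloc and a simple bit-clearing loop (the helper the commit adds uses QEMU's own bitmap and allocation APIs):

    #include <stdlib.h>
    #include <limits.h>

    #define BITS_PER_LONG    (sizeof(unsigned long) * CHAR_BIT)
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    /* Grow a bitmap from old_nbits to new_nbits, clearing the new tail. */
    static unsigned long *bitmap_zero_extend(unsigned long *old,
                                             long old_nbits, long new_nbits)
    {
        unsigned long *map = realloc(old,
                                     BITS_TO_LONGS(new_nbits) * sizeof(*map));

        if (map) {
            for (long i = old_nbits; i < new_nbits; i++) {
                map[i / BITS_PER_LONG] &= ~(1UL << (i % BITS_PER_LONG));
            }
        }
        return map;
    }
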
2014-01-13  memory: cpu_physical_memory_clear_dirty_flag() result is never used  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: only resize dirty bitmap when memory size increases  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: make sure that client is always inside range  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: use bit 2 for migration  (Juan Quintela)
For historical reasons it was bit 3. While at it, create a constant for the number of clients.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2014-01-13  memory: cpu_physical_memory_mask_dirty_range() always clears a single flag  (Juan Quintela)
Document it.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>