path: root/softmmu_template.h
2016-07-08  cputlb: Fix for self-modifying writes across page boundaries  (Samuel Damashek)
As it currently stands, QEMU does not properly handle self-modifying code when the write is unaligned and crosses a page boundary. The procedure for handling a write to the current translation block is to write-protect the current translation block, catch the write, split up the translation block into the current instruction (which remains write-protected so that the current instruction is not modified) and the remaining instructions in the translation block, and then restore the CPU state to before the write occurred so the write will be retried and successfully executed.

However, since unaligned writes across pages are split into one-byte writes for simplicity, writes to the second page (which is not the current TB) may succeed before a write to the current TB is attempted, and since these writes are not invalidated before resuming state after splitting the TB, these writes will be performed a second time, thus corrupting the second page. Credit goes to Patrick Hulin for discovering this.

In recent 64-bit versions of Windows running in emulated mode, this results in either being very unstable (a BSOD after a couple minutes of uptime), or being entirely unable to boot. Windows performs one or more 8-byte unaligned self-modifying writes (xors) which intersect the end of the current TB and the beginning of the next TB, which runs into the aforementioned issue.

This commit fixes that issue by making the unaligned write loop perform the writes in forward order, instead of reverse order. This way, QEMU immediately tries to write to the current TB, and splits the TB before any write to the second page is executed. The write then proceeds as intended. With this patch applied, I am able to boot and use Windows 7 64-bit and Windows 10 64-bit in QEMU without KVM.

Per Richard Henderson's input, this patch also ensures the second page is in the TLB before executing the write loop, to ensure the second page is mapped.

The original discussion of the issue is located at http://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg02161.html.

Signed-off-by: Samuel Damashek <samuel.damashek@invincea.com>
Message-Id: <20160706182652.16190-1-samuel.damashek@invincea.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
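To make the ordering fix concrete, here is a minimal, self-contained sketch of the forward-order decomposition (names invented; store_byte() stands in for QEMU's per-byte store helper, and QEMU additionally probes the TLB for the second page before entering the loop):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-in for QEMU's per-byte store helper: in QEMU,
     * each such call may hit a write-protected page, trigger TB
     * splitting, and cause the whole access to be retried. */
    static void store_byte(uint8_t *mem, size_t addr, uint8_t val)
    {
        mem[addr] = val;
    }

    /* Forward-order decomposition of an unaligned little-endian store.
     * Writing byte 0 first means the byte that lands in the current
     * (write-protected) TB is attempted before anything reaches the
     * second page, so no second-page byte can be replayed when the
     * access is restarted. */
    static void store_unaligned_le(uint8_t *mem, size_t addr,
                                   uint64_t val, size_t size)
    {
        for (size_t i = 0; i < size; i++) {
            store_byte(mem, addr + i, (uint8_t)(val >> (i * 8)));
        }
    }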
2016-07-08  cputlb: Add address parameter to VICTIM_TLB_HIT  (Samuel Damashek)
[rth: Split out from the original patch.]

Signed-off-by: Samuel Damashek <samuel.damashek@invincea.com>
Message-Id: <20160706182652.16190-1-samuel.damashek@invincea.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-08  cputlb: Move VICTIM_TLB_HIT out of line  (Richard Henderson)
There are currently 22 invocations of this function, and we're about to increase that number.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-05  tcg: Improve the alignment check infrastructure  (Sergey Sorokin)
Some architectures (e.g. ARMv8) require addresses to be aligned to a size greater than the size of the memory access itself. QEMU's current, essentially free alignment check is enough to support such a check, but we need a way to specify the required alignment size.

Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
[rth: Assert in tcg_canonicalize_memop. Leave get_alignment_bits available for, though unused by, user-mode. Retain logging difference based on ALIGNED_ONLY.]
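Once the required alignment is expressed as a bit count (what the commit's get_alignment_bits() extracts from the memory op), the check stays a single mask test even when the alignment exceeds the access size. A minimal sketch with illustrative names, not QEMU's internal code:

    #include <stdbool.h>
    #include <stdint.h>

    /* a_bits is the required alignment as log2(bytes); it may exceed
     * the access size, as ARMv8 requires. */
    static bool is_misaligned(uint64_t addr, unsigned a_bits)
    {
        return (addr & ((UINT64_C(1) << a_bits) - 1)) != 0;
    }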
2016-01-21  exec.c: Pass MemTxAttrs to iotlb_to_region so it uses the right AS  (Peter Maydell)
Pass the MemTxAttrs for the memory access to iotlb_to_region(); this allows it to determine the correct AddressSpace to use for the lookup.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2015-09-11  softmmu: remove now unused functions  (Pavel Dovgalyuk)
Now that the cpu_ld/st_* functions directly call helper_ret_ld/st, we can drop the old helper_ld/st functions.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150710095656.13280.7085.stgit@PASHA-ISP>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-09-11  softmmu: add helper function to pass through retaddr  (Pavel Dovgalyuk)
This patch introduces several helpers for passing a return address that points into the TB. A correct return address allows correct restoration of the guest PC and icount. These functions should be used when helpers embedded into a TB invoke memory operations.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150710095650.13280.32255.stgit@PASHA-ISP>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-14  exec: drop cpu_can_do_io, just read cpu->can_do_io  (Paolo Bonzini)
After commit 626cf8f (icount: set can_do_io outside TB execution, 2014-12-08), can_do_io is set to 1 if not executing code. It is no longer necessary to make this assumption in cpu_can_do_io.

It is also possible to remove the use_icount test, simply by never setting cpu->can_do_io to 0 unless use_icount is true. With these changes cpu_can_do_io boils down to a read of cpu->can_do_io.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-11  softmmu: Add probe_write()  (Yongbok Kim)
Probe for whether the specified guest write access is permitted. If it is not permitted then an exception will be taken in the same way as if this were a real write access (and we will not return). Otherwise the function will return, and there will be a valid entry in the TLB for this access.

Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
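A hedged sketch of the pattern this enables (helper_store_pair is invented; QEMU-internal types and the probe_write() signature from this commit are assumed to be in scope). Probing every page up front means a multi-part store either faults before any byte is written or completes in full:

    /* Hypothetical helper performing a 16-byte store as two 8-byte
     * stores without risking a fault halfway through. */
    static void helper_store_pair(CPUArchState *env, target_ulong addr,
                                  uint64_t lo, uint64_t hi,
                                  int mmu_idx, uintptr_t retaddr)
    {
        /* Either probe may raise a guest exception and not return... */
        probe_write(env, addr, mmu_idx, retaddr);
        probe_write(env, addr + 8, mmu_idx, retaddr);

        /* ...so past this point both pages are writable and present
         * in the TLB, and the stores below cannot fault. */
        cpu_stq_data(env, addr, lo);
        cpu_stq_data(env, addr + 8, hi);
    }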
2015-05-14  tcg: Add MO_ALIGN, MO_UNALN  (Richard Henderson)
These modifiers control, on a per-memory-op basis, whether unaligned memory accesses are allowed. The default setting reflects the target's definition of ALIGNED_ONLY.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
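For example, a front end wanting a little-endian 32-bit load checked for natural alignment would emit something like the following (the call site is invented; the function and flag names are the real TCG ones):

    /* Trap via do_unaligned_access if addr is not 4-byte aligned. */
    tcg_gen_qemu_ld_i32(dest, addr, mem_idx, MO_LEUL | MO_ALIGN);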
2015-05-14  tcg: Push merged memop+mmu_idx parameter to softmmu routines  (Richard Henderson)
The extra information is not yet used but it is now available. This requires minor changes through all of the tcg backends.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-04-26  Add MemTxAttrs to the IOTLB  (Peter Maydell)
Add a MemTxAttrs field to the IOTLB, and allow target-specific code to set it via a new tlb_set_page_with_attrs() function; pass the attributes through to the device when making IO accesses.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
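A sketch of how a target's TLB-fill path might use the new entry point (variable names are assumptions; the function name follows the commit). Until a target fills in real attributes, MEMTXATTRS_UNSPECIFIED keeps the old behaviour:

    /* Inside a target's tlb_fill path; vaddr, paddr, prot and mmu_idx
     * are assumed to come from the page-table walk. */
    MemTxAttrs attrs = MEMTXATTRS_UNSPECIFIED;
    tlb_set_page_with_attrs(cs, vaddr & TARGET_PAGE_MASK,
                            paddr & TARGET_PAGE_MASK, attrs,
                            prot, mmu_idx, TARGET_PAGE_SIZE);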
2015-04-26  Make CPU iotlb a structure rather than a plain hwaddr  (Peter Maydell)
Make the CPU iotlb a structure rather than a plain hwaddr; this will allow us to add transaction attributes to it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2015-04-26  memory: Replace io_mem_read/write with memory_region_dispatch_read/write  (Peter Maydell)
Rather than retaining io_mem_read/write as simple wrappers around the memory_region_dispatch_read/write functions, make the latter public and change all the callers to use them, since we need to touch all the callsites anyway to add MemTxAttrs and MemTxResult support. Delete io_mem_read and io_mem_write entirely.

(All the callers currently pass MEMTXATTRS_UNSPECIFIED and convert the return value back to bool or ignore it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
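A sketch of a converted call site (variables assumed in scope): the dispatch functions take explicit attributes and return a MemTxResult rather than a bool:

    uint64_t val;
    MemTxResult r = memory_region_dispatch_read(mr, addr, &val, 4,
                                                MEMTXATTRS_UNSPECIFIED);
    if (r != MEMTX_OK) {
        /* The transaction failed; callers can now propagate this
         * instead of ignoring it. */
    }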
2015-02-16  exec: make iotlb RCU-friendly  (Paolo Bonzini)
After the previous patch, TLBs will be flushed on every change to the memory mapping. This patch augments that with synchronization of the MemoryRegionSections referred to in the iotlb array.

With this change, it is guaranteed that iotlb_to_region will access the correct memory map, even once the TLB is accessed outside the BQL.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-11-03  softmmu: provide softmmu access type enum  (Leon Alrae)
New MIPS features depend on the access type, and an enum is more convenient than using the numbers directly.

Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
2014-09-01  Implement victim TLB for QEMU system emulated TLB  (Xin Tong)
QEMU system-mode page table walks are expensive. Measured by running qemu-system-x86_64 under Intel PIN, a TLB miss and the walk of a 4-level page table in a guest Linux OS take ~450 x86 instructions on average.

The QEMU system-mode TLB is implemented as a directly-mapped hash table, a structure that suffers from conflict misses. Increasing the associativity of the TLB may not solve conflict misses, as all the ways may have to be walked in serial.

A victim TLB is a TLB used to hold translations evicted from the primary TLB upon replacement. It lies between the main TLB and its refill path, and has greater associativity (fully associative in this patch). Looking up the victim TLB takes longer, but it is likely better than a full page table walk. The memory translation path is changed as follows:

Before victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. TLB refill.
5. Do the memory access.
6. Return to code cache.

After victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. Victim TLB lookup.
5. If victim TLB misses, TLB refill
6. Do the memory access.
7. Return to code cache.

The advantage is that the victim TLB adds associativity to a directly-mapped TLB, and thus potentially fewer page table walks, while still keeping flush times within reasonable limits. The cost is that placing the victim TLB before the refill path lengthens that path, since the victim TLB is consulted before the TLB refill. The performance results demonstrate that the pros outweigh the cons.

Performance results taken on SPECINT2006 train datasets, a kernel boot, and the QEMU configure script on an Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Linux machine are shown in the Google Doc link below:

https://docs.google.com/spreadsheets/d/1eiItzekZwNQOal_h-5iJmC4tMDi051m9qidi5_nwvH4/edit?usp=sharing

In summary, the victim TLB improves the performance of qemu-system-x86_64 by 11% on average across SPECINT2006, the kernel boot, and the QEMU configure script, with the highest improvement of 26% in 456.hmmer, and it does not degrade performance in any of the measured benchmarks. Furthermore, the implemented victim TLB is architecture-independent and is expected to benefit other architectures in QEMU as well. Although there are measurement fluctuations, the performance improvement is significant and by no means within the range of noise.

Signed-off-by: Xin Tong <trent.tong@gmail.com>
Message-id: 1407202523-23553-1-git-send-email-trent.tong@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
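A minimal, self-contained sketch of the idea (sizes and names are illustrative; QEMU's real version also swaps the matching iotlb entry and compares against the per-access-type address fields): on a primary-TLB miss, scan the small fully associative victim table, and on a hit swap the two entries so the hot translation moves back into the fast direct-mapped table:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_SIZE  256   /* direct-mapped primary TLB (power of two) */
    #define VTLB_SIZE   8   /* small, fully associative victim TLB */

    typedef struct {
        uint64_t vaddr;     /* guest virtual page address */
        intptr_t addend;    /* host minus guest offset for RAM pages */
    } TLBEntry;

    static TLBEntry tlb[TLB_SIZE];
    static TLBEntry vtlb[VTLB_SIZE];

    /* Called on a primary-TLB miss, before falling back to the page
     * table walk; index is the primary slot the page hashes to. */
    static bool victim_tlb_hit(uint64_t page, unsigned index)
    {
        for (int v = 0; v < VTLB_SIZE; v++) {
            if (vtlb[v].vaddr == page) {
                /* Swap: the hot entry returns to the primary TLB and
                 * the evicted primary entry becomes the new victim. */
                TLBEntry tmp = tlb[index];
                tlb[index] = vtlb[v];
                vtlb[v] = tmp;
                return true;
            }
        }
        return false;   /* miss: caller does the full TLB refill */
    }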
2014-06-05  softmmu: move softmmu_template.h out of include/  (Paolo Bonzini)
It is only included in cputlb.c now.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19  exec: move include files to include/exec/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-11-03  tcg: Add extended GETPC mechanism for MMU helpers with ldst optimization  (Yeongkyoon Lee)
Add GETPC_EXT, which is used by MMU helpers to selectively calculate the code address of the guest memory access, whether called from qemu_ld/st optimized code or from a C function. Currently it supports only i386 and x86-64 hosts.

Signed-off-by: Yeongkyoon Lee <yeongkyoon.lee@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-23  Rename target_phys_addr_t to hwaddr  (Avi Kivity)
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are reserved), and its purpose doesn't match the name (most target_phys_addr_t addresses are not target specific). Replace it with a finger-friendly, standards-conformant hwaddr.

Outstanding patchsets can be fixed up with the command

  git rebase -i --exec 'find -name "*.[ch]" | xargs sed -i "s/target_phys_addr_t/hwaddr/g"' origin

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-09-15  Remove unused CONFIG_TCG_PASS_AREG0 and dead code  (Blue Swirl)
Now that CONFIG_TCG_PASS_AREG0 is enabled for all targets, remove the dead code and support for the !CONFIG_TCG_PASS_AREG0 case.

Remove dyngen-exec.h and all references to it. Although it is included by hw/spapr_hcall.c, it does not seem to be used there.

Remove unused HELPER_CFLAGS.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-04-15  w64: Fix data types in softmmu*.h  (Stefan Weil)
w64 requires uintptr_t.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-14  Use uintptr_t for various op related functions  (Blue Swirl)
Use uintptr_t instead of void * or unsigned long in several op related functions, in env->mem_io_pc, and in the GETPC() macro.

Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-18  softmmu templates: optionally pass CPUState to memory access functions  (Blue Swirl)
Optionally, make memory access helpers take a parameter for CPUState instead of relying on global env. On most targets, perform simple moves to reorder registers. On i386, switch from regparm(3) calling convention to standard stack-based version.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-18  i386: Remove REGPARM  (Blue Swirl)
Use stack based calling convention (GCC default) for interfacing with generated code instead of register based convention (regparm(3)).

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-08  memory: dispatch directly via MemoryRegion  (Avi Kivity)
Instead of indirecting via io_mem_region, dispatch directly through the MemoryRegion obtained from the iotlb or phys_page_find().

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08  memory: store section indices in iotlb instead of io indices  (Avi Kivity)
A step towards eliminating io indices.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04  Remove IO_MEM_SHIFT  (Avi Kivity)
We no longer use any of the lower bits of a ram_addr, so we might as well use them for the io table index. This increases the number of potential I/O handlers by a factor of 8.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Convert IO_MEM_{RAM,ROM,UNASSIGNED,NOTDIRTY} to MemoryRegions  (Avi Kivity)
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED, and IO_MEM_NOTDIRTY io handlers to MemoryRegions. These aren't real regions, since they are never added to the memory hierarchy, but they allow reuse of the dispatch functionality.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Avoid range comparisons on io index types  (Avi Kivity)
The code sometimes uses range comparisons on io indexes (e.g. index <= IO_MEM_ROM). Avoid these as they make moving to objects harder.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  memory: move mmio access to functions  (Avi Kivity)
Currently mmio access goes directly to the io_mem_{read,write} arrays. In preparation for eliminating them, add indirection via a function.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2011-10-01  softmmu_header: pass CPUState to tlb_fill  (Blue Swirl)
Pass CPUState pointer to tlb_fill() instead of architecture local cpu_single_env hacks.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-10-01  Document softmmu templates  (Blue Swirl)
Add some comments to describe each file.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-04-05  Split TLB addend and target_phys_addr_t  (Paul Brook)
Historically the qemu tlb "addend" field was used for both RAM and IO accesses, so it needed to be able to hold both host addresses (unsigned long) and guest physical addresses (target_phys_addr_t). However, since the introduction of the iotlb field it has only been used for RAM accesses.

This means we can change the type of addend to unsigned long and remove the associated hacks in the big-endian TCG backends. We can also remove the host dependence from target_phys_addr_t.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-29  Compile qemu-timer only once  (Blue Swirl)
Arrange various declarations so that non-CPU code can also access them; adjust users. Move CPU specific code to cpus.c.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-01-28  softmmu: Don't clobber retaddr in slow_ldx()  (Edgar E. Iglesias)
When splitting up unaligned IO accesses, ld calls slow_ld, which was clobbering retaddr. AFAIK the problem only shows up when running emulations with -icount, which may abort TB execution on IO accesses.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2009-10-01  Revert "Get rid of _t suffix"  (Anthony Liguori)
At the very least, a change like this requires discussion on the list. The naming convention is goofy and it causes a massive merge problem. Something like this _must_ be presented on the list first so people can provide input and cope with it.

This reverts commit 99a0949b720a0936da2052cb9a46db04ffc6db29.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-01  Get rid of _t suffix  (malc)
Some not so obvious bits; slirp and Xen were left alone for the time being.

Signed-off-by: malc <av1474@comtv.ru>
2009-08-24  Unbreak large mem support by removing kqemu  (Anthony Liguori)
kqemu introduces a number of restrictions on the i386 target. The worst is that it prevents large memory from working in the default build.

Furthermore, kqemu is fundamentally flawed in a number of ways. It relies on the TSC as a time source, which will not be reliable on a multiple processor system in userspace. Since most modern processors are multicore, this severely limits the utility of kqemu.

kvm is a viable alternative for people looking to accelerate qemu and has the benefit of being supported by the upstream Linux kernel. If someone can implement workarounds to remove the restrictions introduced by kqemu, I'm happy to avoid and/or revert this patch.

N.B. kqemu will still function in the 0.11 series, but this patch removes it from the 0.12 series.

Paul, please Ack or Nack this patch.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-16  Update to a hopefully more future proof FSF address  (Blue Swirl)
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-04-19  kqemu: merge CONFIG_KQEMU and USE_KQEMU  (blueswir1)
Basically a recursive ":%s/USE_KQEMU/CONFIG_KQEMU/g".

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7189 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-04  Update FSF address in GPL/LGPL boilerplate  (aurel32)
The attached patch updates the FSF address in the GPL/LGPL boilerplate in most GPL/LGPLed files, and also in COPYING.LIB.

Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6162 c046a42c-6fe2-441c-8c8c-71466251a162
2008-11-18  Set mem_io_vaddr on io_read (Jan Kiszka)  (aliguori)
Analogously to write accesses, we have to save the memory address also on read accesses in order to support read watchpoints.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5739 c046a42c-6fe2-441c-8c8c-71466251a162
2008-06-29  Add instruction counter  (pbrook)
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@4799 c046a42c-6fe2-441c-8c8c-71466251a162
2008-06-09  Clean up MMIO TLB handling  (pbrook)
The IO index is now stored in its own field, instead of being wedged into the vaddr field. This eliminates the ROMD and watchpoint host pointer weirdness. The IO index space is expanded by 1 bit, and several additional bits are made available in the TLB vaddr field.

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@4704 c046a42c-6fe2-441c-8c8c-71466251a162
2008-01-31  Use simpler REGPARM convention; make CPUTLBEntry size a power of two  (bellard)
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3935 c046a42c-6fe2-441c-8c8c-71466251a162
2007-11-17  Don't compare '\0' against pointers  (balrog)
Add a note from Fabrice in the slow_st template.

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3669 c046a42c-6fe2-441c-8c8c-71466251a162
2007-11-17  Check permissions for the last byte first in unaligned slow_st accesses (patch from TeLeMan)  (balrog)
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3665 c046a42c-6fe2-441c-8c8c-71466251a162
2007-10-14  Replace is_user variable with mmu_idx in softmmu core, allowing support of more than 2 MMU access modes  (j_mayer)
Add a backward-compatibility is_user variable in target code where needed. Implement a per-target cpu_mmu_index function, avoiding duplicated code and #ifdef TARGET_xxx in the softmmu core functions. Implement per-target MMU mode definitions. As an example, add a PowerPC hypervisor mode definition and Alpha executive and kernel mode definitions. Optimize the PowerPC case, precomputing mmu_idx when the MSR register changes and using the same definition in code translation.

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3384 c046a42c-6fe2-441c-8c8c-71466251a162
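A hedged sketch of the per-target hook this introduces, along the lines of the PowerPC optimization described above (the field name env->mmu_idx is an assumption): because the index is precomputed whenever the MSR changes, the hook reduces to a plain field read:

    /* Illustrative per-target implementation; env->mmu_idx is assumed
     * to be recomputed by the MSR-update code. */
    static inline int cpu_mmu_index(CPUPPCState *env)
    {
        return env->mmu_idx;
    }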