path: root/include/sysemu/iothread.h
2023-09-08  include/: spelling fixes  (Michael Tokarev)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2022-05-09  Introduce event-loop-base abstract class  (Nicolas Saenz Julienne)
Introduce the 'event-loop-base' abstract class. It holds the properties common to all event loops and provides the necessary hooks for their creation and maintenance; iothread then inherits from it.

EventLoopBaseClass is defined as user creatable and provides a hook for its children to attach themselves to the user creatable class 'complete' function. It also provides an update_params() callback to propagate property changes onto its children.

The new 'event-loop-base' class lives in the root directory. It is built on its own using the 'link_whole' option (there are no direct function dependencies between the class and its children; it all happens through 'constructor' magic). It also imposes new compilation dependencies:

    qom <- event-loop-base <- blockdev (iothread.c)

and in subsequent patches:

    qom <- event-loop-base <- qemuutil (util/main-loop.c)

All this forced some amount of reordering in meson.build:

 - Moved the qom build definition before qemuutil. Doing it the other way around (i.e. moving qemuutil after qom) isn't possible, as a lot of core libraries that live in between the two depend on it.

 - Process the 'hw' subdir earlier, as it introduces files into the 'qom' source set.

No functional changes intended.

Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20220425075723.20019-2-nsaenzju@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
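A minimal sketch of the inheritance this change implies, assuming QEMU's QOM TypeInfo registration API; the TYPE_EVENT_LOOP_BASE name and the field values are illustrative rather than copied from iothread.c:

    /* Hypothetical registration: iothread derives from the abstract base. */
    #include "qemu/osdep.h"
    #include "qemu/module.h"
    #include "qom/object.h"
    #include "sysemu/iothread.h"

    #define TYPE_EVENT_LOOP_BASE "event-loop-base"   /* assumed type name */

    static const TypeInfo iothread_sketch_info = {
        .name          = "iothread",
        .parent        = TYPE_EVENT_LOOP_BASE,   /* inherit the common event-loop properties */
        .instance_size = sizeof(IOThread),
    };

    static void iothread_sketch_register_types(void)
    {
        type_register_static(&iothread_sketch_info);
    }

    type_init(iothread_sketch_register_types)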
2021-07-21  iothread: add aio-max-batch parameter  (Stefano Garzarella)
The `aio-max-batch` parameter is propagated to the AIO engines and is used to control the maximum number of queued requests. When the number of queued requests reaches `aio-max-batch`, the engine invokes the system call to forward the requests to the kernel.

This parameter lets us cap the batch size and so reduce the latency that requests can accumulate while queued in the AIO engine. If `aio-max-batch` is 0 (the default value), the AIO engine uses its own default maximum batch size.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20210721094211.69853-3-sgarzare@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
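As a rough illustration of how the knob is wired up on the command line (the disk and device arguments are assumptions; only the aio-max-batch property comes from this commit):

    qemu-system-x86_64 \
        -object iothread,id=iothread0,aio-max-batch=32 \
        -drive if=none,id=drive0,file=disk.img,format=raw \
        -device virtio-blk-pci,drive=drive0,iothread=iothread0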
2021-02-10  multi-process: define MPQemuMsg format and transmission functions  (Elena Ufimtseva)
Defines MPQemuMsg, which is the message that is sent to the remote process. This message is sent over QIOChannel and is used to command the remote process to perform various tasks. Define transmission functions used by the proxy and by the remote process.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 56ca8bcf95195b2b195b08f6b9565b6d7410bce5.1611938319.git.jag.raman@oracle.com
[Replace struct iovec send[2] = {0} with {} to make clang happy as suggested by Peter Maydell <peter.maydell@linaro.org>. --Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
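For orientation, a generic sketch of what such a command message might contain; the field names below are illustrative and do not reflect the actual MPQemuMsg layout:

    /* Illustrative only: NOT the real MPQemuMsg definition. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint32_t cmd;         /* which task the remote process should perform */
        size_t   size;        /* length of the payload that follows */
        uint64_t payload[2];  /* small inline payload */
        int      fds[8];      /* file descriptors passed over the QIOChannel */
        int      num_fds;
    } ProxyMsgSketch;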
2020-09-09  Use DECLARE_*CHECKER* macros  (Eduardo Habkost)
Generated using:

    $ ./scripts/codeconverter/converter.py -i \
        --pattern=TypeCheckMacro $(git grep -l '' -- '*.[ch]')

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20200831210740.126168-12-ehabkost@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20200831210740.126168-13-ehabkost@redhat.com>
Message-Id: <20200831210740.126168-14-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
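For reference, the converted pattern in a header such as iothread.h looks roughly like this (a sketch assuming the DECLARE_INSTANCE_CHECKER macro from qom/object.h; the exact header contents may differ):

    #include "qom/object.h"

    #define TYPE_IOTHREAD "iothread"

    typedef struct IOThread IOThread;

    /* Generates the IOTHREAD(obj) cast/check helper. */
    DECLARE_INSTANCE_CHECKER(IOThread, IOTHREAD, TYPE_IOTHREAD)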
2020-09-09  Move QOM typedefs and add missing includes  (Eduardo Habkost)
Some typedefs and macros are defined after the type check macros. This makes it difficult to automatically replace their definitions with OBJECT_DECLARE_TYPE.

Patch generated using:

    $ ./scripts/codeconverter/converter.py -i \
        --pattern=QOMStructTypedefSplit $(git grep -l '' -- '*.[ch]')

which will split "typedef struct { ... } TypedefName" declarations.

Followed by:

    $ ./scripts/codeconverter/converter.py -i --pattern=MoveSymbols \
        $(git grep -l '' -- '*.[ch]')

which will:
 - move the typedefs and #defines above the type check macros
 - add missing #include "qom/object.h" lines if necessary

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20200831210740.126168-9-ehabkost@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20200831210740.126168-10-ehabkost@redhat.com>
Message-Id: <20200831210740.126168-11-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-08-16  include: Make headers more self-contained  (Markus Armbruster)
Back in 2016, we discussed[1] rules for headers, and these were generally liked:

 1. Have a carefully curated header that's included everywhere first. We got that already thanks to Peter: osdep.h.

 2. Headers should normally include everything they need beyond osdep.h. If exceptions are needed for some reason, they must be documented in the header. If all that's needed from a header is typedefs, put those into qemu/typedefs.h instead of including the header.

 3. Cyclic inclusion is forbidden.

This patch gets include/ closer to obeying 2.

It's actually extracted from my "[RFC] Baby steps towards saner headers" series[2], which demonstrates a possible path towards checking 2 automatically. It passes the RFC test there.

[1] Message-ID: <87h9g8j57d.fsf@blackfin.pond.sub.org>
    https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg03345.html
[2] Message-Id: <20190711122827.18970-1-armbru@redhat.com>
    https://lists.nongnu.org/archive/html/qemu-devel/2019-07/msg02715.html

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-2-armbru@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-03-08  iothread: create the gcontext unconditionally  (Peter Xu)
In the existing code we create the gcontext dynamically on the caller's first access to it. That brings some complexity and potential races when using iothread. Since the context itself is not that big a resource, and we won't have millions of iothreads, let's simply create the gcontext unconditionally.

This is also preparation for moving the thread context push operation earlier than before (currently it is only pushed right before we want to start running the gmainloop).

Remove the g_once since it is no longer necessary, and introduce a new run_gcontext boolean to show whether we want to run the gcontext.

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20190306115532.23025-3-peterx@redhat.com
Message-Id: <20190306115532.23025-3-peterx@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
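A sketch of the "create it unconditionally" idea using plain GLib calls; the struct and field names are illustrative, not the exact ones in iothread.c:

    #include <glib.h>
    #include <stdbool.h>

    typedef struct {
        GMainContext *worker_context;
        GMainLoop *main_loop;
        bool run_gcontext;    /* should the event loop actually run the gcontext? */
    } IOThreadSketch;

    static void iothread_sketch_init(IOThreadSketch *iothread)
    {
        /* No g_once() lazy-init: a GMainContext is cheap, so create it up front. */
        iothread->worker_context = g_main_context_new();
        iothread->main_loop = g_main_loop_new(iothread->worker_context, TRUE);
        iothread->run_gcontext = false;
    }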
2019-03-08  iothread: replace init_done_cond with a semaphore  (Peter Xu)
Sending an init-done message with lock+cond seems like overkill. Replace it with a simpler semaphore.

Meanwhile, initialize the semaphore unconditionally, so we can also destroy it unconditionally in finalize, which is cleaner.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20190306115532.23025-2-peterx@redhat.com
Message-Id: <20190306115532.23025-2-peterx@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
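A minimal sketch of the lock+cond to semaphore simplification, assuming the qemu/thread.h helpers (qemu_sem_init/post/wait); the surrounding worker code is illustrative:

    #include "qemu/osdep.h"
    #include "qemu/thread.h"

    static QemuSemaphore init_done_sem;

    static void *worker_thread_fn(void *opaque)
    {
        /* ... set up the AioContext / GMainContext ... */
        qemu_sem_post(&init_done_sem);   /* signal "init done" exactly once */
        /* ... run the event loop until asked to stop ... */
        return NULL;
    }

    static void start_worker(QemuThread *thread)
    {
        qemu_sem_init(&init_done_sem, 0);
        qemu_thread_create(thread, "worker", worker_thread_fn, NULL,
                           QEMU_THREAD_JOINABLE);
        qemu_sem_wait(&init_done_sem);   /* block until the thread is ready */
    }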
2018-03-08  vl: introduce vm_shutdown()  (Stefan Hajnoczi)
Commit 00d09fdbbae5f7864ce754913efc84c12fdf9f1a ("vl: pause vcpus before stopping iothreads") and commit dce8921b2baaf95974af8176406881872067adfa ("iothread: Stop threads before main() quits") tried to work around the fact that emulation was still active during termination by stopping iothreads. They suffer from race conditions:

 1. virtio_scsi_handle_cmd_vq() racing with iothread_stop_all() hits the virtio_scsi_ctx_check() assertion failure because the BDS AioContext has been modified by iothread_stop_all().

 2. Guest vq kick racing with main loop termination leaves a readable ioeventfd that is handled by the next aio_poll() when external clients are enabled again, resulting in unwanted emulation activity.

This patch obsoletes those commits by fully disabling emulation activity when vcpus are stopped.

Use the new vm_shutdown() function instead of pause_all_vcpus() so that vm change state handlers are invoked too. Virtio devices will now stop their ioeventfds, preventing further emulation activity after vm_stop(). Note that vm_stop(RUN_STATE_SHUTDOWN) cannot be used because it emits a QMP STOP event that may affect existing clients.

It is no longer necessary to call replay_disable_events() directly since vm_shutdown() does so already.

Drop iothread_stop_all() since it is no longer used.

Cc: Fam Zheng <famz@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180307144205.20619-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
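The shutdown ordering this implies, as a hedged sketch rather than the exact vl.c code (declarations assumed to come from the sysemu and block headers):

    #include "qemu/osdep.h"
    #include "sysemu/sysemu.h"
    #include "block/block.h"

    /* Illustrative ordering only; error handling and other teardown omitted. */
    static void shutdown_sketch(void)
    {
        vm_shutdown();      /* stop vcpus and ioeventfds; no further emulation */
        bdrv_close_all();   /* now safe: dataplane activity can no longer race */
    }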
2017-12-19  iothread: fix iothread_stop() race condition  (Stefan Hajnoczi)
There is a small chance that iothread_stop() hangs as follows:

  Thread 3 (Thread 0x7f63eba5f700 (LWP 16105)):
  #0  0x00007f64012c09b6 in ppoll () at /lib64/libc.so.6
  #1  0x000055959992eac9 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
  #2  0x000055959992eac9 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
  #3  0x0000559599930711 in aio_poll (ctx=0x55959bdb83c0, blocking=blocking@entry=true) at util/aio-posix.c:629
  #4  0x00005595996806fe in iothread_run (opaque=0x55959bd78400) at iothread.c:59
  #5  0x00007f640159f609 in start_thread () at /lib64/libpthread.so.0
  #6  0x00007f64012cce6f in clone () at /lib64/libc.so.6

  Thread 1 (Thread 0x7f640b45b280 (LWP 16103)):
  #0  0x00007f64015a0b6d in pthread_join () at /lib64/libpthread.so.0
  #1  0x00005595999332ef in qemu_thread_join (thread=<optimized out>) at util/qemu-thread-posix.c:547
  #2  0x00005595996808ae in iothread_stop (iothread=<optimized out>) at iothread.c:91
  #3  0x000055959968094d in iothread_stop_iter (object=<optimized out>, opaque=<optimized out>) at iothread.c:102
  #4  0x0000559599857d97 in do_object_child_foreach (obj=obj@entry=0x55959bdb8100, fn=fn@entry=0x559599680930 <iothread_stop_iter>, opaque=opaque@entry=0x0, recurse=recurse@entry=false) at qom/object.c:852
  #5  0x0000559599859477 in object_child_foreach (obj=obj@entry=0x55959bdb8100, fn=fn@entry=0x559599680930 <iothread_stop_iter>, opaque=opaque@entry=0x0) at qom/object.c:867
  #6  0x0000559599680a6e in iothread_stop_all () at iothread.c:341
  #7  0x000055959955b1d5 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4913

The relevant code from iothread_run() is:

  while (!atomic_read(&iothread->stopping)) {
      aio_poll(iothread->ctx, true);

and iothread_stop():

  iothread->stopping = true;
  aio_notify(iothread->ctx);
  ...
  qemu_thread_join(&iothread->thread);

The following scenario can occur:

 1. IOThread:  while (!atomic_read(&iothread->stopping)) -> stopping=false
 2. Main loop: iothread->stopping = true; aio_notify(iothread->ctx);
 3. IOThread:  aio_poll(iothread->ctx, true); -> hang

The bug is explained by the AioContext->notify_me doc comments:

  "If this field is 0, everything (file descriptors, bottom halves, timers) will be re-evaluated before the next blocking poll(), thus the event_notifier_set call can be skipped."

The problem is that "everything" does not include checking iothread->stopping. This means iothread_run() will block in aio_poll() if aio_notify() was called just before aio_poll().

This patch fixes the hang by replacing aio_notify() with aio_bh_schedule_oneshot(). This makes aio_poll() or g_main_loop_run() return.

Implementing this properly required a new bool running flag. The new flag prevents races that are tricky if we try to use iothread->stopping. Now iothread->stopping is purely for iothread_stop() and iothread->running is purely for the iothread_run() thread.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20171207201320.19284-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
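The shape of the fix, sketched with simplified flag handling; it assumes the public IOThread fields (ctx, thread, main_loop, stopping, running) and the aio_bh_schedule_oneshot() helper, and is not the exact iothread.c code:

    #include "qemu/osdep.h"
    #include "sysemu/iothread.h"
    #include "qemu/thread.h"

    /* Wake the event loop with a one-shot bottom half so that aio_poll()
     * or g_main_loop_run() is guaranteed to return and re-check the flag. */
    static void iothread_stop_bh(void *opaque)
    {
        IOThread *iothread = opaque;

        iothread->running = false;               /* owned by the iothread */
        if (iothread->main_loop) {
            g_main_loop_quit(iothread->main_loop);
        }
    }

    void iothread_stop_sketch(IOThread *iothread)
    {
        if (iothread->stopping) {
            return;                              /* already stopped */
        }
        iothread->stopping = true;               /* owned by iothread_stop() */
        aio_bh_schedule_oneshot(iothread->ctx, iothread_stop_bh, iothread);
        qemu_thread_join(&iothread->thread);
    }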
2017-12-19  iothread: add iothread_by_id() API  (Stefan Hajnoczi)
Encapsulate IOThread QOM object lookup so that callers don't need to know how and where IOThread objects live.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20171206144550.22295-8-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
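A hedged usage example for the new lookup helper; the wrapper function is illustrative:

    #include "sysemu/iothread.h"

    static AioContext *lookup_iothread_ctx(const char *id)
    {
        IOThread *iothread = iothread_by_id(id);  /* NULL if no such object */

        return iothread ? iothread_get_aio_context(iothread) : NULL;
    }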
2017-10-03  iothread: export iothread_stop()  (Peter Xu)
Export iothread_stop() so that internal iothread users can explicitly stop one iothread without destroying it.

While at it, fix iothread_stop() to allow it to be called multiple times. Before this patch we might call iothread_stop() more than once on a single iothread, which is not correct since qemu_thread_join() is not allowed to run twice. From the manual of pthread_join():

  Joining with a thread that has previously been joined results in undefined behavior.

Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20170928025958.1420-4-peterx@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-10-03  iothread: provide helpers for internal use  (Peter Xu)
IOThread is a general framework that wraps an I/O event loop and the thread that runs it. It is also useful internally inside QEMU, so provide helpers to create iothreads for internal use. Put all internally used iothreads into the internal object container.

Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20170928025958.1420-3-peterx@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
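A sketch of how an internal user might create and tear down such an iothread, assuming the iothread_create()/iothread_destroy() helpers described here; the id string and surrounding function are illustrative:

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "sysemu/iothread.h"

    static void internal_iothread_example(void)
    {
        Error *local_err = NULL;
        IOThread *iothread = iothread_create("internal-worker", &local_err);

        if (!iothread) {
            error_report_err(local_err);
            return;
        }
        /* ... schedule work via iothread_get_aio_context(iothread) ... */
        iothread_destroy(iothread);
    }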
2017-09-08  qemu-iothread: IOThread supports the GMainContext event loop  (Wang Yong)
IOThread uses an AioContext event loop and does not run a GMainContext. Therefore a chardev cannot work in an IOThread, for example a chardev used to receive colo-compare packets.

This patch makes the IOThread run the GMainContext event loop so that chardev and IOThread can work together.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Wang Yong <wang.yong155@zte.com.cn>
Signed-off-by: Wang Guang <wang.guang55@zte.com.cn>
Signed-off-by: Jason Wang <jasowang@redhat.com>
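A sketch of attaching a GLib source to the IOThread's GMainContext, as a chardev backend can now do; the GSource choice and callback are illustrative:

    #include <glib.h>
    #include "sysemu/iothread.h"

    static gboolean tick_cb(gpointer opaque)
    {
        /* Runs in the IOThread's GMainContext, not in the main loop. */
        return G_SOURCE_CONTINUE;
    }

    static void attach_source_to_iothread(IOThread *iothread)
    {
        GMainContext *ctx = iothread_get_g_main_context(iothread);
        GSource *source = g_timeout_source_new(100);  /* any GSource works */

        g_source_set_callback(source, tick_cb, NULL, NULL);
        g_source_attach(source, ctx);
        g_source_unref(source);
    }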
2017-01-03  iothread: add poll-grow and poll-shrink parameters  (Stefan Hajnoczi)
These parameters control the poll time self-tuning algorithm. They are optional and will default to sane values if omitted.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-14-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
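An illustrative command-line fragment combining the self-tuning knobs; the specific values are arbitrary examples, not recommendations:

    -object iothread,id=iothread0,poll-max-ns=32768,poll-grow=2,poll-shrink=4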
2017-01-03  iothread: add polling parameters  (Stefan Hajnoczi)
Poll mode can be configured with -object iothread,poll-max-ns=NUM. Polling is disabled with a value of 0 nanoseconds.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-7-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
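For example (illustrative values):

    # enable polling with a 100 microsecond budget
    -object iothread,id=iothread0,poll-max-ns=100000

    # disable polling entirely
    -object iothread,id=iothread1,poll-max-ns=0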
2016-09-13  iothread: Stop threads before main() quits  (Fam Zheng)
Right after main_loop ends, we release various things but keep the iothread alive. The latter is not prepared for the sudden change of resources.

Specifically, after bdrv_close_all(), virtio-scsi dataplane gets a surprise at the empty BlockBackend:

  (gdb) bt
      at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:543
      at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:577

It is because d->conf.blk->root is set to NULL, so blk_get_aio_context() returns qemu_aio_context, whereas s->ctx is still pointing to the iothread:

  hw/scsi/virtio-scsi.c:543:
      if (s->dataplane_started) {
          assert(blk_get_aio_context(d->conf.blk) == s->ctx);
      }

To fix this, let's stop iothreads before doing bdrv_close_all().

Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1473326931-9699-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-05-08  Remove various unused functions  (Thomas Huth)
The functions tpm_backend_thread_tpm_reset() and iothread_find() are completely unused, let's remove them.

Signed-off-by: Thomas Huth <huth@tuxfamily.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-04-04  iothread: make IOThread struct definition public  (Stefan Hajnoczi)
Make the IOThread struct definition public so objects can be embedded in parent structs.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
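A sketch of what the public definition enables, namely embedding the object by value in a larger state struct; the parent struct here is hypothetical:

    #include "qemu/osdep.h"
    #include "sysemu/iothread.h"

    typedef struct MyDataplaneState {
        IOThread iothread;   /* by-value embedding needs the full definition */
        /* ... other per-dataplane state ... */
    } MyDataplaneState;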
2014-03-13  iothread: add I/O thread object  (Stefan Hajnoczi)
This is a stand-in for Michael Roth's QContext. I expect this to be replaced once QContext is completed.

The IOThread object is an AioContext event loop thread. This patch adds the concept of multiple event loop threads, allowing users to define them.

When SMP guests run on SMP hosts it makes sense to instantiate multiple IOThreads. This spreads event loop processing across multiple cores. Note that additional patches are required to actually bind a device to an IOThread.

[Andreas Färber <afaerber@suse.de> pointed out that the embedded parent object instance should be called "parent_obj" and have a newline afterwards. This patch has been changed to reflect this. -- Stefan]

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
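An illustrative command line defining two IOThreads so event loop processing can spread across host cores; device bindings came in later patches, so none are shown here:

    qemu-system-x86_64 \
        -object iothread,id=iothread0 \
        -object iothread,id=iothread1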