author     Greg Kurz <groug@kaod.org>          2018-03-20 11:44:56 +0100
committer  Jason Wang <jasowang@redhat.com>    2018-03-26 14:49:17 +0800
commit     94b52958b77a2a040564cf7ed716d3a9545d94e5
tree       0d19db87e871e8c65407f5bf35b34d3d8303e3d6 /net
parent     7b1db0908d88f0c9cfac24e214ff72a860692e23
virtio_net: flush uncompleted TX on reset
If the backend could not transmit a packet right away for some reason,
the packet is queued for asynchronous sending. The corresponding vq
element is tracked in the async_tx.elem field of the VirtIONetQueue,
for later freeing when the transmission is complete.

If a reset happens before completion, virtio_net_tx_complete() will push
async_tx.elem back to the guest anyway, and we end up with the inuse flag
of the vq being equal to -1. The next call to virtqueue_pop() is then
likely to fail with "Virtqueue size exceeded".

This can be reproduced easily by starting a guest with a hubport backend
that is not connected to a functional network, e.g.:

    -device virtio-net-pci,netdev=hub0 -netdev hubport,id=hub0,hubid=0

and no other -netdev hubport,hubid=0 on the command line.

The appropriate fix is to ensure that such an asynchronous transmission
cannot survive a device reset. So for all queues, we first try to send
the packet again, and then we purge it if the backend still could not
deliver it.

CC: qemu-stable@nongnu.org
Reported-by: R. Nageswara Sastry <nasastry@in.ibm.com>
Buglink: https://github.com/open-power-host-os/qemu/issues/37
Signed-off-by: Greg Kurz <groug@kaod.org>
Tested-by: R. Nageswara Sastry <nasastry@in.ibm.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
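For context, here is a minimal sketch of what the device-side part of the
fix could look like in virtio-net's reset path. The helper name
virtio_net_flush_tx_on_reset is hypothetical, and the field and helper
names (max_queues, qemu_get_subqueue(), virtio_net_get_subqueue(),
async_tx.elem) are assumptions based on the 2018-era tree; this
illustrates the flush-then-purge idea, it is not a verbatim copy of the
patch:

    /* Hedged sketch (hw/net/virtio-net.c): on device reset, flush any
     * pending async TX, or purge it if the backend still cannot send,
     * so that no vq element outlives the reset. */
    #include "qemu/osdep.h"
    #include "net/net.h"
    #include "hw/virtio/virtio-net.h"

    static void virtio_net_flush_tx_on_reset(VirtIONet *n)
    {
        int i;

        for (i = 0; i < n->max_queues; i++) {
            NetClientState *nc = qemu_get_subqueue(n->nic, i);

            if (nc->peer) {
                /* purge=true: drop whatever a flush could not deliver */
                qemu_flush_or_purge_queued_packets(nc->peer, true);
                /* No async TX element may survive the reset. */
                assert(!virtio_net_get_subqueue(nc)->async_tx.elem);
            }
        }
    }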
Diffstat (limited to 'net')
-rw-r--r--  net/net.c | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/net/net.c b/net/net.c
index 5222e45069..29f83983e5 100644
--- a/net/net.c
+++ b/net/net.c
@@ -595,7 +595,6 @@ void qemu_purge_queued_packets(NetClientState *nc)
     qemu_net_queue_purge(nc->peer->incoming_queue, nc);
 }
 
-static
 void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge)
 {
     nc->receive_disabled = 0;
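Dropping the static qualifier only helps if callers outside net/net.c can
see a prototype; a matching declaration, presumably placed in
include/net/net.h next to qemu_purge_queued_packets() (the header location
is an assumption, as this view is limited to 'net'), would look like:

    /* Exported so the virtio-net reset path can flush, or failing that
     * purge, the packets still queued toward its peer. */
    void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge);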