author    Paul Clements <paul.clements@steeleye.com>  2009-02-11 13:04:45 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2009-02-11 14:25:37 -0800
commit    4d48a542b42747c36a5937447d9c3de7c897ea50 (patch)
tree      2548058e60af5a76deb974ddc320d06accef5ff5
parent    9480c53e9b2aa13a06283ffb96bb8f1873ac4e9a (diff)
nbd: fix I/O hang on disconnected nbds
Fix a problem that causes I/O to a disconnected (or partially
initialized) nbd device to hang indefinitely.  To reproduce:

  # ioctl NBD_SET_SIZE_BLOCKS /dev/nbd23 514048
  # dd if=/dev/nbd23 of=/dev/null bs=4096 count=1
  ...hangs...

This can also occur when an nbd device loses its nbd-client/server
connection.  Although we clear the queue of any outstanding I/Os after
the client/server connection fails, any additional I/Os that get queued
later will hang.

This bug may also be the problem reported in this bug report:
http://bugzilla.kernel.org/show_bug.cgi?id=12277
Testing would need to be performed to determine if the two issues are
the same.

This problem was introduced by the new request handling thread code
("NBD: allow nbd to be used locally", 3/2008), which entered into
mainline around 2.6.25.

The fix, which is fairly simple, is to restore the check for lo->sock
being NULL in do_nbd_request.  This causes I/O to an uninitialized nbd
to immediately fail with an I/O error, as it did prior to the
introduction of this bug.

Signed-off-by: Paul Clements <paul.clements@steeleye.com>
Reported-by: Jon Nelson <jnelson-kernel-bugzilla@jamponi.net>
Acked-by: Pavel Machek <pavel@ucw.cz>
Cc: <stable@kernel.org>		[2.6.26.x, 2.6.27.x, 2.6.28.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
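The shell reproduction above can also be driven from a small C program.
The sketch below is illustrative only: the device path and block count
are taken from the commit message, and the rest is ordinary libc/ioctl
usage (NBD_SET_SIZE_BLOCKS from <linux/nbd.h>); it is not part of the
patch itself.

	/*
	 * Hypothetical reproducer: give an nbd device a size without ever
	 * connecting a server socket, then read from it.  Before this fix
	 * the read hangs; with the fix it fails promptly with EIO.
	 */
	#include <fcntl.h>
	#include <linux/nbd.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		int fd = open("/dev/nbd23", O_RDWR);

		if (fd < 0) {
			perror("open /dev/nbd23");
			return 1;
		}

		/* Set a size (in 1024-byte blocks) on a device with no socket. */
		if (ioctl(fd, NBD_SET_SIZE_BLOCKS, 514048UL) < 0)
			perror("NBD_SET_SIZE_BLOCKS");

		/* Equivalent of: dd if=/dev/nbd23 of=/dev/null bs=4096 count=1 */
		if (read(fd, buf, sizeof(buf)) < 0)
			perror("read");	/* expected: EIO once the fix is applied */

		close(fd);
		return 0;
	}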
-rw-r--r--	drivers/block/nbd.c	9
1 files changed, 9 insertions, 0 deletions
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 34f80fa6fed..8299e2d3b61 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -549,6 +549,15 @@ static void do_nbd_request(struct request_queue * q)
 		BUG_ON(lo->magic != LO_MAGIC);
+		if (unlikely(!lo->sock)) {
+			printk(KERN_ERR "%s: Attempted send on closed socket\n",
+				lo->disk->disk_name);
+			req->errors++;
+			nbd_end_request(req);
+			spin_lock_irq(q->queue_lock);
+			continue;
+		}
+
 		spin_lock_irq(&lo->queue_lock);
 		list_add_tail(&req->queuelist, &lo->waiting_queue);
 		spin_unlock_irq(&lo->queue_lock);
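For context, the new error path re-acquires q->queue_lock before
continue because do_nbd_request() drops that lock for each dequeued
request and the loop expects it held again at the top.  The sketch below
shows roughly where the check sits in the request loop of that kernel
era; the surrounding lines are a paraphrase, not the verbatim 2.6.2x
source, and may differ in detail.

	/* Rough sketch of the surrounding loop, not the actual file. */
	static void do_nbd_request(struct request_queue *q)
	{
		struct request *req;

		while ((req = elv_next_request(q)) != NULL) {
			struct nbd_device *lo;

			blkdev_dequeue_request(req);
			spin_unlock_irq(q->queue_lock);	/* drop queue lock for the handoff */

			lo = req->rq_disk->private_data;
			BUG_ON(lo->magic != LO_MAGIC);

			if (unlikely(!lo->sock)) {	/* the check restored by this patch */
				printk(KERN_ERR "%s: Attempted send on closed socket\n",
					lo->disk->disk_name);
				req->errors++;
				nbd_end_request(req);	/* complete the request with an error */
				spin_lock_irq(q->queue_lock);
				continue;		/* loop condition needs the lock held */
			}

			/* Hand the request to the nbd-client thread. */
			spin_lock_irq(&lo->queue_lock);
			list_add_tail(&req->queuelist, &lo->waiting_queue);
			spin_unlock_irq(&lo->queue_lock);

			wake_up(&lo->waiting_wq);

			spin_lock_irq(q->queue_lock);	/* re-take before the next iteration */
		}
	}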