path: root/block/cfq-iosched.c
2010-06-18  cfq-iosched: Fixed boot warning with BLK_CGROUP=y and CFQ_GROUP_IOSCHED=n  (Vivek Goyal)
Hi Jens,

A few days back Ingo noticed a CFQ boot time warning. This patch fixes it. The issue here is that with CFQ_GROUP_IOSCHED=n, CFQ should not really be making blkio stat related calls.

> Hm, it's still not entirely fixed, as of 2.6.35-rc2-00131-g7908a9e. With
> some configs i get bad spinlock warnings during bootup:
>
> [ 28.968013] initcall net_olddevs_init+0x0/0x82 returned 0 after 93750 usecs
> [ 28.972003] calling b44_init+0x0/0x55 @ 1
> [ 28.976009] bus: 'pci': add driver b44
> [ 28.976374] sda:
> [ 28.978157] BUG: spinlock bad magic on CPU#1, async/0/117
> [ 28.980000] lock: 7e1c5bbc, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
> [ 28.980000] Pid: 117, comm: async/0 Not tainted 2.6.35-rc2-tip-01092-g010e7ef-dirty #8183
> [ 28.980000] Call Trace:
> [ 28.980000] [<41ba6d55>] ? printk+0x20/0x24
> [ 28.980000] [<4134b7b7>] spin_bug+0x7c/0x87
> [ 28.980000] [<4134b853>] do_raw_spin_lock+0x1e/0x123
> [ 28.980000] [<41ba92ca>] ? _raw_spin_lock_irqsave+0x12/0x20
> [ 28.980000] [<41ba92d2>] _raw_spin_lock_irqsave+0x1a/0x20
> [ 28.980000] [<4133476f>] blkiocg_update_io_add_stats+0x25/0xfb
> [ 28.980000] [<41335dae>] ? cfq_prio_tree_add+0xb1/0xc1
> [ 28.980000] [<41337bc7>] cfq_insert_request+0x8c/0x425

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
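The fix pattern here is the usual kernel one: route the stat calls through wrappers that compile to no-ops when group scheduling is configured off, so CFQ never touches blkio group stats or their locks. A minimal sketch of the idea (function names are illustrative, not necessarily the exact ones the patch introduced):

#ifdef CONFIG_CFQ_GROUP_IOSCHED
static inline void cfq_blkiocg_update_io_add_stats(struct blkio_group *blkg,
                        struct blkio_group *curr_blkg, bool direction, bool sync)
{
        blkiocg_update_io_add_stats(blkg, curr_blkg, direction, sync);
}
#else   /* CFQ_GROUP_IOSCHED=n: stat call compiles away entirely */
static inline void cfq_blkiocg_update_io_add_stats(struct blkio_group *blkg,
                        struct blkio_group *curr_blkg, bool direction, bool sync)
{
}
#endif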
2010-06-17  cfq: Don't allow queue merges for queues that have no process references  (Jeff Moyer)
Hi,

A user reported a kernel bug when running a particular program that did the following:

- created 32 threads
- each thread took a mutex, grabbed a global offset, added a buffer size to that offset, released the lock
- read from the given offset in the file
- created a new thread to do the same
- exited

The result is that cfq's close cooperator logic would trigger, as the threads were issuing I/O within the mean seek distance of one another. This workload managed to routinely trigger a use-after-free bug when walking the list of merge candidates for a particular cfqq (cfqq->new_cfqq). The logic used for merging queues looks like this:

static void cfq_setup_merge(struct cfq_queue *cfqq, struct cfq_queue *new_cfqq)
{
        int process_refs, new_process_refs;
        struct cfq_queue *__cfqq;

        /* Avoid a circular list and skip interim queue merges */
        while ((__cfqq = new_cfqq->new_cfqq)) {
                if (__cfqq == cfqq)
                        return;
                new_cfqq = __cfqq;
        }

        process_refs = cfqq_process_refs(cfqq);
        /*
         * If the process for the cfqq has gone away, there is no
         * sense in merging the queues.
         */
        if (process_refs == 0)
                return;

        /*
         * Merge in the direction of the lesser amount of work.
         */
        new_process_refs = cfqq_process_refs(new_cfqq);
        if (new_process_refs >= process_refs) {
                cfqq->new_cfqq = new_cfqq;
                atomic_add(process_refs, &new_cfqq->ref);
        } else {
                new_cfqq->new_cfqq = cfqq;
                atomic_add(new_process_refs, &cfqq->ref);
        }
}

When a merge candidate is found, we add the process references for the queue with fewer references to the queue with more. The actual merging of queues happens when a new request is issued for a given cfqq. In the case of the test program, it only does a single pread call to read in 1MB, so the actual merge never happens. Normally, this is fine, as when the queue exits, we simply drop the references we took on the other cfqqs in the merge chain:

        /*
         * If this queue was scheduled to merge with another queue, be
         * sure to drop the reference taken on that queue (and others in
         * the merge chain). See cfq_setup_merge and cfq_merge_cfqqs.
         */
        __cfqq = cfqq->new_cfqq;
        while (__cfqq) {
                if (__cfqq == cfqq) {
                        WARN(1, "cfqq->new_cfqq loop detected\n");
                        break;
                }
                next = __cfqq->new_cfqq;
                cfq_put_queue(__cfqq);
                __cfqq = next;
        }

However, there is a hole in this logic. Consider the following (and keep in mind that each I/O keeps a reference to the cfqq):

q1->new_cfqq = q2   // q2 now has 2 process references
q3->new_cfqq = q2   // q2 now has 3 process references

// the process associated with q2 exits
// q2 now has 2 process references

// queue 1 exits, drops its reference on q2
// q2 now has 1 process reference

// q3 exits, so has 0 process references, and hence drops its references
// to q2, which leaves q2 also with 0 process references

q4 comes along and wants to merge with q3. q3->new_cfqq still points at q2! We follow that link and end up at an already freed cfqq.

So, the fix is to not follow a merge chain if the top-most queue does not have a process reference; otherwise any queue in the chain could be already freed. I also changed the logic to disallow merging with a queue that does not have any process references. Previously, we did this check for one of the merge candidates, but not the other. That doesn't really make sense. Without the attached patch, my system would BUG within a couple of seconds of running the reproducer program. With the patch applied, my system ran the program for over an hour without issues.

This addresses the following bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=16217

Thanks a ton to Phil Carns for providing the bug report and an excellent reproducer.

[ Note for stable: this applies to 2.6.32/33/34 ]

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Reported-by: Phil Carns <carns@mcs.anl.gov>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
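Per the description, the fixed cfq_setup_merge() bails out before walking the chain when new_cfqq has no process references, and afterwards checks both candidates rather than one. A sketch of that shape (my reading of the description above, not a verbatim copy of the applied patch):

static void cfq_setup_merge(struct cfq_queue *cfqq, struct cfq_queue *new_cfqq)
{
        int process_refs, new_process_refs;
        struct cfq_queue *__cfqq;

        /*
         * If there are no process references on the new_cfqq, then it is
         * unsafe to follow the ->new_cfqq chain as other cfqq's in the
         * chain may have dropped their last reference (not just their
         * last process reference).
         */
        if (!cfqq_process_refs(new_cfqq))
                return;

        /* Avoid a circular list and skip interim queue merges */
        while ((__cfqq = new_cfqq->new_cfqq)) {
                if (__cfqq == cfqq)
                        return;
                new_cfqq = __cfqq;
        }

        process_refs = cfqq_process_refs(cfqq);
        new_process_refs = cfqq_process_refs(new_cfqq);
        /*
         * If the process for either cfqq has gone away, there is no
         * sense in merging the queues.
         */
        if (process_refs == 0 || new_process_refs == 0)
                return;

        /* Merge in the direction of the lesser amount of work, as before */
        if (new_process_refs >= process_refs) {
                cfqq->new_cfqq = new_cfqq;
                atomic_add(process_refs, &new_cfqq->ref);
        } else {
                new_cfqq->new_cfqq = cfqq;
                atomic_add(new_process_refs, &cfqq->ref);
        }
}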
2010-05-25  cfq-iosched: fix an oops caused by slab leak  (Shaohua Li)
I got the oops below when unloading cfq-iosched. Consider this scenario: queue A merges to B, C merges to D, and B will be merged to D. Before B is merged to D, we split B. We should put B's reference for D.

[ 807.768536] =============================================================================
[ 807.768539] BUG cfq_queue: Objects remaining on kmem_cache_close()
[ 807.768541] -----------------------------------------------------------------------------
[ 807.768543]
[ 807.768546] INFO: Slab 0xffffea0003e6b4e0 objects=26 used=1 fp=0xffff88011d584fd8 flags=0x200000000004082
[ 807.768550] Pid: 5946, comm: rmmod Tainted: G W 2.6.34-07097-gf4b87de-dirty #724
[ 807.768552] Call Trace:
[ 807.768560]  [<ffffffff81104e8d>] slab_err+0x8f/0x9d
[ 807.768564]  [<ffffffff811059e1>] ? flush_cpu_slab+0x0/0x93
[ 807.768569]  [<ffffffff8164be52>] ? add_preempt_count+0xe/0xca
[ 807.768572]  [<ffffffff8164bd9c>] ? sub_preempt_count+0xe/0xb6
[ 807.768577]  [<ffffffff81648871>] ? _raw_spin_unlock+0x15/0x30
[ 807.768580]  [<ffffffff8164bd9c>] ? sub_preempt_count+0xe/0xb6
[ 807.768584]  [<ffffffff811061bc>] list_slab_objects+0x9b/0x19f
[ 807.768588]  [<ffffffff8164bf0a>] ? add_preempt_count+0xc6/0xca
[ 807.768591]  [<ffffffff81109e27>] kmem_cache_destroy+0x13f/0x21d
[ 807.768597]  [<ffffffffa000ff13>] cfq_slab_kill+0x1a/0x43 [cfq_iosched]
[ 807.768601]  [<ffffffffa000ffcf>] cfq_exit+0x93/0x9e [cfq_iosched]
[ 807.768606]  [<ffffffff810973a2>] sys_delete_module+0x1b1/0x219
[ 807.768612]  [<ffffffff8102fb5b>] system_call_fastpath+0x16/0x1b
[ 807.768618] INFO: Object 0xffff88011d584618 @offset=1560
[ 807.768622] INFO: Allocated in cfq_get_queue+0x11e/0x274 [cfq_iosched] age=7173 cpu=1 pid=5496
[ 807.768626] =============================================================================

Cc: stable@kernel.org
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-05-24  cfq-iosched: compact io_context radix_tree  (Konstantin Khlebnikov)
Use small consecutive indices as radix tree keys instead of the sparse cfqd address. This change reduces the radix tree depth from 11 (6 for 32-bit hosts) to 1 if a host has <= 64 disks under cfq control, or to 0 if there is only one disk. So, this patch saves 10*560 bytes for each process (5*296 for 32-bit hosts).

For each cfqd, allocate a cic index from an ida. To unlink a dead cic from the tree without cfqd access, store the index into ->key (bit 0 is the dead mark, bits 1..30 are the index; ida produces ids in the range 0..2^31-1).

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
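To make the bit layout concrete, here is a small sketch of packing the per-cfqd index into ->key with the low bit as the dead mark. The helper and constant names are illustrative; only the layout (bit 0 = dead mark, bits 1..30 = index) comes from the description above:

#define CIC_DEAD_KEY            1ul
#define CIC_DEAD_INDEX_SHIFT    1

static inline void *cfqd_dead_key(struct cfq_data *cfqd)
{
        /* encode "index << 1 | dead" in place of the live cfqd pointer */
        return (void *)((unsigned long)cfqd->cic_index << CIC_DEAD_INDEX_SHIFT |
                        CIC_DEAD_KEY);
}

static inline unsigned int cic_dead_index(void *dead_key)
{
        /*
         * Recover the radix tree index from a dead ->key, so the cic can
         * be unlinked without dereferencing the (already gone) cfqd.
         */
        return (unsigned long)dead_key >> CIC_DEAD_INDEX_SHIFT;
}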
2010-05-24  cfq-iosched: remove dead_key from cfq_io_context  (Konstantin Khlebnikov)
Remove the ->dead_key field from cfq_io_context to shrink its size to 128 bytes (64 bytes for 32-bit hosts). Use the lower bit in ->key as a dead mark instead of moving the key to a separate field. After this, for a dead cfq_io_context we get cic->key != cfqd automatically, so the io_context's last-hit cache should work without changes.

Now, to check ->key for the non-dead state, compare it with cfqd instead of checking ->key for a non-null value as before. Also remove the obsolete race protection in cfq_cic_lookup; this race has been gone since v2.6.24-1728-g4ac845a.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
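With the low dead bit in place, the liveness test the message describes reduces to a single pointer comparison. A sketch (the helper name is mine):

static inline bool cic_is_live(struct cfq_io_context *cic,
                               struct cfq_data *cfqd)
{
        /*
         * A live cic stores the cfqd pointer in ->key; a dead one stores
         * "index << 1 | 1". Since a real cfqd pointer is at least
         * word-aligned, its low bit is never set, so a dead key can never
         * compare equal to a valid cfqd and this comparison is a complete
         * liveness check.
         */
        return cic->key == cfqd;
}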
2010-05-21  Merge branch 'master' into for-2.6.35  (Jens Axboe)
Conflicts:
        fs/ext3/fsync.c

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-05-21  block: remove all rcu head initializations  (Paul E. McKenney)
Remove all rcu head inits. We don't care about the RCU head state before passing it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can keep track of objects on stack.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-05-06  blk-cgroup: Fix RCU correctness warning in cfq_init_queue()  (Vivek Goyal)
It is necessary to be in an RCU read-side critical section when invoking css_id(), so this patch adds one to blkiocg_add_blkio_group(). This is actually a false positive, because this is called at initialization time and hence always refers to the root cgroup, which cannot go away.

[ 103.790505] ===================================================
[ 103.790509] [ INFO: suspicious rcu_dereference_check() usage. ]
[ 103.790511] ---------------------------------------------------
[ 103.790514] kernel/cgroup.c:4432 invoked rcu_dereference_check() without protection!
[ 103.790517]
[ 103.790517] other info that might help us debug this:
[ 103.790519]
[ 103.790521]
[ 103.790521] rcu_scheduler_active = 1, debug_locks = 1
[ 103.790524] 4 locks held by bash/4422:
[ 103.790526] #0: (&buffer->mutex){+.+.+.}, at: [<ffffffff8114befa>] sysfs_write_file+0x3c/0x144
[ 103.790537] #1: (s_active#102){.+.+.+}, at: [<ffffffff8114bfa5>] sysfs_write_file+0xe7/0x144
[ 103.790544] #2: (&q->sysfs_lock){+.+.+.}, at: [<ffffffff812263b1>] queue_attr_store+0x49/0x8f
[ 103.790552] #3: (&(&blkcg->lock)->rlock){......}, at: [<ffffffff8122e4db>] blkiocg_add_blkio_group+0x2b/0xad
[ 103.790560]
[ 103.790561] stack backtrace:
[ 103.790564] Pid: 4422, comm: bash Not tainted 2.6.34-rc4-blkio-second-crash #81
[ 103.790567] Call Trace:
[ 103.790572]  [<ffffffff81068f57>] lockdep_rcu_dereference+0x9d/0xa5
[ 103.790577]  [<ffffffff8107fac1>] css_id+0x44/0x57
[ 103.790581]  [<ffffffff8122e503>] blkiocg_add_blkio_group+0x53/0xad
[ 103.790586]  [<ffffffff81231936>] cfq_init_queue+0x139/0x32c
[ 103.790591]  [<ffffffff8121f2d0>] elv_iosched_store+0xbf/0x1bf
[ 103.790595]  [<ffffffff812263d8>] queue_attr_store+0x70/0x8f
[ 103.790599]  [<ffffffff8114bfa5>] ? sysfs_write_file+0xe7/0x144
[ 103.790603]  [<ffffffff8114bfc6>] sysfs_write_file+0x108/0x144
[ 103.790609]  [<ffffffff810f527f>] vfs_write+0xae/0x10b
[ 103.790612]  [<ffffffff81069863>] ? trace_hardirqs_on_caller+0x10c/0x130
[ 103.790616]  [<ffffffff810f539c>] sys_write+0x4a/0x6e
[ 103.790622]  [<ffffffff81002b5b>] system_call_fastpath+0x16/0x1b
[ 103.790625]

Located-by: Miles Lane <miles.lane@gmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-28  cfq-iosched: fix broken cfq_ref_get_cfqf() for CONFIG_BLK_CGROUP=y && CFQ_GROUP_IOSCHED=n  (Dmitry Monakhov)
We should return the cfq_group for this case, not NULL.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-26  blk-cgroup: config options re-arrangement  (Vivek Goyal)
This patch fixes a few usability and configurability issues.

o All the cgroup based controller options are configurable from the "General Setup/Control Group Support/" menu. blkio is the only exception. Hence make this option visible in the above menu and make it configurable from there, to bring it in line with the rest of the cgroup based controllers.

o Get rid of CONFIG_DEBUG_CFQ_IOSCHED. This option currently does two things.
  - Enables printing of cgroup paths in blktrace
  - Enables CONFIG_DEBUG_BLK_CGROUP, which in turn displays additional stat files in cgroup.

  If we are using group scheduling, blktrace data is not really of much use if cgroup information is not present. To get this data, currently one also has to enable CONFIG_DEBUG_CFQ_IOSCHED, which in turn brings the overhead of all the additional debug stat files, which is not desired.

  Hence, this patch moves printing of cgroup paths under CONFIG_CFQ_GROUP_IOSCHED. This allows us to get rid of CONFIG_DEBUG_CFQ_IOSCHED completely. Now all the debug stat files are controlled only by CONFIG_DEBUG_BLK_CGROUP, which can be enabled through the config menu.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Divyesh Shah <dpshah@google.com>
Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-26  blkio: Fix another BUG_ON() crash due to cfqq movement across groups  (Vivek Goyal)
o Once in a while, I was hitting a BUG_ON() in blkio code. empty_time was assuming that upon slice expiry, a group can't be marked empty already (except for forced dispatch). But this assumption is broken if a cfqq can move (group_isolation=0) across groups after receiving a request.

  I think most likely in this case we got a request in a cfqq and accounted the rq in one group; later, while adding the cfqq to the tree, we moved the queue to a different group which was already marked empty, and after dispatch from the slice we found the group already marked empty and raised the alarm.

  This patch does not error out if the group is already marked empty. This can introduce some empty_time stat error, but only in the case of group_isolation=0. This is better than crashing. In the case of group_isolation=1 we should still get the same stats as before this patch.

[ 222.308546] ------------[ cut here ]------------
[ 222.309311] kernel BUG at block/blk-cgroup.c:236!
[ 222.309311] invalid opcode: 0000 [#1] SMP
[ 222.309311] last sysfs file: /sys/devices/virtual/block/dm-3/queue/scheduler
[ 222.309311] CPU 1
[ 222.309311] Modules linked in: dm_round_robin dm_multipath qla2xxx scsi_transport_fc dm_zero dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
[ 222.309311]
[ 222.309311] Pid: 4780, comm: fio Not tainted 2.6.34-rc4-blkio-config #68 0A98h/HP xw8600 Workstation
[ 222.309311] RIP: 0010:[<ffffffff8121ad88>]  [<ffffffff8121ad88>] blkiocg_set_start_empty_time+0x50/0x83
[ 222.309311] RSP: 0018:ffff8800ba6e79f8  EFLAGS: 00010002
[ 222.309311] RAX: 0000000000000082 RBX: ffff8800a13b7990 RCX: ffff8800a13b7808
[ 222.309311] RDX: 0000000000002121 RSI: 0000000000000082 RDI: ffff8800a13b7a30
[ 222.309311] RBP: ffff8800ba6e7a18 R08: 0000000000000000 R09: 0000000000000001
[ 222.309311] R10: 000000000002f8c8 R11: ffff8800ba6e7ad8 R12: ffff8800a13b78ff
[ 222.309311] R13: ffff8800a13b7990 R14: 0000000000000001 R15: ffff8800a13b7808
[ 222.309311] FS:  00007f3beec476f0(0000) GS:ffff880001e40000(0000) knlGS:0000000000000000
[ 222.309311] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 222.309311] CR2: 000000000040e7f0 CR3: 00000000a12d5000 CR4: 00000000000006e0
[ 222.309311] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 222.309311] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 222.309311] Process fio (pid: 4780, threadinfo ffff8800ba6e6000, task ffff8800b3d6bf00)
[ 222.309311] Stack:
[ 222.309311]  0000000000000001 ffff8800bab17a48 ffff8800bab17a48 ffff8800a13b7800
[ 222.309311] <0> ffff8800ba6e7a68 ffffffff8121da35 ffff880000000001 00ff8800ba5c5698
[ 222.309311] <0> ffff8800ba6e7a68 ffff8800a13b7800 0000000000000000 ffff8800bab17a48
[ 222.309311] Call Trace:
[ 222.309311]  [<ffffffff8121da35>] __cfq_slice_expired+0x2af/0x3ec
[ 222.309311]  [<ffffffff8121fd7b>] cfq_dispatch_requests+0x2c8/0x8e8
[ 222.309311]  [<ffffffff8120f1cd>] ? spin_unlock_irqrestore+0xe/0x10
[ 222.309311]  [<ffffffff8120fb1a>] ? blk_insert_cloned_request+0x70/0x7b
[ 222.309311]  [<ffffffff81210461>] blk_peek_request+0x191/0x1a7
[ 222.309311]  [<ffffffffa0002799>] dm_request_fn+0x38/0x14c [dm_mod]
[ 222.309311]  [<ffffffff810ae61f>] ? sync_page_killable+0x0/0x35
[ 222.309311]  [<ffffffff81210fd4>] __generic_unplug_device+0x32/0x37
[ 222.309311]  [<ffffffff81211274>] generic_unplug_device+0x2e/0x3c
[ 222.309311]  [<ffffffffa00011a6>] dm_unplug_all+0x42/0x5b [dm_mod]
[ 222.309311]  [<ffffffff8120ca37>] blk_unplug+0x29/0x2d
[ 222.309311]  [<ffffffff8120ca4d>] blk_backing_dev_unplug+0x12/0x14
[ 222.309311]  [<ffffffff81109a7a>] block_sync_page+0x35/0x39
[ 222.309311]  [<ffffffff810ae616>] sync_page+0x41/0x4a
[ 222.309311]  [<ffffffff810ae62d>] sync_page_killable+0xe/0x35
[ 222.309311]  [<ffffffff8158aa59>] __wait_on_bit_lock+0x46/0x8f
[ 222.309311]  [<ffffffff810ae4f5>] __lock_page_killable+0x66/0x6d
[ 222.309311]  [<ffffffff81056f9c>] ? wake_bit_function+0x0/0x33
[ 222.309311]  [<ffffffff810ae528>] lock_page_killable+0x2c/0x2e
[ 222.309311]  [<ffffffff810afbc5>] generic_file_aio_read+0x361/0x4f0
[ 222.309311]  [<ffffffff810ea044>] do_sync_read+0xcb/0x108
[ 222.309311]  [<ffffffff811e42f7>] ? security_file_permission+0x16/0x18
[ 222.309311]  [<ffffffff810ea6ab>] vfs_read+0xab/0x108
[ 222.309311]  [<ffffffff810ea7c8>] sys_read+0x4a/0x6e
[ 222.309311]  [<ffffffff81002b5b>] system_call_fastpath+0x16/0x1b
[ 222.309311] Code: 58 01 00 00 00 48 89 c6 75 0a 48 83 bb 60 01 00 00 00 74 09 48 8d bb a0 00 00 00 eb 35 41 fe cc 74 0d f6 83 c0 01 00 00 04 74 04 <0f> 0b eb fe 48 89 75 e8 e8 be e0 de ff 66 83 8b c0 01 00 00 04
[ 222.309311] RIP  [<ffffffff8121ad88>] blkiocg_set_start_empty_time+0x50/0x83
[ 222.309311]  RSP <ffff8800ba6e79f8>
[ 222.309311] ---[ end trace 32b4f71dffc15712 ]---

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-21  blkio: Fix blkio crash during rq stat update  (Vivek Goyal)
blkio + cfq was crashing even when two sequential readers were put in two separate cgroups (group_isolation=0). The reason is that a cfqq can migrate across groups based on whether it is sync-noidle or not; it can happen that at request insertion time the cfqq belonged to one cfqg, and at request dispatch time it belonged to the root group. In this case, request stats per cgroup can go wrong, and it also runs into a BUG_ON().

This patch implements stashing a cfq group pointer in the rq instead of relying on the cfqq->cfqg pointer alone for rq stat accounting.

[ 65.163523] ------------[ cut here ]------------
[ 65.164301] kernel BUG at block/blk-cgroup.c:117!
[ 65.164301] invalid opcode: 0000 [#1] SMP
[ 65.164301] last sysfs file: /sys/devices/pci0000:00/0000:00:05.0/0000:60:00.1/host9/rport-9:0-0/target9:0:0/9:0:0:2/block/sde/stat
[ 65.164301] CPU 1
[ 65.164301] Modules linked in: dm_round_robin dm_multipath qla2xxx scsi_transport_fc dm_zero dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
[ 65.164301]
[ 65.164301] Pid: 4505, comm: fio Not tainted 2.6.34-rc4-blk-for-35 #34 0A98h/HP xw8600 Workstation
[ 65.164301] RIP: 0010:[<ffffffff8121924f>]  [<ffffffff8121924f>] blkiocg_update_io_remove_stats+0x5b/0xaf
[ 65.164301] RSP: 0018:ffff8800ba5a79e8  EFLAGS: 00010046
[ 65.164301] RAX: 0000000000000096 RBX: ffff8800bb268d60 RCX: 0000000000000000
[ 65.164301] RDX: ffff8800bb268eb8 RSI: 0000000000000000 RDI: ffff8800bb268e00
[ 65.164301] RBP: ffff8800ba5a7a08 R08: 0000000000000064 R09: 0000000000000001
[ 65.164301] R10: 0000000000079640 R11: ffff8800a0bd5bf0 R12: ffff8800bab4af01
[ 65.164301] R13: ffff8800bab4af00 R14: ffff8800bb1d8928 R15: 0000000000000000
[ 65.164301] FS:  00007f18f75056f0(0000) GS:ffff880001e40000(0000) knlGS:0000000000000000
[ 65.164301] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 65.164301] CR2: 000000000040e7f0 CR3: 00000000ba52b000 CR4: 00000000000006e0
[ 65.164301] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 65.164301] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 65.164301] Process fio (pid: 4505, threadinfo ffff8800ba5a6000, task ffff8800ba45ae80)
[ 65.164301] Stack:
[ 65.164301]  ffff8800ba5a7a08 ffff8800ba722540 ffff8800bab4af68 ffff8800bab4af68
[ 65.164301] <0> ffff8800ba5a7a38 ffffffff8121d814 ffff8800ba722540 ffff8800bab4af68
[ 65.164301] <0> ffff8800ba722540 ffff8800a08f6800 ffff8800ba5a7a68 ffffffff8121d8ca
[ 65.164301] Call Trace:
[ 65.164301]  [<ffffffff8121d814>] cfq_remove_request+0xe4/0x116
[ 65.164301]  [<ffffffff8121d8ca>] cfq_dispatch_insert+0x84/0xe1
[ 65.164301]  [<ffffffff8121e833>] cfq_dispatch_requests+0x767/0x8e8
[ 65.164301]  [<ffffffff8120e524>] ? submit_bio+0xc3/0xcc
[ 65.164301]  [<ffffffff810ad657>] ? sync_page_killable+0x0/0x35
[ 65.164301]  [<ffffffff8120ea8d>] blk_peek_request+0x191/0x1a7
[ 65.164301]  [<ffffffffa000109c>] ? dm_get_live_table+0x44/0x4f [dm_mod]
[ 65.164301]  [<ffffffffa0002799>] dm_request_fn+0x38/0x14c [dm_mod]
[ 65.164301]  [<ffffffff810ad657>] ? sync_page_killable+0x0/0x35
[ 65.164301]  [<ffffffff8120f600>] __generic_unplug_device+0x32/0x37
[ 65.164301]  [<ffffffff8120f8a0>] generic_unplug_device+0x2e/0x3c
[ 65.164301]  [<ffffffffa00011a6>] dm_unplug_all+0x42/0x5b [dm_mod]
[ 65.164301]  [<ffffffff8120b063>] blk_unplug+0x29/0x2d
[ 65.164301]  [<ffffffff8120b079>] blk_backing_dev_unplug+0x12/0x14
[ 65.164301]  [<ffffffff81108a82>] block_sync_page+0x35/0x39
[ 65.164301]  [<ffffffff810ad64e>] sync_page+0x41/0x4a
[ 65.164301]  [<ffffffff810ad665>] sync_page_killable+0xe/0x35
[ 65.164301]  [<ffffffff81589027>] __wait_on_bit_lock+0x46/0x8f
[ 65.164301]  [<ffffffff810ad52d>] __lock_page_killable+0x66/0x6d
[ 65.164301]  [<ffffffff81055fd4>] ? wake_bit_function+0x0/0x33
[ 65.164301]  [<ffffffff810ad560>] lock_page_killable+0x2c/0x2e
[ 65.164301]  [<ffffffff810aebfd>] generic_file_aio_read+0x361/0x4f0
[ 65.164301]  [<ffffffff810e906c>] do_sync_read+0xcb/0x108
[ 65.164301]  [<ffffffff811e32a3>] ? security_file_permission+0x16/0x18
[ 65.164301]  [<ffffffff810e96d3>] vfs_read+0xab/0x108
[ 65.164301]  [<ffffffff810e97f0>] sys_read+0x4a/0x6e
[ 65.164301]  [<ffffffff81002b5b>] system_call_fastpath+0x16/0x1b
[ 65.164301] Code: 00 74 1c 48 8b 8b 60 01 00 00 48 85 c9 75 04 0f 0b eb fe 48 ff c9 48 89 8b 60 01 00 00 eb 1a 48 8b 8b 58 01 00 00 48 85 c9 75 04 <0f> 0b eb fe 48 ff c9 48 89 8b 58 01 00 00 45 84 e4 74 16 48 8b
[ 65.164301] RIP  [<ffffffff8121924f>] blkiocg_update_io_remove_stats+0x5b/0xaf
[ 65.164301]  RSP <ffff8800ba5a79e8>
[ 65.164301] ---[ end trace 1b2b828753032e68 ]---

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-16  blkio: Initialize blkg->stats_lock for the root cfqg too  (Divyesh Shah)
This fixes the lockdep warning reported by Gui Jianfeng.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-14  blkio: Fix compile errors  (Divyesh Shah)
Fixes compile errors in blk-cgroup code for the empty_time stat and a merge fix in CFQ. The first error occurred when CONFIG_DEBUG_CFQ_IOSCHED is not set.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-13  Merge branch 'master' into for-2.6.35  (Jens Axboe)
Conflicts:
        block/blk-cgroup.c
        block/cfq-iosched.c

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-13  block: Update to io-controller stats  (Divyesh Shah)
Changelog from v1:
o Call blkiocg_update_idle_time_stats() at cfq_rq_enqueued() instead of at dispatch time.

Changelog from original patchset: (in response to Vivek Goyal's comments)
o group blkiocg_update_blkio_group_dequeue_stats() with other DEBUG functions
o rename blkiocg_update_set_active_queue_stats() to blkiocg_update_avg_queue_size_stats()
o s/request/io/ in blkiocg_update_request_add_stats() and blkiocg_update_request_remove_stats()
o Call cfq_del_timer() at request dispatch() instead of blkiocg_update_idle_time_stats()

Signed-off-by: Divyesh Shah <dpshah@google.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-13  io-controller: Add a new interface "weight_device" for IO-Controller  (Gui Jianfeng)
Currently, the IO Controller makes use of blkio.weight to assign weight for all devices. Here a new user interface, "blkio.weight_device", is introduced to assign different weights to different devices. blkio.weight becomes the default value for devices which are not configured via "blkio.weight_device".

You can use the following format to assign a specific weight for a given device:
# echo "major:minor weight" > blkio.weight_device

major:minor represents the device number. And you can remove the weight for a given device as follows:
# echo "major:minor 0" > blkio.weight_device

V1->V2 changes:
- use user interface "weight_device" instead of "policy" suggested by Vivek
- rename some structs suggested by Vivek
- rebase to 2.6-block "for-linus" branch
- remove a useless list_empty check pointed out by Li Zefan
- some trivial typo fixes

V2->V3 changes:
- Move policy_*_node() functions up to get rid of forward declarations
- rename related functions by adding prefix "blkio_"

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-09  Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block  (Linus Torvalds)
* 'for-linus' of git://git.kernel.dk/linux-2.6-block: (34 commits)
  cfq-iosched: Fix the incorrect timeslice accounting with forced_dispatch
  loop: Update mtime when writing using aops
  block: expose the statistics in blkio.time and blkio.sectors for the root cgroup
  backing-dev: Handle class_create() failure
  Block: Fix block/elevator.c elevator_get() off-by-one error
  drbd: lc_element_by_index() never returns NULL
  cciss: unlock on error path
  cfq-iosched: Do not merge queues of BE and IDLE classes
  cfq-iosched: Add additional blktrace log messages in CFQ for easier debugging
  i2o: Remove the dangerous kobj_to_i2o_device macro
  block: remove 16 bytes of padding from struct request on 64bits
  cfq-iosched: fix a kbuild regression
  block: make CONFIG_BLK_CGROUP visible
  Remove GENHD_FL_DRIVERFS
  block: Export max number of segments and max segment size in sysfs
  block: Finalize conversion of block limits functions
  block: Fix overrun in lcm() and move it to lib
  vfs: improve writeback_inodes_wb()
  paride: fix off-by-one test
  drbd: fix al-to-on-disk-bitmap for 4k logical_block_size
  ...
2010-04-09  cfq-iosched: Fix the incorrect timeslice accounting with forced_dispatch  (Divyesh Shah)
When CFQ dispatches requests forcefully due to a barrier or changing iosched, it runs through all cfqq's dispatching requests and then expires each queue. However, it does not activate a cfqq before flushing its IOs, resulting in using stale values for computing slice_used. This patch fixes it by activating the queue before flushing requests from each queue.

This is useful mostly for barrier requests, because when the iosched is changing it really doesn't matter if we have incorrect accounting since we're going to break down all structures anyway.

We also now expire the current timeslice before moving on with the dispatch, to accurately account the slice used for that cfqq.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-09  blkio: Add more debug-only per-cgroup stats  (Divyesh Shah)
1) group_wait_time - This is the amount of time the cgroup had to wait to get a timeslice for one of its queues from when it became busy, i.e., went from 0 to 1 request queued. This is different from io_wait_time, which is the cumulative total of the amount of time spent by each IO in that cgroup waiting in the scheduler queue. This stat is a great way to find out any jobs in the fleet that are being starved or waiting for longer than what is expected (due to an IO controller bug or any other issue).

2) empty_time - This is the amount of time a cgroup spends without any pending requests. This stat is useful when a job does not seem to be able to use its assigned disk share, by helping check if that is happening due to an IO controller bug or because the job is not submitting enough IOs.

3) idle_time - This is the amount of time spent by the IO scheduler idling for a given cgroup in anticipation of a better request than the existing ones from other queues/cgroups.

All these stats are recorded using start and stop events. When reading these stats, we do not add the delta between the current time and the last start time if we're between the start and stop events. We avoid doing this to make sure that these numbers are always monotonically increasing when read. Since we're using sched_clock(), which may use the tsc as its source, it may induce some inconsistency (due to tsc resync across cpus) if we included the current delta.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
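The start/stop bookkeeping described in the last paragraph reduces to folding elapsed time into the counter only at the stop event, so a reader between the two events sees just the accumulated total. A sketch of the pattern for group_wait_time (struct and helper names are illustrative):

struct group_stats {            /* illustrative subset of the blkio stats */
        u64  group_wait_time;   /* accumulated total, reported to userspace */
        u64  start_group_wait_time;
        bool waiting;           /* between the start and stop events? */
};

static void group_wait_start(struct group_stats *st, u64 now)
{
        st->start_group_wait_time = now;        /* start event: stamp only */
        st->waiting = true;
}

static void group_wait_stop(struct group_stats *st, u64 now)
{
        if (st->waiting) {
                /* stop event: fold in the whole interval at once */
                st->group_wait_time += now - st->start_group_wait_time;
                st->waiting = false;
        }
}

Because the in-progress interval is deliberately excluded from reads, the reported value stays monotonic even if sched_clock() wobbles across cpus.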
2010-04-09  blkio: Add io_queued and avg_queue_size stats  (Divyesh Shah)
These stats are useful for getting a feel for the queue depth of the cgroup, i.e., how filled up its queues are at a given instant and over the existence of the cgroup. This ability is useful when debugging problems in the wild, as it helps understand the application's IO pattern without having to read through the userspace code (because it's tedious or just not available) and without the ability to run blktrace (since you may not have root access and/or may not want to disturb performance).

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-09  blkio: Add io_merged stat  (Divyesh Shah)
This includes both the number of bios merged into requests belonging to this cgroup as well as the number of requests merged together. In the past, we've observed different merging behavior across upstream kernels, some by design, some actual bugs. This stat helps a lot in debugging such problems when applications report decreased throughput with a new kernel version.

This needed adding an extra elevator function to capture bios being merged, as I did not want to pollute elevator code with blkiocg knowledge and hence needed the accounting invocation to come from CFQ.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-09  blkio: Changes to IO controller additional stats patches  (Divyesh Shah)
that include some minor fixes and address all comments.

Changelog: (most based on Vivek Goyal's comments)
o renamed blkiocg_reset_write to blkiocg_reset_stats
o more clarification in the documentation on io_service_time and io_wait_time
o Initialize blkg->stats_lock
o rename io_add_stat to blkio_add_stat and declare it static
o use bool for direction and sync
o derive direction and sync info from existing rq methods
o use 12 for major:minor string length
o define io_service_time better to cover the NCQ case
o add a separate reset_stats interface
o make the indexed stats a 2d array to simplify macro and function pointer code
o blkio.time now exports in jiffies as before
o Added stats description in patch description and Documentation/cgroup/blkio-controller.txt
o Prefix all stats functions with blkio and make them static as applicable
o replace IO_TYPE_MAX with IO_TYPE_TOTAL
o Moved #define constant to top of blk-cgroup.c
o Pass dev_t around instead of char *
o Add note to documentation file about resetting stats
o use BLK_CGROUP_MODULE in addition to BLK_CGROUP config option in #ifdef statements
o Avoid struct request specific knowledge in blk-cgroup. blk-cgroup.h now has rq_direction() and rq_sync() functions which are used by CFQ, and when using the io-controller at a higher level, bio_* functions can be added.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-05  block: expose the statistics in blkio.time and blkio.sectors for the root cgroup  (Ricky Benitez)
Currently, the io statistics for the root cgroup are maintained, but they are not shown because the device information is not available at the point that the root blkio cgroup is created. This patch updates the device information when the statistics are updated so that the statistics become visible.

Signed-off-by: Ricky Benitez <rickyb@google.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-02  blkio: Increment the blkio cgroup stats for real now  (Divyesh Shah)
We also add start_time_ns and io_start_time_ns fields to struct request here, to record the time when a request is created and when it is dispatched to the device. We use ns units here, as ms and jiffies are not very useful for non-rotational media.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
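A sketch of how the two timestamps relate to the stats built on top of them (the field names come from the message; the helper names and the use of sched_clock() as the time source are illustrative):

static inline void blk_rq_set_alloc_time(struct request *rq)
{
        rq->start_time_ns = sched_clock();      /* request created */
}

static inline void blk_rq_set_dispatch_time(struct request *rq)
{
        rq->io_start_time_ns = sched_clock();   /* handed to the device */
}

/*
 * From these, per-request times can be derived:
 *   wait time    = io_start_time_ns - start_time_ns
 *   service time = completion time  - io_start_time_ns
 */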
2010-04-02  blkio: Add io controller stats like  (Divyesh Shah)
- io_service_time
- io_wait_time
- io_serviced
- io_service_bytes

These stats are accumulated per operation type, helping us to distinguish between read and write, and sync and async IO. This patch does not increment any of these stats.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-04-02  blkio: Remove per-cfqq nr_sectors as we'll be passing  (Divyesh Shah)
that info at request dispatch with other stats now. This patch removes the existing support for accounting sectors for a blkio_group. This will be added back differently in the next two patches.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-03-30  include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h  (Tejun Heo)
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there. ie. if only gfp is used, gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles), and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from the tests in step 7, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-25  cfq-iosched: Do not merge queues of BE and IDLE classes  (Divyesh Shah)
Even if they are found to be co-operating.

The prio_trees do not have any IDLE cfqqs on them. cfq_close_cooperator() is called from cfq_select_queue() and cfq_completed_request(). The latter ensures that the close cooperator code does not get invoked if the current cfqq is of class IDLE, but the former doesn't seem to have any such checks. So an IDLE cfqq may get merged with a BE cfqq from the same group, which should be avoided.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-03-25  cfq-iosched: Add additional blktrace log messages in CFQ for easier debugging  (Divyesh Shah)
These have helped us debug some issues we've noticed in earlier IO controller versions and should be useful now as well. The extra logging covers:

- idling behavior. Since there are so many conditions based on which we decide to idle or not, this patch adds a log message for some conditions that we've found useful.
- workload slices and current prio and workload type

Changelog from v1:
o moved log message from cfq_set_active_queue() to __cfq_set_active_queue()
o changed queue_count to st->count

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-03-19  cfq-iosched: fix a kbuild regression  (Shaohua Li)
Alex Shi reported a kbuild regression, which amounts to about a 10% performance loss. He bisected it to commit 3dde36ddea3e07dd025c4c1ba47edec91606fec0. The reason is that cfqq_close() can't find a close cooperator. Restoring cfq_rq_close()'s threshold to its original value makes the regression go away. Since the for_preempt parameter isn't used anymore, this patch deletes it.

Reported-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Acked-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-03-01  cfq: remove 8 bytes of padding from cfq_rb_root on 64 bit builds  (Richard Kennedy)
Reorder cfq_rb_root to remove 8 bytes of padding on 64 bit builds. This consequently removes 56 bytes from cfq_group and 64 bytes from cfq_data.

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-03-01  cfq-iosched: quantum check tweak  (Shaohua Li)
Currently a queue can only dispatch up to 4 requests if there are other queues. This isn't optimal; the device can handle more requests. For example, AHCI can handle 31 requests. I can understand that the limit is for fairness, but we could do a tweak: if the queue still has a lot of slice left, it sounds like we could ignore the limit. Testing shows this boosts my workload (two-thread randread of an SSD) from 78m/s to 100m/s. Thanks for the suggestions from Corrado and Vivek for the patch.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
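The tweak's shape, as I read the description: keep the old limit as the default, but let a queue exceed it while plenty of its slice remains. A hedged sketch; the heuristic for judging "a lot of slice left" is illustrative and the actual patch may weigh it differently:

static bool cfq_quantum_may_be_exceeded(struct cfq_data *cfqd,
                                        struct cfq_queue *cfqq)
{
        /*
         * Estimate whether the queue would still own most of its
         * timeslice even after extra dispatches complete; if so,
         * fairness is preserved and a deeper dispatch is allowed.
         */
        unsigned long grace = cfqd->cfq_slice_idle;

        return time_before(jiffies + grace, cfqq->slice_end);
}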
2010-02-28  cfq-iosched: requests "in flight" vs "in driver" clarification  (Corrado Zoccolo)
Counters for requests "in flight" and "in driver" are used asymmetrically in cfq_may_dispatch, and have slightly different meanings. We split the rq_in_flight counter (was sync_flight) to count both sync and async requests, in order to use this one, which is more accurate in some corner cases. The rq_in_driver counter is coalesced, since individual sync/async counts are not used any more.

Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-02-28  cfq-iosched: rethink seeky detection for SSDs  (Corrado Zoccolo)
CFQ currently applies the same logic of detecting seeky queues and grouping them together for rotational disks as well as SSDs. For SSDs, the time to complete a request doesn't depend on the request location, but only on the size. This patch therefore changes the criterion to group queues by request size in case of SSDs, in order to achieve better fairness.

Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-02-28  cfq-iosched: rework seeky detection  (Corrado Zoccolo)
Current seeky detection is based on average seek length. This is suboptimal, since the average will not distinguish between:
* a process doing medium sized seeks
* a process doing some sequential requests interleaved with larger seeks

and even a medium seek can take a lot of time, if the requested sector happens to be behind the disk head in the rotation (50% probability).

Therefore, we change the seeky queue detection to work as follows:
* Each request can be classified as sequential if it is very close to the current head position, i.e. it is likely in the disk cache (disks usually read more data than requested, and put it in cache for subsequent reads). Otherwise, the request is classified as seeky.
* A history window of the last 32 requests is kept, storing the classification result.
* A queue is marked as seeky if more than 1/8 of the last 32 requests were seeky.

This patch fixes a regression reported by Yanmin, on mmap 64k random reads.

Reported-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
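A 32-request window with a 1/8 threshold maps naturally onto a 32-bit bitmap. A sketch, assuming a u32 seek_history field on the queue (the helper name and threshold parameter are illustrative; hweight32() is the kernel's population-count helper):

/* more than 1/8 of the last 32 samples seeky => queue is seeky */
#define CFQQ_SEEKY(cfqq)        (hweight32((cfqq)->seek_history) > 32 / 8)

static void cfq_record_seek_sample(struct cfq_queue *cfqq,
                                   sector_t last_pos, sector_t rq_pos,
                                   sector_t close_thr)
{
        sector_t sdist = (last_pos > rq_pos) ? last_pos - rq_pos
                                             : rq_pos - last_pos;

        /* shift out the oldest sample, shift in this classification */
        cfqq->seek_history <<= 1;
        cfqq->seek_history |= (sdist > close_thr);
}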
2010-02-26  cfq: Remove useless css reference get  (Gui Jianfeng)
There's no need to take a css reference here, for the caller has already called rcu_read_lock() to prevent the cgroup from being removed.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-02-22  cfq: reorder cfq_queue removing padding on 64bit  (Richard Kennedy)
This removes 8 bytes of padding from struct cfq_queue on 64 bit builds, shrinking its size to 256 bytes, so fitting into 1 fewer cacheline and allowing 1 more object/slab in its kmem_cache.

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>

----
patch against 2.6.33-rc8 tested on x86_64 AMDX2

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-02-05  cfq-iosched: split seeky coop queues after one slice  (Shaohua Li)
Currently we split seeky coop queues after 1s, which is too big. The patch below marks a seeky coop queue's split_coop flag after one slice. After that, if new requests come in, the queues will be split. The patch was suggested by Corrado.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Reviewed-by: Corrado Zoccolo <czoccolo@gmail.com>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
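The natural place for that marking is slice expiry. A sketch of the idea, assuming the cfq_mark_cfqq_*() style of per-queue flag helpers CFQ uses (exact placement and condition are my reading of the description):

static void __cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
                                bool timed_out)
{
        /*
         * If this cfqq is shared between multiple processes, check
         * whether those processes are still issuing I/Os within the
         * mean seek distance. If not, mark the queue so it gets broken
         * apart on the next incoming request.
         */
        if (cfq_cfqq_coop(cfqq) && CFQQ_SEEKY(cfqq))
                cfq_mark_cfqq_split_coop(cfqq);

        /* ... rest of slice expiry unchanged ... */
}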
2010-02-02  cfq-iosched: Do not idle on async queues  (Vivek Goyal)
A few weeks back, Shaohua Li had posted a similar patch. I am reposting it with more test results.

This patch does two things.

- Do not idle on async queues.

- It also changes the write queue depth CFQ drives (cfq_may_dispatch()). Currently, we seem to be driving a queue depth of 1 always for WRITES. This is true even if there is only one write queue in the system, and all the logic of infinite queue depth in case of a single busy queue, as well as slowly increasing queue depth based on the last delayed sync request, does not seem to be kicking in at all. This patch will allow deeper WRITE queue depths (subject to the other WRITE queue depth constraints like cfq_quantum and the last delayed sync request).

Shaohua Li had reported getting more out of his SSD. For me, I have got one Lun exported from an HP EVA, and when pure buffered writes are on, I can get more out of the system. Following are test results of pure buffered writes (with end_fsync=1) with vanilla and patched kernels. These results are the average of 3 sets of runs with an increasing number of threads.

AVERAGE[bufwfs][vanilla]
-------
job       Set NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
---       --- --  ------------  -----------  -------------  -----------
bufwfs    3   1   0             0            95349          474141
bufwfs    3   2   0             0            100282         806926
bufwfs    3   4   0             0            109989         2.7301e+06
bufwfs    3   8   0             0            116642         3762231
bufwfs    3   16  0             0            118230         6902970

AVERAGE[bufwfs] [patched kernel]
-------
bufwfs    3   1   0             0            270722         404352
bufwfs    3   2   0             0            206770         1.06552e+06
bufwfs    3   4   0             0            195277         1.62283e+06
bufwfs    3   8   0             0            260960         2.62979e+06
bufwfs    3   16  0             0            299260         1.70731e+06

I also ran buffered writes along with some sequential reads and some buffered reads going on in the system on a SATA disk, because the potential risk could be that we should not be driving queue depth higher in the presence of sync IO going on, to keep the max clat low. With some random and sequential reads going on in the system on one SATA disk, I did not see any significant increase in max clat. So it looks like the other WRITE queue depth control logic is doing its job. Here are the results.

AVERAGE[brr, bsr, bufw together] [vanilla]
-------
job       Set NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
---       --- --  ------------  -----------  -------------  -----------
brr       3   1   850           546345       0              0
bsr       3   1   14650         729543       0              0
bufw      3   1   0             0            23908          8274517
brr       3   2   981.333       579395       0              0
bsr       3   2   14149.7       1175689      0              0
bufw      3   2   0             0            21921          1.28108e+07
brr       3   4   898.333       1.75527e+06  0              0
bsr       3   4   12230.7       1.40072e+06  0              0
bufw      3   4   0             0            19722.3        2.4901e+07
brr       3   8   900           3160594      0              0
bsr       3   8   9282.33       1.91314e+06  0              0
bufw      3   8   0             0            18789.3        23890622

AVERAGE[brr, bsr, bufw mixed] [patched kernel]
-------
job       Set NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
---       --- --  ------------  -----------  -------------  -----------
brr       3   1   837           417973       0              0
bsr       3   1   14357.7       591275       0              0
bufw      3   1   0             0            24869.7        8910662
brr       3   2   1038.33       543434       0              0
bsr       3   2   13351.3       1205858      0              0
bufw      3   2   0             0            18626.3        13280370
brr       3   4   913           1.86861e+06  0              0
bsr       3   4   12652.3       1430974      0              0
bufw      3   4   0             0            15343.3        2.81305e+07
brr       3   8   890           2.92695e+06  0              0
bsr       3   8   9635.33       1.90244e+06  0              0
bufw      3   8   0             0            17200.3        24424392

So it looks like it might make sense to include this patch.

Thanks
Vivek

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
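The first half of the change is conceptually a one-line early exit wherever CFQ decides whether to idle. A minimal sketch, assuming a cfq_should_idle()-style predicate of this era's code (the queue-depth half of the patch is not sketched):

static bool cfq_should_idle(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
        /*
         * Never idle on behalf of an async queue: no process is
         * synchronously waiting for its next request, so idling only
         * leaves the device needlessly unused.
         */
        if (!cfq_cfqq_sync(cfqq))
                return false;

        /* ... remaining idling heuristics unchanged ... */
        return true;
}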
2010-01-11  cfq-iosched: Respect ioprio_class when preempting  (Divyesh Shah)
In cfq_should_preempt(), we currently allow some cases where a non-RT request can preempt an ongoing RT cfqq timeslice. This should not happen. Examples include:

o A sync_noidle wl type non-RT request pre-empting a sync_noidle wl type cfqq on which we are idling.
o Once we have per-cgroup async queues, a non-RT sync request pre-empting a RT async cfqq.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
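The core of this fix is an early class test in cfq_should_preempt(). A sketch of the check the message describes (placement relative to the other preemption tests is illustrative; cfq_class_rt() is CFQ's existing ioprio-class predicate):

static bool cfq_should_preempt(struct cfq_data *cfqd,
                               struct cfq_queue *new_cfqq, struct request *rq)
{
        struct cfq_queue *cfqq = cfqd->active_queue;

        /*
         * Never let a non-RT request preempt an ongoing RT timeslice,
         * regardless of workload type.
         */
        if (cfq_class_rt(cfqq) && !cfq_class_rt(new_cfqq))
                return false;

        /* ... the remaining preemption heuristics follow ... */
        return false;
}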
2009-12-28  cfq-iosched: don't regard requests with long distance as close  (Shaohua Li)
seek_mean can sometimes be very big; using it as the close criterion is then meaningless, as it doesn't improve any performance. So if it's big, let's fall back to a default value.

Reviewed-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
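A sketch of the capping described above, in the style of cfq_rq_close() (the threshold constant is illustrative; sample_valid() and cfq_dist_from_last() are existing helpers in this era's cfq-iosched.c):

#define CFQQ_SEEK_THR           (8 * 1024)      /* default "close" distance */

static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
                               struct request *rq)
{
        sector_t sdist = cfqq->seek_mean;

        /*
         * A huge seek_mean says nothing useful about closeness; clamp
         * it so "close" always means within a sane default distance.
         */
        if (!sample_valid(cfqq->seek_samples) || sdist > CFQQ_SEEK_THR)
                sdist = CFQQ_SEEK_THR;

        return cfq_dist_from_last(cfqd, rq) <= sdist;
}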
2009-12-18  cfq-iosched: Remove prio_change logic for workload selection  (Vivek Goyal)
o CFQ now internally divides cfq queues into three workload categories: sync-idle, sync-noidle and async. Which workload to run depends primarily on the rb_key offset across the three service trees, which is a combination of multiple things, including the time the queue got queued on the service tree.

  There is one exception, though. If we switched the prio class, say we served some RT tasks and again started serving the BE class, then within the BE class we always started with the sync-noidle workload, irrespective of the rb_key offset in the service trees. This can provide better latencies for the sync-noidle workload in the presence of RT tasks.

o This patch gets rid of that exception, and which workload to run within a class always depends on the lowest rb_key across the service trees. The reason is that now we have multiple BE class groups, and if we always switch to the sync-noidle workload within a group, we can potentially starve a sync-idle workload within the group. The same is true for the async workload, which will be in the root group. Also, workload switching within a group would become very unpredictable, as it would then depend on whether some RT workload was running in the system or not.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Acked-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2009-12-18  cfq-iosched: Get rid of nr_groups  (Vivek Goyal)
o Currently the code does not seem to be using cfqd->nr_groups. Get rid of it.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2009-12-18  cfq-iosched: Remove the check for same cfq group from allow_merge  (Vivek Goyal)
o allow_merge() already checks if the submitting task is pointing to the same cfqq as the rq has been queued in. If everything is fine, we should not have a task in one cgroup with a pointer to a cfqq in another cgroup.

  Well, I guess in some situations it can happen, and that is when a random IO queue has been moved into the root cgroup for group_isolation=0. In this case, the task's cgroup/group is different from where the cfqq actually is, but this is intentional, and in this case merging should be allowed.

o The second situation is where, due to the close cooperator patches, multiple processes can be sharing a cfqq. If everything is implemented right, we should not end up in a situation where tasks from different processes in different groups are sharing the same cfqq, as we allow merging of cooperating queues only if they are in the same group.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2009-12-15  cfq: set workload as expired if it doesn't have any slice left  (Gui Jianfeng)
When a group is resumed, if it doesn't have any workload slice left, we should set workload_expires as expired. Otherwise, we might erroneously resume from where we left off in the previous group. Thanks to Corrado for the idea.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
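Where the group-resume path picks its workload back up, the fix amounts to explicitly expiring the saved workload when no slice remains. A sketch, assuming the per-group saved_* fields CFQ keeps in this era's code (treat the names as illustrative):

static void cfq_choose_cfqg(struct cfq_data *cfqd)
{
        struct cfq_group *cfqg = cfq_get_next_cfqg(cfqd);

        cfqd->serving_group = cfqg;

        /* Restore the saved workload only if the group has slice left */
        if (cfqg->saved_workload_slice) {
                cfqd->workload_expires = jiffies + cfqg->saved_workload_slice;
                cfqd->serving_type = cfqg->saved_workload;
                cfqd->serving_prio = cfqg->saved_serving_prio;
        } else {
                /* no slice left: mark expired so a fresh workload is chosen */
                cfqd->workload_expires = jiffies - 1;
        }

        choose_service_tree(cfqd, cfqg);
}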
2009-12-10  Fix a CFQ crash in "for-2.6.33" branch of block tree  (Vivek Goyal)
I think my previous patch introduced a bug which can lead to CFQ hitting a BUG_ON(). The offending commit in the for-2.6.33 branch is:

commit 7667aa0630407bc07dc38dcc79d29cc0a65553c1
Author: Vivek Goyal <vgoyal@redhat.com>
Date:   Tue Dec 8 17:52:58 2009 -0500

    cfq-iosched: Take care of corner cases of group losing share due to deletion

While doing some stress testing on my box, I encountered the following:

login: [ 3165.148841] BUG: scheduling while atomic: swapper/0/0x10000100
[ 3165.149821] Modules linked in: cfq_iosched dm_multipath qla2xxx igb scsi_transport_fc dm_snapshot [last unloaded: scsi_wait_scan]
[ 3165.149821] Pid: 0, comm: swapper Not tainted 2.6.32-block-for-33-merged-new #3
[ 3165.149821] Call Trace:
[ 3165.149821]  <IRQ>  [<ffffffff8103fab8>] __schedule_bug+0x5c/0x60
[ 3165.149821]  [<ffffffff8103afd7>] ? __wake_up+0x44/0x4d
[ 3165.149821]  [<ffffffff8153a979>] schedule+0xe3/0x7bc
[ 3165.149821]  [<ffffffff8103a796>] ? cpumask_next+0x1d/0x1f
[ 3165.149821]  [<ffffffffa000b21d>] ? cfq_dispatch_requests+0x6ba/0x93e [cfq_iosched]
[ 3165.149821]  [<ffffffff810422d8>] __cond_resched+0x2a/0x35
[ 3165.149821]  [<ffffffffa000b21d>] ? cfq_dispatch_requests+0x6ba/0x93e [cfq_iosched]
[ 3165.149821]  [<ffffffff8153b1ee>] _cond_resched+0x2c/0x37
[ 3165.149821]  [<ffffffff8100e2db>] is_valid_bugaddr+0x16/0x2f
[ 3165.149821]  [<ffffffff811e4161>] report_bug+0x18/0xac
[ 3165.149821]  [<ffffffff8100f1fc>] die+0x39/0x63
[ 3165.149821]  [<ffffffff8153cde1>] do_trap+0x11a/0x129
[ 3165.149821]  [<ffffffff8100d470>] do_invalid_op+0x96/0x9f
[ 3165.149821]  [<ffffffffa000b21d>] ? cfq_dispatch_requests+0x6ba/0x93e [cfq_iosched]
[ 3165.149821]  [<ffffffff81034b4d>] ? enqueue_task+0x5c/0x67
[ 3165.149821]  [<ffffffff8103ae83>] ? task_rq_unlock+0x11/0x13
[ 3165.149821]  [<ffffffff81041aae>] ? try_to_wake_up+0x292/0x2a4
[ 3165.149821]  [<ffffffff8100c935>] invalid_op+0x15/0x20
[ 3165.149821]  [<ffffffffa000b21d>] ? cfq_dispatch_requests+0x6ba/0x93e [cfq_iosched]
[ 3165.149821]  [<ffffffff810df5a6>] ? virt_to_head_page+0xe/0x2f
[ 3165.149821]  [<ffffffff811d8c2a>] blk_peek_request+0x191/0x1a7
[ 3165.149821]  [<ffffffff811e5b8d>] ? kobject_get+0x1a/0x21
[ 3165.149821]  [<ffffffff812c8d4c>] scsi_request_fn+0x82/0x3df
[ 3165.149821]  [<ffffffff8110b2de>] ? bio_fs_destructor+0x15/0x17
[ 3165.149821]  [<ffffffff810df5a6>] ? virt_to_head_page+0xe/0x2f
[ 3165.149821]  [<ffffffff811d931f>] __blk_run_queue+0x42/0x71
[ 3165.149821]  [<ffffffff811d9403>] blk_run_queue+0x26/0x3a
[ 3165.149821]  [<ffffffff812c8761>] scsi_run_queue+0x2de/0x375
[ 3165.149821]  [<ffffffff812b60ac>] ? put_device+0x17/0x19
[ 3165.149821]  [<ffffffff812c92d7>] scsi_next_command+0x3b/0x4b
[ 3165.149821]  [<ffffffff812c9b9f>] scsi_io_completion+0x1c9/0x3f5
[ 3165.149821]  [<ffffffff812c3c36>] scsi_finish_command+0xb5/0xbe

I think I have hit the following BUG_ON() in cfq_dispatch_request():

BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list));

Please find attached the patch to fix it. I have done some stress testing with it and have not seen it happening again.

o We should wait on a queue even after slice expiry only if it is empty. If the queue is not empty, then continue to expire it.

o If we decide to keep the queue, then make cfqq=NULL. Otherwise select_queue() will return a valid cfqq and cfq_dispatch_request() can hit the following BUG_ON():

BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list))

Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2009-12-10  cfq: Remove wait_request flag when idle time is being deleted  (Gui Jianfeng)
Remove the wait_request flag when the idle timer is being deleted; otherwise we'll hit this path every time a request is enqueued.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2009-12-09  cfq-iosched: commenting non-obvious initialization  (Corrado Zoccolo)
Added a comment to explain the initialization of last_delayed_sync.

Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2009-12-09  cfq-iosched: Take care of corner cases of group losing share due to deletion  (Vivek Goyal)
If there is a sequential reader running in a group, we wait for the next request to come in that group after slice expiry, and once a new request is in, we expire the queue. Otherwise we delete the group from the service tree and the group loses its fair share.

So far I was marking a queue as wait_busy if it had consumed its slice and it was the last queue in the group. But this condition did not cover the following two cases.

1. A request completed and the slice has not expired yet. The next request comes in and is dispatched to disk. Now select_queue() hits and the slice has expired. This group will be deleted. Because the request is still in the disk, this queue will never get a chance to wait_busy.

2. A request completed and the slice has not expired yet. Before the next request comes in (delay due to think time), select_queue() hits and expires the queue, and hence the group. This queue never got a chance to wait busy.

Gui was hitting boundary condition 1 and not getting fairness numbers proportional to weight. This patch puts in the checks for the above two conditions and improves the fairness numbers for sequential workloads on rotational media. The check in select_queue() takes care of case 1, and the additional check in should_wait_busy() takes care of case 2.

Reported-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>