From patchwork Fri Jul 24 20:27:39 2009
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 37217
From: Vivek Goyal
To: linux-kernel@vger.kernel.org, containers@lists.linux-foundation.org,
	dm-devel@redhat.com, jens.axboe@oracle.com, nauman@google.com,
	dpshah@google.com, ryov@valinux.co.jp, guijianfeng@cn.fujitsu.com,
	balbir@linux.vnet.ibm.com, righi.andrea@gmail.com
Cc: paolo.valente@unimore.it, dhaval@linux.vnet.ibm.com, peterz@infradead.org,
	fernando@oss.ntt.co.jp, lizf@cn.fujitsu.com, jmoyer@redhat.com,
	mikew@google.com, fchecconi@gmail.com, vgoyal@redhat.com,
	s-uchida@ap.jp.nec.com, akpm@linux-foundation.org, agk@redhat.com,
	m-ikeda@ds.jp.nec.com
Date: Fri, 24 Jul 2009 16:27:39 -0400
Message-Id: <1248467274-32073-10-git-send-email-vgoyal@redhat.com>
In-Reply-To: <1248467274-32073-1-git-send-email-vgoyal@redhat.com>
References: <1248467274-32073-1-git-send-email-vgoyal@redhat.com>
Subject: [dm-devel] [PATCH 09/24] io-controller: cfq changes to use
	hierarchical fair queuing code in elevator layer

Make cfq hierarchical: hook it into the hierarchical fair queuing
code in the elevator layer.

Signed-off-by: Nauman Rafique
Signed-off-by: Fabio Checconi
Signed-off-by: Paolo Valente
Signed-off-by: Aristeu Rozanski
Signed-off-by: Vivek Goyal
---
 block/Kconfig.iosched |    8 ++++++
 block/cfq-iosched.c   |   62 +++++++++++++++++++++++++++++++++++++++++++++++-
 init/Kconfig          |    2 +-
 3 files changed, 69 insertions(+), 3 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index dd5224d..a91a807 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -54,6 +54,14 @@ config IOSCHED_CFQ
 	  working environment, suitable for desktop systems.
 	  This is the default I/O scheduler.
 
+config IOSCHED_CFQ_HIER
+	bool "CFQ Hierarchical Scheduling support"
+	depends on IOSCHED_CFQ && CGROUPS
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in cfq.
+
 choice
 	prompt "Default I/O scheduler"
 	default DEFAULT_CFQ
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 98fd508..a362ce1 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1283,6 +1283,60 @@ static void cfq_init_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 	cfqq->pid = pid;
 }
 
+#ifdef CONFIG_IOSCHED_CFQ_HIER
+static void changed_cgroup(struct io_context *ioc, struct cfq_io_context *cic)
+{
+	struct cfq_queue *async_cfqq = cic_to_cfqq(cic, 0);
+	struct cfq_queue *sync_cfqq = cic_to_cfqq(cic, 1);
+	struct cfq_data *cfqd = cic->key;
+	struct io_group *iog, *__iog;
+	unsigned long flags;
+	struct request_queue *q;
+
+	if (unlikely(!cfqd))
+		return;
+
+	q = cfqd->queue;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+
+	iog = io_get_io_group(q, 0);
+
+	if (async_cfqq != NULL) {
+		__iog = cfqq_to_io_group(async_cfqq);
+		if (iog != __iog) {
+			/* cgroup changed, drop the reference to async queue */
+			cic_set_cfqq(cic, NULL, 0);
+			cfq_put_queue(async_cfqq);
+		}
+	}
+
+	if (sync_cfqq != NULL) {
+		__iog = cfqq_to_io_group(sync_cfqq);
+
+		/*
+		 * Drop reference to sync queue. A new sync queue will
+		 * be assigned in the new group upon arrival of a fresh
+		 * request. If the old queue still has requests, those
+		 * requests will be dispatched over a period of time and
+		 * the queue will be freed automatically.
+		 */
+		if (iog != __iog) {
+			cic_set_cfqq(cic, NULL, 1);
+			cfq_put_queue(sync_cfqq);
+		}
+	}
+
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+static void cfq_ioc_set_cgroup(struct io_context *ioc)
+{
+	call_for_each_cic(ioc, changed_cgroup);
+	ioc->cgroup_changed = 0;
+}
+#endif /* CONFIG_IOSCHED_CFQ_HIER */
+
 static struct cfq_queue *
 cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 		     struct io_context *ioc, gfp_t gfp_mask)
@@ -1294,7 +1348,7 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 	struct io_group *iog = NULL;
 
 retry:
-	iog = io_get_io_group(q);
+	iog = io_get_io_group(q, 1);
 
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
@@ -1384,7 +1438,7 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_get_io_group(cfqd->queue);
+	struct io_group *iog = io_get_io_group(cfqd->queue, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
@@ -1540,6 +1594,10 @@ out:
 	smp_read_barrier_depends();
 	if (unlikely(ioc->ioprio_changed))
 		cfq_ioc_set_ioprio(ioc);
+#ifdef CONFIG_IOSCHED_CFQ_HIER
+	if (unlikely(ioc->cgroup_changed))
+		cfq_ioc_set_cgroup(ioc);
+#endif
 	return cic;
 
 err_free:
 	cfq_cic_free(cic);
diff --git a/init/Kconfig b/init/Kconfig
index fa3edd6..7a368d8 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -613,7 +613,7 @@ config CGROUP_MEM_RES_CTLR_SWAP
 	  size is 4096bytes, 512k per 1Gbytes of swap.
 
 config GROUP_IOSCHED
-	bool "Group IO Scheduler"
+	bool
 	depends on CGROUPS && ELV_FAIR_QUEUING
 	default n
 	---help---