From patchwork Wed Jul 8 03:27:25 2009
X-Patchwork-Submitter: Gui Jianfeng
X-Patchwork-Id: 34537
Message-ID: <4A54121D.5090008@cn.fujitsu.com>
Date: Wed, 08 Jul 2009 11:27:25 +0800
From: Gui Jianfeng
To: Vivek Goyal
Cc: dhaval@linux.vnet.ibm.com, snitzer@redhat.com, peterz@infradead.org,
 dm-devel@redhat.com, dpshah@google.com, jens.axboe@oracle.com,
 agk@redhat.com, balbir@linux.vnet.ibm.com, paolo.valente@unimore.it,
 fernando@oss.ntt.co.jp, mikew@google.com, jmoyer@redhat.com,
 nauman@google.com, m-ikeda@ds.jp.nec.com, lizf@cn.fujitsu.com,
 fchecconi@gmail.com, akpm@linux-foundation.org, jbaron@redhat.com,
 linux-kernel@vger.kernel.org, s-uchida@ap.jp.nec.com,
 righi.andrea@gmail.com, containers@lists.linux-foundation.org
References: <1246564917-19603-1-git-send-email-vgoyal@redhat.com>
 <1246564917-19603-22-git-send-email-vgoyal@redhat.com>
In-Reply-To: <1246564917-19603-22-git-send-email-vgoyal@redhat.com>
Subject: [dm-devel] Re: [PATCH 21/25] io-controller: Per cgroup request
 descriptor support
List-Id: device-mapper development

Vivek Goyal wrote:
...
> }
> +#ifdef CONFIG_GROUP_IOSCHED
> +static ssize_t queue_group_requests_show(struct request_queue *q, char *page)
> +{
> +	return queue_var_show(q->nr_group_requests, (page));
> +}
> +
> +static ssize_t
> +queue_group_requests_store(struct request_queue *q, const char *page,
> +			   size_t count)
> +{
> +	unsigned long nr;
> +	int ret = queue_var_store(&nr, page, count);
> +	if (nr < BLKDEV_MIN_RQ)
> +		nr = BLKDEV_MIN_RQ;
> +
> +	spin_lock_irq(q->queue_lock);
> +	q->nr_group_requests = nr;
> +	spin_unlock_irq(q->queue_lock);
> +	return ret;
> +}
> +#endif

Hi Vivek,

Do we need to update the congestion thresholds for the allocated io groups
when nr_group_requests changes?
Signed-off-by: Gui Jianfeng
---
 block/blk-sysfs.c |   15 +++++++++++++++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 577ed42..92b9f25 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -83,17 +83,32 @@ static ssize_t queue_group_requests_show(struct request_queue *q, char *page)
 	return queue_var_show(q->nr_group_requests, (page));
 }
 
+extern void elv_io_group_congestion_threshold(struct request_queue *q,
+					      struct io_group *iog);
+
 static ssize_t
 queue_group_requests_store(struct request_queue *q, const char *page,
 			   size_t count)
 {
+	struct hlist_node *n;
+	struct io_group *iog;
+	struct elv_fq_data *efqd;
 	unsigned long nr;
 	int ret = queue_var_store(&nr, page, count);
+
 	if (nr < BLKDEV_MIN_RQ)
 		nr = BLKDEV_MIN_RQ;
 
 	spin_lock_irq(q->queue_lock);
+
 	q->nr_group_requests = nr;
+
+	efqd = &q->elevator->efqd;
+
+	hlist_for_each_entry(iog, n, &efqd->group_list, elv_data_node) {
+		elv_io_group_congestion_threshold(q, iog);
+	}
+
 	spin_unlock_irq(q->queue_lock);
 	return ret;
 }