From patchwork Mon Jun 29 05:27:47 2009
X-Patchwork-Submitter: Gui Jianfeng
X-Patchwork-Id: 32885
Message-ID: <4A4850D3.3000700@cn.fujitsu.com>
Date: Mon, 29 Jun 2009 13:27:47 +0800
From: Gui Jianfeng
To: Vivek Goyal <vgoyal@redhat.com>
References: <1245443858-8487-1-git-send-email-vgoyal@redhat.com>
 <1245443858-8487-6-git-send-email-vgoyal@redhat.com>
In-Reply-To: <1245443858-8487-6-git-send-email-vgoyal@redhat.com>
Cc: dhaval@linux.vnet.ibm.com, snitzer@redhat.com, peterz@infradead.org,
 dm-devel@redhat.com, dpshah@google.com, jens.axboe@oracle.com,
 agk@redhat.com, balbir@linux.vnet.ibm.com, paolo.valente@unimore.it,
 fernando@oss.ntt.co.jp, mikew@google.com, jmoyer@redhat.com,
 nauman@google.com, m-ikeda@ds.jp.nec.com, lizf@cn.fujitsu.com,
 fchecconi@gmail.com, akpm@linux-foundation.org, jbaron@redhat.com,
 linux-kernel@vger.kernel.org, s-uchida@ap.jp.nec.com,
 righi.andrea@gmail.com, containers@lists.linux-foundation.org
Subject: [dm-devel] [PATCH] io-controller: optimization for iog deletion when
 elevator exiting

Hi Vivek,

There's no need to traverse iocg->group_data for each iog when exiting
an elevator; that costs too much. An alternative is to reset iocg_id as
soon as an io group is unlinked from its iocg. Whether the deletion
still needs to be carried out can then be decided just by checking
iocg_id.
Signed-off-by: Gui Jianfeng
---
 block/elevator-fq.c |   29 ++++++++++-------------------
 1 files changed, 10 insertions(+), 19 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index d779282..b26fe0f 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -2218,8 +2218,6 @@ void io_group_cleanup(struct io_group *iog)
 	BUG_ON(iog->sched_data.active_entity != NULL);
 	BUG_ON(entity != NULL && entity->tree != NULL);
 
-	iog->iocg_id = 0;
-
 	/*
 	 * Wait for any rcu readers to exit before freeing up the group.
 	 * Primarily useful when io_get_io_group() is called without queue
@@ -2376,6 +2374,7 @@ remove_entry:
 						group_node);
 	efqd = rcu_dereference(iog->key);
 	hlist_del_rcu(&iog->group_node);
+	iog->iocg_id = 0;
 	spin_unlock_irqrestore(&iocg->lock, flags);
 
 	spin_lock_irqsave(efqd->queue->queue_lock, flags);
@@ -2403,35 +2402,27 @@ done:
 void io_group_check_and_destroy(struct elv_fq_data *efqd, struct io_group *iog)
 {
 	struct io_cgroup *iocg;
-	unsigned short id = iog->iocg_id;
-	struct hlist_node *n;
-	struct io_group *__iog;
 	unsigned long flags;
 	struct cgroup_subsys_state *css;
 
 	rcu_read_lock();
 
-	BUG_ON(!id);
-	css = css_lookup(&io_subsys, id);
+	css = css_lookup(&io_subsys, iog->iocg_id);
 
-	/* css can't go away as associated io group is still around */
-	BUG_ON(!css);
+	if (!css)
+		goto out;
 
 	iocg = container_of(css, struct io_cgroup, css);
 
 	spin_lock_irqsave(&iocg->lock, flags);
-	hlist_for_each_entry_rcu(__iog, n, &iocg->group_data, group_node) {
-		/*
-		 * Remove iog only if it is still in iocg list. Cgroup
-		 * deletion could have deleted it already.
-		 */
-		if (__iog == iog) {
-			hlist_del_rcu(&iog->group_node);
-			__io_destroy_group(efqd, iog);
-			break;
-		}
+
+	if (iog->iocg_id) {
+		hlist_del_rcu(&iog->group_node);
+		__io_destroy_group(efqd, iog);
 	}
+
 	spin_unlock_irqrestore(&iocg->lock, flags);
+out:
 	rcu_read_unlock();
 }
 
-- 
1.5.4.rc3
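For readers outside the kernel tree, below is a minimal standalone sketch of
the idea under the same locking discipline. All names (iog_unlink,
iocg_destroy, iog_check_and_destroy) are hypothetical; a pthread mutex stands
in for iocg->lock, and a plain doubly-linked list stands in for the
RCU-protected hlist. The point it illustrates is that the id is cleared under
the lock at unlink time, so the destroy path only tests the id instead of
walking the whole group list:

/*
 * Standalone illustration only -- not kernel code. "iog"/"iocg" mirror
 * the patch's structures in spirit; the names and locking primitive are
 * stand-ins, not the real elevator-fq.c API.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct iog {
	unsigned short iocg_id;		/* nonzero while linked into the iocg */
	struct iog *prev, *next;
};

struct iocg {
	pthread_mutex_t lock;		/* stands in for iocg->lock */
	struct iog *head;		/* stands in for iocg->group_data */
};

/* Caller holds cg->lock. Removal is O(1), like hlist_del_rcu(). */
static void iog_unlink(struct iocg *cg, struct iog *g)
{
	if (g->prev)
		g->prev->next = g->next;
	else
		cg->head = g->next;
	if (g->next)
		g->next->prev = g->prev;
	g->iocg_id = 0;			/* cleared at unlink time, under the lock */
}

/* Cgroup-deletion path: walks the list anyway and unlinks every group. */
static void iocg_destroy(struct iocg *cg)
{
	pthread_mutex_lock(&cg->lock);
	while (cg->head)
		iog_unlink(cg, cg->head);
	pthread_mutex_unlock(&cg->lock);
}

/*
 * Elevator-exit path: no list traversal. iocg_id alone says whether the
 * group is still linked, even if cgroup deletion already removed it.
 */
static void iog_check_and_destroy(struct iocg *cg, struct iog *g)
{
	pthread_mutex_lock(&cg->lock);
	if (g->iocg_id)
		iog_unlink(cg, g);
	pthread_mutex_unlock(&cg->lock);
	free(g);
}

int main(void)
{
	struct iocg cg = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct iog *a = calloc(1, sizeof(*a));
	struct iog *b = calloc(1, sizeof(*b));

	/* Link both groups and give them a nonzero cgroup id. */
	a->iocg_id = b->iocg_id = 1;
	a->next = b;
	b->prev = a;
	cg.head = a;

	iocg_destroy(&cg);		/* "cgroup deletion" unlinks a and b */
	iog_check_and_destroy(&cg, a);	/* sees iocg_id == 0, only frees */
	iog_check_and_destroy(&cg, b);
	printf("destroy path decided via iocg_id, no list walk needed\n");
	return 0;
}

Clearing the id and unlinking in the same critical section is what makes the
later check safe: anyone who takes the lock afterwards sees either a linked
group with a nonzero id or an unlinked group with id zero, never a mix.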