From patchwork Tue Jun 2 07:02:42 2009
From: Kiyoshi Ueda
Date: Tue, 02 Jun 2009 16:02:42 +0900
Subject: [dm-devel] [PATCH 4/5] dm core: disable interrupt when taking map_lock
To: Alasdair Kergon
Cc: device-mapper development <dm-devel@redhat.com>
Message-ID: <4A24CE92.6010201@ct.jp.nec.com>
In-Reply-To: <4A24CD74.80708@ct.jp.nec.com>

This patch disables interrupts when taking map_lock, to prevent
needless lockdep warnings in request-based dm.

request-based dm takes map_lock after taking queue_lock with
interrupts disabled:
	spin_lock_irqsave(queue_lock)
	q->request_fn() == dm_request_fn()
		=> dm_get_table()
			=> read_lock(map_lock)
while queue_lock itself can be taken in interrupt context.

So lockdep warns that a deadlock can happen:
	write_lock(map_lock)
	<interrupt>
	spin_lock_irqsave(queue_lock)
	q->request_fn() == dm_request_fn()
		=> dm_get_table()
			=> read_lock(map_lock)

Currently there is no code path in request-based dm where
q->request_fn() is called from interrupt context, so this deadlock
cannot actually happen.  But the warning messages confuse users, so
prevent them by disabling interrupts while taking map_lock.
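
For illustration only (this snippet is not part of the patch, and every
name in it is a made-up stand-in): a minimal sketch of the lock nesting
that lockdep reacts to.  example_queue_lock plays the role of
q->queue_lock, example_map_lock plays md->map_lock,
example_dispatch()/example_request_fn() stand in for the request
dispatch path, and example_bind() stands in for __bind().

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_queue_lock);	/* role of q->queue_lock */
	static DEFINE_RWLOCK(example_map_lock);		/* role of md->map_lock  */

	/* Path 1: the request function runs under queue_lock with irqs disabled. */
	static void example_request_fn(void)
	{
		read_lock(&example_map_lock);	/* map_lock nests inside an irq-safe lock */
		/* ... look up the mapped table ... */
		read_unlock(&example_map_lock);
	}

	static void example_dispatch(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&example_queue_lock, flags);
		example_request_fn();
		spin_unlock_irqrestore(&example_queue_lock, flags);
	}

	/* Path 2: table binding takes map_lock for write with irqs still enabled. */
	static void example_bind(void)
	{
		write_lock(&example_map_lock);	/* irq-unsafe acquisition of the same lock */
		/* ... install a new table ... */
		write_unlock(&example_map_lock);
	}

Because queue_lock can also be taken in interrupt context, lockdep
concludes that an interrupt arriving while example_bind() holds
map_lock for write could run path 1 and block on the read_lock(), i.e.
a potential deadlock.  The patch below silences that report by making
every map_lock acquisition use the _irqsave/_irqrestore variants, so
map_lock is always taken with interrupts disabled.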
Signed-off-by: Kiyoshi Ueda
Signed-off-by: Jun'ichi Nomura
Acked-by: Christof Schmitt
Acked-by: Hannes Reinecke
Cc: Alasdair G Kergon
---
 drivers/md/dm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

Index: linux-2.6-block/drivers/md/dm.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm.c
+++ linux-2.6-block/drivers/md/dm.c
@@ -508,12 +508,13 @@ static void queue_io(struct mapped_devic
 struct dm_table *dm_get_table(struct mapped_device *md)
 {
 	struct dm_table *t;
+	unsigned long flags;
 
-	read_lock(&md->map_lock);
+	read_lock_irqsave(&md->map_lock, flags);
 	t = md->map;
 	if (t)
 		dm_table_get(t);
-	read_unlock(&md->map_lock);
+	read_unlock_irqrestore(&md->map_lock, flags);
 
 	return t;
 }
@@ -1960,6 +1961,7 @@ static int __bind(struct mapped_device *
 {
 	struct request_queue *q = md->queue;
 	sector_t size;
+	unsigned long flags;
 
 	size = dm_table_get_size(t);
 
@@ -1990,10 +1992,10 @@ static int __bind(struct mapped_device *
 
 	__bind_mempools(md, t);
 
-	write_lock(&md->map_lock);
+	write_lock_irqsave(&md->map_lock, flags);
 	md->map = t;
 	dm_table_set_restrictions(t, q);
-	write_unlock(&md->map_lock);
+	write_unlock_irqrestore(&md->map_lock, flags);
 
 	return 0;
 }
@@ -2001,14 +2003,15 @@ static int __bind(struct mapped_device *
 static void __unbind(struct mapped_device *md)
 {
 	struct dm_table *map = md->map;
+	unsigned long flags;
 
 	if (!map)
 		return;
 
 	dm_table_event_callback(map, NULL, NULL);
-	write_lock(&md->map_lock);
+	write_lock_irqsave(&md->map_lock, flags);
 	md->map = NULL;
-	write_unlock(&md->map_lock);
+	write_unlock_irqrestore(&md->map_lock, flags);
 
 	dm_table_destroy(map);
 }
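
For context only (not part of this change, and just a rough sketch of
how the helper is used elsewhere in dm.c): dm_get_table() hands back
the table with a reference taken while map_lock is held, and a caller
drops that reference with dm_table_put() when it is done, roughly:

	struct dm_table *map = dm_get_table(md);

	if (map) {
		/* ... use the table; the reference keeps it alive ... */
		dm_table_put(map);
	}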