From patchwork Wed Mar 18 08:54:36 2009
X-Patchwork-Submitter: Kiyoshi Ueda
X-Patchwork-Id: 12790
X-Patchwork-Delegate: agk@redhat.com
Message-ID: <49C0B6CC.2090005@ct.jp.nec.com>
Date: Wed, 18 Mar 2009 17:54:36 +0900
From: Kiyoshi Ueda
To: Alasdair Kergon
Cc: Christof Schmitt, device-mapper development
Subject: [dm-devel] [PATCH 4/8] dm core: disable interrupt when taking map_lock
In-Reply-To: <49C0B474.4000204@ct.jp.nec.com>
References: <49C0B474.4000204@ct.jp.nec.com>

This patch disables interrupts when taking map_lock, to avoid needless
lockdep warnings in request-based dm.

Request-based dm takes map_lock after taking queue_lock, with interrupts
disabled:

    spin_lock_irqsave(queue_lock)
    q->request_fn() == dm_request_fn()
        => dm_get_table()
            => read_lock(map_lock)

while queue_lock could also be taken in interrupt context.
So lockdep warns that the following deadlock could happen:

    write_lock(map_lock)
    <interrupt>
    spin_lock_irqsave(queue_lock)
    q->request_fn() == dm_request_fn()
        => dm_get_table()
            => read_lock(map_lock)

Currently there is no code path in request-based dm where q->request_fn()
is actually called from interrupt context, so this deadlock cannot happen.
But the warning messages confuse users, so prevent them by disabling
interrupts when taking map_lock.
Signed-off-by: Kiyoshi Ueda
Signed-off-by: Jun'ichi Nomura
Cc: Christof Schmitt
Cc: Alasdair G Kergon
---
 drivers/md/dm.c |   15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

Index: 2.6.29-rc8/drivers/md/dm.c
===================================================================
--- 2.6.29-rc8.orig/drivers/md/dm.c
+++ 2.6.29-rc8/drivers/md/dm.c
@@ -511,12 +511,13 @@ static int queue_io(struct mapped_device
 struct dm_table *dm_get_table(struct mapped_device *md)
 {
 	struct dm_table *t;
+	unsigned long flags;
 
-	read_lock(&md->map_lock);
+	read_lock_irqsave(&md->map_lock, flags);
 	t = md->map;
 	if (t)
 		dm_table_get(t);
-	read_unlock(&md->map_lock);
+	read_unlock_irqrestore(&md->map_lock, flags);
 
 	return t;
 }
@@ -1929,6 +1930,7 @@ static int __bind(struct mapped_device *
 {
 	struct request_queue *q = md->queue;
 	sector_t size;
+	unsigned long flags;
 
 	size = dm_table_get_size(t);
 
@@ -1960,10 +1962,10 @@ static int __bind(struct mapped_device *
 
 	__bind_mempools(md, t);
 
-	write_lock(&md->map_lock);
+	write_lock_irqsave(&md->map_lock, flags);
 	md->map = t;
 	dm_table_set_restrictions(t, q);
-	write_unlock(&md->map_lock);
+	write_unlock_irqrestore(&md->map_lock, flags);
 
 	return 0;
 }
@@ -1971,14 +1973,15 @@ static int __bind(struct mapped_device *
 static void __unbind(struct mapped_device *md)
 {
 	struct dm_table *map = md->map;
+	unsigned long flags;
 
 	if (!map)
 		return;
 
 	dm_table_event_callback(map, NULL, NULL);
-	write_lock(&md->map_lock);
+	write_lock_irqsave(&md->map_lock, flags);
 	md->map = NULL;
-	write_unlock(&md->map_lock);
+	write_unlock_irqrestore(&md->map_lock, flags);
 
 	dm_table_destroy(map);
 }
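
For illustration only, here is a minimal, self-contained sketch of the locking
pattern the patch adopts. The names struct my_dev, my_dev_init(), my_get_map()
and my_set_map() are hypothetical (they are not in dm.c); they merely mirror how
dm_get_table(), __bind() and __unbind() use md->map_lock after this change, with
every taker of the rwlock using the _irqsave/_irqrestore variants so the lock is
never held with interrupts enabled.

/* Hypothetical illustration, not part of dm.c. */
#include <linux/spinlock.h>

struct my_dev {
	rwlock_t map_lock;	/* guards ->map, like md->map_lock */
	void *map;
};

static void my_dev_init(struct my_dev *d)
{
	rwlock_init(&d->map_lock);
	d->map = NULL;
}

/* Reader side; in dm this runs under queue_lock with interrupts off. */
static void *my_get_map(struct my_dev *d)
{
	unsigned long flags;
	void *t;

	read_lock_irqsave(&d->map_lock, flags);
	t = d->map;
	read_unlock_irqrestore(&d->map_lock, flags);

	return t;
}

/* Writer side; disabling interrupts here is what removes the lockdep report. */
static void my_set_map(struct my_dev *d, void *new_map)
{
	unsigned long flags;

	write_lock_irqsave(&d->map_lock, flags);
	d->map = new_map;
	write_unlock_irqrestore(&d->map_lock, flags);
}

Using the _irqsave variants on the read side as well keeps map_lock's interrupt
state uniform across all callers, which is the simplest way to keep lockdep
quiet without auditing every call path.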