From patchwork Thu Sep 27 11:36:52 2018
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 10617831
From: Jan Kara
To: Jens Axboe
Cc: Tetsuo Handa, Jan Kara
Subject: [PATCH 14/14] loop: Fix deadlock when calling blkdev_reread_part()
Date: Thu, 27 Sep 2018 13:36:52 +0200
Message-Id: <20180927113652.5422-15-jack@suse.cz>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20180927113652.5422-1-jack@suse.cz>
References: <20180927113652.5422-1-jack@suse.cz>
X-Mailing-List: linux-block@vger.kernel.org

Calling blkdev_reread_part() under loop_ctl_mutex causes lockdep to complain
about a circular lock dependency between bdev->bd_mutex and lo->lo_ctl_mutex.
The problem is that on loop device open or close, lo_open() and lo_release()
get called with bdev->bd_mutex held and need to acquire loop_ctl_mutex. OTOH,
when loop_reread_partitions() is called with loop_ctl_mutex held, it calls
blkdev_reread_part(), which acquires bdev->bd_mutex. See the syzbot report for
details [1].

Move the call to blkdev_reread_part() in __loop_clr_fd() out from under
loop_ctl_mutex to finish fixing the lockdep warning and the possible deadlock.
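To make the inverted lock ordering easier to see, here is a minimal userspace
sketch (plain pthreads, not kernel code; the mutex and function names only
mirror the ones discussed above). One thread models the open/release path,
which takes bd_mutex before loop_ctl_mutex; the other models the
partition-rescan path, which takes the two locks in the opposite order. Run
together they can deadlock, which is the cycle lockdep reports.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for the two locks involved in the reported cycle. */
static pthread_mutex_t bd_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t loop_ctl_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Models lo_open()/lo_release(): entered with bd_mutex held by the block
 * layer, then takes loop_ctl_mutex. */
static void *open_release_path(void *arg)
{
	(void)arg;
	for (int i = 0; ; i++) {
		pthread_mutex_lock(&bd_mutex);
		pthread_mutex_lock(&loop_ctl_mutex);
		printf("open/release iteration %d\n", i);
		pthread_mutex_unlock(&loop_ctl_mutex);
		pthread_mutex_unlock(&bd_mutex);
	}
	return NULL;
}

/* Models loop_reread_partitions(): holds loop_ctl_mutex, then calls into
 * blkdev_reread_part(), which takes bd_mutex -- the opposite order. */
static void *reread_path(void *arg)
{
	(void)arg;
	for (int i = 0; ; i++) {
		pthread_mutex_lock(&loop_ctl_mutex);
		pthread_mutex_lock(&bd_mutex);
		printf("reread iteration %d\n", i);
		pthread_mutex_unlock(&bd_mutex);
		pthread_mutex_unlock(&loop_ctl_mutex);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, open_release_path, NULL);
	pthread_create(&b, NULL, reread_path, NULL);

	/* Give the threads a few seconds; the output normally stops almost
	 * immediately because each thread ends up waiting for the lock the
	 * other one already holds. */
	sleep(5);
	puts("exiting (worker threads are most likely deadlocked by now)");
	return 0;
}

Built with something like "gcc -pthread abba.c", the iteration messages
usually stop within a fraction of a second, mirroring the reported hang.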
[1] https://syzkaller.appspot.com/bug?id=bf154052f0eea4bc7712499e4569505907d1588

Reported-by: syzbot
Signed-off-by: Jan Kara
---
 drivers/block/loop.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index d2be85c48f03..a0fb7bf62b29 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1031,12 +1031,14 @@ loop_init_xfer(struct loop_device *lo, struct loop_func_table *xfer,
 	return err;
 }
 
-static int __loop_clr_fd(struct loop_device *lo)
+static int __loop_clr_fd(struct loop_device *lo, bool release)
 {
 	struct file *filp = NULL;
 	gfp_t gfp = lo->old_gfp_mask;
 	struct block_device *bdev = lo->lo_device;
 	int err = 0;
+	bool partscan = false;
+	int lo_number;
 
 	mutex_lock(&loop_ctl_mutex);
 	if (WARN_ON_ONCE(lo->lo_state != Lo_rundown)) {
@@ -1089,7 +1091,15 @@ static int __loop_clr_fd(struct loop_device *lo)
 	module_put(THIS_MODULE);
 	blk_mq_unfreeze_queue(lo->lo_queue);
 
-	if (lo->lo_flags & LO_FLAGS_PARTSCAN && bdev) {
+	partscan = lo->lo_flags & LO_FLAGS_PARTSCAN && bdev;
+	lo_number = lo->lo_number;
+	lo->lo_flags = 0;
+	if (!part_shift)
+		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
+	loop_unprepare_queue(lo);
+out_unlock:
+	mutex_unlock(&loop_ctl_mutex);
+	if (partscan) {
 		/*
 		 * bd_mutex has been held already in release path, so don't
 		 * acquire it if this function is called in such case.
@@ -1098,21 +1108,15 @@ static int __loop_clr_fd(struct loop_device *lo)
 		 * must be at least one and it can only become zero when the
 		 * current holder is released.
 		 */
-		if (!atomic_read(&lo->lo_refcnt))
+		if (release)
 			err = __blkdev_reread_part(bdev);
 		else
 			err = blkdev_reread_part(bdev);
 		pr_warn("%s: partition scan of loop%d failed (rc=%d)\n",
-			__func__, lo->lo_number, err);
+			__func__, lo_number, err);
 		/* Device is gone, no point in returning error */
 		err = 0;
 	}
-	lo->lo_flags = 0;
-	if (!part_shift)
-		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
-	loop_unprepare_queue(lo);
-out_unlock:
-	mutex_unlock(&loop_ctl_mutex);
 	/*
 	 * Need not hold loop_ctl_mutex to fput backing file.
 	 * Calling fput holding loop_ctl_mutex triggers a circular
@@ -1153,7 +1157,7 @@ static int loop_clr_fd(struct loop_device *lo)
 	lo->lo_state = Lo_rundown;
 	mutex_unlock(&loop_ctl_mutex);
 
-	return __loop_clr_fd(lo);
+	return __loop_clr_fd(lo, false);
 }
 
 static int
@@ -1714,7 +1718,7 @@ static void lo_release(struct gendisk *disk, fmode_t mode)
 		 * In autoclear mode, stop the loop thread
 		 * and remove configuration after last close.
 		 */
-		__loop_clr_fd(lo);
+		__loop_clr_fd(lo, true);
 		return;
 	} else if (lo->lo_state == Lo_bound) {
 		/*