From patchwork Mon Nov 16 14:57:58 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11909479
From: Christoph Hellwig
To: Jens Axboe
Cc: Justin Sanders, Josef Bacik, Ilya Dryomov, Jack Wang,
	"Michael S. Tsirkin", Jason Wang, Paolo Bonzini, Stefan Hajnoczi,
	Konrad Rzeszutek Wilk, Roger Pau Monné, Minchan Kim, Mike Snitzer,
	Song Liu, "Martin K. Petersen", dm-devel@redhat.com,
	linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
	nbd@other.debian.org, ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 67/78] block: simplify the block device claiming interface
Date: Mon, 16 Nov 2020 15:57:58 +0100
Message-Id: <20201116145809.410558-68-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>

Stop passing the whole device as a separate argument, given that it can
be trivially deduced.

Signed-off-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
---
 drivers/block/loop.c   | 12 +++-----
 fs/block_dev.c         | 69 +++++++++++++++++++-----------------------
 include/linux/blkdev.h |  6 ++--
 3 files changed, 38 insertions(+), 49 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index b42c728620c9e4..599e94a7e69259 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1071,7 +1071,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	struct file *file;
 	struct inode *inode;
 	struct address_space *mapping;
-	struct block_device *claimed_bdev = NULL;
 	int error;
 	loff_t size;
 	bool partscan;
@@ -1090,8 +1089,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	 * here to avoid changing device under exclusive owner.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev->bd_contains;
-		error = bd_prepare_to_claim(bdev, claimed_bdev, loop_configure);
+		error = bd_prepare_to_claim(bdev, loop_configure);
 		if (error)
 			goto out_putf;
 	}
@@ -1178,15 +1176,15 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	mutex_unlock(&loop_ctl_mutex);
 	if (partscan)
 		loop_reread_partitions(lo, bdev);
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, loop_configure);
 	return 0;
 
 out_unlock:
 	mutex_unlock(&loop_ctl_mutex);
 out_bdev:
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, loop_configure);
 out_putf:
 	fput(file);
 out:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index f36788d7699302..fd4df132a97590 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -110,24 +110,20 @@ EXPORT_SYMBOL(invalidate_bdev);
 int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
 			loff_t lstart, loff_t lend)
 {
-	struct block_device *claimed_bdev = NULL;
-	int err;
-
 	/*
 	 * If we don't hold exclusive handle for the device, upgrade to it
 	 * while we discard the buffer cache to avoid discarding buffers
 	 * under live filesystem.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev->bd_contains;
-		err = bd_prepare_to_claim(bdev, claimed_bdev,
-					  truncate_bdev_range);
+		int err = bd_prepare_to_claim(bdev, truncate_bdev_range);
 		if (err)
 			return err;
 	}
+
 	truncate_inode_pages_range(bdev->bd_inode->i_mapping, lstart, lend);
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, truncate_bdev_range);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, truncate_bdev_range);
 	return 0;
 }
 EXPORT_SYMBOL(truncate_bdev_range);
@@ -1055,7 +1051,6 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
 /**
  * bd_prepare_to_claim - claim a block device
  * @bdev: block device of interest
- * @whole: the whole device containing @bdev, may equal @bdev
  * @holder: holder trying to claim @bdev
  *
  * Claim @bdev.  This function fails if @bdev is already claimed by another
@@ -1065,9 +1060,10 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
  * RETURNS:
  * 0 if @bdev can be claimed, -EBUSY otherwise.
  */
-int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
-		void *holder)
+int bd_prepare_to_claim(struct block_device *bdev, void *holder)
 {
+	struct block_device *whole = bdev->bd_contains;
+
 retry:
 	spin_lock(&bdev_lock);
 	/* if someone else claimed, fail */
@@ -1107,15 +1103,15 @@ static void bd_clear_claiming(struct block_device *whole, void *holder)
 /**
  * bd_finish_claiming - finish claiming of a block device
  * @bdev: block device of interest
- * @whole: whole block device
  * @holder: holder that has claimed @bdev
  *
  * Finish exclusive open of a block device. Mark the device as exlusively
  * open by the holder and wake up all waiters for exclusive open to finish.
  */
-static void bd_finish_claiming(struct block_device *bdev,
-		struct block_device *whole, void *holder)
+static void bd_finish_claiming(struct block_device *bdev, void *holder)
 {
+	struct block_device *whole = bdev->bd_contains;
+
 	spin_lock(&bdev_lock);
 	BUG_ON(!bd_may_claim(bdev, whole, holder));
 	/*
@@ -1140,11 +1136,10 @@ static void bd_finish_claiming(struct block_device *bdev,
  * also used when exclusive open is not actually desired and we just needed
  * to block other exclusive openers for a while.
  */
-void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
-		void *holder)
+void bd_abort_claiming(struct block_device *bdev, void *holder)
 {
 	spin_lock(&bdev_lock);
-	bd_clear_claiming(whole, holder);
+	bd_clear_claiming(bdev->bd_contains, holder);
 	spin_unlock(&bdev_lock);
 }
 EXPORT_SYMBOL(bd_abort_claiming);
@@ -1439,7 +1434,7 @@ static int bdev_get_gendisk(struct gendisk *disk)
 static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		int for_part)
 {
-	struct block_device *whole = NULL, *claiming = NULL;
+	struct block_device *whole = NULL;
 	struct gendisk *disk = bdev->bd_disk;
 	int ret;
 	bool first_open = false, unblock_events = true, need_restart;
@@ -1460,11 +1455,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 
 	if (!for_part && (mode & FMODE_EXCL)) {
 		WARN_ON_ONCE(!holder);
-		if (whole)
-			claiming = whole;
-		else
-			claiming = bdev;
-		ret = bd_prepare_to_claim(bdev, claiming, holder);
+		ret = bd_prepare_to_claim(bdev, holder);
 		if (ret)
 			goto out_put_whole;
 	}
@@ -1541,21 +1532,23 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		}
 	}
 	bdev->bd_openers++;
-	if (for_part)
+	if (for_part) {
 		bdev->bd_part_count++;
-	if (claiming)
-		bd_finish_claiming(bdev, claiming, holder);
+	} else if (mode & FMODE_EXCL) {
+		bd_finish_claiming(bdev, holder);
 
-	/*
-	 * Block event polling for write claims if requested. Any write holder
-	 * makes the write_holder state stick until all are released. This is
-	 * good enough and tracking individual writeable reference is too
-	 * fragile given the way @mode is used in blkdev_get/put().
-	 */
-	if (claiming && (mode & FMODE_WRITE) && !bdev->bd_write_holder &&
-	    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
-		bdev->bd_write_holder = true;
-		unblock_events = false;
+		/*
+		 * Block event polling for write claims if requested. Any write
+		 * holder makes the write_holder state stick until all are
+		 * released. This is good enough and tracking individual
+		 * writeable reference is too fragile given the way @mode is
+		 * used in blkdev_get/put().
+		 */
+		if ((mode & FMODE_WRITE) && !bdev->bd_write_holder &&
+		    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
+			bdev->bd_write_holder = true;
+			unblock_events = false;
+		}
 	}
 
 	mutex_unlock(&bdev->bd_mutex);
@@ -1576,8 +1569,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		__blkdev_put(bdev->bd_contains, mode, 1);
 		bdev->bd_contains = NULL;
 out_unlock_bdev:
-	if (claiming)
-		bd_abort_claiming(bdev, claiming, holder);
+	if (!for_part && (mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, holder);
 	mutex_unlock(&bdev->bd_mutex);
 	disk_unblock_events(disk);
 out_put_whole:
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 044d9dd159d882..696b2f9c5529d8 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1988,10 +1988,8 @@ void blkdev_show(struct seq_file *seqf, off_t offset);
 struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 		void *holder);
 struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder);
-int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
-		void *holder);
-void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
-		void *holder);
+int bd_prepare_to_claim(struct block_device *bdev, void *holder);
+void bd_abort_claiming(struct block_device *bdev, void *holder);
 void blkdev_put(struct block_device *bdev, fmode_t mode);
 struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
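
For readers skimming the series, the sketch below shows the claiming pattern
after this patch, mirroring what truncate_bdev_range() and loop_configure()
now do: upgrade to a temporary exclusive claim unless the device was already
opened with FMODE_EXCL. It is illustrative only and not part of the patch;
the helper name do_with_temporary_claim() and the work() callback are made
up for this example, and the old three-argument calls appear only in the
comment for contrast.

/*
 * Illustrative sketch only -- not part of this patch.
 *
 * Before this change a caller had to pass the whole device explicitly:
 *	bd_prepare_to_claim(bdev, bdev->bd_contains, holder);
 *	bd_abort_claiming(bdev, bdev->bd_contains, holder);
 * Now bd_prepare_to_claim()/bd_abort_claiming() derive it from
 * bdev->bd_contains internally.
 */
#include <linux/blkdev.h>
#include <linux/fs.h>

static int do_with_temporary_claim(struct block_device *bdev, fmode_t mode,
		void *holder, int (*work)(struct block_device *bdev))
{
	int ret;

	/* Upgrade to an exclusive claim if the open was not FMODE_EXCL. */
	if (!(mode & FMODE_EXCL)) {
		ret = bd_prepare_to_claim(bdev, holder);
		if (ret)
			return ret;
	}

	ret = work(bdev);

	/* Drop the temporary claim; an FMODE_EXCL open keeps its claim. */
	if (!(mode & FMODE_EXCL))
		bd_abort_claiming(bdev, holder);
	return ret;
}

The error handling matches the abort paths in loop_configure() above, where
a temporary claim is dropped with bd_abort_claiming(bdev, loop_configure)
whenever the claim cannot be finished.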