From patchwork Fri Apr 18 15:54:58 2025
Date: Fri, 18 Apr 2025 08:54:58 -0700
From: "Darrick J. Wong"
To: Jens Axboe
Cc: Christoph Hellwig, Shinichiro Kawasaki, Luis Chamberlain,
 Matthew Wilcox, linux-block, linux-fsdevel, xfs
Subject: [PATCH 1/2] block: fix race between set_blocksize and read paths
Message-ID: <20250418155458.GR25675@frogsfrogsfrogs>

From: Darrick J. Wong

With the new large sector size support, set_blocksize can now change
i_blkbits and the folio order in a manner that conflicts with a
concurrent reader and causes a kernel crash.

Specifically, let's say that udev-worker calls libblkid to detect the
labels on a block device.  The read call can create an order-0 folio to
read the first 4096 bytes from the disk.  But then udev is preempted.

Next, someone tries to mount an 8k-sectorsize filesystem from the same
block device.  The filesystem calls set_blocksize, which sets i_blkbits
to 13 (a block size of 8192) and the minimum folio order to 1.  Now
udev resumes, still holding the order-0 folio it allocated.  It then
tries to schedule a read bio, and do_mpage_readahead tries to create
bufferheads for the folio.  Unfortunately, blocks_per_folio == 0
because the page size is 4096 but the blocksize is 8192, so no
bufferheads are attached and the bh walk never sets bdev.  We then
submit the bio with a NULL block device and crash.
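For intuition, the fatal arithmetic reduces to a single right-shift.
Here is a minimal, runnable userspace sketch of that calculation; the
variable names are illustrative and only approximate the buffer-head
readahead math, not the exact fs/mpage.c source:

	#include <stdio.h>

	int main(void)
	{
		unsigned int blkbits = 13;        /* i_blkbits after an 8192-byte set_blocksize */
		unsigned long folio_bytes = 4096; /* the stale order-0 folio the reader holds */

		/* blocks-per-folio computation in the spirit of the mpage code */
		unsigned int blocks_per_folio = folio_bytes >> blkbits;

		/* prints 0: no blocks, no bufferheads, bio->bi_bdev stays NULL */
		printf("blocks_per_folio = %u\n", blocks_per_folio);
		return 0;
	}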
Therefore, truncate the page cache after flushing but before updating
i_blkbits.  However, that's not enough -- we also need to lock out file
IO and page faults during the update.  Take both the i_rwsem and the
invalidate_lock in exclusive mode for invalidations, and in shared mode
for read/write operations.

I don't know if this is the correct fix, but xfs/259 found it.

Signed-off-by: "Darrick J. Wong"
Reviewed-by: Luis Chamberlain
---
 block/bdev.c      | 17 +++++++++++++++++
 block/blk-zoned.c |  5 ++++-
 block/fops.c      | 16 ++++++++++++++++
 block/ioctl.c     |  6 ++++++
 4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/block/bdev.c b/block/bdev.c
index 7b4e35a661b0c9..1313ad256593c5 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -169,11 +169,28 @@ int set_blocksize(struct file *file, int size)
 
 	/* Don't change the size if it is same as current */
 	if (inode->i_blkbits != blksize_bits(size)) {
+		/*
+		 * Flush and truncate the pagecache before we reconfigure the
+		 * mapping geometry because folio sizes are variable now.  If a
+		 * reader has already allocated a folio whose size is smaller
+		 * than the new min_order but invokes readahead after the new
+		 * min_order becomes visible, readahead will think there are
+		 * "zero" blocks per folio and crash.  Take the inode and
+		 * invalidation locks to avoid racing with
+		 * read/write/fallocate.
+		 */
+		inode_lock(inode);
+		filemap_invalidate_lock(inode->i_mapping);
+
 		sync_blockdev(bdev);
+		kill_bdev(bdev);
+
 		inode->i_blkbits = blksize_bits(size);
 		mapping_set_folio_order_range(inode->i_mapping,
 				get_order(size), get_order(size));
 		kill_bdev(bdev);
+		filemap_invalidate_unlock(inode->i_mapping);
+		inode_unlock(inode);
 	}
 	return 0;
 }
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 0c77244a35c92e..8f15d1aa6eb89a 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -343,6 +343,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
 		op = REQ_OP_ZONE_RESET;
 
 		/* Invalidate the page cache, including dirty pages. */
+		inode_lock(bdev->bd_mapping->host);
 		filemap_invalidate_lock(bdev->bd_mapping);
 		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
 		if (ret)
@@ -364,8 +365,10 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
 	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors);
 
 fail:
-	if (cmd == BLKRESETZONE)
+	if (cmd == BLKRESETZONE) {
 		filemap_invalidate_unlock(bdev->bd_mapping);
+		inode_unlock(bdev->bd_mapping->host);
+	}
 
 	return ret;
 }
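Condensed, the locking protocol this patch establishes looks like the
sketch below.  The hunks above take both locks exclusively around
invalidation; the block/fops.c hunks that follow take i_rwsem in shared
mode around buffered reads and writes (the page-cache fill and fault
paths already take the invalidate_lock internally).  This is an
ordering sketch built from real kernel primitives, not compilable code:

	/* geometry changers: set_blocksize, discard/zeroout/reset-zone ioctls */
	inode_lock(inode);                          /* i_rwsem, exclusive */
	filemap_invalidate_lock(inode->i_mapping);  /* invalidate_lock, exclusive */
	/* ... flush, truncate pagecache, update i_blkbits / folio order ... */
	filemap_invalidate_unlock(inode->i_mapping);
	inode_unlock(inode);

	/* data paths: blkdev_read_iter, blkdev_buffered_write */
	inode_lock_shared(inode);                   /* i_rwsem, shared */
	/* ... filemap_read() / blkdev_buffered_write() ... */
	inode_unlock_shared(inode);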
diff --git a/block/fops.c b/block/fops.c
index be9f1dbea9ce0a..e221fdcaa8aaf8 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -746,7 +746,14 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
 		ret = direct_write_fallback(iocb, from, ret,
 				blkdev_buffered_write(iocb, from));
 	} else {
+		/*
+		 * Take i_rwsem and invalidate_lock to avoid racing with
+		 * set_blocksize changing i_blkbits/folio order and punching
+		 * out the pagecache.
+		 */
+		inode_lock_shared(bd_inode);
 		ret = blkdev_buffered_write(iocb, from);
+		inode_unlock_shared(bd_inode);
 	}
 
 	if (ret > 0)
@@ -757,6 +764,7 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
 
 static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
+	struct inode *bd_inode = bdev_file_inode(iocb->ki_filp);
 	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
 	loff_t size = bdev_nr_bytes(bdev);
 	loff_t pos = iocb->ki_pos;
@@ -793,7 +801,13 @@ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 		goto reexpand;
 	}
 
+	/*
+	 * Take i_rwsem and invalidate_lock to avoid racing with set_blocksize
+	 * changing i_blkbits/folio order and punching out the pagecache.
+	 */
+	inode_lock_shared(bd_inode);
 	ret = filemap_read(iocb, to, ret);
+	inode_unlock_shared(bd_inode);
 
 reexpand:
 	if (unlikely(shorted))
@@ -836,6 +850,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 	if ((start | len) & (bdev_logical_block_size(bdev) - 1))
 		return -EINVAL;
 
+	inode_lock(inode);
 	filemap_invalidate_lock(inode->i_mapping);
 
 	/*
@@ -868,6 +883,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 
  fail:
 	filemap_invalidate_unlock(inode->i_mapping);
+	inode_unlock(inode);
 	return error;
 }
diff --git a/block/ioctl.c b/block/ioctl.c
index faa40f383e2736..e472cc1030c60c 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -142,6 +142,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
 	if (err)
 		return err;
 
+	inode_lock(bdev->bd_mapping->host);
 	filemap_invalidate_lock(bdev->bd_mapping);
 	err = truncate_bdev_range(bdev, mode, start, start + len - 1);
 	if (err)
@@ -174,6 +175,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
 	blk_finish_plug(&plug);
 fail:
 	filemap_invalidate_unlock(bdev->bd_mapping);
+	inode_unlock(bdev->bd_mapping->host);
 	return err;
 }
 
@@ -199,12 +201,14 @@ static int blk_ioctl_secure_erase(struct block_device *bdev, blk_mode_t mode,
 	    end > bdev_nr_bytes(bdev))
 		return -EINVAL;
 
+	inode_lock(bdev->bd_mapping->host);
 	filemap_invalidate_lock(bdev->bd_mapping);
 	err = truncate_bdev_range(bdev, mode, start, end - 1);
 	if (!err)
 		err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9,
 						GFP_KERNEL);
 	filemap_invalidate_unlock(bdev->bd_mapping);
+	inode_unlock(bdev->bd_mapping->host);
 	return err;
 }
 
@@ -236,6 +240,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, blk_mode_t mode,
 		return -EINVAL;
 
 	/* Invalidate the page cache, including dirty pages */
+	inode_lock(bdev->bd_mapping->host);
 	filemap_invalidate_lock(bdev->bd_mapping);
 	err = truncate_bdev_range(bdev, mode, start, end);
 	if (err)
@@ -246,6 +251,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, blk_mode_t mode,
 
 fail:
 	filemap_invalidate_unlock(bdev->bd_mapping);
+	inode_unlock(bdev->bd_mapping->host);
 	return err;
 }
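For testing intuition, here is a hedged userspace reproducer sketch of
the race this patch closes.  It is only in the spirit of what xfs/259
exercises, not the actual fstest; /dev/sdX stands for a disposable
scratch device, and BLKBSZSET requires root:

	/* gcc -O2 -pthread -o bszrace bszrace.c && ./bszrace /dev/sdX */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <linux/fs.h>
	#include <pthread.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	static int fd;

	static void *reader(void *arg)
	{
		char buf[4096];

		(void)arg;
		for (;;) {
			/* drop the cached folio so every pass refills the page cache */
			posix_fadvise(fd, 0, sizeof(buf), POSIX_FADV_DONTNEED);
			pread(fd, buf, sizeof(buf), 0); /* libblkid-style label scan */
		}
		return NULL;
	}

	int main(int argc, char **argv)
	{
		int sizes[] = { 4096, 8192 };
		pthread_t t;

		if (argc != 2)
			return 1;
		fd = open(argv[1], O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		pthread_create(&t, NULL, reader, NULL);
		for (int i = 0; ; i++)	/* flip the block size forever */
			ioctl(fd, BLKBSZSET, &sizes[i & 1]);
		return 0;
	}
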
Wong" X-Patchwork-Id: 14057372 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B91CA2AE66; Fri, 18 Apr 2025 15:58:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744991885; cv=none; b=Rdoh4JFQv06k0ytrSnoPnWxHRqikXzqUBq/r4STfqHQPv0PL4uWIAuTGTT0idGUd9Ywiwu+qXQzG+G6YzIQjZ3nx3KT1NuS4WWI8WUdqf9sWdXrwPyFT3zz0YKMzVwL/EwEd1q2ZTRZoCTDLmCpMLdvz14GMMPBh+XFvqRjHr2g= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744991885; c=relaxed/simple; bh=V3YS3WTbPEFUUdpSo94OwF/4mnYhwt/NdwLCN+EsU50=; h=Date:From:To:Cc:Subject:Message-ID:References:MIME-Version: Content-Type:Content-Disposition:In-Reply-To; b=YmGJ4v81/1STANrH+g2pJxLD+jL4G2R/DI/+kfQs8CSM6DGWc7XRS1xqr7vywzlu0naNGx5hy8QaUq38nKFfNRbECMy1awRTcpSSS51rQWUlE8NCUmtmtPml5bFeMJ7BtMXH+E7xGAlAU08fxC5tPwUXNiJ2K1SkiTGVeUaeItk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=p1OCrIpr; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="p1OCrIpr" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2C7EDC4CEE2; Fri, 18 Apr 2025 15:58:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1744991885; bh=V3YS3WTbPEFUUdpSo94OwF/4mnYhwt/NdwLCN+EsU50=; h=Date:From:To:Cc:Subject:References:In-Reply-To:From; b=p1OCrIprZnzDkTNuxMLGkaPmhnASOwP0nXd2ZoMJ3Zq6s7QpDdrWRB0Le8bSOv5t/ bRygayAeB1XUg5J1dqieuBh/aI+EpiywvCM4tJV3sFuCcsGMmwZJKPXfXdbCXXwvTg pXUsTf/lJsfx0pHbGK4nCmNgkNSBSo0HuwS315xeRGrqPFQ8KWHUfMT9u9Dg1LLpvR jHGkG/pDLl1psrIgz5tF7kq/Q6IQv6gVWvdgXRImaaThDyTOtRRUxVdL8yz6hjSnjF qjVvDEef4Ca1k91qTD6+h0lsxGGEMzTATcGDzonCARXzUnM8t9wXIsst7QHp5oGDiX bP1Sc4Iky1iXQ== Date: Fri, 18 Apr 2025 08:58:04 -0700 From: "Darrick J. Wong" To: Carlos Maiolino Cc: Jens Axboe , Christoph Hellwig , Shinichiro Kawasaki , Luis Chamberlain , Matthew Wilcox , linux-block , linux-fsdevel , xfs Subject: [PATCH 2/2] xfs: stop using set_blocksize Message-ID: <20250418155804.GS25675@frogsfrogsfrogs> References: <20250418155458.GR25675@frogsfrogsfrogs> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20250418155458.GR25675@frogsfrogsfrogs> From: Darrick J. Wong XFS has its own buffer cache for metadata that uses submit_bio, which means that it no longer uses the block device pagecache for anything. Create a more lightweight helper that runs the blocksize checks and flushes dirty data and use that instead. No more truncating the pagecache because why would XFS care? Signed-off-by: "Darrick J. 
Wong" Reviewed-by: Luis Chamberlain --- include/linux/blkdev.h | 1 + block/bdev.c | 33 +++++++++++++++++++++++++++------ fs/xfs/xfs_buf.c | 15 +++++++++++---- 3 files changed, 39 insertions(+), 10 deletions(-) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index f442639dfae224..df6df616740371 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1618,6 +1618,7 @@ static inline void bio_end_io_acct(struct bio *bio, unsigned long start_time) return bio_end_io_acct_remapped(bio, start_time, bio->bi_bdev); } +int bdev_validate_blocksize(struct block_device *bdev, int block_size); int set_blocksize(struct file *file, int size); int lookup_bdev(const char *pathname, dev_t *dev); diff --git a/block/bdev.c b/block/bdev.c index 1313ad256593c5..0196b62007d343 100644 --- a/block/bdev.c +++ b/block/bdev.c @@ -152,17 +152,38 @@ static void set_init_blocksize(struct block_device *bdev) get_order(bsize), get_order(bsize)); } +/** + * bdev_validate_blocksize - check that this block size is acceptable + * @bdev: blockdevice to check + * @block_size: block size to check + * + * For block device users that do not use buffer heads or the block device + * page cache, make sure that this block size can be used with the device. + * + * Return: On success zero is returned, negative error code on failure. + */ +int bdev_validate_blocksize(struct block_device *bdev, int block_size) +{ + if (blk_validate_block_size(block_size)) + return -EINVAL; + + /* Size cannot be smaller than the size supported by the device */ + if (block_size < bdev_logical_block_size(bdev)) + return -EINVAL; + + return 0; +} +EXPORT_SYMBOL_GPL(bdev_validate_blocksize); + int set_blocksize(struct file *file, int size) { struct inode *inode = file->f_mapping->host; struct block_device *bdev = I_BDEV(inode); + int ret; - if (blk_validate_block_size(size)) - return -EINVAL; - - /* Size cannot be smaller than the size supported by the device */ - if (size < bdev_logical_block_size(bdev)) - return -EINVAL; + ret = bdev_validate_blocksize(bdev, size); + if (ret) + return ret; if (!file->private_data) return -EINVAL; diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 8e7f1b324b3bea..0b4bd16cb568c8 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -1718,18 +1718,25 @@ xfs_setsize_buftarg( struct xfs_buftarg *btp, unsigned int sectorsize) { + int error; + /* Set up metadata sector size info */ btp->bt_meta_sectorsize = sectorsize; btp->bt_meta_sectormask = sectorsize - 1; - if (set_blocksize(btp->bt_bdev_file, sectorsize)) { + error = bdev_validate_blocksize(btp->bt_bdev, sectorsize); + if (error) { xfs_warn(btp->bt_mount, - "Cannot set_blocksize to %u on device %pg", - sectorsize, btp->bt_bdev); + "Cannot use blocksize %u on device %pg, err %d", + sectorsize, btp->bt_bdev, error); return -EINVAL; } - return 0; + /* + * Flush the block device pagecache so our bios see anything dirtied + * before mount. + */ + return sync_blockdev(btp->bt_bdev); } int