
btrfs: allow compression even if the range is not page aligned

Message ID 05299dac9e4379aff3f24986b4ca3fafb04d5d6a.1726527472.git.wqu@suse.com (mailing list archive)

Commit Message

Qu Wenruo Sept. 16, 2024, 10:58 p.m. UTC
Previously, for btrfs with a sector size smaller than the page size (subpage),
we only allowed compression if the range was fully page aligned.

This was a workaround for the asynchronous submission of compressed ranges,
which delays the page unlock and writeback to a workqueue; furthermore,
asynchronous submission can lock multiple sector ranges across page
boundaries.

Such asynchronous submission makes it very hard to co-operate with other
regular writes.

With the recent changes to the subpage folio unlock path, asynchronous
submission of compressed pages can now co-operate with regular
submission, so enable sector perfect compression for experimental
builds.

The ETA for moving this feature out of experimental is 6.15, and I hope
all remaining corner cases can be exposed before that.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
This is the final enablement patch for sector perfect compression
support. (Previously compression was only page perfect, meaning it was
only block perfect when sector size == page size.)

This patch relies on all the previous various small fixes (some are
already merged into the for-next branch); the recommended (and local)
sequence of the unmerged fixes is (in git log order):

 btrfs: allow compression even if the range is not page aligned
 btrfs: mark all dirty sectors as locked inside writepage_delalloc()
 btrfs: move the delalloc range bitmap search into extent_io.c
 btrfs: do not assume the full page range is not dirty in extent_writepage_io()
 btrfs: make extent_range_clear_dirty_for_io() to handle sector size < page size cases
 btrfs: wait for writeback if sector size is smaller than page size
 btrfs: compression: add an ASSERT() to ensure the read-in length is sane
 btrfs: zstd: make the compression path to handle sector size < page size
 btrfs: zlib: make the compression path to handle sector size < page size

And the following patches already in for-next are also needed:

 btrfs: split out CONFIG_BTRFS_EXPERIMENTAL from CONFIG_BTRFS_DEBUG
 btrfs: only unlock the to-be-submitted ranges inside a folio
 btrfs: merge btrfs_folio_unlock_writer() into btrfs_folio_end_writer_lock()
 btrfs: make compression path to be subpage compatible
 btrfs: merge btrfs_orig_bbio_end_io() into btrfs_bio_end_io()
---
 fs/btrfs/inode.c | 41 +++++++----------------------------------
 1 file changed, 7 insertions(+), 34 deletions(-)

Patch

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d53b9e4ca13e..fbb303f7ccaa 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -832,32 +832,16 @@  static inline int inode_need_compress(struct btrfs_inode *inode, u64 start,
 		return 0;
 	}
 	/*
-	 * Special check for subpage.
+	 * Only enable sector perfect compression for experimental builds.
 	 *
-	 * We lock the full page then run each delalloc range in the page, thus
-	 * for the following case, we will hit some subpage specific corner case:
+	 * This is a big feature change for subpage cases, and can hit
+	 * different corner cases, so only limit this feature for
+	 * experimental build for now.
 	 *
-	 * 0		32K		64K
-	 * |	|///////|	|///////|
-	 *		\- A		\- B
-	 *
-	 * In above case, both range A and range B will try to unlock the full
-	 * page [0, 64K), causing the one finished later will have page
-	 * unlocked already, triggering various page lock requirement BUG_ON()s.
-	 *
-	 * So here we add an artificial limit that subpage compression can only
-	 * if the range is fully page aligned.
-	 *
-	 * In theory we only need to ensure the first page is fully covered, but
-	 * the tailing partial page will be locked until the full compression
-	 * finishes, delaying the write of other range.
-	 *
-	 * TODO: Make btrfs_run_delalloc_range() to lock all delalloc range
-	 * first to prevent any submitted async extent to unlock the full page.
-	 * By this, we can ensure for subpage case that only the last async_cow
-	 * will unlock the full page.
+	 * ETA for moving this out of experimental builds is 6.15.
 	 */
-	if (fs_info->sectorsize < PAGE_SIZE) {
+	if (fs_info->sectorsize < PAGE_SIZE &&
+	    !IS_ENABLED(CONFIG_BTRFS_EXPERIMENTAL)) {
 		if (!PAGE_ALIGNED(start) ||
 		    !PAGE_ALIGNED(end + 1))
 			return 0;
@@ -1002,17 +986,6 @@  static void compress_file_range(struct btrfs_work *work)
 	   (start > 0 || end + 1 < inode->disk_i_size))
 		goto cleanup_and_bail_uncompressed;
 
-	/*
-	 * For subpage case, we require full page alignment for the sector
-	 * aligned range.
-	 * Thus we must also check against @actual_end, not just @end.
-	 */
-	if (blocksize < PAGE_SIZE) {
-		if (!PAGE_ALIGNED(start) ||
-		    !PAGE_ALIGNED(round_up(actual_end, blocksize)))
-			goto cleanup_and_bail_uncompressed;
-	}
-
 	total_compressed = min_t(unsigned long, total_compressed,
 			BTRFS_MAX_UNCOMPRESSED);
 	total_in = 0;