From patchwork Thu Jun 10 05:09:08 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Cc: Johannes Thumshirn
Subject: [PATCH v4 01/10] btrfs: defrag: pass file_ra_state instead of file for btrfs_defrag_file()
Date: Thu, 10 Jun 2021 13:09:08 +0800
Message-Id: <20210610050917.105458-2-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

Currently btrfs_defrag_file() accepts both "struct inode" and "struct
file" as parameters, which is more confusing than it needs to be: we
can easily grab the "struct inode" from a "struct file" using the
file_inode() helper, and the only reason we need the "struct file" is
to reuse its f_ra.

Change this by passing a "struct file_ra_state" parameter instead, so
that it is clear what we really want.

Since we're here, also add some comments on btrfs_defrag_file() to
make life easier for later readers.
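For context, btrfs_defrag_file() is the kernel-side worker behind the
BTRFS_IOC_DEFRAG_RANGE ioctl. Below is a minimal, hedged userspace
sketch of driving that ioctl; BTRFS_IOC_DEFRAG_RANGE and struct
btrfs_ioctl_defrag_range_args come from the UAPI header
<linux/btrfs.h>, while the file path and the chosen values are
placeholders, not anything mandated by this series:

    /* Minimal sketch: ask btrfs to defrag one file range.
     * The path "/mnt/btrfs/file" is a placeholder. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    int main(void)
    {
        struct btrfs_ioctl_defrag_range_args range;
        int fd = open("/mnt/btrfs/file", O_RDWR);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(&range, 0, sizeof(range));
        range.start = 0;                  /* from file offset 0 ... */
        range.len = (__u64)-1;            /* ... to the end of file */
        range.extent_thresh = 256 * 1024; /* skip extents >= 256K */
        /* range.flags may set BTRFS_DEFRAG_RANGE_COMPRESS or
         * BTRFS_DEFRAG_RANGE_START_IO, handled further below. */
        if (ioctl(fd, BTRFS_IOC_DEFRAG_RANGE, &range) < 0)
            perror("BTRFS_IOC_DEFRAG_RANGE");
        return 0;
    }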
Signed-off-by: Qu Wenruo
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/ctree.h |  4 ++--
 fs/btrfs/ioctl.c | 27 ++++++++++++++++++---------
 2 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 45798e68331a..918df8877b45 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3212,9 +3212,9 @@ int btrfs_fileattr_set(struct user_namespace *mnt_userns,
 int btrfs_ioctl_get_supported_features(void __user *arg);
 void btrfs_sync_inode_flags_to_i_flags(struct inode *inode);
 int __pure btrfs_is_empty_uuid(u8 *uuid);
-int btrfs_defrag_file(struct inode *inode, struct file *file,
+int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
                      struct btrfs_ioctl_defrag_range_args *range,
-                     u64 newer_than, unsigned long max_pages);
+                     u64 newer_than, unsigned long max_to_defrag);
 void btrfs_get_block_group_info(struct list_head *groups_list,
                                struct btrfs_ioctl_space_info *space);
 void btrfs_update_ioctl_balance_args(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 11adf4670c55..05af6f5ff3ff 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1350,13 +1350,22 @@ static int cluster_pages_for_defrag(struct inode *inode,
 }
 
-int btrfs_defrag_file(struct inode *inode, struct file *file,
+/*
+ * Btrfs entrance for defrag.
+ *
+ * @inode:         Inode to be defragged
+ * @ra:            Readahead state. If NULL, one will be allocated at runtime.
+ * @range:         Defrag options including range and flags.
+ * @newer_than:    Minimal transid to defrag
+ * @max_to_defrag: Max number of sectors to be defragged, if 0, the whole inode
+ *                 will be defragged.
+ */
+int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
                      struct btrfs_ioctl_defrag_range_args *range,
                      u64 newer_than, unsigned long max_to_defrag)
 {
        struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
        struct btrfs_root *root = BTRFS_I(inode)->root;
-       struct file_ra_state *ra = NULL;
        unsigned long last_index;
        u64 isize = i_size_read(inode);
        u64 last_len = 0;
@@ -1374,6 +1383,7 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
        u64 new_align = ~((u64)SZ_128K - 1);
        struct page **pages = NULL;
        bool do_compress = range->flags & BTRFS_DEFRAG_RANGE_COMPRESS;
+       bool ra_allocated = false;
 
        if (isize == 0)
                return 0;
@@ -1392,16 +1402,15 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
                extent_thresh = SZ_256K;
 
        /*
-        * If we were not given a file, allocate a readahead context. As
+        * If we were not given a ra, allocate a readahead context. As
         * readahead is just an optimization, defrag will work without it so
         * we don't error out.
         */
-       if (!file) {
+       if (!ra) {
+               ra_allocated = true;
                ra = kzalloc(sizeof(*ra), GFP_KERNEL);
                if (ra)
                        file_ra_state_init(ra, inode->i_mapping);
-       } else {
-               ra = &file->f_ra;
        }
 
        pages = kmalloc_array(max_cluster, sizeof(struct page *), GFP_KERNEL);
@@ -1483,7 +1492,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
                        ra_index = max(i, ra_index);
                        if (ra)
                                page_cache_sync_readahead(inode->i_mapping, ra,
-                                               file, ra_index, cluster);
+                                               NULL, ra_index, cluster);
                        ra_index += cluster;
                }
 
@@ -1554,7 +1563,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
                BTRFS_I(inode)->defrag_compress = BTRFS_COMPRESS_NONE;
                btrfs_inode_unlock(inode, 0);
        }
-       if (!file)
+       if (ra_allocated)
                kfree(ra);
        kfree(pages);
        return ret;
@@ -3076,7 +3085,7 @@ static int btrfs_ioctl_defrag(struct file *file, void __user *argp)
                        /* the rest are all set to zero by kzalloc */
                        range->len = (u64)-1;
                }
-               ret = btrfs_defrag_file(file_inode(file), file,
+               ret = btrfs_defrag_file(file_inode(file), &file->f_ra,
                                        range, BTRFS_OLDEST_GENERATION, 0);
                if (ret > 0)
                        ret = 0;

From patchwork Thu Jun 10 05:09:09 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 02/10] btrfs: defrag: extract the page preparation code into one helper
Date: Thu, 10 Jun 2021 13:09:09 +0800
Message-Id: <20210610050917.105458-3-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

In cluster_pages_for_defrag(), we have a complex code block inside one
for() loop. The code block prepares one page for defrag and ensures
that:

- The page is locked and set up properly
- No ordered extent exists in the page range
- The page is uptodate
- The writeback has finished

This behavior is pretty common and will be reused by the later defrag
rework, so extract the code into its own helper,
defrag_prepare_one_page(), and clean up the code a little.

Since we're here, also make the page check subpage compatible, which
means we also check page::private, not only page->mapping.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 151 +++++++++++++++++++++++++++--------------------
 1 file changed, 86 insertions(+), 65 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 05af6f5ff3ff..a06ceb8fdf28 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1143,6 +1143,89 @@ static int should_defrag_range(struct inode *inode, u64 start, u32 thresh,
        return ret;
 }
 
+/*
+ * Prepare one page to be defragged.
+ *
+ * This will ensure:
+ * - Returned page is locked and has been set up properly
+ * - No ordered extent exists in the page
+ * - The page is uptodate
+ * - The writeback has finished
+ */
+static struct page *defrag_prepare_one_page(struct btrfs_inode *inode,
+                                           unsigned long index)
+{
+       struct address_space *mapping = inode->vfs_inode.i_mapping;
+       gfp_t mask = btrfs_alloc_write_mask(mapping);
+       u64 page_start = index << PAGE_SHIFT;
+       u64 page_end = page_start + PAGE_SIZE - 1;
+       struct extent_state *cached_state = NULL;
+       struct page *page;
+       int ret;
+
+again:
+       page = find_or_create_page(mapping, index, mask);
+       if (!page)
+               return ERR_PTR(-ENOMEM);
+
+       ret = set_page_extent_mapped(page);
+       if (ret < 0) {
+               unlock_page(page);
+               put_page(page);
+               return ERR_PTR(ret);
+       }
+
+       /* Wait for any existing ordered extent in the range */
+       while (1) {
+               struct btrfs_ordered_extent *ordered;
+
+               lock_extent_bits(&inode->io_tree, page_start, page_end,
+                                &cached_state);
+               ordered = btrfs_lookup_ordered_range(inode, page_start,
+                                                    PAGE_SIZE);
+               unlock_extent_cached(&inode->io_tree, page_start, page_end,
+                                    &cached_state);
+               if (!ordered)
+                       break;
+
+               unlock_page(page);
+               btrfs_start_ordered_extent(ordered, 1);
+               btrfs_put_ordered_extent(ordered);
+               lock_page(page);
+               /*
+                * We unlocked the page above, so we need to check if it was
+                * released or not.
+                */
+               if (page->mapping != mapping || !PagePrivate(page)) {
+                       unlock_page(page);
+                       put_page(page);
+                       goto again;
+               }
+       }
+
+       /*
+        * Now the page range has no ordered extent any more.
+        * Read the page to make it uptodate.
+        */
+       if (!PageUptodate(page)) {
+               btrfs_readpage(NULL, page);
+               lock_page(page);
+               if (page->mapping != mapping || !PagePrivate(page)) {
+                       unlock_page(page);
+                       put_page(page);
+                       goto again;
+               }
+               if (!PageUptodate(page)) {
+                       unlock_page(page);
+                       put_page(page);
+                       return ERR_PTR(-EIO);
+               }
+       }
+
+       wait_on_page_writeback(page);
+       return page;
+}
+
 /*
  * it doesn't do much good to defrag one or two pages
 * at a time.  This pulls in a nice chunk of pages
@@ -1170,11 +1253,8 @@ static int cluster_pages_for_defrag(struct inode *inode,
        int ret;
        int i;
        int i_done;
-       struct btrfs_ordered_extent *ordered;
        struct extent_state *cached_state = NULL;
-       struct extent_io_tree *tree;
        struct extent_changeset *data_reserved = NULL;
-       gfp_t mask = btrfs_alloc_write_mask(inode->i_mapping);
 
        file_end = (isize - 1) >> PAGE_SHIFT;
        if (!isize || start_index > file_end)
@@ -1187,68 +1267,16 @@ static int cluster_pages_for_defrag(struct inode *inode,
        if (ret)
                return ret;
        i_done = 0;
-       tree = &BTRFS_I(inode)->io_tree;
 
        /* step one, lock all the pages */
        for (i = 0; i < page_cnt; i++) {
                struct page *page;
-again:
-               page = find_or_create_page(inode->i_mapping,
-                                          start_index + i, mask);
-               if (!page)
-                       break;
-               ret = set_page_extent_mapped(page);
-               if (ret < 0) {
-                       unlock_page(page);
-                       put_page(page);
+
+               page = defrag_prepare_one_page(BTRFS_I(inode), start_index + i);
+               if (IS_ERR(page)) {
+                       ret = PTR_ERR(page);
                        break;
                }
-
-               page_start = page_offset(page);
-               page_end = page_start + PAGE_SIZE - 1;
-               while (1) {
-                       lock_extent_bits(tree, page_start, page_end,
-                                        &cached_state);
-                       ordered = btrfs_lookup_ordered_extent(BTRFS_I(inode),
-                                                             page_start);
-                       unlock_extent_cached(tree, page_start, page_end,
-                                            &cached_state);
-                       if (!ordered)
-                               break;
-
-                       unlock_page(page);
-                       btrfs_start_ordered_extent(ordered, 1);
-                       btrfs_put_ordered_extent(ordered);
-                       lock_page(page);
-                       /*
-                        * we unlocked the page above, so we need check if
-                        * it was released or not.
-                        */
-                       if (page->mapping != inode->i_mapping) {
-                               unlock_page(page);
-                               put_page(page);
-                               goto again;
-                       }
-               }
-
-               if (!PageUptodate(page)) {
-                       btrfs_readpage(NULL, page);
-                       lock_page(page);
-                       if (!PageUptodate(page)) {
-                               unlock_page(page);
-                               put_page(page);
-                               ret = -EIO;
-                               break;
-                       }
-               }
-
-               if (page->mapping != inode->i_mapping) {
-                       unlock_page(page);
-                       put_page(page);
-                       goto again;
-               }
-
                pages[i] = page;
                i_done++;
        }
@@ -1258,13 +1286,6 @@ static int cluster_pages_for_defrag(struct inode *inode,
        if (!(inode->i_sb->s_flags & SB_ACTIVE))
                goto out;
 
-       /*
-        * so now we have a nice long stream of locked
-        * and up to date pages, lets wait on them
-        */
-       for (i = 0; i < i_done; i++)
-               wait_on_page_writeback(pages[i]);
-
        page_start = page_offset(pages[0]);
        page_end = page_offset(pages[i_done - 1]) + PAGE_SIZE;

From patchwork Thu Jun 10 05:09:10 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 03/10] btrfs: defrag: replace hard coded PAGE_SIZE with sectorsize for defrag_lookup_extent()
Date: Thu, 10 Jun 2021 13:09:10 +0800
Message-Id: <20210610050917.105458-4-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

When testing subpage defrag support, I kept hitting strange inode
nbytes errors. After a lot of debugging, it turned out that
defrag_lookup_extent() uses PAGE_SIZE as the length for
lookup_extent_mapping().

Since lookup_extent_mapping() calls __lookup_extent_mapping() with
@strict == 1, any extent map smaller than one page is ignored, which
prevents subpage defrag from grabbing a correct extent map.

There are quite a few PAGE_SIZE usages in ioctl.c, but most of them are
correct and fall into one of the following cases:

- ioctl structure size checks
  We want the ioctl structure to be contained inside one page.

- real page operations

One special case is left untouched, check_defrag_in_cache(). That
function will later be removed completely, so there is not much point
in changing it now.
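A toy model of the failure mode, following the commit message's own
description (the real check lives in __lookup_extent_mapping() in
fs/btrfs/extent_map.c; the 64K page / 4K sector numbers are assumptions
for illustration, matching a typical subpage setup):

    /* Toy model only, not kernel code: per the commit message, a
     * strict lookup ignores extent maps smaller than the queried
     * range. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct em { uint64_t start, len; };

    static bool strict_hit(const struct em *em, uint64_t start,
                           uint64_t len)
    {
        /* only report a map that covers the whole queried range */
        return em->start <= start && em->start + em->len >= start + len;
    }

    int main(void)
    {
        struct em small = { 0, 4096 }; /* one 4K-sector extent map */

        /* old code: len = PAGE_SIZE (64K on subpage) -> map missed */
        printf("PAGE_SIZE lookup:  %d\n", strict_hit(&small, 0, 65536));
        /* fixed code: len = sectorsize (4K) -> map found */
        printf("sectorsize lookup: %d\n", strict_hit(&small, 0, 4096));
        return 0;
    }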
Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index a06ceb8fdf28..096157ee30f1 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1037,23 +1037,24 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start)
        struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
        struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
        struct extent_map *em;
-       u64 len = PAGE_SIZE;
+       const u32 sectorsize = BTRFS_I(inode)->root->fs_info->sectorsize;
 
        /*
         * hopefully we have this extent in the tree already, try without
         * the full extent lock
         */
        read_lock(&em_tree->lock);
-       em = lookup_extent_mapping(em_tree, start, len);
+       em = lookup_extent_mapping(em_tree, start, sectorsize);
        read_unlock(&em_tree->lock);
 
        if (!em) {
                struct extent_state *cached = NULL;
-               u64 end = start + len - 1;
+               u64 end = start + sectorsize - 1;
 
                /* get the big lock and read metadata off disk */
                lock_extent_bits(io_tree, start, end, &cached);
-               em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, start, len);
+               em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, start,
+                                     sectorsize);
                unlock_extent_cached(io_tree, start, end, &cached);
 
                if (IS_ERR(em))

From patchwork Thu Jun 10 05:09:11 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 04/10] btrfs: defrag: introduce a new helper to collect target file extents
Date: Thu, 10 Jun 2021 13:09:11 +0800
Message-Id: <20210610050917.105458-5-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

Introduce a new helper, defrag_collect_targets(), to collect all
possible targets to be defragged.

This function does not consider things like max_sectors_to_defrag, so
the caller is responsible for ensuring we don't exceed the limit.

This function will be the first stage of the later defrag rework.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 120 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 096157ee30f1..0bfd13b959ed 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1372,6 +1372,126 @@ static int cluster_pages_for_defrag(struct inode *inode,
 }
 
+struct defrag_target_range {
+       struct list_head list;
+       u64 start;
+       u64 len;
+};
+
+/*
+ * Helper to collect all valid target extents.
+ *
+ * @start:         The file offset to lookup
+ * @len:           The length to lookup
+ * @extent_thresh: File extent size threshold, any extent size >= this value
+ *                 will be ignored
+ * @newer_than:    Only defrag extents newer than this value
+ * @do_compress:   Whether the defrag is doing compression
+ *                 If true, @extent_thresh will be ignored and all regular
+ *                 file extents meeting @newer_than will be targets.
+ * @target_list:   The list of target file extents
+ */
+static int defrag_collect_targets(struct btrfs_inode *inode,
+                                 u64 start, u64 len, u32 extent_thresh,
+                                 u64 newer_than, bool do_compress,
+                                 struct list_head *target_list)
+{
+       u64 cur = start;
+       int ret = 0;
+
+       while (cur < start + len) {
+               struct extent_map *em;
+               struct defrag_target_range *new;
+               bool next_mergeable = true;
+               u64 range_len;
+
+               em = defrag_lookup_extent(&inode->vfs_inode, cur);
+               if (!em)
+                       break;
+
+               /* Skip hole/inline/preallocated extents */
+               if (em->block_start >= EXTENT_MAP_LAST_BYTE ||
+                   test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
+                       goto next;
+
+               /* Skip older extent */
+               if (em->generation < newer_than)
+                       goto next;
+
+               /*
+                * For do_compress case, we want to compress all valid file
+                * extents, thus no @extent_thresh or mergeable check.
+                */
+               if (do_compress)
+                       goto add;
+
+               /* Skip too large extent */
+               if (em->len >= extent_thresh)
+                       goto next;
+
+               next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em);
+               if (!next_mergeable) {
+                       struct defrag_target_range *last;
+
+                       /* Empty target list, no way to merge with last entry */
+                       if (list_empty(target_list))
+                               goto next;
+                       last = list_entry(target_list->prev,
+                                         struct defrag_target_range, list);
+                       /* Not mergeable with last entry */
+                       if (last->start + last->len != cur)
+                               goto next;
+
+                       /* Mergeable, fall through to add it to @target_list. */
+               }
+
+add:
+               range_len = min(extent_map_end(em), start + len) - cur;
+               /*
+                * This one is a good target, check if it can be merged into
+                * last range of the target list
+                */
+               if (!list_empty(target_list)) {
+                       struct defrag_target_range *last;
+
+                       last = list_entry(target_list->prev,
+                                         struct defrag_target_range, list);
+                       ASSERT(last->start + last->len <= cur);
+                       if (last->start + last->len == cur) {
+                               /* Mergeable, enlarge the last entry */
+                               last->len += range_len;
+                               goto next;
+                       }
+                       /* Fall through to allocate a new entry */
+               }
+
+               /* Allocate new defrag_target_range */
+               new = kmalloc(sizeof(*new), GFP_NOFS);
+               if (!new) {
+                       free_extent_map(em);
+                       ret = -ENOMEM;
+                       break;
+               }
+               new->start = cur;
+               new->len = range_len;
+               list_add_tail(&new->list, target_list);
+
+next:
+               cur = extent_map_end(em);
+               free_extent_map(em);
+       }
+       if (ret < 0) {
+               struct defrag_target_range *entry;
+               struct defrag_target_range *tmp;
+
+               list_for_each_entry_safe(entry, tmp, target_list, list) {
+                       list_del_init(&entry->list);
+                       kfree(entry);
+               }
+       }
+       return ret;
+}
+
 /*
  * Btrfs entrance for defrag.
  *

From patchwork Thu Jun 10 05:09:12 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 05/10] btrfs: defrag: introduce a helper to defrag a continuous prepared range
Date: Thu, 10 Jun 2021 13:09:12 +0800
Message-Id: <20210610050917.105458-6-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

A new helper, defrag_one_locked_target(), is introduced to do the real
part of the defrag.

The caller needs to ensure that both the pages and the extent bits are
locked, that there is no ordered extent in the range, and that all
writeback has finished.

The core defrag part is pretty straightforward:

- Reserve space
- Set extent bits to defrag
- Update the involved pages to be dirty

Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 0bfd13b959ed..423c880e42d4 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -47,6 +47,7 @@
 #include "space-info.h"
 #include "delalloc-space.h"
 #include "block-group.h"
+#include "subpage.h"
 
 #ifdef CONFIG_64BIT
 /* If we have a 32-bit userspace and 64-bit kernel, then the UAPI
@@ -1492,6 +1493,61 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
        return ret;
 }
 
+#define CLUSTER_SIZE   (SZ_256K)
+
+/*
+ * Defrag one continuous target range.
+ *
+ * @inode:     Target inode
+ * @target:    Target range to defrag
+ * @pages:     Locked pages covering the defrag range
+ * @nr_pages:  Number of locked pages
+ *
+ * Caller should ensure:
+ *
+ * - Pages are prepared
+ *   Pages should be locked, no ordered extent in the pages range,
+ *   no writeback.
+ *
+ * - Extent bits are locked
+ */
+static int defrag_one_locked_target(struct btrfs_inode *inode,
+                                   struct defrag_target_range *target,
+                                   struct page **pages, int nr_pages,
+                                   struct extent_state **cached_state)
+{
+       struct btrfs_fs_info *fs_info = inode->root->fs_info;
+       struct extent_changeset *data_reserved = NULL;
+       const u64 start = target->start;
+       const u64 len = target->len;
+       unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
+       unsigned long start_index = start >> PAGE_SHIFT;
+       unsigned long first_index = page_index(pages[0]);
+       int ret = 0;
+       int i;
+
+       ASSERT(last_index - first_index + 1 <= nr_pages);
+
+       ret = btrfs_delalloc_reserve_space(inode, &data_reserved, start, len);
+       if (ret < 0)
+               return ret;
+       clear_extent_bit(&inode->io_tree, start, start + len - 1,
+                        EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING |
+                        EXTENT_DEFRAG, 0, 0, cached_state);
+       set_extent_defrag(&inode->io_tree, start, start + len - 1,
+                         cached_state);
+
+       /* Update the page status */
+       for (i = start_index - first_index; i <= last_index - first_index;
+            i++) {
+               ClearPageChecked(pages[i]);
+               btrfs_page_clamp_set_dirty(fs_info, pages[i], start, len);
+       }
+       btrfs_delalloc_release_extents(inode, len);
+       extent_changeset_free(data_reserved);
+       return ret;
+}
+
 /*
  * Btrfs entrance for defrag.
  *

From patchwork Thu Jun 10 05:09:13 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 06/10] btrfs: defrag: introduce a helper to defrag a range
Date: Thu, 10 Jun 2021 13:09:13 +0800
Message-Id: <20210610050917.105458-7-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

A new helper, defrag_one_range(), is introduced to defrag one range.
This function mostly prepares the needed pages and extent status for
defrag_one_locked_target().

As we can only have a consistent view of the extent map with the pages
and extent bits locked, we need to re-check the range passed in to get
the real target list for defrag_one_locked_target().

Since defrag_collect_targets() calls defrag_lookup_extent(), which
locks the extent range, we also need to teach those two functions to
skip the extent lock. Thus a new parameter, @locked, is introduced to
skip the extent lock if the caller has already locked the range.
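Putting the pieces together, the locking flow the new @locked parameter
enables looks like this (a simplified summary drawn from the diff
below, with error handling omitted):

    defrag_one_range(inode, start, len, ...)
        defrag_prepare_one_page(...)           /* each page locked */
        lock_extent_bits(&inode->io_tree, ...) /* extent bits locked */
        defrag_collect_targets(..., locked = true, ...)
            defrag_lookup_extent(..., locked = true)
                /* skips lock_extent_bits()/unlock_extent_cached();
                 * the caller already holds the extent lock */
        defrag_one_locked_target(...)          /* for each target */
        unlock_extent_cached(&inode->io_tree, ...)
        unlock_page()/put_page()               /* for each page */

Without @locked, the re-check inside defrag_collect_targets() would try
to take the extent lock that defrag_one_range() already holds.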
Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 102 ++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 92 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 423c880e42d4..7ca013781e69 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1033,7 +1033,8 @@ static int find_new_extents(struct btrfs_root *root,
        return -ENOENT;
 }
 
-static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start)
+static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start,
+                                              bool locked)
 {
        struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
        struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
@@ -1053,10 +1054,12 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start)
                u64 end = start + sectorsize - 1;
 
                /* get the big lock and read metadata off disk */
-               lock_extent_bits(io_tree, start, end, &cached);
+               if (!locked)
+                       lock_extent_bits(io_tree, start, end, &cached);
                em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, start,
                                      sectorsize);
-               unlock_extent_cached(io_tree, start, end, &cached);
+               if (!locked)
+                       unlock_extent_cached(io_tree, start, end, &cached);
 
                if (IS_ERR(em))
                        return NULL;
@@ -1065,7 +1068,8 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start)
        return em;
 }
 
-static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em)
+static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+                                    bool locked)
 {
        struct extent_map *next;
        bool ret = true;
@@ -1074,7 +1078,7 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em)
        if (em->start + em->len >= i_size_read(inode))
                return false;
 
-       next = defrag_lookup_extent(inode, em->start + em->len);
+       next = defrag_lookup_extent(inode, em->start + em->len, locked);
        if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
                ret = false;
        else if ((em->block_start + em->block_len == next->block_start) &&
@@ -1103,7 +1107,7 @@ static int should_defrag_range(struct inode *inode, u64 start, u32 thresh,
 
        *skip = 0;
 
-       em = defrag_lookup_extent(inode, start);
+       em = defrag_lookup_extent(inode, start, false);
        if (!em)
                return 0;
 
@@ -1116,7 +1120,7 @@ static int should_defrag_range(struct inode *inode, u64 start, u32 thresh,
        if (!*defrag_end)
                prev_mergeable = false;
 
-       next_mergeable = defrag_check_next_extent(inode, em);
+       next_mergeable = defrag_check_next_extent(inode, em, false);
        /*
         * we hit a real extent, if it is big or the next extent is not a
         * real extent, don't bother defragging it
@@ -1390,12 +1394,13 @@ struct defrag_target_range {
  * @do_compress:   Whether the defrag is doing compression
  *                 If true, @extent_thresh will be ignored and all regular
  *                 file extents meeting @newer_than will be targets.
+ * @locked:        If the extent lock for the range is already held
  * @target_list:   The list of target file extents
  */
 static int defrag_collect_targets(struct btrfs_inode *inode,
                                  u64 start, u64 len, u32 extent_thresh,
                                  u64 newer_than, bool do_compress,
-                                 struct list_head *target_list)
+                                 bool locked, struct list_head *target_list)
 {
        u64 cur = start;
        int ret = 0;
@@ -1406,7 +1411,7 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
                bool next_mergeable = true;
                u64 range_len;
 
-               em = defrag_lookup_extent(&inode->vfs_inode, cur);
+               em = defrag_lookup_extent(&inode->vfs_inode, cur, locked);
                if (!em)
                        break;
 
@@ -1430,7 +1435,8 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
                if (em->len >= extent_thresh)
                        goto next;
 
-               next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em);
+               next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em,
+                                                         locked);
                if (!next_mergeable) {
                        struct defrag_target_range *last;
 
@@ -1548,6 +1554,82 @@ static int defrag_one_locked_target(struct btrfs_inode *inode,
        return ret;
 }
 
+static int defrag_one_range(struct btrfs_inode *inode,
+                           u64 start, u32 len,
+                           u32 extent_thresh, u64 newer_than,
+                           bool do_compress)
+{
+       struct extent_state *cached_state = NULL;
+       struct defrag_target_range *entry;
+       struct defrag_target_range *tmp;
+       LIST_HEAD(target_list);
+       struct page **pages;
+       const u32 sectorsize = inode->root->fs_info->sectorsize;
+       unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
+       unsigned long start_index = start >> PAGE_SHIFT;
+       unsigned int nr_pages = last_index - start_index + 1;
+       int ret = 0;
+       int i;
+
+       ASSERT(nr_pages <= CLUSTER_SIZE / PAGE_SIZE);
+       ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(len, sectorsize));
+
+       pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+       if (!pages)
+               return -ENOMEM;
+
+       /* Prepare all pages */
+       for (i = 0; i < nr_pages; i++) {
+               pages[i] = defrag_prepare_one_page(inode, start_index + i);
+               if (IS_ERR(pages[i])) {
+                       ret = PTR_ERR(pages[i]);
+                       pages[i] = NULL;
+                       goto free_pages;
+               }
+       }
+       /* Also lock the pages range */
+       lock_extent_bits(&inode->io_tree, start_index << PAGE_SHIFT,
+                        (last_index << PAGE_SHIFT) + PAGE_SIZE - 1,
+                        &cached_state);
+       /*
+        * Now we have a consistent view about the extent map, re-check
+        * which range really needs to be defragged.
+        *
+        * And this time we have extent locked already, pass @locked = true
+        * so that we won't re-lock the extent range and cause deadlock.
+        */
+       ret = defrag_collect_targets(inode, start, len, extent_thresh,
+                                    newer_than, do_compress, true,
+                                    &target_list);
+       if (ret < 0)
+               goto unlock_extent;
+
+       list_for_each_entry(entry, &target_list, list) {
+               ret = defrag_one_locked_target(inode, entry, pages, nr_pages,
+                                              &cached_state);
+               if (ret < 0)
+                       break;
+       }
+
+       list_for_each_entry_safe(entry, tmp, &target_list, list) {
+               list_del_init(&entry->list);
+               kfree(entry);
+       }
+unlock_extent:
+       unlock_extent_cached(&inode->io_tree, start_index << PAGE_SHIFT,
+                            (last_index << PAGE_SHIFT) + PAGE_SIZE - 1,
+                            &cached_state);
+free_pages:
+       for (i = 0; i < nr_pages; i++) {
+               if (pages[i]) {
+                       unlock_page(pages[i]);
+                       put_page(pages[i]);
+               }
+       }
+       kfree(pages);
+       return ret;
+}
+
 /*
  * Btrfs entrance for defrag.
  *

From patchwork Thu Jun 10 05:09:14 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 07/10] btrfs: defrag: introduce a new helper to defrag one cluster
Date: Thu, 10 Jun 2021 13:09:14 +0800
Message-Id: <20210610050917.105458-8-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

This new helper, defrag_one_cluster(), will defrag one cluster (at
most 256K) by:

- Collecting all initial targets

- Kicking in readahead when possible

- Calling defrag_one_range() on each initial target, with some extra
  range clamping

- Updating the @sectors_defragged parameter

This involves one behavior change: the defragged sectors accounting is
no longer as accurate as the old behavior, as the initial targets are
not consistent. We can have new holes punched inside an initial
target, and we will skip such holes later. But the defragged sectors
accounting doesn't need to be that accurate anyway, so I don't want to
push that extra accounting burden into defrag_one_range().
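The readahead length passed to page_cache_sync_readahead() in the diff
below is simply the number of pages the clamped target spans. A worked
example with assumed numbers (4K page size, so PAGE_SHIFT == 12):

    first page index = entry->start >> PAGE_SHIFT
    last page index  = (entry->start + range_len - 1) >> PAGE_SHIFT
    readahead pages  = last - first + 1

    e.g. entry->start = 6K, range_len = 8K:
        first = 6144 >> 12  = 1
        last  = 14335 >> 12 = 3
        readahead pages = 3   /* pages 1..3, file range [4K, 16K) */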
Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 7ca013781e69..c7f41b8f313d 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1630,6 +1630,62 @@ static int defrag_one_range(struct btrfs_inode *inode,
        return ret;
 }
 
+static int defrag_one_cluster(struct btrfs_inode *inode,
+                             struct file_ra_state *ra,
+                             u64 start, u32 len, u32 extent_thresh,
+                             u64 newer_than, bool do_compress,
+                             unsigned long *sectors_defragged,
+                             unsigned long max_sectors)
+{
+       const u32 sectorsize = inode->root->fs_info->sectorsize;
+       struct defrag_target_range *entry;
+       struct defrag_target_range *tmp;
+       LIST_HEAD(target_list);
+       int ret;
+
+       BUILD_BUG_ON(!IS_ALIGNED(CLUSTER_SIZE, PAGE_SIZE));
+       ret = defrag_collect_targets(inode, start, len, extent_thresh,
+                                    newer_than, do_compress, false,
+                                    &target_list);
+       if (ret < 0)
+               goto out;
+
+       list_for_each_entry(entry, &target_list, list) {
+               u32 range_len = entry->len;
+
+               /* Reached the limit */
+               if (max_sectors && max_sectors == *sectors_defragged)
+                       break;
+
+               if (max_sectors)
+                       range_len = min_t(u32, range_len,
+                               (max_sectors - *sectors_defragged) * sectorsize);
+
+               if (ra)
+                       page_cache_sync_readahead(inode->vfs_inode.i_mapping,
+                               ra, NULL, entry->start >> PAGE_SHIFT,
+                               ((entry->start + range_len - 1) >> PAGE_SHIFT) -
+                               (entry->start >> PAGE_SHIFT) + 1);
+               /*
+                * Here we may not defrag any range if holes are punched before
+                * we locked the pages.
+                * But that's fine, it only affects the @sectors_defragged
+                * accounting.
+                */
+               ret = defrag_one_range(inode, entry->start, range_len,
+                                      extent_thresh, newer_than, do_compress);
+               if (ret < 0)
+                       break;
+               *sectors_defragged += range_len;
+       }
+out:
+       list_for_each_entry_safe(entry, tmp, &target_list, list) {
+               list_del_init(&entry->list);
+               kfree(entry);
+       }
+       return ret;
+}
+
 /*
  * Btrfs entrance for defrag.
  *

From patchwork Thu Jun 10 05:09:15 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 08/10] btrfs: defrag: use defrag_one_cluster() to implement btrfs_defrag_file()
Date: Thu, 10 Jun 2021 13:09:15 +0800
Message-Id: <20210610050917.105458-9-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

The function defrag_one_cluster() is able to defrag one range well
enough; we only need to do the preparation for it, including:

- Clamping and aligning the defrag range
- Excluding invalid cases
- Proper inode locking

The old infrastructure is not removed in this patch, as that would
make the diff too noisy to review.
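To make the range clamping concrete before the diff: the new loop
computes each cluster's end with

    cluster_end = (((cur >> PAGE_SHIFT) +
                    (SZ_256K >> PAGE_SHIFT)) << PAGE_SHIFT) - 1;

A worked example with assumed numbers (64K page size, so PAGE_SHIFT ==
16, and a 4K sectorsize), with range->start = 4K:

    cur         = round_down(4K, 4K) = 4096
    cluster_end = (((4096 >> 16) + (262144 >> 16)) << 16) - 1
                = ((0 + 4) << 16) - 1
                = 262143

So the first cluster covers [4K, 256K), only 252K, but it ends exactly
on a page boundary; every following cluster is then a full page-aligned
256K: [256K, 512K), [512K, 768K), and so on.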
Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 204 +++++++++++++----------------------------------
 1 file changed, 55 insertions(+), 149 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index c7f41b8f313d..9bd062c92fcd 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1701,25 +1701,15 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
                      u64 newer_than, unsigned long max_to_defrag)
 {
        struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
-       struct btrfs_root *root = BTRFS_I(inode)->root;
-       unsigned long last_index;
+       unsigned long sectors_defragged = 0;
        u64 isize = i_size_read(inode);
-       u64 last_len = 0;
-       u64 skip = 0;
-       u64 defrag_end = 0;
-       u64 newer_off = range->start;
-       unsigned long i;
-       unsigned long ra_index = 0;
-       int ret;
-       int defrag_count = 0;
-       int compress_type = BTRFS_COMPRESS_ZLIB;
-       u32 extent_thresh = range->extent_thresh;
-       unsigned long max_cluster = SZ_256K >> PAGE_SHIFT;
-       unsigned long cluster = max_cluster;
-       u64 new_align = ~((u64)SZ_128K - 1);
-       struct page **pages = NULL;
+       u64 cur;
+       u64 last_byte;
        bool do_compress = range->flags & BTRFS_DEFRAG_RANGE_COMPRESS;
        bool ra_allocated = false;
+       int compress_type = BTRFS_COMPRESS_ZLIB;
+       int ret;
+       u32 extent_thresh = range->extent_thresh;
 
        if (isize == 0)
                return 0;
@@ -1737,6 +1727,14 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
        if (extent_thresh == 0)
                extent_thresh = SZ_256K;
 
+       if (range->start + range->len > range->start) {
+               /* Got a specific range */
+               last_byte = min(isize, range->start + range->len) - 1;
+       } else {
+               /* Defrag until file end */
+               last_byte = isize - 1;
+       }
+
        /*
         * If we were not given a ra, allocate a readahead context. As
         * readahead is just an optimization, defrag will work without it so
@@ -1749,159 +1747,67 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
                        file_ra_state_init(ra, inode->i_mapping);
        }
 
-       pages = kmalloc_array(max_cluster, sizeof(struct page *), GFP_KERNEL);
-       if (!pages) {
-               ret = -ENOMEM;
-               goto out_ra;
-       }
+       /* Align the range */
+       cur = round_down(range->start, fs_info->sectorsize);
+       last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
 
-       /* find the last page to defrag */
-       if (range->start + range->len > range->start) {
-               last_index = min_t(u64, isize - 1,
-                        range->start + range->len - 1) >> PAGE_SHIFT;
-       } else {
-               last_index = (isize - 1) >> PAGE_SHIFT;
-       }
-
-       if (newer_than) {
-               ret = find_new_extents(root, inode, newer_than,
-                                      &newer_off, SZ_64K);
-               if (!ret) {
-                       range->start = newer_off;
-                       /*
-                        * we always align our defrag to help keep
-                        * the extents in the file evenly spaced
-                        */
-                       i = (newer_off & new_align) >> PAGE_SHIFT;
-               } else
-                       goto out_ra;
-       } else {
-               i = range->start >> PAGE_SHIFT;
-       }
-       if (!max_to_defrag)
-               max_to_defrag = last_index - i + 1;
-
-       /*
-        * make writeback starts from i, so the defrag range can be
-        * written sequentially.
-        */
-       if (i < inode->i_mapping->writeback_index)
-               inode->i_mapping->writeback_index = i;
-
-       while (i <= last_index && defrag_count < max_to_defrag &&
-              (i < DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE))) {
-               /*
-                * make sure we stop running if someone unmounts
-                * the FS
-                */
-               if (!(inode->i_sb->s_flags & SB_ACTIVE))
-                       break;
-
-               if (btrfs_defrag_cancelled(fs_info)) {
-                       btrfs_debug(fs_info, "defrag_file cancelled");
-                       ret = -EAGAIN;
-                       goto error;
-               }
-
-               if (!should_defrag_range(inode, (u64)i << PAGE_SHIFT,
-                                        extent_thresh, &last_len, &skip,
-                                        &defrag_end, do_compress)){
-                       unsigned long next;
-                       /*
-                        * the should_defrag function tells us how much to skip
-                        * bump our counter by the suggested amount
-                        */
-                       next = DIV_ROUND_UP(skip, PAGE_SIZE);
-                       i = max(i + 1, next);
-                       continue;
-               }
+       while (cur < last_byte) {
+               u64 cluster_end;
 
-               if (!newer_than) {
-                       cluster = (PAGE_ALIGN(defrag_end) >>
-                                  PAGE_SHIFT) - i;
-                       cluster = min(cluster, max_cluster);
-               } else {
-                       cluster = max_cluster;
-               }
+               /* The cluster size 256K should always be page aligned */
+               BUILD_BUG_ON(!IS_ALIGNED(CLUSTER_SIZE, PAGE_SIZE));
 
-               if (i + cluster > ra_index) {
-                       ra_index = max(i, ra_index);
-                       if (ra)
-                               page_cache_sync_readahead(inode->i_mapping, ra,
-                                               NULL, ra_index, cluster);
-                       ra_index += cluster;
-               }
+               /* We want the cluster to end at a page boundary when possible */
+               cluster_end = (((cur >> PAGE_SHIFT) +
+                              (SZ_256K >> PAGE_SHIFT)) << PAGE_SHIFT) - 1;
+               cluster_end = min(cluster_end, last_byte);
 
                btrfs_inode_lock(inode, 0);
                if (IS_SWAPFILE(inode)) {
                        ret = -ETXTBSY;
-               } else {
-                       if (do_compress)
-                               BTRFS_I(inode)->defrag_compress = compress_type;
-                       ret = cluster_pages_for_defrag(inode, pages, i, cluster);
+                       btrfs_inode_unlock(inode, 0);
+                       break;
                }
-               if (ret < 0) {
+               if (!(inode->i_sb->s_flags & SB_ACTIVE)) {
                        btrfs_inode_unlock(inode, 0);
-                       goto out_ra;
+                       break;
                }
-
-               defrag_count += ret;
-               balance_dirty_pages_ratelimited(inode->i_mapping);
+               if (do_compress)
+                       BTRFS_I(inode)->defrag_compress = compress_type;
+               ret = defrag_one_cluster(BTRFS_I(inode), ra, cur,
+                               cluster_end + 1 - cur, extent_thresh,
+                               newer_than, do_compress,
+                               &sectors_defragged, max_to_defrag);
                btrfs_inode_unlock(inode, 0);
-
-               if (newer_than) {
-                       if (newer_off == (u64)-1)
-                               break;
-
-                       if (ret > 0)
-                               i += ret;
-
-                       newer_off = max(newer_off + 1,
-                                       (u64)i << PAGE_SHIFT);
-
-                       ret = find_new_extents(root, inode, newer_than,
-                                              &newer_off, SZ_64K);
-                       if (!ret) {
-                               range->start = newer_off;
-                               i = (newer_off & new_align) >> PAGE_SHIFT;
-                       } else {
-                               break;
-                       }
-               } else {
-                       if (ret > 0) {
-                               i += ret;
-                               last_len += ret << PAGE_SHIFT;
-                       } else {
-                               i++;
-                               last_len = 0;
-                       }
-               }
+               if (ret < 0)
+                       break;
+               cur = cluster_end + 1;
        }
 
-       ret = defrag_count;
-error:
-       if ((range->flags & BTRFS_DEFRAG_RANGE_START_IO)) {
-               filemap_flush(inode->i_mapping);
-               if (test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
-                            &BTRFS_I(inode)->runtime_flags))
+       if (ra_allocated)
+               kfree(ra);
+       if (sectors_defragged) {
+               /*
+                * We have defragged some sectors, for compression case
+                * they need to be written back immediately.
+                */
+               if (range->flags & BTRFS_DEFRAG_RANGE_START_IO) {
                        filemap_flush(inode->i_mapping);
+                       if (test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
+                                    &BTRFS_I(inode)->runtime_flags))
+                               filemap_flush(inode->i_mapping);
+               }
+               if (range->compress_type == BTRFS_COMPRESS_LZO)
+                       btrfs_set_fs_incompat(fs_info, COMPRESS_LZO);
+               else if (range->compress_type == BTRFS_COMPRESS_ZSTD)
+                       btrfs_set_fs_incompat(fs_info, COMPRESS_ZSTD);
+               ret = sectors_defragged;
        }
-
-       if (range->compress_type == BTRFS_COMPRESS_LZO) {
-               btrfs_set_fs_incompat(fs_info, COMPRESS_LZO);
-       } else if (range->compress_type == BTRFS_COMPRESS_ZSTD) {
-               btrfs_set_fs_incompat(fs_info, COMPRESS_ZSTD);
-       }
-
-out_ra:
        if (do_compress) {
                btrfs_inode_lock(inode, 0);
                BTRFS_I(inode)->defrag_compress = BTRFS_COMPRESS_NONE;
                btrfs_inode_unlock(inode, 0);
        }
-       if (ra_allocated)
-               kfree(ra);
-       kfree(pages);
        return ret;
 }

From patchwork Thu Jun 10 05:09:16 2021
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 09/10] btrfs: defrag: remove the old infrastructure
Date: Thu, 10 Jun 2021 13:09:16 +0800
Message-Id: <20210610050917.105458-10-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

Now the old infrastructure can all be removed.
From patchwork Thu Jun 10 05:09:16 2021
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 12311809
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 09/10] btrfs: defrag: remove the old infrastructure
Date: Thu, 10 Jun 2021 13:09:16 +0800
Message-Id: <20210610050917.105458-10-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

Now the old defrag infrastructure can be removed entirely.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 305 -----------------------------------------------
 1 file changed, 305 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 9bd062c92fcd..446b2336725c 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -933,106 +933,6 @@ static noinline int btrfs_mksnapshot(const struct path *parent,
 	return ret;
 }
 
-/*
- * When we're defragging a range, we don't want to kick it off again
- * if it is really just waiting for delalloc to send it down.
- * If we find a nice big extent or delalloc range for the bytes in the
- * file you want to defrag, we return 0 to let you know to skip this
- * part of the file
- */
-static int check_defrag_in_cache(struct inode *inode, u64 offset, u32 thresh)
-{
-	struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
-	struct extent_map *em = NULL;
-	struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
-	u64 end;
-
-	read_lock(&em_tree->lock);
-	em = lookup_extent_mapping(em_tree, offset, PAGE_SIZE);
-	read_unlock(&em_tree->lock);
-
-	if (em) {
-		end = extent_map_end(em);
-		free_extent_map(em);
-		if (end - offset > thresh)
-			return 0;
-	}
-	/* if we already have a nice delalloc here, just stop */
-	thresh /= 2;
-	end = count_range_bits(io_tree, &offset, offset + thresh,
-			       thresh, EXTENT_DELALLOC, 1);
-	if (end >= thresh)
-		return 0;
-	return 1;
-}
-
-/*
- * helper function to walk through a file and find extents
- * newer than a specific transid, and smaller than thresh.
- *
- * This is used by the defragging code to find new and small
- * extents
- */
-static int find_new_extents(struct btrfs_root *root,
-			    struct inode *inode, u64 newer_than,
-			    u64 *off, u32 thresh)
-{
-	struct btrfs_path *path;
-	struct btrfs_key min_key;
-	struct extent_buffer *leaf;
-	struct btrfs_file_extent_item *extent;
-	int type;
-	int ret;
-	u64 ino = btrfs_ino(BTRFS_I(inode));
-
-	path = btrfs_alloc_path();
-	if (!path)
-		return -ENOMEM;
-
-	min_key.objectid = ino;
-	min_key.type = BTRFS_EXTENT_DATA_KEY;
-	min_key.offset = *off;
-
-	while (1) {
-		ret = btrfs_search_forward(root, &min_key, path, newer_than);
-		if (ret != 0)
-			goto none;
-process_slot:
-		if (min_key.objectid != ino)
-			goto none;
-		if (min_key.type != BTRFS_EXTENT_DATA_KEY)
-			goto none;
-
-		leaf = path->nodes[0];
-		extent = btrfs_item_ptr(leaf, path->slots[0],
-					struct btrfs_file_extent_item);
-
-		type = btrfs_file_extent_type(leaf, extent);
-		if (type == BTRFS_FILE_EXTENT_REG &&
-		    btrfs_file_extent_num_bytes(leaf, extent) < thresh &&
-		    check_defrag_in_cache(inode, min_key.offset, thresh)) {
-			*off = min_key.offset;
-			btrfs_free_path(path);
-			return 0;
-		}
-
-		path->slots[0]++;
-		if (path->slots[0] < btrfs_header_nritems(leaf)) {
-			btrfs_item_key_to_cpu(leaf, &min_key, path->slots[0]);
-			goto process_slot;
-		}
-
-		if (min_key.offset == (u64)-1)
-			goto none;
-
-		min_key.offset++;
-		btrfs_release_path(path);
-	}
-none:
-	btrfs_free_path(path);
-	return -ENOENT;
-}
-
 static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start,
 					       bool locked)
 {
@@ -1089,66 +989,6 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
 	return ret;
 }
 
-static int should_defrag_range(struct inode *inode, u64 start, u32 thresh,
-			       u64 *last_len, u64 *skip, u64 *defrag_end,
-			       int compress)
-{
-	struct extent_map *em;
-	int ret = 1;
-	bool next_mergeable = true;
-	bool prev_mergeable = true;
-
-	/*
-	 * make sure that once we start defragging an extent, we keep on
-	 * defragging it
-	 */
-	if (start < *defrag_end)
-		return 1;
-
-	*skip = 0;
-
-	em = defrag_lookup_extent(inode, start, false);
-	if (!em)
-		return 0;
-
-	/* this will cover holes, and inline extents */
-	if (em->block_start >= EXTENT_MAP_LAST_BYTE) {
-		ret = 0;
-		goto out;
-	}
-
-	if (!*defrag_end)
-		prev_mergeable = false;
-
-	next_mergeable = defrag_check_next_extent(inode, em, false);
-	/*
-	 * we hit a real extent, if it is big or the next extent is not a
-	 * real extent, don't bother defragging it
-	 */
-	if (!compress && (*last_len == 0 || *last_len >= thresh) &&
-	    (em->len >= thresh || (!next_mergeable && !prev_mergeable)))
-		ret = 0;
-out:
-	/*
-	 * last_len ends up being a counter of how many bytes we've defragged.
-	 * every time we choose not to defrag an extent, we reset *last_len
-	 * so that the next tiny extent will force a defrag.
-	 *
-	 * The end result of this is that tiny extents before a single big
-	 * extent will force at least part of that big extent to be defragged.
-	 */
-	if (ret) {
-		*defrag_end = extent_map_end(em);
-	} else {
-		*last_len = 0;
-		*skip = extent_map_end(em);
-		*defrag_end = 0;
-	}
-
-	free_extent_map(em);
-	return ret;
-}
-
 /*
  * Prepare one page to be defragged.
  *
@@ -1232,151 +1072,6 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode,
 	return page;
 }
 
-/*
- * it doesn't do much good to defrag one or two pages
- * at a time. This pulls in a nice chunk of pages
- * to COW and defrag.
- *
- * It also makes sure the delalloc code has enough
- * dirty data to avoid making new small extents as part
- * of the defrag
- *
- * It's a good idea to start RA on this range
- * before calling this.
- */
-static int cluster_pages_for_defrag(struct inode *inode,
-				    struct page **pages,
-				    unsigned long start_index,
-				    unsigned long num_pages)
-{
-	unsigned long file_end;
-	u64 isize = i_size_read(inode);
-	u64 page_start;
-	u64 page_end;
-	u64 page_cnt;
-	u64 start = (u64)start_index << PAGE_SHIFT;
-	u64 search_start;
-	int ret;
-	int i;
-	int i_done;
-	struct extent_state *cached_state = NULL;
-	struct extent_changeset *data_reserved = NULL;
-
-	file_end = (isize - 1) >> PAGE_SHIFT;
-	if (!isize || start_index > file_end)
-		return 0;
-
-	page_cnt = min_t(u64, (u64)num_pages, (u64)file_end - start_index + 1);
-
-	ret = btrfs_delalloc_reserve_space(BTRFS_I(inode), &data_reserved,
-					   start, page_cnt << PAGE_SHIFT);
-	if (ret)
-		return ret;
-	i_done = 0;
-
-	/* step one, lock all the pages */
-	for (i = 0; i < page_cnt; i++) {
-		struct page *page;
-
-		page = defrag_prepare_one_page(BTRFS_I(inode), start_index + i);
-		if (IS_ERR(page)) {
-			ret = PTR_ERR(page);
-			break;
-		}
-		pages[i] = page;
-		i_done++;
-	}
-	if (!i_done || ret)
-		goto out;
-
-	if (!(inode->i_sb->s_flags & SB_ACTIVE))
-		goto out;
-
-	page_start = page_offset(pages[0]);
-	page_end = page_offset(pages[i_done - 1]) + PAGE_SIZE;
-
-	lock_extent_bits(&BTRFS_I(inode)->io_tree,
-			 page_start, page_end - 1, &cached_state);
-
-	/*
-	 * When defragmenting we skip ranges that have holes or inline extents,
-	 * (check should_defrag_range()), to avoid unnecessary IO and wasting
-	 * space. At btrfs_defrag_file(), we check if a range should be defragged
-	 * before locking the inode and then, if it should, we trigger a sync
-	 * page cache readahead - we lock the inode only after that to avoid
-	 * blocking for too long other tasks that possibly want to operate on
-	 * other file ranges. But before we were able to get the inode lock,
-	 * some other task may have punched a hole in the range, or we may have
-	 * now an inline extent, in which case we should not defrag. So check
-	 * for that here, where we have the inode and the range locked, and bail
-	 * out if that happened.
-	 */
-	search_start = page_start;
-	while (search_start < page_end) {
-		struct extent_map *em;
-
-		em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, search_start,
-				      page_end - search_start);
-		if (IS_ERR(em)) {
-			ret = PTR_ERR(em);
-			goto out_unlock_range;
-		}
-		if (em->block_start >= EXTENT_MAP_LAST_BYTE) {
-			free_extent_map(em);
-			/* Ok, 0 means we did not defrag anything */
-			ret = 0;
-			goto out_unlock_range;
-		}
-		search_start = extent_map_end(em);
-		free_extent_map(em);
-	}
-
-	clear_extent_bit(&BTRFS_I(inode)->io_tree, page_start,
-			 page_end - 1, EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING |
-			 EXTENT_DEFRAG, 0, 0, &cached_state);
-
-	if (i_done != page_cnt) {
-		spin_lock(&BTRFS_I(inode)->lock);
-		btrfs_mod_outstanding_extents(BTRFS_I(inode), 1);
-		spin_unlock(&BTRFS_I(inode)->lock);
-		btrfs_delalloc_release_space(BTRFS_I(inode), data_reserved,
-				start, (page_cnt - i_done) << PAGE_SHIFT, true);
-	}
-
-	set_extent_defrag(&BTRFS_I(inode)->io_tree, page_start, page_end - 1,
-			  &cached_state);
-
-	unlock_extent_cached(&BTRFS_I(inode)->io_tree,
-			     page_start, page_end - 1, &cached_state);
-
-	for (i = 0; i < i_done; i++) {
-		clear_page_dirty_for_io(pages[i]);
-		ClearPageChecked(pages[i]);
-		set_page_dirty(pages[i]);
-		unlock_page(pages[i]);
-		put_page(pages[i]);
-	}
-	btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT);
-	extent_changeset_free(data_reserved);
-	return i_done;
-
-out_unlock_range:
-	unlock_extent_cached(&BTRFS_I(inode)->io_tree,
-			     page_start, page_end - 1, &cached_state);
-out:
-	for (i = 0; i < i_done; i++) {
-		unlock_page(pages[i]);
-		put_page(pages[i]);
-	}
-	btrfs_delalloc_release_space(BTRFS_I(inode), data_reserved,
-			start, page_cnt << PAGE_SHIFT, true);
-	btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT);
-	extent_changeset_free(data_reserved);
-	return ret;
-
-}
-
 struct defrag_target_range {
 	struct list_head list;
 	u64 start;
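Patch 09 deletes should_defrag_range() wholesale, so the skip heuristic it implemented is easy to lose in the diff noise. For reference, here is a compact restatement of that rule in plain C; the flat parameters are simplified stand-ins for the kernel version, which works on struct extent_map and threads state through pointers, so read it as a sketch of the removed logic rather than kernel code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Restatement of the skip rule from the removed should_defrag_range():
     * when not compressing, an extent is left alone if the running tally of
     * defragged bytes is empty or past the threshold AND the extent is
     * either big enough or has no mergeable neighbors.
     */
    static bool should_defrag(uint64_t em_len, uint64_t last_len,
                              uint32_t thresh, bool compress,
                              bool next_mergeable, bool prev_mergeable)
    {
            if (compress)
                    return true;    /* compression always rewrites the range */
            if ((last_len == 0 || last_len >= thresh) &&
                (em_len >= thresh || (!next_mergeable && !prev_mergeable)))
                    return false;   /* big or unmergeable: not worth the IO */
            return true;
    }

    int main(void)
    {
            /* A 1MiB extent over a 256KiB threshold is skipped (prints 0)... */
            printf("%d\n", should_defrag(1024 * 1024, 0, 256 * 1024,
                                         false, true, true));
            /* ...while a 64KiB extent below it is defragged (prints 1). */
            printf("%d\n", should_defrag(64 * 1024, 0, 256 * 1024,
                                         false, true, true));
            return 0;
    }

The replacement code introduced earlier in the series makes a similar per-range decision when collecting defrag targets, which is why this helper can go.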
From patchwork Thu Jun 10 05:09:17 2021
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 12311811
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v4 10/10] btrfs: defrag: enable defrag for subpage case
Date: Thu, 10 Jun 2021 13:09:17 +0800
Message-Id: <20210610050917.105458-11-wqu@suse.com>
In-Reply-To: <20210610050917.105458-1-wqu@suse.com>
References: <20210610050917.105458-1-wqu@suse.com>

With the new infrastructure, which takes subpage into consideration, we
should now be safe to allow defrag to work for the subpage case.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 446b2336725c..edf597ad515f 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2974,12 +2974,6 @@ static int btrfs_ioctl_defrag(struct file *file, void __user *argp)
 		goto out;
 	}
 
-	/* Subpage defrag will be supported in later commits */
-	if (root->fs_info->sectorsize < PAGE_SIZE) {
-		ret = -ENOTTY;
-		goto out;
-	}
-
 	switch (inode->i_mode & S_IFMT) {
 	case S_IFDIR:
 		if (!capable(CAP_SYS_ADMIN)) {