
[v4,02/12] nilfs2: drop usage of page_index

Message ID 20240502084609.28376-3-ryncsn@gmail.com (mailing list archive)
State New, archived
Series mm/swap: clean up and optimize swap cache index

Commit Message

Kairui Song May 2, 2024, 8:45 a.m. UTC
From: Kairui Song <kasong@tencent.com>

page_index is only needed for mixed usage of page cache and swap cache;
for pure page cache usage, the caller can just use page->index instead.

It can't be a swap cache page here, since the page has buffer heads
attached, so just drop page_index. While at it, optimize the code by
retrieving the offset of the buffer head within the folio directly
using bh_offset, and get rid of the loop and the page helpers.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Kairui Song <kasong@tencent.com>
Cc: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Cc: linux-nilfs@vger.kernel.org
---
 fs/nilfs2/bmap.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)
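
For reference, a minimal userspace sketch of why the old loop and the new
bh_offset arithmetic compute the same key. This is not kernel code: the
constants (4k pages, 1k blocks) and the names key_old/key_new are
illustrative assumptions, not functions from the patch.

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed 4096-byte pages */
#define BLKBITS		10	/* assumed 1024-byte blocks, i.e. i_blkbits */

/* Old approach: shift the page index to block units, then count the
 * buffer heads preceding bh in the page (the loop in the old code). */
static unsigned long long key_old(unsigned long page_index,
				  unsigned int bh_index_in_page)
{
	unsigned long long key = (unsigned long long)page_index <<
				 (PAGE_SHIFT - BLKBITS);
	return key + bh_index_in_page;
}

/* New approach: byte position of the buffer head within the file
 * (folio_pos + bh_offset), shifted down to block units. */
static unsigned long long key_new(unsigned long page_index,
				  unsigned int bh_index_in_page)
{
	unsigned long long pos =
		((unsigned long long)page_index << PAGE_SHIFT) +
		((unsigned long long)bh_index_in_page << BLKBITS);
	return pos >> BLKBITS;
}

int main(void)
{
	/* Check every buffer in the first few pages. */
	for (unsigned long pg = 0; pg < 8; pg++)
		for (unsigned int b = 0; b < (1u << (PAGE_SHIFT - BLKBITS)); b++)
			assert(key_old(pg, b) == key_new(pg, b));
	printf("old and new key computations agree\n");
	return 0;
}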

Comments

Ryusuke Konishi May 2, 2024, 11:08 a.m. UTC | #1
On Thu, May 2, 2024 at 5:47 PM Kairui Song wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> page_index is only needed for mixed usage of page cache and swap cache;
> for pure page cache usage, the caller can just use page->index instead.
>
> It can't be a swap cache page here, since the page has buffer heads
> attached, so just drop page_index. While at it, optimize the code by
> retrieving the offset of the buffer head within the folio directly
> using bh_offset, and get rid of the loop and the page helpers.
>
> Suggested-by: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> Cc: Ryusuke Konishi <konishi.ryusuke@gmail.com>
> Cc: linux-nilfs@vger.kernel.org
> ---
>  fs/nilfs2/bmap.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
>
> diff --git a/fs/nilfs2/bmap.c b/fs/nilfs2/bmap.c
> index 383f0afa2cea..cd14ea25968c 100644
> --- a/fs/nilfs2/bmap.c
> +++ b/fs/nilfs2/bmap.c
> @@ -450,15 +450,9 @@ int nilfs_bmap_test_and_clear_dirty(struct nilfs_bmap *bmap)
>  __u64 nilfs_bmap_data_get_key(const struct nilfs_bmap *bmap,
>                               const struct buffer_head *bh)
>  {
> -       struct buffer_head *pbh;
> -       __u64 key;
> +       loff_t pos = folio_pos(bh->b_folio) + bh_offset(bh);
>
> -       key = page_index(bh->b_page) << (PAGE_SHIFT -
> -                                        bmap->b_inode->i_blkbits);
> -       for (pbh = page_buffers(bh->b_page); pbh != bh; pbh = pbh->b_this_page)
> -               key++;
> -
> -       return key;
> +       return pos >> bmap->b_inode->i_blkbits;
>  }
>
>  __u64 nilfs_bmap_find_target_seq(const struct nilfs_bmap *bmap, __u64 key)
> --
> 2.44.0

Looks good.  Feel free to add:

Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>

Just to be sure, I also tested this change in different environments,
including 4k page size with smaller block sizes.  It works as expected
and so far hasn't broken anything.

Thanks,
Ryusuke Konishi

Patch

diff --git a/fs/nilfs2/bmap.c b/fs/nilfs2/bmap.c
index 383f0afa2cea..cd14ea25968c 100644
--- a/fs/nilfs2/bmap.c
+++ b/fs/nilfs2/bmap.c
@@ -450,15 +450,9 @@ int nilfs_bmap_test_and_clear_dirty(struct nilfs_bmap *bmap)
 __u64 nilfs_bmap_data_get_key(const struct nilfs_bmap *bmap,
 			      const struct buffer_head *bh)
 {
-	struct buffer_head *pbh;
-	__u64 key;
+	loff_t pos = folio_pos(bh->b_folio) + bh_offset(bh);
 
-	key = page_index(bh->b_page) << (PAGE_SHIFT -
-					 bmap->b_inode->i_blkbits);
-	for (pbh = page_buffers(bh->b_page); pbh != bh; pbh = pbh->b_this_page)
-		key++;
-
-	return key;
+	return pos >> bmap->b_inode->i_blkbits;
 }
 
 __u64 nilfs_bmap_find_target_seq(const struct nilfs_bmap *bmap, __u64 key)