
[1/2] hugetlb: Convert huge_add_to_page_cache() to use a folio

Message ID: 20220601192333.1560777-1-willy@infradead.org
State: New
Series: [1/2] hugetlb: Convert huge_add_to_page_cache() to use a folio

Commit Message

Matthew Wilcox June 1, 2022, 7:23 p.m. UTC
Remove the last caller of add_to_page_cache()

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/hugetlbfs/inode.c |  2 +-
 mm/hugetlb.c         | 14 ++++++++++----
 2 files changed, 11 insertions(+), 5 deletions(-)
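
[Editor's note] For context, the conversion is mechanical: add_to_page_cache() locked the page and inserted it into the page cache in a single call, so the folio version open-codes those two steps. A minimal before/after sketch of the key change (condensed from the patch below; not a verbatim excerpt):

	struct folio *folio = page_folio(page);
	int err;

	/* Before: add_to_page_cache() locked the page and inserted it in one call. */
	err = add_to_page_cache(page, mapping, idx, GFP_KERNEL);

	/* After: take the lock bit explicitly, then insert the folio. */
	__folio_set_locked(folio);
	err = __filemap_add_folio(mapping, folio, idx, GFP_KERNEL, NULL);
	if (unlikely(err))
		__folio_clear_locked(folio);	/* undo the lock on failure, as the old API did */

Worth noting: __filemap_add_folio(), like the old add_to_page_cache() and unlike filemap_add_folio(), does not put the folio on an LRU list, which matches hugetlb's needs since hugetlb pages are not managed on the regular LRU.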

Comments

Mike Kravetz June 1, 2022, 8:50 p.m. UTC | #1
On 6/1/22 12:23, Matthew Wilcox (Oracle) wrote:
> Remove the last caller of add_to_page_cache()
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  fs/hugetlbfs/inode.c |  2 +-
>  mm/hugetlb.c         | 14 ++++++++++----
>  2 files changed, 11 insertions(+), 5 deletions(-)

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Muchun Song June 2, 2022, 2:49 a.m. UTC | #2
On Wed, Jun 01, 2022 at 08:23:32PM +0100, Matthew Wilcox (Oracle) wrote:
> Remove the last caller of add_to_page_cache()
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Thanks.
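
[Editor's note] The hugetlbfs_fallocate() hunk in the patch only updates a comment, but it documents the caller-side contract that the conversion preserves: the page comes back locked and still holding the reference taken by alloc_huge_page(), and the caller drops both. A simplified sketch of that pattern (variable names and the error label are illustrative, not a verbatim excerpt):

	error = huge_add_to_page_cache(page, mapping, index);
	if (error)
		goto out;	/* hypothetical error path */

	SetHPageMigratable(page);
	/*
	 * unlock_page() because the page was locked by huge_add_to_page_cache();
	 * put_page() drops the reference taken by alloc_huge_page().
	 */
	unlock_page(page);
	put_page(page);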

Patch

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 62408047e8d7..ae2524480f23 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -759,7 +759,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 
 		SetHPageMigratable(page);
 		/*
-		 * unlock_page because locked by add_to_page_cache()
+		 * unlock_page because locked by huge_add_to_page_cache()
 		 * put_page() due to reference from alloc_huge_page()
 		 */
 		unlock_page(page);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7c468ac1d069..eb9d6fe9c492 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5406,19 +5406,25 @@ static bool hugetlbfs_pagecache_present(struct hstate *h,
 int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 			   pgoff_t idx)
 {
+	struct folio *folio = page_folio(page);
 	struct inode *inode = mapping->host;
 	struct hstate *h = hstate_inode(inode);
-	int err = add_to_page_cache(page, mapping, idx, GFP_KERNEL);
+	int err;
 
-	if (err)
+	__folio_set_locked(folio);
+	err = __filemap_add_folio(mapping, folio, idx, GFP_KERNEL, NULL);
+
+	if (unlikely(err)) {
+		__folio_clear_locked(folio);
 		return err;
+	}
 	ClearHPageRestoreReserve(page);
 
 	/*
-	 * set page dirty so that it will not be removed from cache/file
+	 * mark folio dirty so that it will not be removed from cache/file
 	 * by non-hugetlbfs specific code paths.
 	 */
-	set_page_dirty(page);
+	folio_mark_dirty(folio);
 
 	spin_lock(&inode->i_lock);
 	inode->i_blocks += blocks_per_huge_page(h);