From patchwork Wed Jun 1 19:23:32 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-fsdevel@vger.kernel.org,
    Mike Kravetz, Muchun Song
Subject: [PATCH 1/2] hugetlb: Convert huge_add_to_page_cache() to use a folio
Date: Wed, 1 Jun 2022 20:23:32 +0100
Message-Id: <20220601192333.1560777-1-willy@infradead.org>

Remove the last caller of add_to_page_cache().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Kravetz
Reviewed-by: Muchun Song
---
 fs/hugetlbfs/inode.c |  2 +-
 mm/hugetlb.c         | 14 ++++++++++----
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 62408047e8d7..ae2524480f23 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -759,7 +759,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 		SetHPageMigratable(page);
 		/*
-		 * unlock_page because locked by add_to_page_cache()
+		 * unlock_page because locked by huge_add_to_page_cache()
 		 * put_page() due to reference from alloc_huge_page()
 		 */
 		unlock_page(page);

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7c468ac1d069..eb9d6fe9c492 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5406,19 +5406,25 @@ static bool hugetlbfs_pagecache_present(struct hstate *h,
 int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 			   pgoff_t idx)
 {
+	struct folio *folio = page_folio(page);
 	struct inode *inode = mapping->host;
 	struct hstate *h = hstate_inode(inode);
-	int err = add_to_page_cache(page, mapping, idx, GFP_KERNEL);
+	int err;
 
-	if (err)
+	__folio_set_locked(folio);
+	err = __filemap_add_folio(mapping, folio, idx, GFP_KERNEL, NULL);
+
+	if (unlikely(err)) {
+		__folio_clear_locked(folio);
 		return err;
+	}
 	ClearHPageRestoreReserve(page);
 
 	/*
-	 * set page dirty so that it will not be removed from cache/file
+	 * mark folio dirty so that it will not be removed from cache/file
 	 * by non-hugetlbfs specific code paths.
 	 */
-	set_page_dirty(page);
+	folio_mark_dirty(folio);
 
 	spin_lock(&inode->i_lock);
 	inode->i_blocks += blocks_per_huge_page(h);
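For reference, the __folio_set_locked()/__filemap_add_folio() sequence
open-coded above is the same lock-then-insert pattern the generic
filemap_add_folio() uses internally; hugetlb cannot call that helper
because it also puts the folio on the LRU, which huge pages must skip.
A simplified sketch of the generic helper (shadow-entry and workingset
handling elided; see mm/filemap.c for the real thing):

	int filemap_add_folio(struct address_space *mapping, struct folio *folio,
			pgoff_t index, gfp_t gfp)
	{
		void *shadow = NULL;
		int ret;

		/* Lock the new folio before it becomes visible in the cache. */
		__folio_set_locked(folio);
		ret = __filemap_add_folio(mapping, folio, index, gfp, &shadow);
		if (unlikely(ret))
			__folio_clear_locked(folio);
		else
			folio_add_lru(folio);	/* the step hugetlb must avoid */
		return ret;
	}

Passing NULL for the shadowp argument, as this patch does, simply opts
out of the shadow-entry bookkeeping that the generic path performs.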
From patchwork Wed Jun 1 19:23:33 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-fsdevel@vger.kernel.org,
    Mike Kravetz, Muchun Song
Subject: [PATCH 2/2] filemap: Remove add_to_page_cache() and add_to_page_cache_locked()
Date: Wed, 1 Jun 2022 20:23:33 +0100
Message-Id: <20220601192333.1560777-2-willy@infradead.org>
In-Reply-To: <20220601192333.1560777-1-willy@infradead.org>
References: <20220601192333.1560777-1-willy@infradead.org>

These functions have no more users, so delete them.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Kravetz
Reviewed-by: Muchun Song
---
 .../admin-guide/cgroup-v1/memcg_test.rst |  2 +-
 include/linux/pagemap.h                  | 18 -----------------
 mm/filemap.c                             | 20 -------------------
 mm/shmem.c                               |  2 +-
 mm/swap_state.c                          |  2 +-
 5 files changed, 3 insertions(+), 41 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v1/memcg_test.rst b/Documentation/admin-guide/cgroup-v1/memcg_test.rst
index 45b94f7b3beb..a402359abb99 100644
--- a/Documentation/admin-guide/cgroup-v1/memcg_test.rst
+++ b/Documentation/admin-guide/cgroup-v1/memcg_test.rst
@@ -97,7 +97,7 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
 =============
 
 	Page Cache is charged at
-	- add_to_page_cache_locked().
+	- filemap_add_folio().
 
 	The logic is very clear. (About migration, see below)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index ce96866fbec4..5555689ea809 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1098,8 +1098,6 @@ size_t fault_in_subpage_writeable(char __user *uaddr, size_t size);
 size_t fault_in_safe_writeable(const char __user *uaddr, size_t size);
 size_t fault_in_readable(const char __user *uaddr, size_t size);
 
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-		pgoff_t index, gfp_t gfp);
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 		pgoff_t index, gfp_t gfp);
 int filemap_add_folio(struct address_space *mapping, struct folio *folio,
@@ -1119,22 +1117,6 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp);
 loff_t mapping_seek_hole_data(struct address_space *, loff_t start, loff_t end,
 		int whence);
 
-/*
- * Like add_to_page_cache_locked, but used to add newly allocated pages:
- * the page is new, so we can just run __SetPageLocked() against it.
- */
-static inline int add_to_page_cache(struct page *page,
-		struct address_space *mapping, pgoff_t offset, gfp_t gfp_mask)
-{
-	int error;
-
-	__SetPageLocked(page);
-	error = add_to_page_cache_locked(page, mapping, offset, gfp_mask);
-	if (unlikely(error))
-		__ClearPageLocked(page);
-	return error;
-}
-
 /* Must be non-static for BPF error injection */
 int __filemap_add_folio(struct address_space *mapping, struct folio *folio,
 		pgoff_t index, gfp_t gfp, void **shadowp);

diff --git a/mm/filemap.c b/mm/filemap.c
index 9daeaab36081..1e66eea98a7e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -929,26 +929,6 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 }
 ALLOW_ERROR_INJECTION(__filemap_add_folio, ERRNO);
 
-/**
- * add_to_page_cache_locked - add a locked page to the pagecache
- * @page: page to add
- * @mapping: the page's address_space
- * @offset: page index
- * @gfp_mask: page allocation mode
- *
- * This function is used to add a page to the pagecache. It must be locked.
- * This function does not add the page to the LRU. The caller must do that.
- *
- * Return: %0 on success, negative error code otherwise.
- */
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-		pgoff_t offset, gfp_t gfp_mask)
-{
-	return __filemap_add_folio(mapping, page_folio(page), offset,
-			gfp_mask, NULL);
-}
-EXPORT_SYMBOL(add_to_page_cache_locked);
-
 int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 		pgoff_t index, gfp_t gfp)
 {

diff --git a/mm/shmem.c b/mm/shmem.c
index a6f565308133..60fdfc0208fd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -693,7 +693,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 /*
- * Like add_to_page_cache_locked, but error if expected item has gone.
+ * Like filemap_add_folio, but error if expected item has gone.
  */
 static int shmem_add_to_page_cache(struct folio *folio,
 				   struct address_space *mapping,

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 778d57d2d92d..f5b6f5638908 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -95,7 +95,7 @@ void *get_shadow_from_swap_cache(swp_entry_t entry)
 }
 
 /*
- * add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
+ * add_to_swap_cache resembles filemap_add_folio on swapper_space,
  * but sets SwapCache flag and private instead of mapping and index.
  */
 int add_to_swap_cache(struct page *page, swp_entry_t entry,
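Any straggling caller of the removed helpers converts mechanically to
the folio API. A hypothetical before/after sketch follows (the mapping,
index, and page variable names are invented for illustration; the
replacement pattern is exactly the one patch 1/2 open-codes in
huge_add_to_page_cache()):

	/* Old: err = add_to_page_cache(page, mapping, index, GFP_KERNEL); */
	struct folio *folio = page_folio(page);
	int err;

	/* Lock the freshly allocated folio, insert it, and back out the
	 * lock only on failure -- what the removed inline helper did. */
	__folio_set_locked(folio);
	err = __filemap_add_folio(mapping, folio, index, GFP_KERNEL, NULL);
	if (unlikely(err))
		__folio_clear_locked(folio);

	/* Old: err = add_to_page_cache_locked(page, mapping, index, gfp);
	 * New: err = __filemap_add_folio(mapping, page_folio(page), index,
	 *                                gfp, NULL);
	 */

Callers that also want the folio charged, shadow entries handled, and
the folio placed on the LRU should use filemap_add_folio() instead of
open-coding the sequence above.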