From patchwork Mon Aug  8 19:33:39 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, hughd@google.com
Subject: [PATCH 11/59] shmem: Convert shmem_replace_page() to use folios throughout
Date: Mon,  8 Aug 2022 20:33:39 +0100
Message-Id: <20220808193430.3378317-12-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808193430.3378317-1-willy@infradead.org>
References: <20220808193430.3378317-1-willy@infradead.org>
MIME-Version: 1.0

Introduce folio_set_swap_entry() to abstract how both folio->private
and swp_entry_t work.  Use swap_address_space() directly instead of
indirecting through folio_mapping().  Include an assertion that the old
folio is not large as we only allocate a single-page folio to replace
it.  Use folio_put_refs() instead of calling folio_put() twice.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h |  5 +++++
 mm/shmem.c           | 63 +++++++++++++++++++++-----------------------
 2 files changed, 35 insertions(+), 33 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 333d5588dc2d..afcb76bbd141 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -351,6 +351,11 @@ static inline swp_entry_t folio_swap_entry(struct folio *folio)
 	return entry;
 }
 
+static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
+{
+	folio->private = (void *)entry.val;
+}
+
 /* linux/mm/workingset.c */
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
 void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
diff --git a/mm/shmem.c b/mm/shmem.c
index f561f6e7f53b..eec32307984d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1560,12 +1560,6 @@ static struct folio *shmem_alloc_folio(gfp_t gfp,
 	return folio;
 }
 
-static struct page *shmem_alloc_page(gfp_t gfp,
-			struct shmem_inode_info *info, pgoff_t index)
-{
-	return &shmem_alloc_folio(gfp, info, index)->page;
-}
-
 static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode,
 		pgoff_t index, bool huge)
 {
@@ -1617,49 +1611,47 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
 static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 				struct shmem_inode_info *info, pgoff_t index)
 {
-	struct page *oldpage, *newpage;
 	struct folio *old, *new;
 	struct address_space *swap_mapping;
 	swp_entry_t entry;
 	pgoff_t swap_index;
 	int error;
 
-	oldpage = *pagep;
-	entry.val = page_private(oldpage);
+	old = page_folio(*pagep);
+	entry = folio_swap_entry(old);
 	swap_index = swp_offset(entry);
-	swap_mapping = page_mapping(oldpage);
+	swap_mapping = swap_address_space(entry);
 
 	/*
 	 * We have arrived here because our zones are constrained, so don't
 	 * limit chance of success by further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	newpage = shmem_alloc_page(gfp, info, index);
-	if (!newpage)
+	VM_BUG_ON_FOLIO(folio_test_large(old), old);
+	new = shmem_alloc_folio(gfp, info, index);
+	if (!new)
 		return -ENOMEM;
 
-	get_page(newpage);
-	copy_highpage(newpage, oldpage);
-	flush_dcache_page(newpage);
+	folio_get(new);
+	folio_copy(new, old);
+	flush_dcache_folio(new);
 
-	__SetPageLocked(newpage);
-	__SetPageSwapBacked(newpage);
-	SetPageUptodate(newpage);
-	set_page_private(newpage, entry.val);
-	SetPageSwapCache(newpage);
+	__folio_set_locked(new);
+	__folio_set_swapbacked(new);
+	folio_mark_uptodate(new);
+	folio_set_swap_entry(new, entry);
+	folio_set_swapcache(new);
 
 	/*
 	 * Our caller will very soon move newpage out of swapcache, but it's
 	 * a nice clean interface for us to replace oldpage by newpage there.
 	 */
 	xa_lock_irq(&swap_mapping->i_pages);
-	error = shmem_replace_entry(swap_mapping, swap_index, oldpage, newpage);
+	error = shmem_replace_entry(swap_mapping, swap_index, old, new);
 	if (!error) {
-		old = page_folio(oldpage);
-		new = page_folio(newpage);
 		mem_cgroup_migrate(old, new);
-		__inc_lruvec_page_state(newpage, NR_FILE_PAGES);
-		__dec_lruvec_page_state(oldpage, NR_FILE_PAGES);
+		__lruvec_stat_mod_folio(new, NR_FILE_PAGES, 1);
+		__lruvec_stat_mod_folio(old, NR_FILE_PAGES, -1);
 	}
 	xa_unlock_irq(&swap_mapping->i_pages);
 
@@ -1669,18 +1661,17 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 		 * both PageSwapCache and page_private after getting page lock;
 		 * but be defensive.  Reverse old to newpage for clear and free.
 		 */
-		oldpage = newpage;
+		old = new;
 	} else {
-		lru_cache_add(newpage);
-		*pagep = newpage;
+		folio_add_lru(new);
+		*pagep = &new->page;
 	}
 
-	ClearPageSwapCache(oldpage);
-	set_page_private(oldpage, 0);
+	folio_clear_swapcache(old);
+	old->private = NULL;
 
-	unlock_page(oldpage);
-	put_page(oldpage);
-	put_page(oldpage);
+	folio_unlock(old);
+	folio_put_refs(old, 2);
 
 	return error;
 }
@@ -2362,6 +2353,12 @@ static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir,
 }
 
 #ifdef CONFIG_USERFAULTFD
+static struct page *shmem_alloc_page(gfp_t gfp,
+			struct shmem_inode_info *info, pgoff_t index)
+{
+	return &shmem_alloc_folio(gfp, info, index)->page;
+}
+
 int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 			   pmd_t *dst_pmd,
 			   struct vm_area_struct *dst_vma,
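
For anyone following the folio conversion from outside the tree, the two
helpers this patch leans on are small enough to model in isolation.  The
sketch below is a standalone userspace approximation, not kernel code:
struct folio here is a two-field stand-in, and the real folio_put_refs()
frees the folio once its refcount drops to zero, which this model omits.
It shows the round-trip between the existing folio_swap_entry() getter
and the new folio_set_swap_entry() setter, and why folio_put_refs(old, 2)
replaces the two back-to-back put_page() calls.

/*
 * Userspace model (illustrative approximation, not kernel code) of
 * folio_set_swap_entry() / folio_swap_entry() / folio_put_refs().
 */
#include <assert.h>
#include <stdio.h>

typedef struct { unsigned long val; } swp_entry_t;

struct folio {
	void *private;		/* holds the swp_entry_t value */
	int refcount;
};

static inline swp_entry_t folio_swap_entry(struct folio *folio)
{
	swp_entry_t entry = { .val = (unsigned long)folio->private };
	return entry;
}

static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
{
	folio->private = (void *)entry.val;
}

/* Drop several references at once instead of calling folio_put() in a loop. */
static inline void folio_put_refs(struct folio *folio, int refs)
{
	assert(folio->refcount >= refs);
	folio->refcount -= refs;
}

int main(void)
{
	struct folio old = { .private = (void *)0x1234UL, .refcount = 2 };
	struct folio new = { .private = NULL, .refcount = 1 };

	/* As in shmem_replace_page(): copy the swap entry from the old
	 * folio into its single-page replacement... */
	folio_set_swap_entry(&new, folio_swap_entry(&old));
	assert(folio_swap_entry(&new).val == 0x1234UL);

	/* ...then drop both references to the old folio in one call,
	 * where the pre-folio code called put_page() twice. */
	folio_put_refs(&old, 2);
	printf("entry %#lx moved, old refcount now %d\n",
	       folio_swap_entry(&new).val, old.refcount);
	return 0;
}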