From patchwork Fri Apr 29 19:23:09 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 01/21] shmem: Convert shmem_alloc_hugepage() to use vma_alloc_folio()
Date: Fri, 29 Apr 2022 20:23:09 +0100
Message-Id: <20220429192329.3034378-2-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>

For now, return the head page of the folio, but remove use of the old
alloc_pages_vma() API.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4b2fea33158e..c89394221a7e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1527,7 +1527,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 	struct vm_area_struct pvma;
 	struct address_space *mapping = info->vfs_inode.i_mapping;
 	pgoff_t hindex;
-	struct page *page;
+	struct folio *folio;
 
 	hindex = round_down(index, HPAGE_PMD_NR);
 	if (xa_find(&mapping->i_pages, &hindex, hindex + HPAGE_PMD_NR - 1,
@@ -1535,13 +1535,11 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 		return NULL;
 
 	shmem_pseudo_vma_init(&pvma, info, hindex);
-	page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
+	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
 	shmem_pseudo_vma_destroy(&pvma);
-	if (page)
-		prep_transhuge_page(page);
-	else
+	if (!folio)
 		count_vm_event(THP_FILE_FALLBACK);
-	return page;
+	return &folio->page;
 }
 
 static struct page *shmem_alloc_page(gfp_t gfp,
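The `&folio->page` return is a cheap transitional step: `page` sits at the
start of `struct folio`, so the head page and the folio share an address and
callers that still traffic in struct page keep working. A minimal sketch of
the resulting calling pattern; the caller name is hypothetical, and only the
APIs this patch introduces are used:

	/* Hypothetical caller: allocate a PMD-sized folio for a VMA and
	 * hand the head page to code that still wants a struct page.
	 */
	static struct page *alloc_thp_page(gfp_t gfp,
			struct vm_area_struct *vma, unsigned long addr)
	{
		/* vma_alloc_folio() replaces alloc_pages_vma(); with
		 * hugepage=true it also covers the THP preparation that
		 * callers previously did with prep_transhuge_page().
		 */
		struct folio *folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER,
						      vma, addr, true);

		if (!folio)
			return NULL;
		return &folio->page;	/* head page of the folio */
	}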
From patchwork Fri Apr 29 19:23:10 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 02/21] mm/huge_memory: Convert do_huge_pmd_anonymous_page() to use vma_alloc_folio()
Date: Fri, 29 Apr 2022 20:23:10 +0100
Message-Id: <20220429192329.3034378-3-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>

Remove the use of this old API, eliminating a call to
prep_transhuge_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/huge_memory.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c468fee595ff..caf0e7d27337 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -725,7 +725,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	gfp_t gfp;
-	struct page *page;
+	struct folio *folio;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 
 	if (!transhuge_vma_suitable(vma, haddr))
@@ -774,13 +774,12 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		return ret;
 	}
 	gfp = vma_thp_gfp_mask(vma);
-	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
-	if (unlikely(!page)) {
+	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
+	if (unlikely(!folio)) {
 		count_vm_event(THP_FAULT_FALLBACK);
 		return VM_FAULT_FALLBACK;
 	}
-	prep_transhuge_page(page);
-	return __do_huge_pmd_anonymous_page(vmf, page, gfp);
+	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
 }
 
 static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
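The two-step allocate-then-prep pattern this removes is easier to see side
by side; a condensed before/after sketch drawn directly from the hunk above:

	/* Before: allocate, then convert to a transparent huge page. */
	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
	if (unlikely(!page)) {
		count_vm_event(THP_FAULT_FALLBACK);
		return VM_FAULT_FALLBACK;
	}
	prep_transhuge_page(page);

	/* After: vma_alloc_folio() with hugepage=true does both steps. */
	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
	if (unlikely(!folio)) {
		count_vm_event(THP_FAULT_FALLBACK);
		return VM_FAULT_FALLBACK;
	}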
From patchwork Fri Apr 29 19:23:11 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 03/21] mm: Remove alloc_pages_vma()
Date: Fri, 29 Apr 2022 20:23:11 +0100
Message-Id: <20220429192329.3034378-4-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>

All callers have now been converted to use vma_alloc_folio(), so convert
the body of alloc_pages_vma() to allocate folios instead.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/gfp.h | 18 +++++++---------
 mm/mempolicy.c      | 51 ++++++++++++++++++++++-----------------------
 2 files changed, 32 insertions(+), 37 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 3e3d36fc2109..2a08a3c4ba95 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -613,13 +613,8 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 #ifdef CONFIG_NUMA
 struct page *alloc_pages(gfp_t gfp, unsigned int order);
 struct folio *folio_alloc(gfp_t gfp, unsigned order);
-struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
-		struct vm_area_struct *vma, unsigned long addr,
-		bool hugepage);
 struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned long addr, bool hugepage);
-#define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
-	alloc_pages_vma(gfp_mask, order, vma, addr, true)
 #else
 static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
@@ -629,16 +624,17 @@ static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
 {
 	return __folio_alloc_node(gfp, order, numa_node_id());
 }
-#define alloc_pages_vma(gfp_mask, order, vma, addr, hugepage) \
-	alloc_pages(gfp_mask, order)
 #define vma_alloc_folio(gfp, order, vma, addr, hugepage) \
 	folio_alloc(gfp, order)
-#define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
-	alloc_pages(gfp_mask, order)
 #endif
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
-#define alloc_page_vma(gfp_mask, vma, addr)			\
-	alloc_pages_vma(gfp_mask, 0, vma, addr, false)
+static inline struct page *alloc_page_vma(gfp_t gfp,
+		struct vm_area_struct *vma, unsigned long addr)
+{
+	struct folio *folio = vma_alloc_folio(gfp, 0, vma, addr, false);
+
+	return &folio->page;
+}
 
 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8c74107a2b15..174efbee1cb5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2135,44 +2135,55 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
 }
 
 /**
- * alloc_pages_vma - Allocate a page for a VMA.
+ * vma_alloc_folio - Allocate a folio for a VMA.
  * @gfp: GFP flags.
- * @order: Order of the GFP allocation.
+ * @order: Order of the folio.
  * @vma: Pointer to VMA or NULL if not available.
  * @addr: Virtual address of the allocation.  Must be inside @vma.
  * @hugepage: For hugepages try only the preferred node if possible.
 *
- * Allocate a page for a specific address in @vma, using the appropriate
+ * Allocate a folio for a specific address in @vma, using the appropriate
 * NUMA policy.  When @vma is not NULL the caller must hold the mmap_lock
 * of the mm_struct of the VMA to prevent it from going away.  Should be
- * used for all allocations for pages that will be mapped into user space.
+ * used for all allocations for folios that will be mapped into user space.
 *
- * Return: The page on success or NULL if allocation fails.
+ * Return: The folio on success or NULL if allocation fails.
 */
-struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
+struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned long addr, bool hugepage)
 {
 	struct mempolicy *pol;
 	int node = numa_node_id();
-	struct page *page;
+	struct folio *folio;
 	int preferred_nid;
 	nodemask_t *nmask;
 
 	pol = get_vma_policy(vma, addr);
 
 	if (pol->mode == MPOL_INTERLEAVE) {
+		struct page *page;
 		unsigned nid;
 
 		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
 		mpol_cond_put(pol);
+		gfp |= __GFP_COMP;
 		page = alloc_page_interleave(gfp, order, nid);
+		if (page && order > 1)
+			prep_transhuge_page(page);
+		folio = (struct folio *)page;
 		goto out;
 	}
 
 	if (pol->mode == MPOL_PREFERRED_MANY) {
+		struct page *page;
+
		node = policy_node(gfp, pol, node);
+		gfp |= __GFP_COMP;
 		page = alloc_pages_preferred_many(gfp, order, node, pol);
 		mpol_cond_put(pol);
+		if (page && order > 1)
+			prep_transhuge_page(page);
+		folio = (struct folio *)page;
 		goto out;
 	}
 
@@ -2199,8 +2210,8 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 			 * First, try to allocate THP only on local node, but
 			 * don't reclaim unnecessarily, just compact.
 			 */
-			page = __alloc_pages_node(hpage_node,
-				gfp | __GFP_THISNODE | __GFP_NORETRY, order);
+			folio = __folio_alloc_node(gfp | __GFP_THISNODE |
+					__GFP_NORETRY, order, hpage_node);
 
 			/*
 			 * If hugepage allocations are configured to always
@@ -2208,8 +2219,9 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 			 * to prefer hugepage backing, retry allowing remote
 			 * memory with both reclaim and compact as well.
 			 */
-			if (!page && (gfp & __GFP_DIRECT_RECLAIM))
-				page = __alloc_pages(gfp, order, hpage_node, nmask);
+			if (!folio && (gfp & __GFP_DIRECT_RECLAIM))
+				folio = __folio_alloc(gfp, order, hpage_node,
+						      nmask);
 
 			goto out;
 		}
 	}
@@ -2217,25 +2229,12 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 	nmask = policy_nodemask(gfp, pol);
 	preferred_nid = policy_node(gfp, pol, node);
-	page = __alloc_pages(gfp, order, preferred_nid, nmask);
+	folio = __folio_alloc(gfp, order, preferred_nid, nmask);
 	mpol_cond_put(pol);
 out:
-	return page;
-}
-EXPORT_SYMBOL(alloc_pages_vma);
-
-struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
-		unsigned long addr, bool hugepage)
-{
-	struct folio *folio;
-
-	folio = (struct folio *)alloc_pages_vma(gfp, order, vma, addr,
-			hugepage);
-	if (folio && order > 1)
-		prep_transhuge_page(&folio->page);
-	return folio;
 }
+EXPORT_SYMBOL(vma_alloc_folio);
 
 /**
 * alloc_pages - Allocate pages.
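With alloc_pages_vma() gone, order-0 callers go through the new inline
alloc_page_vma() wrapper, and the __GFP_COMP / prep_transhuge_page() handling
for the interleave and preferred-many policies moves inside vma_alloc_folio()
itself. A small usage sketch with hypothetical variables (the GFP flags are
only an example), showing that the wrapper and a direct call are equivalent:

	/* The wrapper added to gfp.h above: */
	struct page *page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, addr);

	/* ...is now just shorthand for an order-0 folio allocation;
	 * a single-page folio and its head page coincide.
	 */
	struct folio *folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
					      vma, addr, false);
	struct page *same = &folio->page;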
From patchwork Fri Apr 29 19:23:12 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 04/21] vmscan: Use folio_mapped() in shrink_page_list()
Date: Fri, 29 Apr 2022 20:23:12 +0100
Message-Id: <20220429192329.3034378-5-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>

Remove some legacy function calls.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1678802e03e7..27be6f9b2ba5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1549,7 +1549,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		if (unlikely(!page_evictable(page)))
 			goto activate_locked;
 
-		if (!sc->may_unmap && page_mapped(page))
+		if (!sc->may_unmap && folio_mapped(folio))
 			goto keep_locked;
 
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
@@ -1743,21 +1743,21 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		}
 
 		/*
-		 * The page is mapped into the page tables of one or more
+		 * The folio is mapped into the page tables of one or more
 		 * processes. Try to unmap it here.
 		 */
-		if (page_mapped(page)) {
+		if (folio_mapped(folio)) {
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
-			bool was_swapbacked = PageSwapBacked(page);
+			bool was_swapbacked = folio_test_swapbacked(folio);
 
-			if (PageTransHuge(page) &&
-					thp_order(page) >= HPAGE_PMD_ORDER)
+			if (folio_test_pmd_mappable(folio))
 				flags |= TTU_SPLIT_HUGE_PMD;
 
 			try_to_unmap(folio, flags);
-			if (page_mapped(page)) {
+			if (folio_mapped(folio)) {
 				stat->nr_unmap_fail += nr_pages;
-				if (!was_swapbacked && PageSwapBacked(page))
+				if (!was_swapbacked &&
+				    folio_test_swapbacked(folio))
 					stat->nr_lazyfree_fail += nr_pages;
 				goto activate_locked;
 			}
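folio_mapped() takes the folio directly, so each test avoids the hidden
head-page lookup the page-based call performs when handed a tail page. A
sketch of the substitution pattern; it assumes the folio was derived once
near the top of the loop, which the series does with page_folio():

	struct folio *folio = page_folio(page);	/* resolve head once */

	/* Every later test reuses the folio, with no per-call
	 * compound_head() lookup:
	 */
	if (!sc->may_unmap && folio_mapped(folio))
		goto keep_locked;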
From patchwork Fri Apr 29 19:23:13 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 05/21] vmscan: Convert the writeback handling in shrink_page_list() to folios
Date: Fri, 29 Apr 2022 20:23:13 +0100
Message-Id: <20220429192329.3034378-6-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>

Slightly more efficient due to fewer calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 77 ++++++++++++++++++++++++++++-------------------------
 1 file changed, 41 insertions(+), 36 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 27be6f9b2ba5..19c1bcd886ef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1578,40 +1578,42 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			stat->nr_congested += nr_pages;
 
 		/*
-		 * If a page at the tail of the LRU is under writeback, there
+		 * If a folio at the tail of the LRU is under writeback, there
 		 * are three cases to consider.
 		 *
-		 * 1) If reclaim is encountering an excessive number of pages
-		 *    under writeback and this page is both under writeback and
-		 *    PageReclaim then it indicates that pages are being queued
-		 *    for IO but are being recycled through the LRU before the
-		 *    IO can complete. Waiting on the page itself risks an
-		 *    indefinite stall if it is impossible to writeback the
-		 *    page due to IO error or disconnected storage so instead
-		 *    note that the LRU is being scanned too quickly and the
-		 *    caller can stall after page list has been processed.
+		 * 1) If reclaim is encountering an excessive number of folios
+		 *    under writeback and this folio is both under
+		 *    writeback and has the reclaim flag set then it
+		 *    indicates that folios are being queued for I/O but
+		 *    are being recycled through the LRU before the I/O
+		 *    can complete. Waiting on the folio itself risks an
+		 *    indefinite stall if it is impossible to writeback
+		 *    the folio due to I/O error or disconnected storage
+		 *    so instead note that the LRU is being scanned too
+		 *    quickly and the caller can stall after the folio
+		 *    list has been processed.
 		 *
-		 * 2) Global or new memcg reclaim encounters a page that is
+		 * 2) Global or new memcg reclaim encounters a folio that is
 		 *    not marked for immediate reclaim, or the caller does not
 		 *    have __GFP_FS (or __GFP_IO if it's simply going to swap,
-		 *    not to fs). In this case mark the page for immediate
+		 *    not to fs). In this case mark the folio for immediate
 		 *    reclaim and continue scanning.
 		 *
 		 *    Require may_enter_fs because we would wait on fs, which
-		 *    may not have submitted IO yet. And the loop driver might
-		 *    enter reclaim, and deadlock if it waits on a page for
+		 *    may not have submitted I/O yet. And the loop driver might
+		 *    enter reclaim, and deadlock if it waits on a folio for
 		 *    which it is needed to do the write (loop masks off
 		 *    __GFP_IO|__GFP_FS for this reason); but more thought
 		 *    would probably show more reasons.
 		 *
-		 * 3) Legacy memcg encounters a page that is already marked
-		 *    PageReclaim. memcg does not have any dirty pages
+		 * 3) Legacy memcg encounters a folio that already has the
+		 *    reclaim flag set. memcg does not have any dirty folio
 		 *    throttling so we could easily OOM just because too many
-		 *    pages are in writeback and there is nothing else to
+		 *    folios are in writeback and there is nothing else to
 		 *    reclaim. Wait for the writeback to complete.
 		 *
-		 * In cases 1) and 2) we activate the pages to get them out of
-		 * the way while we continue scanning for clean pages on the
+		 * In cases 1) and 2) we activate the folios to get them out of
+		 * the way while we continue scanning for clean folios on the
 		 * inactive list and refilling from the active list. The
 		 * observation here is that waiting for disk writes is more
 		 * expensive than potentially causing reloads down the line.
@@ -1619,38 +1621,41 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		 * memory pressure on the cache working set any longer than it
 		 * takes to write them to disk.
 		 */
-		if (PageWriteback(page)) {
+		if (folio_test_writeback(folio)) {
 			/* Case 1 above */
 			if (current_is_kswapd() &&
-			    PageReclaim(page) &&
+			    folio_test_reclaim(folio) &&
 			    test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
 				stat->nr_immediate += nr_pages;
 				goto activate_locked;
 
 			/* Case 2 above */
 			} else if (writeback_throttling_sane(sc) ||
-			    !PageReclaim(page) || !may_enter_fs) {
+			    !folio_test_reclaim(folio) || !may_enter_fs) {
 				/*
-				 * This is slightly racy - end_page_writeback()
-				 * might have just cleared PageReclaim, then
-				 * setting PageReclaim here end up interpreted
-				 * as PageReadahead - but that does not matter
-				 * enough to care.  What we do want is for this
-				 * page to have PageReclaim set next time memcg
-				 * reclaim reaches the tests above, so it will
-				 * then wait_on_page_writeback() to avoid OOM;
-				 * and it's also appropriate in global reclaim.
+				 * This is slightly racy -
+				 * folio_end_writeback() might have just
+				 * cleared the reclaim flag, then setting
+				 * reclaim here ends up interpreted as
+				 * the readahead flag - but that does
+				 * not matter enough to care. What we
+				 * do want is for this folio to have
+				 * the reclaim flag set next time memcg
+				 * reclaim reaches the tests above, so
+				 * it will then folio_wait_writeback()
+				 * to avoid OOM; and it's also appropriate
+				 * in global reclaim.
 				 */
-				SetPageReclaim(page);
+				folio_set_reclaim(folio);
 				stat->nr_writeback += nr_pages;
 				goto activate_locked;
 
 			/* Case 3 above */
 			} else {
-				unlock_page(page);
-				wait_on_page_writeback(page);
-				/* then go back and try same page again */
-				list_add_tail(&page->lru, page_list);
+				folio_unlock(folio);
+				folio_wait_writeback(folio);
+				/* then go back and try same folio again */
+				list_add_tail(&folio->lru, page_list);
 				continue;
 			}
 		}
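The conversion is mechanical; for reference, the page APIs replaced in this
patch and their folio equivalents, collected from the hunks above:

	/* PageWriteback(page)           -> folio_test_writeback(folio)
	 * PageReclaim(page)             -> folio_test_reclaim(folio)
	 * SetPageReclaim(page)          -> folio_set_reclaim(folio)
	 * unlock_page(page)             -> folio_unlock(folio)
	 * wait_on_page_writeback(page)  -> folio_wait_writeback(folio)
	 * end_page_writeback(page)      -> folio_end_writeback(folio)
	 */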
From patchwork Fri Apr 29 19:23:14 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 06/21] swap: Turn get_swap_page() into folio_alloc_swap()
Date: Fri, 29 Apr 2022 20:23:14 +0100
Message-Id: <20220429192329.3034378-7-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>

This removes an assumption that a large folio is HPAGE_PMD_NR pages
in size.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h | 13 +++++++------
 mm/memcontrol.c      | 16 ++++++++--------
 mm/shmem.c           |  3 ++-
 mm/swap_slots.c      | 14 +++++++-------
 mm/swap_state.c      |  3 ++-
 mm/swapfile.c        | 17 +++++++++--------
 6 files changed, 35 insertions(+), 31 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 27093b477c5f..147a9a173508 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -494,7 +494,7 @@ static inline long get_nr_swap_pages(void)
 }
 
 extern void si_swapinfo(struct sysinfo *);
-extern swp_entry_t get_swap_page(struct page *page);
+swp_entry_t folio_alloc_swap(struct folio *folio);
 extern void put_swap_page(struct page *page, swp_entry_t entry);
 extern swp_entry_t get_swap_page_of_type(int);
 extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
@@ -685,7 +685,7 @@ static inline int try_to_free_swap(struct page *page)
 	return 0;
 }
 
-static inline swp_entry_t get_swap_page(struct page *page)
+static inline swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
 	entry.val = 0;
@@ -739,12 +739,13 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 #ifdef CONFIG_MEMCG_SWAP
 void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry);
-extern int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
-static inline int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry);
+static inline int mem_cgroup_try_charge_swap(struct folio *folio,
+		swp_entry_t entry)
 {
 	if (mem_cgroup_disabled())
 		return 0;
-	return __mem_cgroup_try_charge_swap(page, entry);
+	return __mem_cgroup_try_charge_swap(folio, entry);
 }
 
 extern void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
@@ -762,7 +763,7 @@ static inline void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 {
 }
 
-static inline int mem_cgroup_try_charge_swap(struct page *page,
+static inline int mem_cgroup_try_charge_swap(struct folio *folio,
 					     swp_entry_t entry)
 {
 	return 0;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 598fece89e2b..985eff804004 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7125,17 +7125,17 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 }
 
 /**
- * __mem_cgroup_try_charge_swap - try charging swap space for a page
- * @page: page being added to swap
+ * __mem_cgroup_try_charge_swap - try charging swap space for a folio
+ * @folio: folio being added to swap
 * @entry: swap entry to charge
 *
- * Try to charge @page's memcg for the swap space at @entry.
+ * Try to charge @folio's memcg for the swap space at @entry.
 *
 * Returns 0 on success, -ENOMEM on failure.
 */
-int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 {
-	unsigned int nr_pages = thp_nr_pages(page);
+	unsigned int nr_pages = folio_nr_pages(folio);
 	struct page_counter *counter;
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
@@ -7143,9 +7143,9 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
 
-	memcg = page_memcg(page);
+	memcg = folio_memcg(folio);
 
-	VM_WARN_ON_ONCE_PAGE(!memcg, page);
+	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
 	if (!memcg)
 		return 0;
 
@@ -7168,7 +7168,7 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 	if (nr_pages > 1)
 		mem_cgroup_id_get_many(memcg, nr_pages - 1);
 	oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
-	VM_BUG_ON_PAGE(oldid, page);
+	VM_BUG_ON_FOLIO(oldid, folio);
 	mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
 
 	return 0;
diff --git a/mm/shmem.c b/mm/shmem.c
index c89394221a7e..85c23696efc6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1312,6 +1312,7 @@ int shmem_unuse(unsigned int type)
 */
 static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 {
+	struct folio *folio = page_folio(page);
 	struct shmem_inode_info *info;
 	struct address_space *mapping;
 	struct inode *inode;
@@ -1385,7 +1386,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 			SetPageUptodate(page);
 	}
 
-	swap = get_swap_page(page);
+	swap = folio_alloc_swap(folio);
 	if (!swap.val)
 		goto redirty;
 
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 2b5531840583..0218ec1cd24c 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -117,7 +117,7 @@ static int alloc_swap_slot_cache(unsigned int cpu)
 
 	/*
 	 * Do allocation outside swap_slots_cache_mutex
-	 * as kvzalloc could trigger reclaim and get_swap_page,
+	 * as kvzalloc could trigger reclaim and folio_alloc_swap,
 	 * which can lock swap_slots_cache_mutex.
 	 */
 	slots = kvcalloc(SWAP_SLOTS_CACHE_SIZE, sizeof(swp_entry_t),
@@ -213,7 +213,7 @@ static void __drain_swap_slots_cache(unsigned int type)
 	 * this function can be invoked in the cpu
 	 * hot plug path:
 	 * cpu_up -> lock cpu_hotplug -> cpu hotplug state callback
-	 *   -> memory allocation -> direct reclaim -> get_swap_page
+	 *   -> memory allocation -> direct reclaim -> folio_alloc_swap
 	 *   -> drain_swap_slots_cache
 	 *
 	 * Hence the loop over current online cpu below could miss cpu that
@@ -301,16 +301,16 @@ int free_swap_slot(swp_entry_t entry)
 	return 0;
 }
 
-swp_entry_t get_swap_page(struct page *page)
+swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
 	struct swap_slots_cache *cache;
 
 	entry.val = 0;
 
-	if (PageTransHuge(page)) {
+	if (folio_test_large(folio)) {
 		if (IS_ENABLED(CONFIG_THP_SWAP))
-			get_swap_pages(1, &entry, HPAGE_PMD_NR);
+			get_swap_pages(1, &entry, folio_nr_pages(folio));
 		goto out;
 	}
 
@@ -344,8 +344,8 @@ swp_entry_t get_swap_page(struct page *page)
 			get_swap_pages(1, &entry, 1);
 out:
-	if (mem_cgroup_try_charge_swap(page, entry)) {
-		put_swap_page(page, entry);
+	if (mem_cgroup_try_charge_swap(folio, entry)) {
+		put_swap_page(&folio->page, entry);
 		entry.val = 0;
 	}
 	return entry;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 013856004825..989ad18f5468 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -183,13 +183,14 @@ void __delete_from_swap_cache(struct page *page,
 */
 int add_to_swap(struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	swp_entry_t entry;
 	int err;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageUptodate(page), page);
 
-	entry = get_swap_page(page);
+	entry = folio_alloc_swap(folio);
 	if (!entry.val)
 		return 0;
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 63c61f8b2611..c34f41553144 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -76,9 +76,9 @@ static PLIST_HEAD(swap_active_head);
 /*
 * all available (active, not full) swap_info_structs
 * protected with swap_avail_lock, ordered by priority.
- * This is used by get_swap_page() instead of swap_active_head
+ * This is used by folio_alloc_swap() instead of swap_active_head
 * because swap_active_head includes all swap_info_structs,
- * but get_swap_page() doesn't need to look at full ones.
+ * but folio_alloc_swap() doesn't need to look at full ones.
 * This uses its own lock instead of swap_lock because when a
 * swap_info_struct changes between not-full/full, it needs to
 * add/remove itself to/from this list, but the swap_info_struct->lock
@@ -2093,11 +2093,12 @@ static int try_to_unuse(unsigned int type)
 	 * Under global memory pressure, swap entries can be reinserted back
 	 * into process space after the mmlist loop above passes over them.
 	 *
-	 * Limit the number of retries? No: when mmget_not_zero() above fails,
-	 * that mm is likely to be freeing swap from exit_mmap(), which proceeds
-	 * at its own independent pace; and even shmem_writepage() could have
-	 * been preempted after get_swap_page(), temporarily hiding that swap.
-	 * It's easy and robust (though cpu-intensive) just to keep retrying.
+	 * Limit the number of retries? No: when mmget_not_zero()
+	 * above fails, that mm is likely to be freeing swap from
+	 * exit_mmap(), which proceeds at its own independent pace;
+	 * and even shmem_writepage() could have been preempted after
+	 * folio_alloc_swap(), temporarily hiding that swap. It's easy
+	 * and robust (though cpu-intensive) just to keep retrying.
 	 */
 	if (READ_ONCE(si->inuse_pages)) {
 		if (!signal_pending(current))
@@ -2310,7 +2311,7 @@ static void _enable_swap_info(struct swap_info_struct *p)
 	 * which on removal of any swap_info_struct with an auto-assigned
 	 * (i.e. negative) priority increments the auto-assigned priority
 	 * of any lower-priority swap_info_structs.
-	 * swap_avail_head needs to be priority ordered for get_swap_page(),
+	 * swap_avail_head needs to be priority ordered for folio_alloc_swap(),
 	 * which allocates swap pages from the highest available priority
 	 * swap_info_struct.
 	 */
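The key change sits in folio_alloc_swap() itself: the number of swap slots
requested for a large folio now comes from the folio rather than from the
compile-time HPAGE_PMD_NR constant, which is what removes the size
assumption. A condensed sketch of the allocation path plus a caller, drawn
from the swap_slots.c and shmem.c hunks above:

	/* Allocation: size the request from the folio itself. */
	if (folio_test_large(folio)) {
		if (IS_ENABLED(CONFIG_THP_SWAP))
			get_swap_pages(1, &entry, folio_nr_pages(folio));
		goto out;
	}

	/* Caller (shmem_writepage): derive the folio once, then use it. */
	struct folio *folio = page_folio(page);
	swp_entry_t swap = folio_alloc_swap(folio); /* was get_swap_page(page) */

	if (!swap.val)
		goto redirty;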
From patchwork Fri Apr 29 19:23:15 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 07/21] swap: Convert add_to_swap() to take a folio
Date: Fri, 29 Apr 2022 20:23:15 +0100
Message-Id: <20220429192329.3034378-8-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>

The only caller already has a folio available, so this saves a
conversion.  Also convert the return type to boolean.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h |  6 +++---
 mm/swap_state.c      | 47 +++++++++++++++++++++++---------------------
 mm/vmscan.c          |  6 +++---
 3 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 147a9a173508..f87bb495e482 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -449,7 +449,7 @@ static inline unsigned long total_swapcache_pages(void)
 }
 
 extern void show_swap_cache_info(void);
-extern int add_to_swap(struct page *page);
+bool add_to_swap(struct folio *folio);
 extern void *get_shadow_from_swap_cache(swp_entry_t entry);
 extern int add_to_swap_cache(struct page *page, swp_entry_t entry,
 			gfp_t gfp, void **shadowp);
@@ -630,9 +630,9 @@ struct page *find_get_incore_page(struct address_space *mapping, pgoff_t index)
 	return find_get_page(mapping, index);
 }
 
-static inline int add_to_swap(struct page *page)
+static inline bool add_to_swap(struct folio *folio)
 {
-	return 0;
+	return false;
 }
 
 static inline void *get_shadow_from_swap_cache(swp_entry_t entry)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 989ad18f5468..858d8904b06e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -175,24 +175,26 @@ void __delete_from_swap_cache(struct page *page,
 }
 
 /**
- * add_to_swap - allocate swap space for a page
- * @page: page we want to move to swap
+ * add_to_swap - allocate swap space for a folio
+ * @folio: folio we want to move to swap
 *
- * Allocate swap space for the page and add the page to the
- * swap cache.  Caller needs to hold the page lock.
+ * Allocate swap space for the folio and add the folio to the
+ * swap cache.
+ *
+ * Context: Caller needs to hold the folio lock.
+ * Return: Whether the folio was added to the swap cache.
 */
-int add_to_swap(struct page *page)
+bool add_to_swap(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	swp_entry_t entry;
 	int err;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageUptodate(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_uptodate(folio), folio);
 
 	entry = folio_alloc_swap(folio);
 	if (!entry.val)
-		return 0;
+		return false;
 
 	/*
 	 * XArray node allocations from PF_MEMALLOC contexts could
@@ -205,7 +207,7 @@ int add_to_swap(struct page *page)
 	/*
 	 * Add it to the swap cache.
 	 */
-	err = add_to_swap_cache(page, entry,
+	err = add_to_swap_cache(&folio->page, entry,
 			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);
 	if (err)
 		/*
@@ -214,22 +216,23 @@ int add_to_swap(struct page *page)
 		 */
 		goto fail;
 	/*
-	 * Normally the page will be dirtied in unmap because its pte should be
-	 * dirty. A special case is MADV_FREE page. The page's pte could have
-	 * dirty bit cleared but the page's SwapBacked bit is still set because
-	 * clearing the dirty bit and SwapBacked bit has no lock protected. For
-	 * such page, unmap will not set dirty bit for it, so page reclaim will
-	 * not write the page out. This can cause data corruption when the page
-	 * is swap in later. Always setting the dirty bit for the page solves
-	 * the problem.
+	 * Normally the folio will be dirtied in unmap because its
+	 * pte should be dirty. A special case is MADV_FREE page. The
+	 * page's pte could have dirty bit cleared but the folio's
+	 * SwapBacked flag is still set because clearing the dirty bit
+	 * and SwapBacked flag has no lock protected. For such folio,
+	 * unmap will not set dirty bit for it, so folio reclaim will
+	 * not write the folio out. This can cause data corruption when
+	 * the folio is swapped in later. Always setting the dirty flag
+	 * for the folio solves the problem.
 	 */
-	set_page_dirty(page);
+	folio_mark_dirty(folio);
 
-	return 1;
+	return true;
 
 fail:
-	put_swap_page(page, entry);
-	return 0;
+	put_swap_page(&folio->page, entry);
+	return false;
 }
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 19c1bcd886ef..8f7c32b3d65e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1710,8 +1710,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 							page_list))
 						goto activate_locked;
 				}
-				if (!add_to_swap(page)) {
-					if (!PageTransHuge(page))
+				if (!add_to_swap(folio)) {
+					if (!folio_test_large(folio))
 						goto activate_locked_split;
 					/* Fallback to swap normal pages */
 					if (split_folio_to_list(folio,
@@ -1720,7 +1720,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 					count_vm_event(THP_SWPOUT_FALLBACK);
 #endif
-					if (!add_to_swap(page))
+					if (!add_to_swap(folio))
 						goto activate_locked_split;
 				}
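With the boolean return, the vmscan caller reads naturally. A condensed
sketch of the fallback logic from the hunk above (outer indentation
flattened): a large folio that cannot get swap space is split and then
retried page by page:

	if (!add_to_swap(folio)) {
		if (!folio_test_large(folio))
			goto activate_locked_split;
		/* Fall back to swapping the large folio as normal pages. */
		if (split_folio_to_list(folio, page_list))
			goto activate_locked;
	#ifdef CONFIG_TRANSPARENT_HUGEPAGE
		count_vm_event(THP_SWPOUT_FALLBACK);
	#endif
		if (!add_to_swap(folio))
			goto activate_locked_split;
	}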
From patchwork Fri Apr 29 19:23:16 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 08/21] vmscan: Convert dirty page handling to folios
Date: Fri, 29 Apr 2022 20:23:16 +0100
Message-Id: <20220429192329.3034378-9-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>

Mostly this just eliminates calls to compound_head(), but
NR_VMSCAN_IMMEDIATE was being incremented by 1 instead of by nr_pages.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 48 ++++++++++++++++++++++++++----------------------
 1 file changed, 26 insertions(+), 22 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8f7c32b3d65e..950eeb2f759b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1768,28 +1768,31 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			}
 		}
 
-		if (PageDirty(page)) {
+		if (folio_test_dirty(folio)) {
 			/*
-			 * Only kswapd can writeback filesystem pages
+			 * Only kswapd can writeback filesystem folios
 			 * to avoid risk of stack overflow. But avoid
-			 * injecting inefficient single-page IO into
+			 * injecting inefficient single-folio I/O into
 			 * flusher writeback as much as possible: only
-			 * write pages when we've encountered many
-			 * dirty pages, and when we've already scanned
-			 * the rest of the LRU for clean pages and see
-			 * the same dirty pages again (PageReclaim).
+			 * write folios when we've encountered many
+			 * dirty folios, and when we've already scanned
+			 * the rest of the LRU for clean folios and see
+			 * the same dirty folios again (with the reclaim
+			 * flag set).
 			 */
-			if (page_is_file_lru(page) &&
-			    (!current_is_kswapd() || !PageReclaim(page) ||
+			if (folio_is_file_lru(folio) &&
+			    (!current_is_kswapd() ||
+			     !folio_test_reclaim(folio) ||
 			     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
 				/*
 				 * Immediately reclaim when written back.
-				 * Similar in principal to deactivate_page()
-				 * except we already have the page isolated
+				 * Similar in principle to deactivate_page()
+				 * except we already have the folio isolated
 				 * and know it's dirty
 				 */
-				inc_node_page_state(page, NR_VMSCAN_IMMEDIATE);
-				SetPageReclaim(page);
+				node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
+						nr_pages);
+				folio_set_reclaim(folio);
 
 				goto activate_locked;
 			}
@@ -1802,8 +1805,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 				goto keep_locked;
 
 			/*
-			 * Page is dirty. Flush the TLB if a writable entry
-			 * potentially exists to avoid CPU writes after IO
+			 * Folio is dirty. Flush the TLB if a writable entry
+			 * potentially exists to avoid CPU writes after I/O
 			 * starts and then write it out here.
*/ try_to_unmap_flush_dirty(); @@ -1815,23 +1818,24 @@ static unsigned int shrink_page_list(struct list_head *page_list, case PAGE_SUCCESS: stat->nr_pageout += nr_pages; - if (PageWriteback(page)) + if (folio_test_writeback(folio)) goto keep; - if (PageDirty(page)) + if (folio_test_dirty(folio)) goto keep; /* * A synchronous write - probably a ramdisk. Go - * ahead and try to reclaim the page. + * ahead and try to reclaim the folio. */ - if (!trylock_page(page)) + if (!folio_trylock(folio)) goto keep; - if (PageDirty(page) || PageWriteback(page)) + if (folio_test_dirty(folio) || + folio_test_writeback(folio)) goto keep_locked; - mapping = page_mapping(page); + mapping = folio_mapping(folio); fallthrough; case PAGE_CLEAN: - ; /* try to free the page below */ + ; /* try to free the folio below */ } } From patchwork Fri Apr 29 19:23:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 12832656 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55B6AC433FE for ; Fri, 29 Apr 2022 19:24:10 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 319676B0096; Fri, 29 Apr 2022 15:24:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2A2CA6B0098; Fri, 29 Apr 2022 15:24:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0A6F36B0099; Fri, 29 Apr 2022 15:24:03 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id E22066B0096 for ; Fri, 29 Apr 2022 15:24:03 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 9349425E64 for ; Fri, 29 Apr 2022 19:24:03 +0000 (UTC) X-FDA: 79410891966.13.8D56052 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf02.hostedemail.com (Postfix) with ESMTP id 770D180075 for ; Fri, 29 Apr 2022 19:23:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=VouUEevqAJoImC1PsqzC77NF117LJuXxB1hfk11vLmg=; b=UygGnB9sQ4Q/ZF3i0wXjwJgME7 P1fkYi8ntKHNnqxwQ8SfD5Qr3/qFqY2HN/6h8rNuEujmhBvIeFV/84hTmgq6XOW/cWT5QRVCDHD60 r+DZ4e/R9jbF6sj4MbV3HlMueLcB2maoidGijaHkMmJZzq5pW7kwJ3jbiDIT7am+zPp/sYUrbO1lz SKpKGCzxV1ZcdpUa7vVAqlmx9s4kQIMKPOTm0aSR2BsyIFtndT8CseImms5ieQ7EzkDygKJiIizK1 Ftcu4XPZ7IXo5LC2G0t3UO/+Prrfh6QOMzuojCSsJHYrjt+Po80oQ69SYHhySbxSA9fI3KxPdmbsE P1xbe9Ug==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkWDJ-00CjOz-N3; Fri, 29 Apr 2022 19:23:37 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org Subject: [PATCH 09/21] vmscan: Convert page buffer handling to use folios Date: Fri, 29 Apr 2022 20:23:17 +0100 Message-Id: <20220429192329.3034378-10-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429192329.3034378-1-willy@infradead.org> References: <20220429192329.3034378-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 
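
The accounting fix in this patch, as a minimal illustrative sketch
(nr_pages comes from folio_nr_pages(), as elsewhere in the series):

    /* Account every base page of a large folio, not just one. */
    long nr_pages = folio_nr_pages(folio);

    node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE, nr_pages);
    folio_set_reclaim(folio);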
From patchwork Fri Apr 29 19:23:17 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 09/21] vmscan: Convert page buffer handling to use folios
Date: Fri, 29 Apr 2022 20:23:17 +0100
Message-Id: <20220429192329.3034378-10-willy@infradead.org>

This mostly just removes calls to compound_head(); it also fixes
nr_reclaimed, which should be incremented by the number of pages in
the folio, not just 1.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/vmscan.c | 50 ++++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 950eeb2f759b..cda43f0bb285 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1840,42 +1840,44 @@ static unsigned int shrink_page_list(struct list_head *page_list,
		}

		/*
-		 * If the page has buffers, try to free the buffer mappings
-		 * associated with this page. If we succeed we try to free
-		 * the page as well.
+		 * If the folio has buffers, try to free the buffer
+		 * mappings associated with this folio. If we succeed
+		 * we try to free the folio as well.
		 *
-		 * We do this even if the page is PageDirty().
-		 * try_to_release_page() does not perform I/O, but it is
-		 * possible for a page to have PageDirty set, but it is actually
-		 * clean (all its buffers are clean). This happens if the
-		 * buffers were written out directly, with submit_bh(). ext3
-		 * will do this, as well as the blockdev mapping.
-		 * try_to_release_page() will discover that cleanness and will
-		 * drop the buffers and mark the page clean - it can be freed.
+		 * We do this even if the folio is dirty.
+		 * filemap_release_folio() does not perform I/O, but it
+		 * is possible for a folio to have the dirty flag set,
+		 * but it is actually clean (all its buffers are clean).
+		 * This happens if the buffers were written out directly,
+		 * with submit_bh(). ext3 will do this, as well as
+		 * the blockdev mapping. filemap_release_folio() will
+		 * discover that cleanness and will drop the buffers
+		 * and mark the folio clean - it can be freed.
		 *
-		 * Rarely, pages can have buffers and no ->mapping. These are
-		 * the pages which were not successfully invalidated in
-		 * truncate_cleanup_page(). We try to drop those buffers here
-		 * and if that worked, and the page is no longer mapped into
-		 * process address space (page_count == 1) it can be freed.
-		 * Otherwise, leave the page on the LRU so it is swappable.
+		 * Rarely, folios can have buffers and no ->mapping.
+		 * These are the folios which were not successfully
+		 * invalidated in truncate_cleanup_folio(). We try to
+		 * drop those buffers here and if that worked, and the
+		 * folio is no longer mapped into process address space
+		 * (refcount == 1) it can be freed. Otherwise, leave
+		 * the folio on the LRU so it is swappable.
		 */
-		if (page_has_private(page)) {
-			if (!try_to_release_page(page, sc->gfp_mask))
+		if (folio_has_private(folio)) {
+			if (!filemap_release_folio(folio, sc->gfp_mask))
				goto activate_locked;
-			if (!mapping && page_count(page) == 1) {
-				unlock_page(page);
-				if (put_page_testzero(page))
+			if (!mapping && folio_ref_count(folio) == 1) {
+				folio_unlock(folio);
+				if (folio_put_testzero(folio))
					goto free_it;
				else {
					/*
					 * rare race with speculative reference.
					 * the speculative reference will free
-					 * this page shortly, so we may
+					 * this folio shortly, so we may
					 * increment nr_reclaimed here (and
					 * leave it off the LRU).
					 */
-					nr_reclaimed++;
+					nr_reclaimed += nr_pages;
					continue;
				}
			}
		}
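
A condensed sketch of the buffer-release path after conversion
(illustrative only; it mirrors the hunk above):

    /* Strip buffer heads; if the isolation reference is then the
     * only one left, the folio can be freed immediately. */
    if (folio_has_private(folio)) {
            if (!filemap_release_folio(folio, sc->gfp_mask))
                    goto activate_locked;
            if (!mapping && folio_ref_count(folio) == 1) {
                    folio_unlock(folio);
                    if (folio_put_testzero(folio))
                            goto free_it;
            }
    }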
From patchwork Fri Apr 29 19:23:18 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 10/21] vmscan: Convert lazy freeing to folios
Date: Fri, 29 Apr 2022 20:23:18 +0100
Message-Id: <20220429192329.3034378-11-willy@infradead.org>

Remove a hidden call to compound_head(), and account nr_pages instead
of a single page. This matches the code in lru_lazyfree_fn() that
accounts nr_pages to PGLAZYFREE.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/memcontrol.h | 14 ++++++++++++++
 mm/vmscan.c                | 18 +++++++++---------
 2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 89b14729d59f..06a16c82558b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1061,6 +1061,15 @@ static inline void count_memcg_page_event(struct page *page,
		count_memcg_events(memcg, idx, 1);
 }

+static inline void count_memcg_folio_events(struct folio *folio,
+		enum vm_event_item idx, unsigned long nr)
+{
+	struct mem_cgroup *memcg = folio_memcg(folio);
+
+	if (memcg)
+		count_memcg_events(memcg, idx, nr);
+}
+
 static inline void count_memcg_event_mm(struct mm_struct *mm,
					enum vm_event_item idx)
 {
@@ -1498,6 +1507,11 @@ static inline void count_memcg_page_event(struct page *page,
 {
 }

+static inline void count_memcg_folio_events(struct folio *folio,
+		enum vm_event_item idx, unsigned long nr)
+{
+}
+
 static inline void count_memcg_event_mm(struct mm_struct *mm,
					enum vm_event_item idx)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index cda43f0bb285..0368ea3e9880 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1883,20 +1883,20 @@ static unsigned int shrink_page_list(struct list_head *page_list,
			}
		}

-		if (PageAnon(page) && !PageSwapBacked(page)) {
+		if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
			/* follow __remove_mapping for reference */
-			if (!page_ref_freeze(page, 1))
+			if (!folio_ref_freeze(folio, 1))
				goto keep_locked;
			/*
-			 * The page has only one reference left, which is
+			 * The folio has only one reference left, which is
			 * from the isolation. After the caller puts the
-			 * page back on lru and drops the reference, the
-			 * page will be freed anyway. It doesn't matter
-			 * which lru it goes. So we don't bother checking
-			 * PageDirty here.
+			 * folio back on the lru and drops the reference, the
+			 * folio will be freed anyway. It doesn't matter
+			 * which lru it goes on. So we don't bother checking
+			 * the dirty flag here.
			 */
-			count_vm_event(PGLAZYFREED);
-			count_memcg_page_event(page, PGLAZYFREED);
+			count_vm_events(PGLAZYFREED, nr_pages);
+			count_memcg_folio_events(folio, PGLAZYFREED, nr_pages);
		} else if (!mapping || !__remove_mapping(mapping, folio, true,
							 sc->target_mem_cgroup))
			goto keep_locked;
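
A hedged usage sketch of the new helper (names as in the patch): the
event count is explicit, so a large folio is accounted once per base
page rather than once per call:

    long nr_pages = folio_nr_pages(folio);

    count_vm_events(PGLAZYFREED, nr_pages);
    count_memcg_folio_events(folio, PGLAZYFREED, nr_pages);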
From patchwork Fri Apr 29 19:23:19 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 11/21] vmscan: Move initialisation of mapping down
Date: Fri, 29 Apr 2022 20:23:19 +0100
Message-Id: <20220429192329.3034378-12-willy@infradead.org>

Now that we don't interrogate the BDI for congestion, we can delay
looking up the folio's mapping until we've got further through the
function, reducing register pressure and saving a call to
folio_mapping() for folios we're adding to the swap cache.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/vmscan.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0368ea3e9880..9ac2583ca5e5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1568,12 +1568,11 @@ static unsigned int shrink_page_list(struct list_head *page_list,
			stat->nr_unqueued_dirty += nr_pages;

		/*
-		 * Treat this page as congested if the underlying BDI is or if
+		 * Treat this page as congested if
		 * pages are cycling through the LRU so quickly that the
		 * pages marked for immediate reclaim are making it to the
		 * end of the LRU a second time.
		 */
-		mapping = page_mapping(page);
		if (writeback && PageReclaim(page))
			stat->nr_congested += nr_pages;
@@ -1725,9 +1724,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
			}

			may_enter_fs = true;
-
-			/* Adding to swap updated mapping */
-			mapping = page_mapping(page);
		}
	} else if (PageSwapBacked(page) && PageTransHuge(page)) {
		/* Split shmem THP */
@@ -1768,6 +1764,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
		}
	}

+	mapping = folio_mapping(folio);
	if (folio_test_dirty(folio)) {
		/*
		 * Only kswapd can writeback filesystem folios
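
The resulting shape of the function, sketched (illustrative only):
one mapping lookup, performed after the swap-cache path can no longer
change it:

    /* Previously: mapping looked up near the top, and again after
     * add_to_swap().  Now: a single lookup, here. */
    mapping = folio_mapping(folio);
    if (folio_test_dirty(folio)) {
            /* writeback decisions see the up-to-date mapping */
    }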
From patchwork Fri Apr 29 19:23:20 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 12/21] vmscan: Convert the activate_locked portion of shrink_page_list to folios
Date: Fri, 29 Apr 2022 20:23:20 +0100
Message-Id: <20220429192329.3034378-13-willy@infradead.org>

This correctly accounts the number of pages activated for large
folios.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/vmscan.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9ac2583ca5e5..85c9758f6f32 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1927,15 +1927,16 @@ static unsigned int shrink_page_list(struct list_head *page_list,
		}
 activate_locked:
		/* Not a candidate for swapping, so reclaim swap space. */
-		if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
-						PageMlocked(page)))
-			try_to_free_swap(page);
-		VM_BUG_ON_PAGE(PageActive(page), page);
-		if (!PageMlocked(page)) {
-			int type = page_is_file_lru(page);
-			SetPageActive(page);
+		if (folio_test_swapcache(folio) &&
+		    (mem_cgroup_swap_full(&folio->page) ||
+		     folio_test_mlocked(folio)))
+			try_to_free_swap(&folio->page);
+		VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
+		if (!folio_test_mlocked(folio)) {
+			int type = folio_is_file_lru(folio);
+			folio_set_active(folio);
			stat->nr_activate[type] += nr_pages;
-			count_memcg_page_event(page, PGACTIVATE);
+			count_memcg_folio_events(folio, PGACTIVATE, nr_pages);
		}
 keep_locked:
		unlock_page(page);
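
Sketch of the corrected activation accounting (illustrative;
nr_pages is folio_nr_pages(folio)):

    int type = folio_is_file_lru(folio);   /* 0 = anon, 1 = file */

    folio_set_active(folio);
    stat->nr_activate[type] += nr_pages;   /* all base pages, not 1 */
    count_memcg_folio_events(folio, PGACTIVATE, nr_pages);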
From patchwork Fri Apr 29 19:23:21 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 13/21] vmscan: Remove remaining uses of page in shrink_page_list
Date: Fri, 29 Apr 2022 20:23:21 +0100
Message-Id: <20220429192329.3034378-14-willy@infradead.org>

These are all straightforward conversions to the folio API.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/vmscan.c | 115 ++++++++++++++++++++++++++--------------------------
 1 file changed, 57 insertions(+), 58 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 85c9758f6f32..cc9b93c7fa0c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1524,7 +1524,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 retry:
	while (!list_empty(page_list)) {
		struct address_space *mapping;
-		struct page *page;
		struct folio *folio;
		enum page_references references = PAGEREF_RECLAIM;
		bool dirty, writeback, may_enter_fs;
@@ -1534,31 +1533,31 @@ static unsigned int shrink_page_list(struct list_head *page_list,
		folio = lru_to_folio(page_list);
		list_del(&folio->lru);
-		page = &folio->page;
-		if (!trylock_page(page))
+		if (!folio_trylock(folio))
			goto keep;
-		VM_BUG_ON_PAGE(PageActive(page), page);
+		VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
-		nr_pages = compound_nr(page);
+		nr_pages = folio_nr_pages(folio);
-		/* Account the number of base pages even though THP */
+		/* Account the number of base pages */
		sc->nr_scanned += nr_pages;
-		if (unlikely(!page_evictable(page)))
+		if (unlikely(!folio_evictable(folio)))
			goto activate_locked;
		if (!sc->may_unmap && folio_mapped(folio))
			goto keep_locked;
		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
-			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
+			(folio_test_swapcache(folio) &&
+			 (sc->gfp_mask & __GFP_IO));
		/*
		 * The number of dirty pages determines if a node is marked
		 * reclaim_congested. kswapd will stall and start writing
-		 * pages if the tail of the LRU is all dirty unqueued pages.
+		 * folios if the tail of the LRU is all dirty unqueued folios.
		 */
		folio_check_dirty_writeback(folio, &dirty, &writeback);
		if (dirty || writeback)
@@ -1568,21 +1567,21 @@ static unsigned int shrink_page_list(struct list_head *page_list,
			stat->nr_unqueued_dirty += nr_pages;
		/*
-		 * Treat this page as congested if
-		 * pages are cycling through the LRU so quickly that the
-		 * pages marked for immediate reclaim are making it to the
-		 * end of the LRU a second time.
+		 * Treat this folio as congested if folios are cycling
+		 * through the LRU so quickly that the folios marked
+		 * for immediate reclaim are making it to the end of
+		 * the LRU a second time.
		 */
-		if (writeback && PageReclaim(page))
+		if (writeback && folio_test_reclaim(folio))
			stat->nr_congested += nr_pages;
		/*
		 * If a folio at the tail of the LRU is under writeback, there
		 * are three cases to consider.
		 *
-		 * 1) If reclaim is encountering an excessive number of folios
-		 *    under writeback and this folio is both under
-		 *    writeback and has the reclaim flag set then it
+		 * 1) If reclaim is encountering an excessive number
+		 *    of folios under writeback and this folio has both
+		 *    the writeback and reclaim flags set, then it
		 *    indicates that folios are being queued for I/O but
		 *    are being recycled through the LRU before the I/O
		 *    can complete. Waiting on the folio itself risks an
@@ -1633,16 +1632,16 @@ static unsigned int shrink_page_list(struct list_head *page_list,
				   !folio_test_reclaim(folio) || !may_enter_fs) {
				/*
				 * This is slightly racy -
-				 * folio_end_writeback() might have just
-				 * cleared the reclaim flag, then setting
-				 * reclaim here ends up interpreted as
-				 * the readahead flag - but that does
-				 * not matter enough to care.  What we
-				 * do want is for this folio to have
-				 * the reclaim flag set next time memcg
-				 * reclaim reaches the tests above, so
-				 * it will then folio_wait_writeback()
-				 * to avoid OOM; and it's also appropriate
+				 * folio_end_writeback() might have
+				 * just cleared the reclaim flag, then
+				 * setting the reclaim flag here ends up
+				 * interpreted as the readahead flag - but
+				 * that does not matter enough to care.
+				 * What we do want is for this folio to
+				 * have the reclaim flag set next time
+				 * memcg reclaim reaches the tests above,
+				 * so it will then wait for writeback to
+				 * avoid OOM; and it's also appropriate
				 * in global reclaim.
				 */
				folio_set_reclaim(folio);
@@ -1670,37 +1669,37 @@ static unsigned int shrink_page_list(struct list_head *page_list,
			goto keep_locked;
		case PAGEREF_RECLAIM:
		case PAGEREF_RECLAIM_CLEAN:
-			; /* try to reclaim the page below */
+			; /* try to reclaim the folio below */
		}
		/*
-		 * Before reclaiming the page, try to relocate
+		 * Before reclaiming the folio, try to relocate
		 * its contents to another node.
		 */
		if (do_demote_pass &&
-		    (thp_migration_supported() || !PageTransHuge(page))) {
-			list_add(&page->lru, &demote_pages);
-			unlock_page(page);
+		    (thp_migration_supported() || !folio_test_large(folio))) {
+			list_add(&folio->lru, &demote_pages);
+			folio_unlock(folio);
			continue;
		}
		/*
		 * Anonymous process memory has backing store?
		 * Try to allocate it some swap space here.
-		 * Lazyfree page could be freed directly
+		 * Lazyfree folio could be freed directly
		 */
-		if (PageAnon(page) && PageSwapBacked(page)) {
-			if (!PageSwapCache(page)) {
+		if (folio_test_anon(folio) && folio_test_swapbacked(folio)) {
+			if (!folio_test_swapcache(folio)) {
				if (!(sc->gfp_mask & __GFP_IO))
					goto keep_locked;
				if (folio_maybe_dma_pinned(folio))
					goto keep_locked;
-				if (PageTransHuge(page)) {
-					/* cannot split THP, skip it */
+				if (folio_test_large(folio)) {
+					/* cannot split folio, skip it */
					if (!can_split_folio(folio, NULL))
						goto activate_locked;
					/*
-					 * Split pages without a PMD map right
+					 * Split folios without a PMD map right
					 * away. Chances are some or all of the
					 * tail pages can be freed without IO.
					 */
@@ -1725,20 +1724,19 @@ static unsigned int shrink_page_list(struct list_head *page_list,
				may_enter_fs = true;
			}
-		} else if (PageSwapBacked(page) && PageTransHuge(page)) {
-			/* Split shmem THP */
+		} else if (folio_test_swapbacked(folio) &&
+			   folio_test_large(folio)) {
+			/* Split shmem folio */
			if (split_folio_to_list(folio, page_list))
				goto keep_locked;
		}
		/*
-		 * THP may get split above, need minus tail pages and update
-		 * nr_pages to avoid accounting tail pages twice.
-		 *
-		 * The tail pages that are added into swap cache successfully
-		 * reach here.
+		 * If the folio was split above, the tail pages will make
+		 * their own pass through this function and be accounted
+		 * then.
		 */
-		if ((nr_pages > 1) && !PageTransHuge(page)) {
+		if ((nr_pages > 1) && !folio_test_large(folio)) {
			sc->nr_scanned -= (nr_pages - 1);
			nr_pages = 1;
		}
@@ -1898,11 +1896,11 @@ static unsigned int shrink_page_list(struct list_head *page_list,
					 sc->target_mem_cgroup))
			goto keep_locked;
-		unlock_page(page);
+		folio_unlock(folio);
 free_it:
		/*
-		 * THP may get swapped out in a whole, need account
-		 * all base pages.
+		 * Folio may get swapped out as a whole, need to account
+		 * all pages in it.
		 */
		nr_reclaimed += nr_pages;
		/*
		 * Is there need to periodically free_page_list? It would
		 * appear not as the counts should be low
		 */
-		if (unlikely(PageTransHuge(page)))
-			destroy_compound_page(page);
+		if (unlikely(folio_test_large(folio)))
+			destroy_compound_page(&folio->page);
		else
-			list_add(&page->lru, &free_pages);
+			list_add(&folio->lru, &free_pages);
		continue;
 activate_locked_split:
@@ -1939,18 +1937,19 @@ static unsigned int shrink_page_list(struct list_head *page_list,
			count_memcg_folio_events(folio, PGACTIVATE, nr_pages);
		}
 keep_locked:
-		unlock_page(page);
+		folio_unlock(folio);
 keep:
-		list_add(&page->lru, &ret_pages);
-		VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
+		list_add(&folio->lru, &ret_pages);
+		VM_BUG_ON_FOLIO(folio_test_lru(folio) ||
+				folio_test_unevictable(folio), folio);
	}
	/* 'page_list' is always empty here */

-	/* Migrate pages selected for demotion */
+	/* Migrate folios selected for demotion */
	nr_reclaimed += demote_page_list(&demote_pages, pgdat);
-	/* Pages that could not be demoted are still in @demote_pages */
+	/* Folios that could not be demoted are still in @demote_pages */
	if (!list_empty(&demote_pages)) {
-		/* Pages which failed to demoted go back on @page_list for retry: */
+		/* Folios which weren't demoted go back on @page_list for retry: */
		list_splice_init(&demote_pages, page_list);
		do_demote_pass = false;
		goto retry;
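
The converted loop, reduced to an illustrative skeleton (not the
full function; the reclaim decisions between lock and unlock are
elided):

    while (!list_empty(page_list)) {
            struct folio *folio = lru_to_folio(page_list);

            list_del(&folio->lru);
            if (!folio_trylock(folio))
                    goto keep;
            nr_pages = folio_nr_pages(folio);  /* was compound_nr(page) */
            sc->nr_scanned += nr_pages;
            /* ... all tests via folio_test_*(), no compound_head() ... */
    keep:
            list_add(&folio->lru, &ret_pages);
    }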
From patchwork Fri Apr 29 19:23:22 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 14/21] mm/shmem: Use a folio in shmem_unused_huge_shrink
Date: Fri, 29 Apr 2022 20:23:22 +0100
Message-Id: <20220429192329.3034378-15-willy@infradead.org>

When calling split_huge_page(), we usually have to find the precise
page, but that's not necessary here because we only need to unlock
and put the folio afterwards. Saves 231 bytes of text (20% of this
function).

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/shmem.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 85c23696efc6..3461bdec6b38 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -553,7 +553,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
	LIST_HEAD(to_remove);
	struct inode *inode;
	struct shmem_inode_info *info;
-	struct page *page;
+	struct folio *folio;
	unsigned long batch = sc ? sc->nr_to_scan : 128;
	int split = 0;
@@ -597,6 +597,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
	list_for_each_safe(pos, next, &list) {
		int ret;
+		pgoff_t index;

		info = list_entry(pos, struct shmem_inode_info, shrinklist);
		inode = &info->vfs_inode;

		if (nr_to_split && split >= nr_to_split)
			goto move_back;

-		page = find_get_page(inode->i_mapping,
-				(inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT);
-		if (!page)
+		index = (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT;
+		folio = filemap_get_folio(inode->i_mapping, index);
+		if (!folio)
			goto drop;

		/* No huge page at the end of the file: nothing to split */
-		if (!PageTransHuge(page)) {
-			put_page(page);
+		if (!folio_test_large(folio)) {
+			folio_put(folio);
			goto drop;
		}
@@ -622,14 +623,14 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
		 * Waiting for the lock may lead to deadlock in the
		 * reclaim path.
		 */
-		if (!trylock_page(page)) {
-			put_page(page);
+		if (!folio_trylock(folio)) {
+			folio_put(folio);
			goto move_back;
		}

-		ret = split_huge_page(page);
-		unlock_page(page);
-		put_page(page);
+		ret = split_huge_page(&folio->page);
+		folio_unlock(folio);
+		folio_put(folio);

		/* If split failed move the inode on the list back to shrinklist */
		if (ret)
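
A minimal usage sketch of the lookup pattern adopted here
(illustrative only): filemap_get_folio() returns the folio with a
reference held, or NULL, so every exit path pairs it with
folio_put():

    struct folio *folio = filemap_get_folio(mapping, index);

    if (folio) {
            if (folio_test_large(folio) && folio_trylock(folio)) {
                    split_huge_page(&folio->page);  /* needs the lock */
                    folio_unlock(folio);
            }
            folio_put(folio);       /* drop the lookup reference */
    }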
From patchwork Fri Apr 29 19:23:23 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 15/21] mm/swap: Add folio_throttle_swaprate
Date: Fri, 29 Apr 2022 20:23:23 +0100
Message-Id: <20220429192329.3034378-16-willy@infradead.org>

The only use of the page argument to cgroup_throttle_swaprate() is to
get the node ID, and this will be the same for all pages in the folio,
so just pass in the first page of the folio.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/swap.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index f87bb495e482..96f7129f6ee2 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -736,6 +736,10 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
 }
 #endif
+static inline void folio_throttle_swaprate(struct folio *folio, gfp_t gfp)
+{
+	cgroup_throttle_swaprate(&folio->page, gfp);
+}

 #ifdef CONFIG_MEMCG_SWAP
 void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry);
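
Usage is a drop-in replacement for the page-based call; the next
patch uses it like this (condensed sketch):

    error = mem_cgroup_charge(folio, charge_mm, gfp);
    if (error)
            goto error;
    /* was: cgroup_throttle_swaprate(page, gfp); */
    folio_throttle_swaprate(folio, gfp);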
From patchwork Fri Apr 29 19:23:24 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 16/21] mm/shmem: Convert shmem_add_to_page_cache to take a folio
Date: Fri, 29 Apr 2022 20:23:24 +0100
Message-Id: <20220429192329.3034378-17-willy@infradead.org>

Shrinks shmem_add_to_page_cache() by 16 bytes. All the callers grow,
but this is temporary as they will all be converted to folios soon.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/shmem.c | 57 +++++++++++++++++++++++++++++-------------------------
 1 file changed, 31 insertions(+), 26 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 3461bdec6b38..4331a4daac01 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -695,36 +695,35 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 /*
  * Like add_to_page_cache_locked, but error if expected item has gone.
  */
-static int shmem_add_to_page_cache(struct page *page,
+static int shmem_add_to_page_cache(struct folio *folio,
				   struct address_space *mapping,
				   pgoff_t index, void *expected, gfp_t gfp,
				   struct mm_struct *charge_mm)
 {
-	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
-	unsigned long nr = compound_nr(page);
+	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
+	long nr = folio_nr_pages(folio);
	int error;

-	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
-	VM_BUG_ON(expected && PageTransHuge(page));
+	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio);
+	VM_BUG_ON(expected && folio_test_large(folio));

-	page_ref_add(page, nr);
-	page->mapping = mapping;
-	page->index = index;
+	folio_ref_add(folio, nr);
+	folio->mapping = mapping;
+	folio->index = index;

-	if (!PageSwapCache(page)) {
-		error = mem_cgroup_charge(page_folio(page), charge_mm, gfp);
+	if (!folio_test_swapcache(folio)) {
+		error = mem_cgroup_charge(folio, charge_mm, gfp);
		if (error) {
-			if (PageTransHuge(page)) {
+			if (folio_test_large(folio)) {
				count_vm_event(THP_FILE_FALLBACK);
				count_vm_event(THP_FILE_FALLBACK_CHARGE);
			}
			goto error;
		}
	}
-	cgroup_throttle_swaprate(page, gfp);
+	folio_throttle_swaprate(folio, gfp);

	do {
		xas_lock_irq(&xas);
@@ -736,16 +735,16 @@ static int shmem_add_to_page_cache(struct page *page,
			xas_set_err(&xas, -EEXIST);
			goto unlock;
		}
-		xas_store(&xas, page);
+		xas_store(&xas, folio);
		if (xas_error(&xas))
			goto unlock;
-		if (PageTransHuge(page)) {
+		if (folio_test_large(folio)) {
			count_vm_event(THP_FILE_ALLOC);
-			__mod_lruvec_page_state(page, NR_SHMEM_THPS, nr);
+			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
		}
		mapping->nrpages += nr;
-		__mod_lruvec_page_state(page, NR_FILE_PAGES, nr);
-		__mod_lruvec_page_state(page, NR_SHMEM, nr);
+		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+		__lruvec_stat_mod_folio(folio, NR_SHMEM, nr);
 unlock:
		xas_unlock_irq(&xas);
	} while (xas_nomem(&xas, gfp));
@@ -757,8 +756,8 @@ static int shmem_add_to_page_cache(struct page *page,
	return 0;
 error:
-	page->mapping = NULL;
-	page_ref_sub(page, nr);
+	folio->mapping = NULL;
+	folio_ref_sub(folio, nr);
	return error;
 }
@@ -1690,7 +1689,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
	struct address_space *mapping = inode->i_mapping;
	struct shmem_inode_info *info = SHMEM_I(inode);
	struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
-	struct page *page;
+	struct page *page = NULL;
+	struct folio *folio;
	swp_entry_t swap;
	int error;
@@ -1740,7 +1740,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
		goto failed;
	}

-	error = shmem_add_to_page_cache(page, mapping, index,
+	folio = page_folio(page);
+	error = shmem_add_to_page_cache(folio, mapping, index,
					swp_to_radix_entry(swap), gfp,
					charge_mm);
	if (error)
@@ -1791,6 +1792,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
	struct shmem_inode_info *info = SHMEM_I(inode);
	struct shmem_sb_info *sbinfo;
	struct mm_struct *charge_mm;
+	struct folio *folio;
	struct page *page;
	pgoff_t hindex = index;
	gfp_t huge_gfp;
@@ -1905,7 +1907,8 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
	if (sgp == SGP_WRITE)
		__SetPageReferenced(page);

-	error = shmem_add_to_page_cache(page, mapping, hindex,
+	folio = page_folio(page);
+	error = shmem_add_to_page_cache(folio, mapping, hindex,
					NULL, gfp & GFP_RECLAIM_MASK,
					charge_mm);
	if (error)
@@ -2327,6 +2330,7 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
	gfp_t gfp = mapping_gfp_mask(mapping);
	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
	void *page_kaddr;
+	struct folio *folio;
	struct page *page;
	int ret;
	pgoff_t max_off;
@@ -2385,7 +2389,8 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
	if (unlikely(pgoff >= max_off))
		goto out_release;

-	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
+	folio = page_folio(page);
+	ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL,
				      gfp & GFP_RECLAIM_MASK, dst_mm);
	if (ret)
		goto out_release;
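
The interim calling convention, sketched (illustrative): callers
still work with a page and convert at the call site until they are
themselves converted:

    struct folio *folio = page_folio(page);  /* same memory, typed view */

    error = shmem_add_to_page_cache(folio, mapping, index,
                                    swp_to_radix_entry(swap), gfp,
                                    charge_mm);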
From patchwork Fri Apr 29 19:23:25 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 17/21] mm/shmem: Turn shmem_should_replace_page into shmem_should_replace_folio
Date: Fri, 29 Apr 2022 20:23:25 +0100
Message-Id: <20220429192329.3034378-18-willy@infradead.org>

This is a straightforward conversion.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/shmem.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4331a4daac01..4b8d0972bf72 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1600,9 +1600,9 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
  * NUMA mempolicy, and applied also to anonymous pages in do_swap_page();
  * but for now it is a simple matter of zone.
  */
-static bool shmem_should_replace_page(struct page *page, gfp_t gfp)
+static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
 {
-	return page_zonenum(page) > gfp_zone(gfp);
+	return folio_zonenum(folio) > gfp_zone(gfp);
 }

 static int shmem_replace_page(struct page **pagep, gfp_t gfp,
@@ -1734,13 +1734,13 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
	 */
	arch_swap_restore(swap, page);

-	if (shmem_should_replace_page(page, gfp)) {
+	folio = page_folio(page);
+	if (shmem_should_replace_folio(folio, gfp)) {
		error = shmem_replace_page(&page, gfp, info, index);
		if (error)
			goto failed;
	}

-	folio = page_folio(page);
	error = shmem_add_to_page_cache(folio, mapping, index,
					swp_to_radix_entry(swap), gfp,
					charge_mm);
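
What the renamed helper decides, as a hedged sketch: whether the
folio sits in a higher zone than the mapping's gfp mask permits, in
which case it is copied into a suitable folio before being added to
the page cache:

    folio = page_folio(page);
    if (shmem_should_replace_folio(folio, gfp)) {
            /* e.g. swapin placed it somewhere this gfp cannot use */
            error = shmem_replace_page(&page, gfp, info, index);
            if (error)
                    goto failed;
    }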
From patchwork Fri Apr 29 19:23:26 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 18/21] mm/shmem: Turn shmem_alloc_page() into shmem_alloc_folio()
Date: Fri, 29 Apr 2022 20:23:26 +0100
Message-Id: <20220429192329.3034378-19-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>
References: <20220429192329.3034378-1-willy@infradead.org>

Call vma_alloc_folio() directly instead of alloc_page_vma().  It's a
bit messy in the callers, but they're about to be cleaned up when they
get converted to folios.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4b8d0972bf72..afee80747647 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1543,17 +1543,17 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 	return &folio->page;
 }
 
-static struct page *shmem_alloc_page(gfp_t gfp,
+static struct folio *shmem_alloc_folio(gfp_t gfp,
 			struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
-	struct page *page;
+	struct folio *folio;
 
 	shmem_pseudo_vma_init(&pvma, info, index);
-	page = alloc_page_vma(gfp, &pvma, 0);
+	folio = vma_alloc_folio(gfp, 0, &pvma, 0, false);
 	shmem_pseudo_vma_destroy(&pvma);
 
-	return page;
+	return folio;
 }
 
 static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
@@ -1575,7 +1575,7 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
 	if (huge)
 		page = shmem_alloc_hugepage(gfp, info, index);
 	else
-		page = shmem_alloc_page(gfp, info, index);
+		page = &shmem_alloc_folio(gfp, info, index)->page;
 	if (page) {
 		__SetPageLocked(page);
 		__SetPageSwapBacked(page);
@@ -1625,7 +1625,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 	 * limit chance of success by further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	newpage = shmem_alloc_page(gfp, info, index);
+	newpage = &shmem_alloc_folio(gfp, info, index)->page;
 	if (!newpage)
 		return -ENOMEM;
 
@@ -2350,7 +2350,7 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 
 	if (!*pagep) {
 		ret = -ENOMEM;
-		page = shmem_alloc_page(gfp, info, pgoff);
+		page = &shmem_alloc_folio(gfp, info, pgoff)->page;
 		if (!page)
 			goto out_unacct_blocks;
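
A side note on why the interim &shmem_alloc_folio(...)->page callers stay correct
even when the allocation fails: struct page is the first member of struct folio,
so taking the address of the embedded page does not change the pointer value, and
a NULL folio yields a NULL page for the callers' existing if (!page) checks. A
standalone sketch of that layout property, with invented stand-in types rather
than the kernel's real definitions:

	#include <stdio.h>
	#include <stddef.h>

	/*
	 * Invented stand-ins mirroring the layout guarantee the patch
	 * relies on: the first member of struct folio is a struct page.
	 */
	struct page { unsigned long flags; };
	struct folio { struct page page; };

	int main(void)
	{
		struct folio f;

		/* &f.page and &f alias the same storage ... */
		printf("%d\n", (void *)&f.page == (void *)&f);	/* 1 */
		/* ... because the embedded page sits at offset 0 */
		printf("%zu\n", offsetof(struct folio, page));	/* 0 */
		return 0;
	}
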
From patchwork Fri Apr 29 19:23:27 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 19/21] mm/shmem: Convert shmem_alloc_and_acct_page to use a folio
Date: Fri, 29 Apr 2022 20:23:27 +0100
Message-Id: <20220429192329.3034378-20-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>
References: <20220429192329.3034378-1-willy@infradead.org>

Convert shmem_alloc_hugepage() to return the folio that it uses and use
a folio throughout shmem_alloc_and_acct_page().  Continue to return a
page from shmem_alloc_and_acct_page() for now.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index afee80747647..e65daf511a9b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1522,7 +1522,7 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
 	return result;
 }
 
-static struct page *shmem_alloc_hugepage(gfp_t gfp,
+static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
 		struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
@@ -1540,7 +1540,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 	shmem_pseudo_vma_destroy(&pvma);
 	if (!folio)
 		count_vm_event(THP_FILE_FALLBACK);
-	return &folio->page;
+	return folio;
 }
 
 static struct folio *shmem_alloc_folio(gfp_t gfp,
@@ -1561,7 +1561,7 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
 		pgoff_t index, bool huge)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct page *page;
+	struct folio *folio;
 	int nr;
 	int err = -ENOSPC;
 
@@ -1573,13 +1573,13 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
 		goto failed;
 
 	if (huge)
-		page = shmem_alloc_hugepage(gfp, info, index);
+		folio = shmem_alloc_hugefolio(gfp, info, index);
 	else
-		page = &shmem_alloc_folio(gfp, info, index)->page;
-	if (page) {
-		__SetPageLocked(page);
-		__SetPageSwapBacked(page);
-		return page;
+		folio = shmem_alloc_folio(gfp, info, index);
+	if (folio) {
+		__folio_set_locked(folio);
+		__folio_set_swapbacked(folio);
+		return &folio->page;
 	}
 
 	err = -ENOMEM;
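
Worth noting in the hunk above: __folio_set_locked() and __folio_set_swapbacked()
are the non-atomic flag setters, safe here only because the folio has just been
allocated and is not yet reachable by any other CPU through the page cache or the
LRU. A hedged sketch of that allocate-then-mark pattern in isolation (the function
name is invented; vma_alloc_folio() and the two folio flag calls are the real APIs):

	/* Sketch, not kernel code. */
	static struct folio *my_alloc_shmem_folio(gfp_t gfp,
			struct vm_area_struct *vma, unsigned long addr,
			int order)
	{
		struct folio *folio = vma_alloc_folio(gfp, order, vma,
						      addr, false);

		if (folio) {
			/* Non-atomic: nobody else can see this folio yet. */
			__folio_set_locked(folio);
			__folio_set_swapbacked(folio);
		}
		return folio;
	}
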
From patchwork Fri Apr 29 19:23:28 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 20/21] mm/shmem: Convert shmem_getpage_gfp to use a folio
Date: Fri, 29 Apr 2022 20:23:28 +0100
Message-Id: <20220429192329.3034378-21-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>
References: <20220429192329.3034378-1-willy@infradead.org>

Rename shmem_alloc_and_acct_page() to shmem_alloc_and_acct_folio() and
have it return a folio, then use a folio throughout shmem_getpage_gfp().
It continues to return a struct page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 92 +++++++++++++++++++++++++-----------------
 1 file changed, 43 insertions(+), 49 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e65daf511a9b..7457f352cf9f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1556,8 +1556,7 @@ static struct folio *shmem_alloc_folio(gfp_t gfp,
 	return folio;
 }
 
-static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
-		struct inode *inode,
+static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode,
 		pgoff_t index, bool huge)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
@@ -1579,7 +1578,7 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
 	if (folio) {
 		__folio_set_locked(folio);
 		__folio_set_swapbacked(folio);
-		return &folio->page;
+		return folio;
 	}
 
 	err = -ENOMEM;
@@ -1793,7 +1792,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	struct shmem_sb_info *sbinfo;
 	struct mm_struct *charge_mm;
 	struct folio *folio;
-	struct page *page;
 	pgoff_t hindex = index;
 	gfp_t huge_gfp;
 	int error;
@@ -1811,19 +1809,18 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	sbinfo = SHMEM_SB(inode->i_sb);
 	charge_mm = vma ? vma->vm_mm : NULL;
 
-	page = pagecache_get_page(mapping, index,
-					FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
-
-	if (page && vma && userfaultfd_minor(vma)) {
-		if (!xa_is_value(page)) {
-			unlock_page(page);
-			put_page(page);
+	folio = __filemap_get_folio(mapping, index, FGP_ENTRY | FGP_LOCK, 0);
+	if (folio && vma && userfaultfd_minor(vma)) {
+		if (!xa_is_value(folio)) {
+			folio_unlock(folio);
+			folio_put(folio);
 		}
 		*fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
 		return 0;
 	}
 
-	if (xa_is_value(page)) {
+	if (xa_is_value(folio)) {
+		struct page *page = &folio->page;
 		error = shmem_swapin_page(inode, index, &page,
 					  sgp, gfp, vma, fault_type);
 		if (error == -EEXIST)
@@ -1833,17 +1830,17 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		return error;
 	}
 
-	if (page) {
-		hindex = page->index;
+	if (folio) {
+		hindex = folio->index;
 		if (sgp == SGP_WRITE)
-			mark_page_accessed(page);
-		if (PageUptodate(page))
+			folio_mark_accessed(folio);
+		if (folio_test_uptodate(folio))
 			goto out;
 		/* fallocated page */
 		if (sgp != SGP_READ)
 			goto clear;
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 
 	/*
@@ -1870,17 +1867,16 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	huge_gfp = vma_thp_gfp_mask(vma);
 	huge_gfp = limit_gfp_mask(huge_gfp, gfp);
 
-	page = shmem_alloc_and_acct_page(huge_gfp, inode, index, true);
-	if (IS_ERR(page)) {
+	folio = shmem_alloc_and_acct_folio(huge_gfp, inode, index, true);
+	if (IS_ERR(folio)) {
 alloc_nohuge:
-		page = shmem_alloc_and_acct_page(gfp, inode,
-						 index, false);
+		folio = shmem_alloc_and_acct_folio(gfp, inode, index, false);
 	}
-	if (IS_ERR(page)) {
+	if (IS_ERR(folio)) {
 		int retry = 5;
 
-		error = PTR_ERR(page);
-		page = NULL;
+		error = PTR_ERR(folio);
+		folio = NULL;
 		if (error != -ENOSPC)
 			goto unlock;
 		/*
@@ -1899,30 +1895,29 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		goto unlock;
 	}
 
-	if (PageTransHuge(page))
+	if (folio_test_large(folio))
 		hindex = round_down(index, HPAGE_PMD_NR);
 	else
 		hindex = index;
 
 	if (sgp == SGP_WRITE)
-		__SetPageReferenced(page);
+		__folio_set_referenced(folio);
 
-	folio = page_folio(page);
 	error = shmem_add_to_page_cache(folio, mapping, hindex,
 					NULL, gfp & GFP_RECLAIM_MASK,
 					charge_mm);
 	if (error)
 		goto unacct;
-	lru_cache_add(page);
+	folio_add_lru(folio);
 
 	spin_lock_irq(&info->lock);
-	info->alloced += compound_nr(page);
-	inode->i_blocks += BLOCKS_PER_PAGE << compound_order(page);
+	info->alloced += folio_nr_pages(folio);
+	inode->i_blocks += BLOCKS_PER_PAGE << folio_order(folio);
 	shmem_recalc_inode(inode);
 	spin_unlock_irq(&info->lock);
 	alloced = true;
 
-	if (PageTransHuge(page) &&
+	if (folio_test_large(folio) &&
 	    DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) <
 			hindex + HPAGE_PMD_NR - 1) {
 		/*
@@ -1953,22 +1948,21 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	 * but SGP_FALLOC on a page fallocated earlier must initialize
 	 * it now, lest undo on failure cancel our earlier guarantee.
 	 */
-	if (sgp != SGP_WRITE && !PageUptodate(page)) {
-		int i;
+	if (sgp != SGP_WRITE && !folio_test_uptodate(folio)) {
+		long i, n = folio_nr_pages(folio);
 
-		for (i = 0; i < compound_nr(page); i++) {
-			clear_highpage(page + i);
-			flush_dcache_page(page + i);
-		}
-		SetPageUptodate(page);
+		for (i = 0; i < n; i++)
+			clear_highpage(folio_page(folio, i));
+		flush_dcache_folio(folio);
+		folio_mark_uptodate(folio);
 	}
 
 	/* Perhaps the file has been truncated since we checked */
 	if (sgp <= SGP_CACHE &&
 	    ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode)) {
 		if (alloced) {
-			ClearPageDirty(page);
-			delete_from_page_cache(page);
+			folio_clear_dirty(folio);
+			filemap_remove_folio(folio);
 			spin_lock_irq(&info->lock);
 			shmem_recalc_inode(inode);
 			spin_unlock_irq(&info->lock);
@@ -1977,24 +1971,24 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		goto unlock;
 	}
 out:
-	*pagep = page + index - hindex;
+	*pagep = folio_page(folio, index - hindex);
 	return 0;
 
 	/*
 	 * Error recovery.
 	 */
unacct:
-	shmem_inode_unacct_blocks(inode, compound_nr(page));
+	shmem_inode_unacct_blocks(inode, folio_nr_pages(folio));
 
-	if (PageTransHuge(page)) {
-		unlock_page(page);
-		put_page(page);
+	if (folio_test_large(folio)) {
+		folio_unlock(folio);
+		folio_put(folio);
 		goto alloc_nohuge;
 	}
unlock:
-	if (page) {
-		unlock_page(page);
-		put_page(page);
+	if (folio) {
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 	if (error == -ENOSPC && !once++) {
 		spin_lock_irq(&info->lock);
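
The clear loop above is the folio idiom for zeroing a possibly-huge allocation:
walk the subpages with folio_page(), then issue a single flush_dcache_folio()
instead of one flush per page. A hedged sketch of the same pattern in isolation
(the helper name is invented; the folio calls are the real APIs the patch uses):

	/* Sketch: zero every subpage of a folio, then mark it uptodate. */
	static void zero_folio(struct folio *folio)
	{
		long i, n = folio_nr_pages(folio);

		for (i = 0; i < n; i++)
			clear_highpage(folio_page(folio, i));
		flush_dcache_folio(folio);	/* one flush for the lot */
		folio_mark_uptodate(folio);
	}
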
From patchwork Fri Apr 29 19:23:29 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 21/21] mm/shmem: Convert shmem_swapin_page() to shmem_swapin_folio()
Date: Fri, 29 Apr 2022 20:23:29 +0100
Message-Id: <20220429192329.3034378-22-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>
References: <20220429192329.3034378-1-willy@infradead.org>

shmem_swapin_page() only brings in order-0 pages, which are folios
by definition.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/arm64/include/asm/pgtable.h |   6 +-
 include/linux/pgtable.h          |   2 +-
 mm/shmem.c                       | 108 ++++++++++++++-----------
 3 files changed, 54 insertions(+), 62 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index dff2b483ea50..27cb6a355fb0 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -964,10 +964,10 @@ static inline void arch_swap_invalidate_area(int type)
 }
 
 #define __HAVE_ARCH_SWAP_RESTORE
-static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
+static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
-	if (system_supports_mte() && mte_restore_tags(entry, page))
-		set_bit(PG_mte_tagged, &page->flags);
+	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
+		set_bit(PG_mte_tagged, &folio->flags);
 }
 
 #endif /* CONFIG_ARM64_MTE */
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index f4f4077b97aa..a1c44b015463 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -738,7 +738,7 @@ static inline void arch_swap_invalidate_area(int type)
 #endif
 
 #ifndef __HAVE_ARCH_SWAP_RESTORE
-static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
+static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
 }
 #endif
diff --git a/mm/shmem.c b/mm/shmem.c
index 7457f352cf9f..673a0e783496 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -134,8 +134,8 @@ static unsigned long shmem_default_max_inodes(void)
 }
 #endif
 
-static int shmem_swapin_page(struct inode *inode, pgoff_t index,
-			     struct page **pagep, enum sgp_type sgp,
+static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+			      struct folio **foliop, enum sgp_type sgp,
 			     gfp_t gfp, struct vm_area_struct *vma,
 			     vm_fault_t *fault_type);
 static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
@@ -1158,69 +1158,64 @@ static void shmem_evict_inode(struct inode *inode)
 }
 
 static int shmem_find_swap_entries(struct address_space *mapping,
-				   pgoff_t start, unsigned int nr_entries,
-				   struct page **entries, pgoff_t *indices,
-				   unsigned int type)
+				   pgoff_t start, struct folio_batch *fbatch,
+				   pgoff_t *indices, unsigned int type)
 {
 	XA_STATE(xas, &mapping->i_pages, start);
-	struct page *page;
+	struct folio *folio;
 	swp_entry_t entry;
 	unsigned int ret = 0;
 
-	if (!nr_entries)
-		return 0;
-
 	rcu_read_lock();
-	xas_for_each(&xas, page, ULONG_MAX) {
-		if (xas_retry(&xas, page))
+	xas_for_each(&xas, folio, ULONG_MAX) {
+		if (xas_retry(&xas, folio))
 			continue;
 
-		if (!xa_is_value(page))
+		if (!xa_is_value(folio))
 			continue;
 
-		entry = radix_to_swp_entry(page);
+		entry = radix_to_swp_entry(folio);
 		if (swp_type(entry) != type)
 			continue;
 
 		indices[ret] = xas.xa_index;
-		entries[ret] = page;
+		if (!folio_batch_add(fbatch, folio))
+			break;
 
 		if (need_resched()) {
 			xas_pause(&xas);
 			cond_resched_rcu();
 		}
-		if (++ret == nr_entries)
-			break;
 	}
 	rcu_read_unlock();
 
-	return ret;
+	return xas.xa_index;
 }
 
 /*
  * Move the swapped pages for an inode to page cache. Returns the count
  * of pages swapped in, or the error in case of failure.
  */
-static int shmem_unuse_swap_entries(struct inode *inode, struct pagevec pvec,
-				    pgoff_t *indices)
+static int shmem_unuse_swap_entries(struct inode *inode,
+		struct folio_batch *fbatch, pgoff_t *indices)
 {
 	int i = 0;
 	int ret = 0;
 	int error = 0;
 	struct address_space *mapping = inode->i_mapping;
 
-	for (i = 0; i < pvec.nr; i++) {
-		struct page *page = pvec.pages[i];
+	for (i = 0; i < folio_batch_count(fbatch); i++) {
+		struct folio *folio = fbatch->folios[i];
 
-		if (!xa_is_value(page))
+		if (!xa_is_value(folio))
 			continue;
-		error = shmem_swapin_page(inode, indices[i],
-					  &page, SGP_CACHE,
+		error = shmem_swapin_folio(inode, indices[i],
+					  &folio, SGP_CACHE,
 					  mapping_gfp_mask(mapping),
 					  NULL, NULL);
 		if (error == 0) {
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			ret++;
 		}
 		if (error == -ENOMEM)
@@ -1237,26 +1232,23 @@ static int shmem_unuse_inode(struct inode *inode, unsigned int type)
 {
 	struct address_space *mapping = inode->i_mapping;
 	pgoff_t start = 0;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	pgoff_t indices[PAGEVEC_SIZE];
 	int ret = 0;
 
-	pagevec_init(&pvec);
 	do {
-		unsigned int nr_entries = PAGEVEC_SIZE;
-
-		pvec.nr = shmem_find_swap_entries(mapping, start, nr_entries,
-						  pvec.pages, indices, type);
-		if (pvec.nr == 0) {
+		folio_batch_init(&fbatch);
+		shmem_find_swap_entries(mapping, start, &fbatch, indices, type);
+		if (folio_batch_count(&fbatch) == 0) {
 			ret = 0;
 			break;
 		}
 
-		ret = shmem_unuse_swap_entries(inode, pvec, indices);
+		ret = shmem_unuse_swap_entries(inode, &fbatch, indices);
 		if (ret < 0)
 			break;
 
-		start = indices[pvec.nr - 1];
+		start = indices[folio_batch_count(&fbatch) - 1];
 	} while (true);
 
 	return ret;
@@ -1680,22 +1672,22 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 * Returns 0 and the page in pagep if success. On failure, returns the
 * error code and NULL in *pagep.
 */
-static int shmem_swapin_page(struct inode *inode, pgoff_t index,
-			     struct page **pagep, enum sgp_type sgp,
+static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+			      struct folio **foliop, enum sgp_type sgp,
 			     gfp_t gfp, struct vm_area_struct *vma,
 			     vm_fault_t *fault_type)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
	struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
-	struct page *page = NULL;
+	struct page *page;
 	struct folio *folio;
 	swp_entry_t swap;
 	int error;
 
-	VM_BUG_ON(!*pagep || !xa_is_value(*pagep));
-	swap = radix_to_swp_entry(*pagep);
-	*pagep = NULL;
+	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
+	swap = radix_to_swp_entry(*foliop);
+	*foliop = NULL;
 
 	/* Look it up and read it in.. */
 	page = lookup_swap_cache(swap, NULL, 0);
@@ -1713,27 +1705,28 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 			goto failed;
 		}
 	}
+	folio = page_folio(page);
 
 	/* We have to do this with page locked to prevent races */
-	lock_page(page);
-	if (!PageSwapCache(page) || page_private(page) != swap.val ||
+	folio_lock(folio);
+	if (!folio_test_swapcache(folio) ||
+	    folio_swap_entry(folio).val != swap.val ||
 	    !shmem_confirm_swap(mapping, index, swap)) {
 		error = -EEXIST;
 		goto unlock;
 	}
-	if (!PageUptodate(page)) {
+	if (!folio_test_uptodate(folio)) {
 		error = -EIO;
 		goto failed;
 	}
-	wait_on_page_writeback(page);
+	folio_wait_writeback(folio);
 
 	/*
 	 * Some architectures may have to restore extra metadata to the
-	 * physical page after reading from swap.
+	 * folio after reading from swap.
 	 */
-	arch_swap_restore(swap, page);
+	arch_swap_restore(swap, folio);
 
-	folio = page_folio(page);
 	if (shmem_should_replace_folio(folio, gfp)) {
 		error = shmem_replace_page(&page, gfp, info, index);
 		if (error)
@@ -1752,21 +1745,21 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	spin_unlock_irq(&info->lock);
 
 	if (sgp == SGP_WRITE)
-		mark_page_accessed(page);
+		folio_mark_accessed(folio);
 
-	delete_from_swap_cache(page);
-	set_page_dirty(page);
+	delete_from_swap_cache(&folio->page);
+	folio_mark_dirty(folio);
 	swap_free(swap);
 
-	*pagep = page;
+	*foliop = folio;
 	return 0;
 failed:
 	if (!shmem_confirm_swap(mapping, index, swap))
 		error = -EEXIST;
 unlock:
-	if (page) {
-		unlock_page(page);
-		put_page(page);
+	if (folio) {
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 
 	return error;
@@ -1820,13 +1813,12 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	}
 
 	if (xa_is_value(folio)) {
-		struct page *page = &folio->page;
-		error = shmem_swapin_page(inode, index, &page,
+		error = shmem_swapin_folio(inode, index, &folio,
 					  sgp, gfp, vma, fault_type);
 		if (error == -EEXIST)
 			goto repeat;
 
-		*pagep = page;
+		*pagep = &folio->page;
 		return error;
 	}
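
The pagevec-to-folio_batch conversion in this patch also changes the loop
protocol: folio_batch_add() returns the space left in the batch, so the XArray
walk simply stops when the batch fills, and the caller restarts from the last
index it recorded. A hedged sketch of the consumer side using the real
folio_batch interface from <linux/pagevec.h> (the function name and the
folio_mark_accessed() stand-in for "real work" are invented):

	#include <linux/pagevec.h>

	/* Sketch, not kernel code: process a gathered batch of folios. */
	static void drain_batch(struct folio_batch *fbatch)
	{
		unsigned int i;

		for (i = 0; i < folio_batch_count(fbatch); i++) {
			struct folio *folio = fbatch->folios[i];

			folio_mark_accessed(folio);	/* stand-in work */
		}
		/* Drop the gatherer's references and zero the count. */
		folio_batch_release(fbatch);
	}

Note that in shmem_unuse_inode() above the batch holds swap entries (XArray
values) rather than folio pointers, which is why the real code re-initialises
the batch with folio_batch_init() on each pass instead of releasing references.
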