From patchwork Wed May 4 18:28:32 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 01/26] shmem: Convert shmem_alloc_hugepage() to use vma_alloc_folio()
Date: Wed, 4 May 2022 19:28:32 +0100
Message-Id: <20220504182857.4013401-2-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

For now, return the head page of the folio, but remove use of the old
alloc_pages_vma() API.
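[A minimal sketch, not part of the patch, of the calling convention this
series moves to. The names mirror the diff below, but the snippet is
simplified and is not the real shmem implementation.]

	/* Before: allocate a compound page, then prep it as a THP by hand. */
	struct page *page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
	if (page)
		prep_transhuge_page(page);

	/*
	 * After: vma_alloc_folio() returns a folio that is already set up as a
	 * transparent huge page; callers still working in struct page terms
	 * use the head page of the folio for now.
	 */
	struct folio *folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
	struct page *head = folio ? &folio->page : NULL;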
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 mm/shmem.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4b2fea33158e..c89394221a7e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1527,7 +1527,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 	struct vm_area_struct pvma;
 	struct address_space *mapping = info->vfs_inode.i_mapping;
 	pgoff_t hindex;
-	struct page *page;
+	struct folio *folio;
 
 	hindex = round_down(index, HPAGE_PMD_NR);
 	if (xa_find(&mapping->i_pages, &hindex, hindex + HPAGE_PMD_NR - 1,
@@ -1535,13 +1535,11 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 		return NULL;
 
 	shmem_pseudo_vma_init(&pvma, info, hindex);
-	page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
+	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
 	shmem_pseudo_vma_destroy(&pvma);
-	if (page)
-		prep_transhuge_page(page);
-	else
+	if (!folio)
 		count_vm_event(THP_FILE_FALLBACK);
-	return page;
+	return &folio->page;
 }
 
 static struct page *shmem_alloc_page(gfp_t gfp,
From patchwork Wed May 4 18:28:33 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 02/26] mm/huge_memory: Convert do_huge_pmd_anonymous_page() to use vma_alloc_folio()
Date: Wed, 4 May 2022 19:28:33 +0100
Message-Id: <20220504182857.4013401-3-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

Remove the use of this old API, eliminating a call to
prep_transhuge_page().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 mm/huge_memory.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c468fee595ff..caf0e7d27337 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -725,7 +725,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	gfp_t gfp;
-	struct page *page;
+	struct folio *folio;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 
 	if (!transhuge_vma_suitable(vma, haddr))
@@ -774,13 +774,12 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		return ret;
 	}
 	gfp = vma_thp_gfp_mask(vma);
-	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
-	if (unlikely(!page)) {
+	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
+	if (unlikely(!folio)) {
 		count_vm_event(THP_FAULT_FALLBACK);
 		return VM_FAULT_FALLBACK;
 	}
-	prep_transhuge_page(page);
-	return __do_huge_pmd_anonymous_page(vmf, page, gfp);
+	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
 }
 
 static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
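[A short sketch, not part of the patch: the old and new calls side by side.
Note that the argument order differs - alloc_hugepage_vma() took (gfp, vma,
addr, order) while vma_alloc_folio() takes (gfp, order, vma, addr, hugepage) -
and the THP prep now happens inside the allocator.]

	/* Old wrapper, removed later in this series: */
	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
	prep_transhuge_page(page);

	/* New call; the folio comes back ready to be used as a THP: */
	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);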
From patchwork Wed May 4 18:28:34 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", kernel test robot
Subject: [PATCH v2 03/26] alpha: Fix alloc_zeroed_user_highpage_movable()
Date: Wed, 4 May 2022 19:28:34 +0100
Message-Id: <20220504182857.4013401-4-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

Due to a typo, the final argument to alloc_page_vma() didn't refer to a
real variable.  This only affected CONFIG_NUMA, which was marked BROKEN
in 2006 and removed from alpha in 2021.  Found due to a refactoring
patch.
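[One step the commit message leaves implicit: the bad identifier could hide
because a macro body is only checked when it is expanded. A tiny standalone
illustration in plain C, not kernel code:]

	#include <stdio.h>

	static int do_alloc(int vma, int vaddr) { return vma + vaddr; }

	/* Deliberate typo: "vmaddr" is never declared anywhere. */
	#define alloc_zeroed(vma, vaddr)  do_alloc(vma, vmaddr)

	int main(void)
	{
		/*
		 * Nothing ever expands alloc_zeroed(), so the typo is never
		 * seen by the compiler - which is how the alpha version
		 * survived while CONFIG_NUMA was broken and then removed.
		 */
		printf("%d\n", do_alloc(1, 2));
		return 0;
	}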
Reported-by: kernel test robot
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 arch/alpha/include/asm/page.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
index 18f48a6f2ff6..8f3f5eecba28 100644
--- a/arch/alpha/include/asm/page.h
+++ b/arch/alpha/include/asm/page.h
@@ -18,7 +18,7 @@ extern void clear_page(void *page);
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 
 #define alloc_zeroed_user_highpage_movable(vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vmaddr)
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 extern void copy_page(void * _to, void * _from);
From patchwork Wed May 4 18:28:35 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 04/26] mm: Remove alloc_pages_vma()
Date: Wed, 4 May 2022 19:28:35 +0100
Message-Id: <20220504182857.4013401-5-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

All callers have now been converted to use vma_alloc_folio(), so convert
the body of alloc_pages_vma() to allocate folios instead.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/gfp.h | 18 +++++++---------
 mm/mempolicy.c      | 51 ++++++++++++++++++++++-----------------------
 2 files changed, 32 insertions(+), 37 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 3e3d36fc2109..2a08a3c4ba95 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -613,13 +613,8 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 #ifdef CONFIG_NUMA
 struct page *alloc_pages(gfp_t gfp, unsigned int order);
 struct folio *folio_alloc(gfp_t gfp, unsigned order);
-struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
-		struct vm_area_struct *vma, unsigned long addr,
-		bool hugepage);
 struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned long addr, bool hugepage);
-#define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
-	alloc_pages_vma(gfp_mask, order, vma, addr, true)
 #else
 static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
@@ -629,16 +624,17 @@ static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
 {
 	return __folio_alloc_node(gfp, order, numa_node_id());
 }
-#define alloc_pages_vma(gfp_mask, order, vma, addr, hugepage) \
-	alloc_pages(gfp_mask, order)
 #define vma_alloc_folio(gfp, order, vma, addr, hugepage) \
 	folio_alloc(gfp, order)
-#define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
-	alloc_pages(gfp_mask, order)
 #endif
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
-#define alloc_page_vma(gfp_mask, vma, addr)			\
-	alloc_pages_vma(gfp_mask, 0, vma, addr, false)
+static inline struct page *alloc_page_vma(gfp_t gfp,
+		struct vm_area_struct *vma, unsigned long addr)
+{
+	struct folio *folio = vma_alloc_folio(gfp, 0, vma, addr, false);
+
+	return &folio->page;
+}
 
 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8c74107a2b15..174efbee1cb5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2135,44 +2135,55 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
 }
 
 /**
- * alloc_pages_vma - Allocate a page for a VMA.
+ * vma_alloc_folio - Allocate a folio for a VMA.
 * @gfp: GFP flags.
- * @order: Order of the GFP allocation.
+ * @order: Order of the folio.
 * @vma: Pointer to VMA or NULL if not available.
 * @addr: Virtual address of the allocation.  Must be inside @vma.
 * @hugepage: For hugepages try only the preferred node if possible.
 *
- * Allocate a page for a specific address in @vma, using the appropriate
+ * Allocate a folio for a specific address in @vma, using the appropriate
 * NUMA policy.  When @vma is not NULL the caller must hold the mmap_lock
 * of the mm_struct of the VMA to prevent it from going away.  Should be
- * used for all allocations for pages that will be mapped into user space.
+ * used for all allocations for folios that will be mapped into user space.
 *
- * Return: The page on success or NULL if allocation fails.
+ * Return: The folio on success or NULL if allocation fails.
 */
-struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
+struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned long addr, bool hugepage)
 {
 	struct mempolicy *pol;
 	int node = numa_node_id();
-	struct page *page;
+	struct folio *folio;
 	int preferred_nid;
 	nodemask_t *nmask;
 
 	pol = get_vma_policy(vma, addr);
 
 	if (pol->mode == MPOL_INTERLEAVE) {
+		struct page *page;
 		unsigned nid;
 
 		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
 		mpol_cond_put(pol);
+		gfp |= __GFP_COMP;
 		page = alloc_page_interleave(gfp, order, nid);
+		if (page && order > 1)
+			prep_transhuge_page(page);
+		folio = (struct folio *)page;
 		goto out;
 	}
 
 	if (pol->mode == MPOL_PREFERRED_MANY) {
+		struct page *page;
+
 		node = policy_node(gfp, pol, node);
+		gfp |= __GFP_COMP;
 		page = alloc_pages_preferred_many(gfp, order, node, pol);
 		mpol_cond_put(pol);
+		if (page && order > 1)
+			prep_transhuge_page(page);
+		folio = (struct folio *)page;
 		goto out;
 	}
 
@@ -2199,8 +2210,8 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 			 * First, try to allocate THP only on local node, but
 			 * don't reclaim unnecessarily, just compact.
 			 */
-			page = __alloc_pages_node(hpage_node,
-				gfp | __GFP_THISNODE | __GFP_NORETRY, order);
+			folio = __folio_alloc_node(gfp | __GFP_THISNODE |
+					__GFP_NORETRY, order, hpage_node);
 
 			/*
 			 * If hugepage allocations are configured to always
@@ -2208,8 +2219,9 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 			 * to prefer hugepage backing, retry allowing remote
 			 * memory with both reclaim and compact as well.
 			 */
-			if (!page && (gfp & __GFP_DIRECT_RECLAIM))
-				page = __alloc_pages(gfp, order, hpage_node, nmask);
+			if (!folio && (gfp & __GFP_DIRECT_RECLAIM))
+				folio = __folio_alloc(gfp, order, hpage_node,
+						nmask);
 
 			goto out;
 		}
@@ -2217,25 +2229,12 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 
 	nmask = policy_nodemask(gfp, pol);
 	preferred_nid = policy_node(gfp, pol, node);
-	page = __alloc_pages(gfp, order, preferred_nid, nmask);
+	folio = __folio_alloc(gfp, order, preferred_nid, nmask);
 	mpol_cond_put(pol);
 out:
-	return page;
-}
-EXPORT_SYMBOL(alloc_pages_vma);
-
-struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
-		unsigned long addr, bool hugepage)
-{
-	struct folio *folio;
-
-	folio = (struct folio *)alloc_pages_vma(gfp, order, vma, addr,
-			hugepage);
-	if (folio && order > 1)
-		prep_transhuge_page(&folio->page);
-	return folio;
 }
+EXPORT_SYMBOL(vma_alloc_folio);
 
 /**
  * alloc_pages - Allocate pages.
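[A hedged aside on the casts added above, for readers new to folios: treating
a struct page pointer as a struct folio pointer is only valid for a head (or
order-0) page, which is why the interleave and preferred-many paths now pass
__GFP_COMP and call prep_transhuge_page() before casting. A sketch of the
shape of that pattern, not a copy of the kernel code:]

	gfp |= __GFP_COMP;			/* a folio must be a compound page */
	page = alloc_page_interleave(gfp, order, nid);
	if (page && order > 1)
		prep_transhuge_page(page);	/* THP bookkeeping before use */
	folio = (struct folio *)page;		/* valid: head page or NULL */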
From patchwork Wed May 4 18:28:36 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 05/26] vmscan: Use folio_mapped() in shrink_page_list()
Date: Wed, 4 May 2022 19:28:36 +0100
Message-Id: <20220504182857.4013401-6-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

Remove some legacy function calls.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 mm/vmscan.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1678802e03e7..27be6f9b2ba5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1549,7 +1549,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		if (unlikely(!page_evictable(page)))
 			goto activate_locked;
 
-		if (!sc->may_unmap && page_mapped(page))
+		if (!sc->may_unmap && folio_mapped(folio))
 			goto keep_locked;
 
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
@@ -1743,21 +1743,21 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		}
 
 		/*
-		 * The page is mapped into the page tables of one or more
+		 * The folio is mapped into the page tables of one or more
 		 * processes. Try to unmap it here.
 		 */
-		if (page_mapped(page)) {
+		if (folio_mapped(folio)) {
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
-			bool was_swapbacked = PageSwapBacked(page);
+			bool was_swapbacked = folio_test_swapbacked(folio);
 
-			if (PageTransHuge(page) &&
-					thp_order(page) >= HPAGE_PMD_ORDER)
+			if (folio_test_pmd_mappable(folio))
 				flags |= TTU_SPLIT_HUGE_PMD;
 
 			try_to_unmap(folio, flags);
-			if (page_mapped(page)) {
+			if (folio_mapped(folio)) {
 				stat->nr_unmap_fail += nr_pages;
-				if (!was_swapbacked && PageSwapBacked(page))
+				if (!was_swapbacked &&
+				    folio_test_swapbacked(folio))
 					stat->nr_lazyfree_fail += nr_pages;
 				goto activate_locked;
 			}
 		}
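[For reference while reading the hunks above, a hedged sketch of how the old
per-page predicates map onto the folio ones used here; folio_mapped() asks
the question once for the whole folio rather than per page:]

	struct folio *folio = page_folio(page);

	if (!sc->may_unmap && folio_mapped(folio))	/* was page_mapped(page) */
		goto keep_locked;

	/*
	 * folio_test_pmd_mappable() folds the old two-part test:
	 * PageTransHuge(page) && thp_order(page) >= HPAGE_PMD_ORDER.
	 */
	if (folio_test_pmd_mappable(folio))
		flags |= TTU_SPLIT_HUGE_PMD;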
From patchwork Wed May 4 18:28:37 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 06/26] vmscan: Convert the writeback handling in shrink_page_list() to folios
Date: Wed, 4 May 2022 19:28:37 +0100
Message-Id: <20220504182857.4013401-7-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

Slightly more efficient due to fewer calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 mm/vmscan.c | 77 ++++++++++++++++++++++++++++-------------------------
 1 file changed, 41 insertions(+), 36 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 27be6f9b2ba5..19c1bcd886ef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1578,40 +1578,42 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			stat->nr_congested += nr_pages;
 
 		/*
-		 * If a page at the tail of the LRU is under writeback, there
+		 * If a folio at the tail of the LRU is under writeback, there
 		 * are three cases to consider.
 		 *
-		 * 1) If reclaim is encountering an excessive number of pages
-		 *    under writeback and this page is both under writeback and
-		 *    PageReclaim then it indicates that pages are being queued
-		 *    for IO but are being recycled through the LRU before the
-		 *    IO can complete. Waiting on the page itself risks an
-		 *    indefinite stall if it is impossible to writeback the
-		 *    page due to IO error or disconnected storage so instead
-		 *    note that the LRU is being scanned too quickly and the
-		 *    caller can stall after page list has been processed.
+		 * 1) If reclaim is encountering an excessive number of folios
+		 *    under writeback and this folio is both under
+		 *    writeback and has the reclaim flag set then it
+		 *    indicates that folios are being queued for I/O but
+		 *    are being recycled through the LRU before the I/O
+		 *    can complete. Waiting on the folio itself risks an
+		 *    indefinite stall if it is impossible to writeback
+		 *    the folio due to I/O error or disconnected storage
+		 *    so instead note that the LRU is being scanned too
+		 *    quickly and the caller can stall after the folio
+		 *    list has been processed.
 		 *
-		 * 2) Global or new memcg reclaim encounters a page that is
+		 * 2) Global or new memcg reclaim encounters a folio that is
 		 *    not marked for immediate reclaim, or the caller does not
 		 *    have __GFP_FS (or __GFP_IO if it's simply going to swap,
-		 *    not to fs). In this case mark the page for immediate
+		 *    not to fs). In this case mark the folio for immediate
 		 *    reclaim and continue scanning.
 		 *
 		 *    Require may_enter_fs because we would wait on fs, which
-		 *    may not have submitted IO yet. And the loop driver might
-		 *    enter reclaim, and deadlock if it waits on a page for
+		 *    may not have submitted I/O yet. And the loop driver might
+		 *    enter reclaim, and deadlock if it waits on a folio for
 		 *    which it is needed to do the write (loop masks off
 		 *    __GFP_IO|__GFP_FS for this reason); but more thought
 		 *    would probably show more reasons.
 		 *
-		 * 3) Legacy memcg encounters a page that is already marked
-		 *    PageReclaim. memcg does not have any dirty pages
+		 * 3) Legacy memcg encounters a folio that already has the
+		 *    reclaim flag set. memcg does not have any dirty folio
 		 *    throttling so we could easily OOM just because too many
-		 *    pages are in writeback and there is nothing else to
+		 *    folios are in writeback and there is nothing else to
 		 *    reclaim. Wait for the writeback to complete.
 		 *
-		 * In cases 1) and 2) we activate the pages to get them out of
-		 * the way while we continue scanning for clean pages on the
+		 * In cases 1) and 2) we activate the folios to get them out of
+		 * the way while we continue scanning for clean folios on the
 		 * inactive list and refilling from the active list. The
 		 * observation here is that waiting for disk writes is more
 		 * expensive than potentially causing reloads down the line.
@@ -1619,38 +1621,41 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		 * memory pressure on the cache working set any longer than it
 		 * takes to write them to disk.
 		 */
-		if (PageWriteback(page)) {
+		if (folio_test_writeback(folio)) {
 			/* Case 1 above */
 			if (current_is_kswapd() &&
-			    PageReclaim(page) &&
+			    folio_test_reclaim(folio) &&
 			    test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
 				stat->nr_immediate += nr_pages;
 				goto activate_locked;
 
 			/* Case 2 above */
 			} else if (writeback_throttling_sane(sc) ||
-			    !PageReclaim(page) || !may_enter_fs) {
+			    !folio_test_reclaim(folio) || !may_enter_fs) {
 				/*
-				 * This is slightly racy - end_page_writeback()
-				 * might have just cleared PageReclaim, then
-				 * setting PageReclaim here end up interpreted
-				 * as PageReadahead - but that does not matter
-				 * enough to care. What we do want is for this
-				 * page to have PageReclaim set next time memcg
-				 * reclaim reaches the tests above, so it will
-				 * then wait_on_page_writeback() to avoid OOM;
-				 * and it's also appropriate in global reclaim.
+				 * This is slightly racy -
+				 * folio_end_writeback() might have just
+				 * cleared the reclaim flag, then setting
+				 * reclaim here ends up interpreted as
+				 * the readahead flag - but that does
+				 * not matter enough to care. What we
+				 * do want is for this folio to have
+				 * the reclaim flag set next time memcg
+				 * reclaim reaches the tests above, so
+				 * it will then folio_wait_writeback()
+				 * to avoid OOM; and it's also appropriate
+				 * in global reclaim.
 				 */
-				SetPageReclaim(page);
+				folio_set_reclaim(folio);
 				stat->nr_writeback += nr_pages;
 				goto activate_locked;
 
 			/* Case 3 above */
 			} else {
-				unlock_page(page);
-				wait_on_page_writeback(page);
-				/* then go back and try same page again */
-				list_add_tail(&page->lru, page_list);
+				folio_unlock(folio);
+				folio_wait_writeback(folio);
+				/* then go back and try same folio again */
+				list_add_tail(&folio->lru, page_list);
 				continue;
 			}
 		}
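[A convenience mapping of the page-flag helpers converted in this patch to
their folio counterparts, exactly as they appear in the hunks above; listed
for reference only, not an exhaustive API table:]

	PageWriteback(page)           ->  folio_test_writeback(folio)
	PageReclaim(page)             ->  folio_test_reclaim(folio)
	SetPageReclaim(page)          ->  folio_set_reclaim(folio)
	unlock_page(page)             ->  folio_unlock(folio)
	wait_on_page_writeback(page)  ->  folio_wait_writeback(folio)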
From patchwork Wed May 4 18:28:38 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 07/26] swap: Turn get_swap_page() into folio_alloc_swap()
Date: Wed, 4 May 2022 19:28:38 +0100
Message-Id: <20220504182857.4013401-8-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

This removes an assumption that a large folio is HPAGE_PMD_NR pages
in size.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/swap.h | 13 +++++++------
 mm/memcontrol.c      | 16 ++++++++--------
 mm/shmem.c           |  3 ++-
 mm/swap_slots.c      | 14 +++++++-------
 mm/swap_state.c      |  3 ++-
 mm/swapfile.c        | 17 +++++++++--------
 6 files changed, 35 insertions(+), 31 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 27093b477c5f..147a9a173508 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -494,7 +494,7 @@ static inline long get_nr_swap_pages(void)
 }
 
 extern void si_swapinfo(struct sysinfo *);
-extern swp_entry_t get_swap_page(struct page *page);
+swp_entry_t folio_alloc_swap(struct folio *folio);
 extern void put_swap_page(struct page *page, swp_entry_t entry);
 extern swp_entry_t get_swap_page_of_type(int);
 extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
@@ -685,7 +685,7 @@ static inline int try_to_free_swap(struct page *page)
 	return 0;
 }
 
-static inline swp_entry_t get_swap_page(struct page *page)
+static inline swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
 	entry.val = 0;
@@ -739,12 +739,13 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 #ifdef CONFIG_MEMCG_SWAP
 void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry);
-extern int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
-static inline int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry);
+static inline int mem_cgroup_try_charge_swap(struct folio *folio,
+		swp_entry_t entry)
 {
 	if (mem_cgroup_disabled())
 		return 0;
-	return __mem_cgroup_try_charge_swap(page, entry);
+	return __mem_cgroup_try_charge_swap(folio, entry);
 }
 
 extern void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
@@ -762,7 +763,7 @@ static inline void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 {
 }
 
-static inline int mem_cgroup_try_charge_swap(struct page *page,
+static inline int mem_cgroup_try_charge_swap(struct folio *folio,
 		swp_entry_t entry)
 {
 	return 0;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 598fece89e2b..985eff804004 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7125,17 +7125,17 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 }
 
 /**
- * __mem_cgroup_try_charge_swap - try charging swap space for a page
- * @page: page being added to swap
+ * __mem_cgroup_try_charge_swap - try charging swap space for a folio
+ * @folio: folio being added to swap
 * @entry: swap entry to charge
 *
- * Try to charge @page's memcg for the swap space at @entry.
+ * Try to charge @folio's memcg for the swap space at @entry.
 *
 * Returns 0 on success, -ENOMEM on failure.
 */
-int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 {
-	unsigned int nr_pages = thp_nr_pages(page);
+	unsigned int nr_pages = folio_nr_pages(folio);
 	struct page_counter *counter;
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
@@ -7143,9 +7143,9 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
 
-	memcg = page_memcg(page);
+	memcg = folio_memcg(folio);
 
-	VM_WARN_ON_ONCE_PAGE(!memcg, page);
+	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
 	if (!memcg)
 		return 0;
 
@@ -7168,7 +7168,7 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 	if (nr_pages > 1)
 		mem_cgroup_id_get_many(memcg, nr_pages - 1);
 	oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
-	VM_BUG_ON_PAGE(oldid, page);
+	VM_BUG_ON_FOLIO(oldid, folio);
 	mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
 
 	return 0;
diff --git a/mm/shmem.c b/mm/shmem.c
index c89394221a7e..85c23696efc6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1312,6 +1312,7 @@ int shmem_unuse(unsigned int type)
 */
 static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 {
+	struct folio *folio = page_folio(page);
 	struct shmem_inode_info *info;
 	struct address_space *mapping;
 	struct inode *inode;
@@ -1385,7 +1386,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 			SetPageUptodate(page);
 	}
 
-	swap = get_swap_page(page);
+	swap = folio_alloc_swap(folio);
 	if (!swap.val)
 		goto redirty;
 
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 2b5531840583..0218ec1cd24c 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -117,7 +117,7 @@ static int alloc_swap_slot_cache(unsigned int cpu)
 	/*
 	 * Do allocation outside swap_slots_cache_mutex
-	 * as kvzalloc could trigger reclaim and get_swap_page,
+	 * as kvzalloc could trigger reclaim and folio_alloc_swap,
 	 * which can lock swap_slots_cache_mutex.
 	 */
 	slots = kvcalloc(SWAP_SLOTS_CACHE_SIZE, sizeof(swp_entry_t),
@@ -213,7 +213,7 @@ static void __drain_swap_slots_cache(unsigned int type)
 	 * this function can be invoked in the cpu
 	 * hot plug path:
 	 * cpu_up -> lock cpu_hotplug -> cpu hotplug state callback
-	 *   -> memory allocation -> direct reclaim -> get_swap_page
+	 *   -> memory allocation -> direct reclaim -> folio_alloc_swap
 	 *   -> drain_swap_slots_cache
 	 *
 	 * Hence the loop over current online cpu below could miss cpu that
@@ -301,16 +301,16 @@ int free_swap_slot(swp_entry_t entry)
 	return 0;
 }
 
-swp_entry_t get_swap_page(struct page *page)
+swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
 	struct swap_slots_cache *cache;
 
 	entry.val = 0;
 
-	if (PageTransHuge(page)) {
+	if (folio_test_large(folio)) {
 		if (IS_ENABLED(CONFIG_THP_SWAP))
-			get_swap_pages(1, &entry, HPAGE_PMD_NR);
+			get_swap_pages(1, &entry, folio_nr_pages(folio));
 		goto out;
 	}
 
@@ -344,8 +344,8 @@ swp_entry_t get_swap_page(struct page *page)
 	get_swap_pages(1, &entry, 1);
 out:
-	if (mem_cgroup_try_charge_swap(page, entry)) {
-		put_swap_page(page, entry);
+	if (mem_cgroup_try_charge_swap(folio, entry)) {
+		put_swap_page(&folio->page, entry);
 		entry.val = 0;
 	}
 	return entry;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 013856004825..989ad18f5468 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -183,13 +183,14 @@ void __delete_from_swap_cache(struct page *page,
 */
 int add_to_swap(struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	swp_entry_t entry;
 	int err;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageUptodate(page), page);
 
-	entry = get_swap_page(page);
+	entry = folio_alloc_swap(folio);
 	if (!entry.val)
 		return 0;
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 63c61f8b2611..c34f41553144 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -76,9 +76,9 @@ static PLIST_HEAD(swap_active_head);
 /*
 * all available (active, not full) swap_info_structs
 * protected with swap_avail_lock, ordered by priority.
- * This is used by get_swap_page() instead of swap_active_head
+ * This is used by folio_alloc_swap() instead of swap_active_head
 * because swap_active_head includes all swap_info_structs,
- * but get_swap_page() doesn't need to look at full ones.
+ * but folio_alloc_swap() doesn't need to look at full ones.
 * This uses its own lock instead of swap_lock because when a
 * swap_info_struct changes between not-full/full, it needs to
 * add/remove itself to/from this list, but the swap_info_struct->lock
@@ -2093,11 +2093,12 @@ static int try_to_unuse(unsigned int type)
 	 * Under global memory pressure, swap entries can be reinserted back
 	 * into process space after the mmlist loop above passes over them.
 	 *
-	 * Limit the number of retries? No: when mmget_not_zero() above fails,
-	 * that mm is likely to be freeing swap from exit_mmap(), which proceeds
-	 * at its own independent pace; and even shmem_writepage() could have
-	 * been preempted after get_swap_page(), temporarily hiding that swap.
-	 * It's easy and robust (though cpu-intensive) just to keep retrying.
+	 * Limit the number of retries? No: when mmget_not_zero()
+	 * above fails, that mm is likely to be freeing swap from
+	 * exit_mmap(), which proceeds at its own independent pace;
+	 * and even shmem_writepage() could have been preempted after
+	 * folio_alloc_swap(), temporarily hiding that swap.  It's easy
+	 * and robust (though cpu-intensive) just to keep retrying.
 	 */
 	if (READ_ONCE(si->inuse_pages)) {
 		if (!signal_pending(current))
@@ -2310,7 +2311,7 @@ static void _enable_swap_info(struct swap_info_struct *p)
 	 * which on removal of any swap_info_struct with an auto-assigned
 	 * (i.e. negative) priority increments the auto-assigned priority
 	 * of any lower-priority swap_info_structs.
-	 * swap_avail_head needs to be priority ordered for get_swap_page(),
+	 * swap_avail_head needs to be priority ordered for folio_alloc_swap(),
 	 * which allocates swap pages from the highest available priority
 	 * swap_info_struct.
 	 */
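[A hedged sketch of the new entry point from a caller's point of view; the
surrounding reclaim context is omitted and the error handling simplified.
Internally, get_swap_pages(1, &entry, folio_nr_pages(folio)) replaces the
hard-coded HPAGE_PMD_NR, which is the assumption the commit message refers
to.]

	struct folio *folio = page_folio(page);
	swp_entry_t entry;

	/* Ask for exactly as much swap as this folio needs, whatever its size. */
	entry = folio_alloc_swap(folio);
	if (!entry.val)
		return 0;	/* no swap space, or the cgroup swap charge failed */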
From patchwork Wed May 4 18:28:39 2022
From: "Matthew Wilcox (Oracle)"
To: akpm@linuxfoundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 08/26] swap: Convert add_to_swap() to take a folio
Date: Wed, 4 May 2022 19:28:39 +0100
Message-Id: <20220504182857.4013401-9-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

The only caller already has a folio available, so this saves a
conversion.  Also convert the return type to boolean.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/swap.h |  6 +++---
 mm/swap_state.c      | 47 +++++++++++++++++++++++---------------------
 mm/vmscan.c          |  6 +++---
 3 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 147a9a173508..f87bb495e482 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -449,7 +449,7 @@ static inline unsigned long total_swapcache_pages(void)
 }
 
 extern void show_swap_cache_info(void);
-extern int add_to_swap(struct page *page);
+bool add_to_swap(struct folio *folio);
 extern void *get_shadow_from_swap_cache(swp_entry_t entry);
 extern int add_to_swap_cache(struct page *page, swp_entry_t entry,
 			gfp_t gfp, void **shadowp);
@@ -630,9 +630,9 @@ struct page *find_get_incore_page(struct address_space *mapping, pgoff_t index)
 	return find_get_page(mapping, index);
 }
 
-static inline int add_to_swap(struct page *page)
+static inline bool add_to_swap(struct folio *folio)
 {
-	return 0;
+	return false;
 }
 
 static inline void *get_shadow_from_swap_cache(swp_entry_t entry)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 989ad18f5468..858d8904b06e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -175,24 +175,26 @@ void __delete_from_swap_cache(struct page *page,
 }
 
 /**
- * add_to_swap - allocate swap space for a page
- * @page: page we want to move to swap
+ * add_to_swap - allocate swap space for a folio
+ * @folio: folio we want to move to swap
 *
- * Allocate swap space for the page and add the page to the
- * swap cache.  Caller needs to hold the page lock.
+ * Allocate swap space for the folio and add the folio to the
+ * swap cache.
+ *
+ * Context: Caller needs to hold the folio lock.
+ * Return: Whether the folio was added to the swap cache.
 */
-int add_to_swap(struct page *page)
+bool add_to_swap(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	swp_entry_t entry;
 	int err;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageUptodate(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_uptodate(folio), folio);
 
 	entry = folio_alloc_swap(folio);
 	if (!entry.val)
-		return 0;
+		return false;
 
 	/*
 	 * XArray node allocations from PF_MEMALLOC contexts could
@@ -205,7 +207,7 @@ int add_to_swap(struct page *page)
 	/*
 	 * Add it to the swap cache.
 	 */
-	err = add_to_swap_cache(page, entry,
+	err = add_to_swap_cache(&folio->page, entry,
 			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);
 	if (err)
 		/*
@@ -214,22 +216,23 @@ int add_to_swap(struct page *page)
 		 */
 		goto fail;
 	/*
-	 * Normally the page will be dirtied in unmap because its pte should be
-	 * dirty. A special case is MADV_FREE page. The page's pte could have
-	 * dirty bit cleared but the page's SwapBacked bit is still set because
-	 * clearing the dirty bit and SwapBacked bit has no lock protected. For
-	 * such page, unmap will not set dirty bit for it, so page reclaim will
-	 * not write the page out. This can cause data corruption when the page
-	 * is swap in later. Always setting the dirty bit for the page solves
-	 * the problem.
+	 * Normally the folio will be dirtied in unmap because its
+	 * pte should be dirty. A special case is MADV_FREE page. The
+	 * page's pte could have dirty bit cleared but the folio's
+	 * SwapBacked flag is still set because clearing the dirty bit
+	 * and SwapBacked flag has no lock protected. For such folio,
+	 * unmap will not set dirty bit for it, so folio reclaim will
+	 * not write the folio out. This can cause data corruption when
+	 * the folio is swapped in later. Always setting the dirty flag
+	 * for the folio solves the problem.
 	 */
-	set_page_dirty(page);
+	folio_mark_dirty(folio);
 
-	return 1;
+	return true;
 
 fail:
-	put_swap_page(page, entry);
-	return 0;
+	put_swap_page(&folio->page, entry);
+	return false;
 }
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 19c1bcd886ef..8f7c32b3d65e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1710,8 +1710,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 							page_list))
 						goto activate_locked;
 				}
-				if (!add_to_swap(page)) {
-					if (!PageTransHuge(page))
+				if (!add_to_swap(folio)) {
+					if (!folio_test_large(folio))
 						goto activate_locked_split;
 					/* Fallback to swap normal pages */
 					if (split_folio_to_list(folio,
@@ -1720,7 +1720,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 						count_vm_event(THP_SWPOUT_FALLBACK);
 #endif
-						if (!add_to_swap(page))
+						if (!add_to_swap(folio))
 							goto activate_locked_split;
 					}
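[A short sketch of the call-site shape after this change, not the actual
reclaim code: add_to_swap() now takes a locked folio and answers a plain
yes/no question.]

	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);

	if (!add_to_swap(folio)) {
		/*
		 * No swap slot, or the swap cache insertion failed; the
		 * caller falls back - shrink_page_list() splits a large
		 * folio and retries, as the hunk above shows.
		 */
	}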
willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkE-00Gq6Z-KN; Wed, 04 May 2022 18:29:02 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 09/26] vmscan: Convert dirty page handling to folios Date: Wed, 4 May 2022 19:28:40 +0100 Message-Id: <20220504182857.4013401-10-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Stat-Signature: xkhyaoqy8eu6nyq9y4q8n5k7maqenjwu X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 96C3F1C009D Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=rNGiayjY; spf=none (imf20.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspam-User: X-HE-Tag: 1651688937-659130 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Mostly this just eliminates calls to compound_head(), but NR_VMSCAN_IMMEDIATE was being incremented by 1 instead of by nr_pages. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/vmscan.c | 48 ++++++++++++++++++++++++++---------------------- 1 file changed, 26 insertions(+), 22 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 8f7c32b3d65e..950eeb2f759b 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1768,28 +1768,31 @@ static unsigned int shrink_page_list(struct list_head *page_list, } } - if (PageDirty(page)) { + if (folio_test_dirty(folio)) { /* - * Only kswapd can writeback filesystem pages + * Only kswapd can writeback filesystem folios * to avoid risk of stack overflow. But avoid - * injecting inefficient single-page IO into + * injecting inefficient single-folio I/O into * flusher writeback as much as possible: only - * write pages when we've encountered many - * dirty pages, and when we've already scanned - * the rest of the LRU for clean pages and see - * the same dirty pages again (PageReclaim). + * write folios when we've encountered many + * dirty folios, and when we've already scanned + * the rest of the LRU for clean folios and see + * the same dirty folios again (with the reclaim + * flag set). */ - if (page_is_file_lru(page) && - (!current_is_kswapd() || !PageReclaim(page) || + if (folio_is_file_lru(folio) && + (!current_is_kswapd() || + !folio_test_reclaim(folio) || !test_bit(PGDAT_DIRTY, &pgdat->flags))) { /* * Immediately reclaim when written back. - * Similar in principal to deactivate_page() - * except we already have the page isolated + * Similar in principle to deactivate_page() + * except we already have the folio isolated * and know it's dirty */ - inc_node_page_state(page, NR_VMSCAN_IMMEDIATE); - SetPageReclaim(page); + node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE, + nr_pages); + folio_set_reclaim(folio); goto activate_locked; } @@ -1802,8 +1805,8 @@ static unsigned int shrink_page_list(struct list_head *page_list, goto keep_locked; /* - * Page is dirty. Flush the TLB if a writable entry - * potentially exists to avoid CPU writes after IO + * Folio is dirty. Flush the TLB if a writable entry + * potentially exists to avoid CPU writes after I/O * starts and then write it out here. 
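(A sketch, not part of the diff: the functional change this changelog calls out is that a large folio reaching the immediate-reclaim path is now accounted as all of its base pages, not as one page. The names below are exactly those used in the hunk above; only the standalone framing is illustrative.)

	long nr_pages = folio_nr_pages(folio);	/* e.g. 512 for a PMD-sized folio with 4KiB pages */
	node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE, nr_pages);	/* was: inc_node_page_state(page, ...) */
	folio_set_reclaim(folio);					/* was: SetPageReclaim(page) */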
*/ try_to_unmap_flush_dirty(); @@ -1815,23 +1818,24 @@ static unsigned int shrink_page_list(struct list_head *page_list, case PAGE_SUCCESS: stat->nr_pageout += nr_pages; - if (PageWriteback(page)) + if (folio_test_writeback(folio)) goto keep; - if (PageDirty(page)) + if (folio_test_dirty(folio)) goto keep; /* * A synchronous write - probably a ramdisk. Go - * ahead and try to reclaim the page. + * ahead and try to reclaim the folio. */ - if (!trylock_page(page)) + if (!folio_trylock(folio)) goto keep; - if (PageDirty(page) || PageWriteback(page)) + if (folio_test_dirty(folio) || + folio_test_writeback(folio)) goto keep_locked; - mapping = page_mapping(page); + mapping = folio_mapping(folio); fallthrough; case PAGE_CLEAN: - ; /* try to free the page below */ + ; /* try to free the folio below */ } } From patchwork Wed May 4 18:28:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838360 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 56634C433F5 for ; Wed, 4 May 2022 18:29:16 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id EE68D6B0088; Wed, 4 May 2022 14:29:05 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7B8596B0087; Wed, 4 May 2022 14:29:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 394416B007E; Wed, 4 May 2022 14:29:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id D4D796B0082 for ; Wed, 4 May 2022 14:29:04 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id A7D652A5B6 for ; Wed, 4 May 2022 18:29:04 +0000 (UTC) X-FDA: 79428897408.06.6104DE6 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf15.hostedemail.com (Postfix) with ESMTP id 6DDA4A008A for ; Wed, 4 May 2022 18:28:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=VouUEevqAJoImC1PsqzC77NF117LJuXxB1hfk11vLmg=; b=uGU6mybcJfY5e5dbeJfGJfLSdC YUPvy3JiqRqhX+qfD+jXSjKVuk5SCoVhas7wsMhfzBUmDaz5zf6SpOZJaiUi29oE5VvvB56iiiBnv PVVbzxwDOIYOuwqfQCbWqwX4XadBGuqcBsxxYouB6s4fygXItllWxVEtvz7mX5Sbqp0CkTeGBaIi+ mGdKFJZqGMFU6XCvakfnLOF8Gx+ILMQYoUg/eQZKHV7sBH2Aa2Ln/qQltOW/BuhF2Hnt+sINT0aAe z47PJWYNoWCZm7MgoqY+2DGxfS9wKvjL5j3PpwJOvLFtG9DeoEumBWVMu5eLjPb3+qgI7FW4TXgc0 ALArphmQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkE-00Gq6h-Po; Wed, 04 May 2022 18:29:02 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 10/26] vmscan: Convert page buffer handling to use folios Date: Wed, 4 May 2022 19:28:41 +0100 Message-Id: <20220504182857.4013401-11-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam10 
X-Rspamd-Queue-Id: 6DDA4A008A Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=uGU6mybc; spf=none (imf15.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspam-User: X-Stat-Signature: m6hqerer1ooet9sin5frjmnexqkyd1nw X-HE-Tag: 1651688935-663856 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This mostly just removes calls to compound_head() although nr_reclaimed should be incremented by the number of pages, not just 1. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/vmscan.c | 50 ++++++++++++++++++++++++++------------------------ 1 file changed, 26 insertions(+), 24 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 950eeb2f759b..cda43f0bb285 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1840,42 +1840,44 @@ static unsigned int shrink_page_list(struct list_head *page_list, } /* - * If the page has buffers, try to free the buffer mappings - * associated with this page. If we succeed we try to free - * the page as well. + * If the folio has buffers, try to free the buffer + * mappings associated with this folio. If we succeed + * we try to free the folio as well. * - * We do this even if the page is PageDirty(). - * try_to_release_page() does not perform I/O, but it is - * possible for a page to have PageDirty set, but it is actually - * clean (all its buffers are clean). This happens if the - * buffers were written out directly, with submit_bh(). ext3 - * will do this, as well as the blockdev mapping. - * try_to_release_page() will discover that cleanness and will - * drop the buffers and mark the page clean - it can be freed. + * We do this even if the folio is dirty. + * filemap_release_folio() does not perform I/O, but it + * is possible for a folio to have the dirty flag set, + * but it is actually clean (all its buffers are clean). + * This happens if the buffers were written out directly, + * with submit_bh(). ext3 will do this, as well as + * the blockdev mapping. filemap_release_folio() will + * discover that cleanness and will drop the buffers + * and mark the folio clean - it can be freed. * - * Rarely, pages can have buffers and no ->mapping. These are - * the pages which were not successfully invalidated in - * truncate_cleanup_page(). We try to drop those buffers here - * and if that worked, and the page is no longer mapped into - * process address space (page_count == 1) it can be freed. - * Otherwise, leave the page on the LRU so it is swappable. + * Rarely, folios can have buffers and no ->mapping. + * These are the folios which were not successfully + * invalidated in truncate_cleanup_folio(). We try to + * drop those buffers here and if that worked, and the + * folio is no longer mapped into process address space + * (refcount == 1) it can be freed. Otherwise, leave + * the folio on the LRU so it is swappable. */ - if (page_has_private(page)) { - if (!try_to_release_page(page, sc->gfp_mask)) + if (folio_has_private(folio)) { + if (!filemap_release_folio(folio, sc->gfp_mask)) goto activate_locked; - if (!mapping && page_count(page) == 1) { - unlock_page(page); - if (put_page_testzero(page)) + if (!mapping && folio_ref_count(folio) == 1) { + folio_unlock(folio); + if (folio_put_testzero(folio)) goto free_it; else { /* * rare race with speculative reference. 
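(A sketch, not part of the diff, of the accounting half of this conversion as it appears in the hunk that continues below: in the rare race where a speculative reference will free the folio, reclaim now credits every base page of the folio rather than one. All identifiers are taken from the patch itself.)

	if (folio_put_testzero(folio))
		goto free_it;
	/* rare race: a speculative reference will free this folio shortly */
	nr_reclaimed += folio_nr_pages(folio);	/* was: nr_reclaimed++ */
	continue;				/* next folio on the list */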
* the speculative reference will free - * this page shortly, so we may + * this folio shortly, so we may * increment nr_reclaimed here (and * leave it off the LRU). */ - nr_reclaimed++; + nr_reclaimed += nr_pages; continue; } } From patchwork Wed May 4 18:28:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4D58C433FE for ; Wed, 4 May 2022 18:29:17 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 286136B0083; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A975B6B0078; Wed, 4 May 2022 14:29:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5DB786B0089; Wed, 4 May 2022 14:29:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 032CC6B007B for ; Wed, 4 May 2022 14:29:05 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id D013D212F6 for ; Wed, 4 May 2022 18:29:04 +0000 (UTC) X-FDA: 79428897408.15.831D725 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf19.hostedemail.com (Postfix) with ESMTP id 3E02E1A0088 for ; Wed, 4 May 2022 18:28:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=N9v77wlY0AkilHHu4UiOmCVb+LAs7M6W0wTgDYMmZa8=; b=TZ8Bk87zr5n1WW+Etn2uvBjwIP gVnqsE15Hw8PuO346VdMk8v9JOAIJhuTRNg2SDFIhBHSjZoWvP6dyOpBepNEIqaeR5HwjE8V7H5AP TjS4i2SwsTLwMdsWtltib4bX6ZyP4AQYRhO5JW6WQZxlwGJk3QNoTVbCKyVBDz1eQIF3MaQIpN8LB I/2yHNMk2b9+UL/tg1GZBZ09GSu3t4lHlTcY9SvERrYAfLvHRdHofAWbv4CDF0j0+Aqaw57qs6aAA upGacj3B4xAz8DqeAvM9i7GlLg+ziu9QYCqUhTtuWoNaTdOVWVLVf62m7gmST2p9xyn+XncYm4V1x ocQUCLGA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkE-00Gq6q-St; Wed, 04 May 2022 18:29:02 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 11/26] vmscan: Convert lazy freeing to folios Date: Wed, 4 May 2022 19:28:42 +0100 Message-Id: <20220504182857.4013401-12-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 3E02E1A0088 X-Stat-Signature: x3wh5rdctm8wfzbmngg9pj4opyr1f1mo Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=TZ8Bk87z; dmarc=none; spf=none (imf19.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspam-User: X-HE-Tag: 1651688938-958412 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Remove a hidden call to compound_head(), and 
account nr_pages instead of a single page. This matches the code in lru_lazyfree_fn() that accounts nr_pages to PGLAZYFREE. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- include/linux/memcontrol.h | 14 ++++++++++++++ mm/vmscan.c | 18 +++++++++--------- 2 files changed, 23 insertions(+), 9 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 89b14729d59f..06a16c82558b 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1061,6 +1061,15 @@ static inline void count_memcg_page_event(struct page *page, count_memcg_events(memcg, idx, 1); } +static inline void count_memcg_folio_events(struct folio *folio, + enum vm_event_item idx, unsigned long nr) +{ + struct mem_cgroup *memcg = folio_memcg(folio); + + if (memcg) + count_memcg_events(memcg, idx, nr); +} + static inline void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx) { @@ -1498,6 +1507,11 @@ static inline void count_memcg_page_event(struct page *page, { } +static inline void count_memcg_folio_events(struct folio *folio, + enum vm_event_item idx, unsigned long nr) +{ +} + static inline void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx) { diff --git a/mm/vmscan.c b/mm/vmscan.c index cda43f0bb285..0368ea3e9880 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1883,20 +1883,20 @@ static unsigned int shrink_page_list(struct list_head *page_list, } } - if (PageAnon(page) && !PageSwapBacked(page)) { + if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) { /* follow __remove_mapping for reference */ - if (!page_ref_freeze(page, 1)) + if (!folio_ref_freeze(folio, 1)) goto keep_locked; /* - * The page has only one reference left, which is + * The folio has only one reference left, which is * from the isolation. After the caller puts the - * page back on lru and drops the reference, the - * page will be freed anyway. It doesn't matter - * which lru it goes. So we don't bother checking - * PageDirty here. + * folio back on the lru and drops the reference, the + * folio will be freed anyway. It doesn't matter + * which lru it goes on. So we don't bother checking + * the dirty flag here. 
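(A minimal usage sketch, not part of the diff: the count_memcg_folio_events() helper added in memcontrol.h above is what lets the hunk below charge every base page of a lazily freed large folio in one call. The local variable name nr is illustrative; the functions are those used by the patch.)

	long nr = folio_nr_pages(folio);
	count_vm_events(PGLAZYFREED, nr);			/* global vmstat counter */
	count_memcg_folio_events(folio, PGLAZYFREED, nr);	/* memcg-local counter */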
*/ - count_vm_event(PGLAZYFREED); - count_memcg_page_event(page, PGLAZYFREED); + count_vm_events(PGLAZYFREED, nr_pages); + count_memcg_folio_events(folio, PGLAZYFREED, nr_pages); } else if (!mapping || !__remove_mapping(mapping, folio, true, sc->target_mem_cgroup)) goto keep_locked; From patchwork Wed May 4 18:28:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838363 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4BC4BC433FE for ; Wed, 4 May 2022 18:29:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7FDB36B007E; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1E9D66B0092; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B36446B0083; Wed, 4 May 2022 14:29:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 19EBA6B0083 for ; Wed, 4 May 2022 14:29:05 -0400 (EDT) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id E3CBD6128F for ; Wed, 4 May 2022 18:29:04 +0000 (UTC) X-FDA: 79428897408.08.0829275 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf17.hostedemail.com (Postfix) with ESMTP id 3871740084 for ; Wed, 4 May 2022 18:28:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=pGA+ufhTLejZs6UQD/U24KGfzpBgMkKhRJ/yYi2BWVE=; b=uIFU0sr+qfdsys2Xdv355OZrt/ uCvK2bpJOSADRsRcUFbNZdSwUOQ8J03hdqfPbq8wIViTqIB60yUWbEI0XaeOfxi8GELlhpqAlLoHT rM88ZzFT5Fqabvyqst359880KhWm1u0EaUnjOqzclCGeR+RvjO/oYFOl+6afqnw7xmV37JQp+5QlJ PSwpHciDCGVduHgZdYozV/8Oo1nHRQXyMNNxKM299uFBXu0by44aK/smKBHJSCgKXFot9XmWKL7r1 /iGOGOdsnhOsvg+xTAwbIbswku1Tb6Qogtkmj29kFvNTOShFvERpUnFZYKA9Bv+HsSrkXrTqRPaUf eVNDjcFQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkF-00Gq6w-0K; Wed, 04 May 2022 18:29:03 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 12/26] vmscan: Move initialisation of mapping down Date: Wed, 4 May 2022 19:28:43 +0100 Message-Id: <20220504182857.4013401-13-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 3871740084 X-Stat-Signature: yzjmnzdfm63u5w8fxtmp9tno8qtudswn Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=uIFU0sr+; dmarc=none; spf=none (imf17.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspam-User: X-HE-Tag: 1651688931-751876 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: 
owner-majordomo@kvack.org List-ID: Now that we don't interrogate the BDI for congestion, we can delay looking up the folio's mapping until we've got further through the function, reducing register pressure and saving a call to folio_mapping for folios we're adding to the swap cache. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/vmscan.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 0368ea3e9880..9ac2583ca5e5 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1568,12 +1568,11 @@ static unsigned int shrink_page_list(struct list_head *page_list, stat->nr_unqueued_dirty += nr_pages; /* - * Treat this page as congested if the underlying BDI is or if + * Treat this page as congested if * pages are cycling through the LRU so quickly that the * pages marked for immediate reclaim are making it to the * end of the LRU a second time. */ - mapping = page_mapping(page); if (writeback && PageReclaim(page)) stat->nr_congested += nr_pages; @@ -1725,9 +1724,6 @@ static unsigned int shrink_page_list(struct list_head *page_list, } may_enter_fs = true; - - /* Adding to swap updated mapping */ - mapping = page_mapping(page); } } else if (PageSwapBacked(page) && PageTransHuge(page)) { /* Split shmem THP */ @@ -1768,6 +1764,7 @@ static unsigned int shrink_page_list(struct list_head *page_list, } } + mapping = folio_mapping(folio); if (folio_test_dirty(folio)) { /* * Only kswapd can writeback filesystem folios From patchwork Wed May 4 18:28:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838362 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id EEA0DC433EF for ; Wed, 4 May 2022 18:29:18 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5667E6B0089; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id EDEA76B0085; Wed, 4 May 2022 14:29:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9AD636B0085; Wed, 4 May 2022 14:29:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 29DB86B0088 for ; Wed, 4 May 2022 14:29:05 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay12.hostedemail.com (Postfix) with ESMTP id 01B4912119B for ; Wed, 4 May 2022 18:29:04 +0000 (UTC) X-FDA: 79428897450.02.4D005C7 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf26.hostedemail.com (Postfix) with ESMTP id 7A142140081 for ; Wed, 4 May 2022 18:29:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=5quld6PgXDQsno1p/cxFOWTDpBSk6Vh4OG/wcGOVDgY=; b=jfVMuiZa1kxC3NWFPYwNDRnGK2 inVKcSmH0ZUnvHJlDFXKliUCs2zqjg2jMylMelBFKgKCiNKTF2okbpOmuMQ6ALSwLlwTBQqvmwpuv 2piPlkl3YngDv+YUjzlnd57kfsVO+7/+A9YLutqYVrAGnAN2ZebQs9OujUNckZ/HhFcZhlW07WZOI jGFRZNa3c4DrxZaXtiB0wVPafVdaia/JJeQxQlLbXeAaC5/nGm9fzfB8LEuvF4E3/MfSygik07BFs 
GVEugeTFgohKKgdMPEtcwbGIQFKz4cluiFkmGm4W4di096kzxmL8bnsZTu/TF+OzqOHZSCR8JUNrF Z/u3evDg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkF-00Gq72-4p; Wed, 04 May 2022 18:29:03 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 13/26] vmscan: Convert the activate_locked portion of shrink_page_list to folios Date: Wed, 4 May 2022 19:28:44 +0100 Message-Id: <20220504182857.4013401-14-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Stat-Signature: ikpwme5npmdefytm7qpks38czaxqn3um X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 7A142140081 X-Rspam-User: Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=jfVMuiZa; dmarc=none; spf=none (imf26.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1651688942-201497 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This accounts the number of pages activated correctly for large folios. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/vmscan.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 9ac2583ca5e5..85c9758f6f32 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1927,15 +1927,16 @@ static unsigned int shrink_page_list(struct list_head *page_list, } activate_locked: /* Not a candidate for swapping, so reclaim swap space. 
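(A sketch, not part of the hunk below: after this patch the memcg PGACTIVATE event is charged once per base page of the activated folio, matching the existing nr_activate accounting. All names are taken from the patch.)

	int type = folio_is_file_lru(folio);
	folio_set_active(folio);
	stat->nr_activate[type] += folio_nr_pages(folio);
	count_memcg_folio_events(folio, PGACTIVATE, folio_nr_pages(folio));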
*/ - if (PageSwapCache(page) && (mem_cgroup_swap_full(page) || - PageMlocked(page))) - try_to_free_swap(page); - VM_BUG_ON_PAGE(PageActive(page), page); - if (!PageMlocked(page)) { - int type = page_is_file_lru(page); - SetPageActive(page); + if (folio_test_swapcache(folio) && + (mem_cgroup_swap_full(&folio->page) || + folio_test_mlocked(folio))) + try_to_free_swap(&folio->page); + VM_BUG_ON_FOLIO(folio_test_active(folio), folio); + if (!folio_test_mlocked(folio)) { + int type = folio_is_file_lru(folio); + folio_set_active(folio); stat->nr_activate[type] += nr_pages; - count_memcg_page_event(page, PGACTIVATE); + count_memcg_folio_events(folio, PGACTIVATE, nr_pages); } keep_locked: unlock_page(page); From patchwork Wed May 4 18:28:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA4A8C433FE for ; Wed, 4 May 2022 18:29:30 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2CDB06B0085; Wed, 4 May 2022 14:29:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id ABFB66B0081; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4A32D6B0087; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id DF9C26B0098 for ; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 31249215D7 for ; Wed, 4 May 2022 18:29:05 +0000 (UTC) X-FDA: 79428897450.22.0E3E8CE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf12.hostedemail.com (Postfix) with ESMTP id 6992340079 for ; Wed, 4 May 2022 18:28:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=2eGvpgCwBgNKMZ/RCFgo9zn6gdnt9R9SMtUMcNvwgyo=; b=dX93gx0+N0Xl0w1JNpPBYCA0P6 1HC6Uqwp+xYtI9I2Mq7ZSyqdqYyQvA2Z9VhogiFAm0ISxOAUQLfkLLcfiO0ywpfhe7OxsZMnYyMAm lb5kbtoDxfiWxc5zVSepRZYhZECM3WXI35sQh890J0sUUtqx6Quv09gRMRfo9SNOXbLZlsFhPh/qg GBYLgf9OHyBicS3p5jWqCyBmgtzcrau5/QOytUsStDWnRL9ypiwCctd1Etlw8hLuJhqOrGN23R1y8 L+Z/OJ6nDQxUY5AJfNdLH3Hjze4t7ZLdaVhREYDD0CaF3A43CojMELgH9CmqGHqvwZ9WuW8+57k2Y m/Or1q3Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkF-00Gq7A-8n; Wed, 04 May 2022 18:29:03 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 14/26] mm: Allow can_split_folio() to be called when THP are disabled Date: Wed, 4 May 2022 19:28:45 +0100 Message-Id: <20220504182857.4013401-15-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=dX93gx0+; 
spf=none (imf12.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspam-User: X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 6992340079 X-Stat-Signature: iab1mpg7bfuitpnxj8gknw7j5jrcp6di X-HE-Tag: 1651688928-846085 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The call to can_split_folio() in vmscan is currently guarded by a test of PageTransHuge() so the BUILD_BUG() is eliminated if THP are disabled. The next patch replaces that test with folio_test_large() which may be true, even when THP are disabled. However, if THP are disabled, we cannot split, so an unconditional return of false is appropriate. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- include/linux/huge_mm.h | 1 - 1 file changed, 1 deletion(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 2999190adc22..e9e0d591061d 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -347,7 +347,6 @@ static inline void prep_transhuge_page(struct page *page) {} static inline bool can_split_folio(struct folio *folio, int *pextra_pins) { - BUILD_BUG(); return false; } static inline int From patchwork Wed May 4 18:28:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838366 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4E6B8C433EF for ; Wed, 4 May 2022 18:29:24 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 17EB96B0093; Wed, 4 May 2022 14:29:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CC83C6B0095; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 481FF6B008C; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id C225E6B0089 for ; Wed, 4 May 2022 14:29:05 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id A62532B2FA for ; Wed, 4 May 2022 18:29:05 +0000 (UTC) X-FDA: 79428897450.10.E9CA99F Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf17.hostedemail.com (Postfix) with ESMTP id BE4F040081 for ; Wed, 4 May 2022 18:28:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=P6QvF8JxKYVK3JaAiM/lJRQQHIKzq5/0+eHDwmXKge8=; b=hDGkCsGb7dmUOBr+1Aagcnr4y7 GLe2S+92A8eNYm8GbbRFHcqBGpupsOza0ESlsdqpkgIzFQH6oE1xzfFn1IKmP5JkOG4A8Oa1qZ7Xr Qr0TO4MsDR6HXSk3cni7jF9u4wNAmNC3bapRXHFSUgxm1aq3GESA9OaVC/mq3vj2vEhadEPpumJ2X O6tFL4HQiKrF+xk3WOBu9Ox6vgahH/pCLqj6mxe4E7i2UIt6FOCDpUh2OzhkH9Hw/2TtvWGyAsd1q EDSOWNR7idqwFLY6yfC6Z4Qm8+NPT6h8K9qlAWcmV2EHTbR4EI2PfE9lyB2mTG81z2VPWRobfHfqX AR0+E+cA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkF-00Gq7J-Fa; Wed, 04 May 2022 
18:29:03 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 15/26] vmscan: Remove remaining uses of page in shrink_page_list Date: Wed, 4 May 2022 19:28:46 +0100 Message-Id: <20220504182857.4013401-16-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: BE4F040081 X-Stat-Signature: on9gff8xx8bu6xqy1iax8stscjet1tk7 Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=hDGkCsGb; dmarc=none; spf=none (imf17.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspam-User: X-HE-Tag: 1651688931-600695 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These are all straightforward conversions to the folio API. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/vmscan.c | 115 ++++++++++++++++++++++++++-------------------------- 1 file changed, 57 insertions(+), 58 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 85c9758f6f32..cc9b93c7fa0c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1524,7 +1524,6 @@ static unsigned int shrink_page_list(struct list_head *page_list, retry: while (!list_empty(page_list)) { struct address_space *mapping; - struct page *page; struct folio *folio; enum page_references references = PAGEREF_RECLAIM; bool dirty, writeback, may_enter_fs; @@ -1534,31 +1533,31 @@ static unsigned int shrink_page_list(struct list_head *page_list, folio = lru_to_folio(page_list); list_del(&folio->lru); - page = &folio->page; - if (!trylock_page(page)) + if (!folio_trylock(folio)) goto keep; - VM_BUG_ON_PAGE(PageActive(page), page); + VM_BUG_ON_FOLIO(folio_test_active(folio), folio); - nr_pages = compound_nr(page); + nr_pages = folio_nr_pages(folio); - /* Account the number of base pages even though THP */ + /* Account the number of base pages */ sc->nr_scanned += nr_pages; - if (unlikely(!page_evictable(page))) + if (unlikely(!folio_evictable(folio))) goto activate_locked; if (!sc->may_unmap && folio_mapped(folio)) goto keep_locked; may_enter_fs = (sc->gfp_mask & __GFP_FS) || - (PageSwapCache(page) && (sc->gfp_mask & __GFP_IO)); + (folio_test_swapcache(folio) && + (sc->gfp_mask & __GFP_IO)); /* * The number of dirty pages determines if a node is marked * reclaim_congested. kswapd will stall and start writing - * pages if the tail of the LRU is all dirty unqueued pages. + * folios if the tail of the LRU is all dirty unqueued folios. */ folio_check_dirty_writeback(folio, &dirty, &writeback); if (dirty || writeback) @@ -1568,21 +1567,21 @@ static unsigned int shrink_page_list(struct list_head *page_list, stat->nr_unqueued_dirty += nr_pages; /* - * Treat this page as congested if - * pages are cycling through the LRU so quickly that the - * pages marked for immediate reclaim are making it to the - * end of the LRU a second time. + * Treat this folio as congested if folios are cycling + * through the LRU so quickly that the folios marked + * for immediate reclaim are making it to the end of + * the LRU a second time. 
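(A sketch of the mechanical shape of the conversions in this patch, using lines that appear in the hunk above; not an addition to the diff.)

	if (!folio_trylock(folio))			/* was: trylock_page(page) */
		goto keep;
	nr_pages = folio_nr_pages(folio);		/* was: compound_nr(page) */
	sc->nr_scanned += nr_pages;
	if (unlikely(!folio_evictable(folio)))		/* was: page_evictable(page) */
		goto activate_locked;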
*/ - if (writeback && PageReclaim(page)) + if (writeback && folio_test_reclaim(folio)) stat->nr_congested += nr_pages; /* * If a folio at the tail of the LRU is under writeback, there * are three cases to consider. * - * 1) If reclaim is encountering an excessive number of folios - * under writeback and this folio is both under - * writeback and has the reclaim flag set then it + * 1) If reclaim is encountering an excessive number + * of folios under writeback and this folio has both + * the writeback and reclaim flags set, then it * indicates that folios are being queued for I/O but * are being recycled through the LRU before the I/O * can complete. Waiting on the folio itself risks an @@ -1633,16 +1632,16 @@ static unsigned int shrink_page_list(struct list_head *page_list, !folio_test_reclaim(folio) || !may_enter_fs) { /* * This is slightly racy - - * folio_end_writeback() might have just - * cleared the reclaim flag, then setting - * reclaim here ends up interpreted as - * the readahead flag - but that does - * not matter enough to care. What we - * do want is for this folio to have - * the reclaim flag set next time memcg - * reclaim reaches the tests above, so - * it will then folio_wait_writeback() - * to avoid OOM; and it's also appropriate + * folio_end_writeback() might have + * just cleared the reclaim flag, then + * setting the reclaim flag here ends up + * interpreted as the readahead flag - but + * that does not matter enough to care. + * What we do want is for this folio to + * have the reclaim flag set next time + * memcg reclaim reaches the tests above, + * so it will then wait for writeback to + * avoid OOM; and it's also appropriate * in global reclaim. */ folio_set_reclaim(folio); @@ -1670,37 +1669,37 @@ static unsigned int shrink_page_list(struct list_head *page_list, goto keep_locked; case PAGEREF_RECLAIM: case PAGEREF_RECLAIM_CLEAN: - ; /* try to reclaim the page below */ + ; /* try to reclaim the folio below */ } /* - * Before reclaiming the page, try to relocate + * Before reclaiming the folio, try to relocate * its contents to another node. */ if (do_demote_pass && - (thp_migration_supported() || !PageTransHuge(page))) { - list_add(&page->lru, &demote_pages); - unlock_page(page); + (thp_migration_supported() || !folio_test_large(folio))) { + list_add(&folio->lru, &demote_pages); + folio_unlock(folio); continue; } /* * Anonymous process memory has backing store? * Try to allocate it some swap space here. - * Lazyfree page could be freed directly + * Lazyfree folio could be freed directly */ - if (PageAnon(page) && PageSwapBacked(page)) { - if (!PageSwapCache(page)) { + if (folio_test_anon(folio) && folio_test_swapbacked(folio)) { + if (!folio_test_swapcache(folio)) { if (!(sc->gfp_mask & __GFP_IO)) goto keep_locked; if (folio_maybe_dma_pinned(folio)) goto keep_locked; - if (PageTransHuge(page)) { - /* cannot split THP, skip it */ + if (folio_test_large(folio)) { + /* cannot split folio, skip it */ if (!can_split_folio(folio, NULL)) goto activate_locked; /* - * Split pages without a PMD map right + * Split folios without a PMD map right * away. Chances are some or all of the * tail pages can be freed without IO. 
*/ @@ -1725,20 +1724,19 @@ static unsigned int shrink_page_list(struct list_head *page_list, may_enter_fs = true; } - } else if (PageSwapBacked(page) && PageTransHuge(page)) { - /* Split shmem THP */ + } else if (folio_test_swapbacked(folio) && + folio_test_large(folio)) { + /* Split shmem folio */ if (split_folio_to_list(folio, page_list)) goto keep_locked; } /* - * THP may get split above, need minus tail pages and update - * nr_pages to avoid accounting tail pages twice. - * - * The tail pages that are added into swap cache successfully - * reach here. + * If the folio was split above, the tail pages will make + * their own pass through this function and be accounted + * then. */ - if ((nr_pages > 1) && !PageTransHuge(page)) { + if ((nr_pages > 1) && !folio_test_large(folio)) { sc->nr_scanned -= (nr_pages - 1); nr_pages = 1; } @@ -1898,11 +1896,11 @@ static unsigned int shrink_page_list(struct list_head *page_list, sc->target_mem_cgroup)) goto keep_locked; - unlock_page(page); + folio_unlock(folio); free_it: /* - * THP may get swapped out in a whole, need account - * all base pages. + * Folio may get swapped out as a whole, need to account + * all pages in it. */ nr_reclaimed += nr_pages; @@ -1910,10 +1908,10 @@ static unsigned int shrink_page_list(struct list_head *page_list, * Is there need to periodically free_page_list? It would * appear not as the counts should be low */ - if (unlikely(PageTransHuge(page))) - destroy_compound_page(page); + if (unlikely(folio_test_large(folio))) + destroy_compound_page(&folio->page); else - list_add(&page->lru, &free_pages); + list_add(&folio->lru, &free_pages); continue; activate_locked_split: @@ -1939,18 +1937,19 @@ static unsigned int shrink_page_list(struct list_head *page_list, count_memcg_folio_events(folio, PGACTIVATE, nr_pages); } keep_locked: - unlock_page(page); + folio_unlock(folio); keep: - list_add(&page->lru, &ret_pages); - VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page); + list_add(&folio->lru, &ret_pages); + VM_BUG_ON_FOLIO(folio_test_lru(folio) || + folio_test_unevictable(folio), folio); } /* 'page_list' is always empty here */ - /* Migrate pages selected for demotion */ + /* Migrate folios selected for demotion */ nr_reclaimed += demote_page_list(&demote_pages, pgdat); - /* Pages that could not be demoted are still in @demote_pages */ + /* Folios that could not be demoted are still in @demote_pages */ if (!list_empty(&demote_pages)) { - /* Pages which failed to demoted go back on @page_list for retry: */ + /* Folios which weren't demoted go back on @page_list for retry: */ list_splice_init(&demote_pages, page_list); do_demote_pass = false; goto retry; From patchwork Wed May 4 18:28:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838365 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9A18C433FE for ; Wed, 4 May 2022 18:29:22 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E2B596B0078; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 989DE6B008A; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 282996B007E; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from 
relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 8546D6B007E for ; Wed, 4 May 2022 14:29:05 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 5F2532B416 for ; Wed, 4 May 2022 18:29:05 +0000 (UTC) X-FDA: 79428897450.05.73A05CE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf25.hostedemail.com (Postfix) with ESMTP id B4F04A0081 for ; Wed, 4 May 2022 18:28:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=56J//iEqV/S82wl4E6wMXPWkr2G51NtkjdnB2PcQ4Ks=; b=UrxFBJkYwKTMc17sDZylKrtgXH uidSTU+6RxNDUSEK5IUdk9oIUkfYydnlN0FkTIT2hnypMf/t5vBgRUBgvlB2yKQBNtd4Vq4Q9VrqW 8+g46tapBuWQNETZ6WMkO7C30WmMfgFo6j3cFRhZW+90HwV0LDosXo7liJzhwljqT9+3jrTTzqx/K EjFRBKxZaShcy1oGgE6/0S1QI9lUfW+gcwY5Ikzb4zshZ2PVb1VoLpjNs0zfiJk1degcmsW+OIMmS LMqrWvLQhS1VO4X/dppGi1VVuHrCUJ0C4ISNzCgqxV5G/d/l1icXYPZ/hnrOMuAkYxmV0NXNKx/H6 M9Buj0pg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkF-00Gq7b-Nw; Wed, 04 May 2022 18:29:03 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 16/26] mm/shmem: Use a folio in shmem_unused_huge_shrink Date: Wed, 4 May 2022 19:28:47 +0100 Message-Id: <20220504182857.4013401-17-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: B4F04A0081 X-Stat-Signature: yqw887nqg1tuaqigyx7jpuiand59c5hn Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=UrxFBJkY; dmarc=none; spf=none (imf25.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1651688931-106908 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When calling split_huge_page() we usually have to find the precise page, but that's not necessary here because we only need to unlock and put the folio afterwards. Saves 231 bytes of text (20% of this function). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/shmem.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index 85c23696efc6..3461bdec6b38 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -553,7 +553,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo, LIST_HEAD(to_remove); struct inode *inode; struct shmem_inode_info *info; - struct page *page; + struct folio *folio; unsigned long batch = sc ? 
sc->nr_to_scan : 128; int split = 0; @@ -597,6 +597,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo, list_for_each_safe(pos, next, &list) { int ret; + pgoff_t index; info = list_entry(pos, struct shmem_inode_info, shrinklist); inode = &info->vfs_inode; @@ -604,14 +605,14 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo, if (nr_to_split && split >= nr_to_split) goto move_back; - page = find_get_page(inode->i_mapping, - (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT); - if (!page) + index = (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT; + folio = filemap_get_folio(inode->i_mapping, index); + if (!folio) goto drop; /* No huge page at the end of the file: nothing to split */ - if (!PageTransHuge(page)) { - put_page(page); + if (!folio_test_large(folio)) { + folio_put(folio); goto drop; } @@ -622,14 +623,14 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo, * Waiting for the lock may lead to deadlock in the * reclaim path. */ - if (!trylock_page(page)) { - put_page(page); + if (!folio_trylock(folio)) { + folio_put(folio); goto move_back; } - ret = split_huge_page(page); - unlock_page(page); - put_page(page); + ret = split_huge_page(&folio->page); + folio_unlock(folio); + folio_put(folio); /* If split failed move the inode on the list back to shrinklist */ if (ret) From patchwork Wed May 4 18:28:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838364 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E433C433EF for ; Wed, 4 May 2022 18:29:21 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id AF3376B008C; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4F4D06B0078; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D0A376B008A; Wed, 4 May 2022 14:29:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 95DF46B008C for ; Wed, 4 May 2022 14:29:05 -0400 (EDT) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 6DF3A1654 for ; Wed, 4 May 2022 18:29:05 +0000 (UTC) X-FDA: 79428897450.30.2ED7CC4 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf19.hostedemail.com (Postfix) with ESMTP id C95BC1A008C for ; Wed, 4 May 2022 18:28:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=n9NMj4BZarLYBsrQRxi23hUYBJtKn1+fa2ahXmhQQvA=; b=hZY0WpxV35ESBVMZLTMMzE1lMU QapQ8OJF2i9dqQ1lTA//2ocDKqJbkw1zILK1+Es/R5HNmy7ETOUKIF9seQnUGxom+OVA00zWJoKpe k6SV2JMl2OdtoRUaeecjg8SCHWBSH2Phbo+NQ1n0rS7MnmJTxDOB/Z+U3TrB3lRDWB7Oy1VV7W6J7 q6kiK+GK74dq7nvgq2kB7zkarfbtcTHFEoqD+UZruCavr8zpzP9OUWgp+GcfNQSyL/84XItCGvVRV mrZL3CPg080a959ZT6rgyD/dTVTjkA0+T3nd5iV+bQUS3XkYoPjR33EDFx6mtgCJ/YqHAs7OBC7sR ThrRR+lg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red 
Hat Linux)) id 1nmJkF-00Gq7q-T9; Wed, 04 May 2022 18:29:03 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 17/26] mm/swap: Add folio_throttle_swaprate Date: Wed, 4 May 2022 19:28:48 +0100 Message-Id: <20220504182857.4013401-18-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: C95BC1A008C X-Stat-Signature: 7xygkci3yirer1nh9zuh3gh46yx1oft7 Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=hZY0WpxV; dmarc=none; spf=none (imf19.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspam-User: X-HE-Tag: 1651688938-643750 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The only use of the page argument to cgroup_throttle_swaprate() is to get the node ID, and this will be the same for all pages in the folio, so just pass in the first page of the folio. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- include/linux/swap.h | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/include/linux/swap.h b/include/linux/swap.h index f87bb495e482..96f7129f6ee2 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -736,6 +736,10 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask) { } #endif +static inline void folio_throttle_swaprate(struct folio *folio, gfp_t gfp) +{ + cgroup_throttle_swaprate(&folio->page, gfp); +} #ifdef CONFIG_MEMCG_SWAP void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry); From patchwork Wed May 4 18:28:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838367 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE601C433F5 for ; Wed, 4 May 2022 18:29:25 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4AC5B6B0092; Wed, 4 May 2022 14:29:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D7FB86B0081; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 76A866B0096; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id DAA186B0081 for ; Wed, 4 May 2022 14:29:05 -0400 (EDT) Received: from smtpin29.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay12.hostedemail.com (Postfix) with ESMTP id AC2B31211BC for ; Wed, 4 May 2022 18:29:05 +0000 (UTC) X-FDA: 79428897450.29.8100EC6 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf12.hostedemail.com (Postfix) with ESMTP id 0898140025 for ; Wed, 4 May 2022 18:28:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=+Vj1x48r9VGeyqIEm3n/pGD8bysQzfubSyQVCu6W4/s=; b=TCjwIlpvDQ9Rl+HOircWfELOd4 bFPZ7wwoYHEjfS/O8lJeB8JAIsl/Od9nGbp5x8AZ9SnwO9xDC+eFWD3CEGAIHUiaxT7O9eYMjrJ5V xsjW3xAqdCOCJOsLwBd4RgE0P/IiyhOYWFUxE9ECAiahPAPmWloSpm7YNE7EZtVkJ+DW6hwcHOmoZ Qo9RdQoX1X8bhrpE3Yo8FscaWbQghTN13qprbhckaGSnnC2eHykQl7GNwOHCC8kLARnGtKTAlS4vS u0LYoCAnuJtxnXqJPFn8nmWfTKRYzK84Kj8qC5LCZGEK18K/XJ0O4Ax6uEQFD15TDs42WgAyOKm9R PIMydsTQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkG-00Gq88-2X; Wed, 04 May 2022 18:29:04 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 18/26] mm/shmem: Convert shmem_add_to_page_cache to take a folio Date: Wed, 4 May 2022 19:28:49 +0100 Message-Id: <20220504182857.4013401-19-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=TCjwIlpv; spf=none (imf12.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspam-User: X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 0898140025 X-Stat-Signature: zkgbnd1t4z9z1mo4h35ot44ssst1yn1r X-HE-Tag: 1651688928-130203 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Shrinks shmem_add_to_page_cache() by 16 bytes. All the callers grow, but this is temporary as they will all be converted to folios soon. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/shmem.c | 57 +++++++++++++++++++++++++++++------------------------- 1 file changed, 31 insertions(+), 26 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index 3461bdec6b38..f77456aaae96 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -695,36 +695,35 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo, /* * Like add_to_page_cache_locked, but error if expected item has gone. 
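(A sketch, not part of the diff, of the transitional caller pattern this patch uses further down: callers that still hold a struct page bridge to the new folio-taking signature with page_folio() until they are themselves converted later in the series.)

	struct folio *folio = page_folio(page);
	error = shmem_add_to_page_cache(folio, mapping, index,
					swp_to_radix_entry(swap), gfp, charge_mm);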
*/ -static int shmem_add_to_page_cache(struct page *page, +static int shmem_add_to_page_cache(struct folio *folio, struct address_space *mapping, pgoff_t index, void *expected, gfp_t gfp, struct mm_struct *charge_mm) { - XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page)); - unsigned long nr = compound_nr(page); + XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio)); + long nr = folio_nr_pages(folio); int error; - VM_BUG_ON_PAGE(PageTail(page), page); - VM_BUG_ON_PAGE(index != round_down(index, nr), page); - VM_BUG_ON_PAGE(!PageLocked(page), page); - VM_BUG_ON_PAGE(!PageSwapBacked(page), page); - VM_BUG_ON(expected && PageTransHuge(page)); + VM_BUG_ON_FOLIO(index != round_down(index, nr), folio); + VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); + VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio); + VM_BUG_ON(expected && folio_test_large(folio)); - page_ref_add(page, nr); - page->mapping = mapping; - page->index = index; + folio_ref_add(folio, nr); + folio->mapping = mapping; + folio->index = index; - if (!PageSwapCache(page)) { - error = mem_cgroup_charge(page_folio(page), charge_mm, gfp); + if (!folio_test_swapcache(folio)) { + error = mem_cgroup_charge(folio, charge_mm, gfp); if (error) { - if (PageTransHuge(page)) { + if (folio_test_pmd_mappable(folio)) { count_vm_event(THP_FILE_FALLBACK); count_vm_event(THP_FILE_FALLBACK_CHARGE); } goto error; } } - cgroup_throttle_swaprate(page, gfp); + folio_throttle_swaprate(folio, gfp); do { xas_lock_irq(&xas); @@ -736,16 +735,16 @@ static int shmem_add_to_page_cache(struct page *page, xas_set_err(&xas, -EEXIST); goto unlock; } - xas_store(&xas, page); + xas_store(&xas, folio); if (xas_error(&xas)) goto unlock; - if (PageTransHuge(page)) { + if (folio_test_pmd_mappable(folio)) { count_vm_event(THP_FILE_ALLOC); - __mod_lruvec_page_state(page, NR_SHMEM_THPS, nr); + __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr); } mapping->nrpages += nr; - __mod_lruvec_page_state(page, NR_FILE_PAGES, nr); - __mod_lruvec_page_state(page, NR_SHMEM, nr); + __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr); + __lruvec_stat_mod_folio(folio, NR_SHMEM, nr); unlock: xas_unlock_irq(&xas); } while (xas_nomem(&xas, gfp)); @@ -757,8 +756,8 @@ static int shmem_add_to_page_cache(struct page *page, return 0; error: - page->mapping = NULL; - page_ref_sub(page, nr); + folio->mapping = NULL; + folio_ref_sub(folio, nr); return error; } @@ -1690,7 +1689,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, struct address_space *mapping = inode->i_mapping; struct shmem_inode_info *info = SHMEM_I(inode); struct mm_struct *charge_mm = vma ? 
vma->vm_mm : NULL; - struct page *page; + struct page *page = NULL; + struct folio *folio; swp_entry_t swap; int error; @@ -1740,7 +1740,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, goto failed; } - error = shmem_add_to_page_cache(page, mapping, index, + folio = page_folio(page); + error = shmem_add_to_page_cache(folio, mapping, index, swp_to_radix_entry(swap), gfp, charge_mm); if (error) @@ -1791,6 +1792,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, struct shmem_inode_info *info = SHMEM_I(inode); struct shmem_sb_info *sbinfo; struct mm_struct *charge_mm; + struct folio *folio; struct page *page; pgoff_t hindex = index; gfp_t huge_gfp; @@ -1905,7 +1907,8 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, if (sgp == SGP_WRITE) __SetPageReferenced(page); - error = shmem_add_to_page_cache(page, mapping, hindex, + folio = page_folio(page); + error = shmem_add_to_page_cache(folio, mapping, hindex, NULL, gfp & GFP_RECLAIM_MASK, charge_mm); if (error) @@ -2327,6 +2330,7 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, gfp_t gfp = mapping_gfp_mask(mapping); pgoff_t pgoff = linear_page_index(dst_vma, dst_addr); void *page_kaddr; + struct folio *folio; struct page *page; int ret; pgoff_t max_off; @@ -2385,7 +2389,8 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, if (unlikely(pgoff >= max_off)) goto out_release; - ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL, + folio = page_folio(page); + ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL, gfp & GFP_RECLAIM_MASK, dst_mm); if (ret) goto out_release; From patchwork Wed May 4 18:28:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838368 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id EFCDCC433FE for ; Wed, 4 May 2022 18:29:26 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7E7366B0096; Wed, 4 May 2022 14:29:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1F5286B009B; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AF56B6B0092; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 318CB6B0087 for ; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 1421F1656 for ; Wed, 4 May 2022 18:29:06 +0000 (UTC) X-FDA: 79428897492.12.1C1872D Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf19.hostedemail.com (Postfix) with ESMTP id 6E9921A008B for ; Wed, 4 May 2022 18:28:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=uabWLQ5m8/94bMf9QUU5RwciH7K0ufNKxK4Ld1Z9V+4=; b=eOTFXQQwjIGVJgswvkNK8Ji6kA 4uToUC2z/hSAUSuafgIFM6qgaIGXcnn4Cw1Kg31Jt3ImlxkzxNVnXlh5JhWiOH9vmWeQO7ktEJVAk vIhT6m1Z6UlMUDo1SkHW3rI3OlEKECMgLeYk8XBsPJOBLv7atCe1FopRLeqDqGanJ83V4+MJ4N0Ml 
IfK58TdSYJv6eA3wB+wH6G38R0Uwxrlo4kbYszOWelbpu3RPRJFC3cS7LdWx51zDN4+dBuW0R2wg7 oceN5/GmKG38W8dl0X9XwH1dNg9hsweSGe/K+NXxtu+4cNuoXWbid1jjsnTBduVvLHKzQg1BxtLxP DlvblsUA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkG-00Gq8J-7C; Wed, 04 May 2022 18:29:04 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 19/26] mm/shmem: Turn shmem_should_replace_page into shmem_should_replace_folio Date: Wed, 4 May 2022 19:28:50 +0100 Message-Id: <20220504182857.4013401-20-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 6E9921A008B X-Stat-Signature: 8odzmtq744ngssidiuu3ekyt73pamkhi Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=eOTFXQQw; dmarc=none; spf=none (imf19.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspam-User: X-HE-Tag: 1651688939-260744 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is a straightforward conversion. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/shmem.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index f77456aaae96..7946ccbc60bf 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1600,9 +1600,9 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp, * NUMA mempolicy, and applied also to anonymous pages in do_swap_page(); * but for now it is a simple matter of zone. 
*/ -static bool shmem_should_replace_page(struct page *page, gfp_t gfp) +static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp) { - return page_zonenum(page) > gfp_zone(gfp); + return folio_zonenum(folio) > gfp_zone(gfp); } static int shmem_replace_page(struct page **pagep, gfp_t gfp, @@ -1734,13 +1734,13 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, */ arch_swap_restore(swap, page); - if (shmem_should_replace_page(page, gfp)) { + folio = page_folio(page); + if (shmem_should_replace_folio(folio, gfp)) { error = shmem_replace_page(&page, gfp, info, index); if (error) goto failed; } - folio = page_folio(page); error = shmem_add_to_page_cache(folio, mapping, index, swp_to_radix_entry(swap), gfp, charge_mm); From patchwork Wed May 4 18:28:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838370 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id AB4BFC433EF for ; Wed, 4 May 2022 18:29:29 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 03A2C6B009A; Wed, 4 May 2022 14:29:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7DA0F6B0085; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0B18B6B0096; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 7138B6B0095 for ; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id 46CA26127B for ; Wed, 4 May 2022 18:29:06 +0000 (UTC) X-FDA: 79428897492.14.C8E49F8 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf22.hostedemail.com (Postfix) with ESMTP id 97C85C0096 for ; Wed, 4 May 2022 18:29:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=3pwir8YI2R1oRmHK3o1ZbbZuL5pCSoelPTjYvCw/rxQ=; b=hRm9ox9sHZnS+Lw4I7WgqdgYJ5 u77ckq9tCKkR6GjxtSEIixDXSUhMAjsbiwDS8anyPHofO4RaMVAxMlZqLLUCQ4j78Reb4XG1+tcjP Qn3Cm9lx/xmWki+i8nrL3mpCtsh/G3uA5bda/804OA3qoTzeoGzvWHOuSOY3BXRYlmoqKehWct4Cg myljfBfpuB722hC92IwOZB5B2tevwMP7dUBcE94I5V6O6sgftK9yT/OSE2fbj96wa0+Xql532bT1g 4/VoyfQjE3el6+97kzfSdOCppa371vdqnTb3sGm5J3jfySrJrR6KyrDtlI5VoqT4ND/Gp7rVxxwkW z3siPs5w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkG-00Gq8P-Bi; Wed, 04 May 2022 18:29:04 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 20/26] mm/shmem: Add shmem_alloc_folio() Date: Wed, 4 May 2022 19:28:51 +0100 Message-Id: <20220504182857.4013401-21-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 97C85C0096 X-Stat-Signature: 
yu4hzw1bg7xq7or889ax4dfgytmhbtp1 Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=hRm9ox9s; spf=none (imf22.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1651688944-807200 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Call vma_alloc_folio() directly instead of alloc_page_vma(). Add a shmem_alloc_page() wrapper to avoid changing the callers. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/shmem.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index 7946ccbc60bf..36a4d7f07e0b 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1543,17 +1543,23 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp, return &folio->page; } -static struct page *shmem_alloc_page(gfp_t gfp, +static struct folio *shmem_alloc_folio(gfp_t gfp, struct shmem_inode_info *info, pgoff_t index) { struct vm_area_struct pvma; - struct page *page; + struct folio *folio; shmem_pseudo_vma_init(&pvma, info, index); - page = alloc_page_vma(gfp, &pvma, 0); + folio = vma_alloc_folio(gfp, 0, &pvma, 0, false); shmem_pseudo_vma_destroy(&pvma); - return page; + return folio; +} + +static struct page *shmem_alloc_page(gfp_t gfp, + struct shmem_inode_info *info, pgoff_t index) +{ + return &shmem_alloc_folio(gfp, info, index)->page; } static struct page *shmem_alloc_and_acct_page(gfp_t gfp, From patchwork Wed May 4 18:28:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61336C4167B for ; Wed, 4 May 2022 18:29:28 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C018B6B0099; Wed, 4 May 2022 14:29:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 40CA36B009A; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DF3256B0087; Wed, 4 May 2022 14:29:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 984416B0085 for ; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 6B7142B316 for ; Wed, 4 May 2022 18:29:06 +0000 (UTC) X-FDA: 79428897492.22.2E29682 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf02.hostedemail.com (Postfix) with ESMTP id CE4D480097 for ; Wed, 4 May 2022 18:29:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=KCGFMWstlwCQri3/MYlLlNDeEw20HeCV8bFed16Pygc=; b=vhftwwRUBge/q+QcD7LI9Jnr7v h/f1whj9goTzPf2iJglvZK7LLkOflrUG+InXqmOt6y4HJeHXni3P5hpjEMX8bvG2660uB2QGK0+t0 
eK3xGZ3Ed91h1XiACy5aOCYQO1Gchwk5YjbVvGJL2GHJiyhfuYzC6+opxnUoG5w5j3T7OnBSh8bH3 lw4lP89z4NCkFtvA1BS1kR4AKIGWcZ2JxKEQKPgutzavHv/f861xW6VfAxoEM6q3a+sz2CBxXCBos tMePoOc6SgrFKKCY+PiU4mxzyo0JjWHyQtU/0KzqcvNEQx9y07lwbhHsFWUMXyqHFzed/kNWwFfnq rTYsy1nA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkG-00Gq8f-HN; Wed, 04 May 2022 18:29:04 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 21/26] mm/shmem: Convert shmem_alloc_and_acct_page to use a folio Date: Wed, 4 May 2022 19:28:52 +0100 Message-Id: <20220504182857.4013401-22-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=vhftwwRU; dmarc=none; spf=none (imf02.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: CE4D480097 X-Rspam-User: X-Stat-Signature: tjywc8kwtoeegod5krfoo4t44u1rpsoh X-HE-Tag: 1651688940-538907 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert shmem_alloc_hugepage() to return the folio that it uses and use a folio throughout shmem_alloc_and_acct_page(). Continue to return a page from shmem_alloc_and_acct_page() for now. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/shmem.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index 36a4d7f07e0b..352137f0090a 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1522,7 +1522,7 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp) return result; } -static struct page *shmem_alloc_hugepage(gfp_t gfp, +static struct folio *shmem_alloc_hugefolio(gfp_t gfp, struct shmem_inode_info *info, pgoff_t index) { struct vm_area_struct pvma; @@ -1540,7 +1540,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp, shmem_pseudo_vma_destroy(&pvma); if (!folio) count_vm_event(THP_FILE_FALLBACK); - return &folio->page; + return folio; } static struct folio *shmem_alloc_folio(gfp_t gfp, @@ -1567,7 +1567,7 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp, pgoff_t index, bool huge) { struct shmem_inode_info *info = SHMEM_I(inode); - struct page *page; + struct folio *folio; int nr; int err = -ENOSPC; @@ -1579,13 +1579,13 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp, goto failed; if (huge) - page = shmem_alloc_hugepage(gfp, info, index); + folio = shmem_alloc_hugefolio(gfp, info, index); else - page = shmem_alloc_page(gfp, info, index); - if (page) { - __SetPageLocked(page); - __SetPageSwapBacked(page); - return page; + folio = shmem_alloc_folio(gfp, info, index); + if (folio) { + __folio_set_locked(folio); + __folio_set_swapbacked(folio); + return &folio->page; } err = -ENOMEM; From patchwork Wed May 4 18:28:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838375 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org 
(Postfix) with ESMTP id 3B384C433EF for ; Wed, 4 May 2022 18:29:36 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DD1516B0095; Wed, 4 May 2022 14:29:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4092A6B009D; Wed, 4 May 2022 14:29:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E48076B0087; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 313786B009D for ; Wed, 4 May 2022 14:29:07 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id DD68A215C2 for ; Wed, 4 May 2022 18:29:06 +0000 (UTC) X-FDA: 79428897492.14.050D5F7 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf19.hostedemail.com (Postfix) with ESMTP id 0EA2E1A0087 for ; Wed, 4 May 2022 18:28:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=gw6nQrQ8gssQOmEzSUA8G2WVRiC+SP2HO+dZUCZB0xQ=; b=FGXFgN7FB68+dJjq58wBEqxZgU vfChp0WlSevQ2I8GQ2BjdT082nNdpC92Vi6IkOk0XA1BBg2V2gg3O+jTvLyoqL6Hfa5vid/D7qGu/ vI6FwyY5MmRIGXD8/mhE3vWK8bwFOpKFL5uWyhKOTml066oF4crMzWQA5FyCuaoztadr+v/RgnirG /NVGzUTF99rR6UpIie6p55lDbndw4v4VgHwQl7eu18JuVpZoL9pN374HH16gO5YUMKJ0mN1AJ2ziO 8+x5Bs2hiiufuj1Py3g+znm4jXfuOSh34g/jLv7qStGZD/SonaqZc01ktph0AJJVys1Hjn+qPjbhg VDcEi1cw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkG-00Gq8r-N7; Wed, 04 May 2022 18:29:04 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 22/26] mm/shmem: Convert shmem_getpage_gfp to use a folio Date: Wed, 4 May 2022 19:28:53 +0100 Message-Id: <20220504182857.4013401-23-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 0EA2E1A0087 X-Stat-Signature: q1yjk44b14gqgnng98qe345m19kip4dm Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=FGXFgN7F; dmarc=none; spf=none (imf19.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspam-User: X-HE-Tag: 1651688939-20566 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Rename shmem_alloc_and_acct_page() to shmem_alloc_and_acct_folio() and have it return a folio, then use a folio throughout shmem_getpage_gfp(). It continues to return a struct page.
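As an aside, not part of the patch itself: one detail in the hunk below is that the old PageTransHuge() test around hindex collapses into a single hindex = round_down(index, folio_nr_pages(folio)), because an order-0 folio rounds down by 1 (a no-op) while a PMD-sized folio rounds down by HPAGE_PMD_NR, exactly as before. The stand-alone user-space sketch below only illustrates that arithmetic; round_down_pow2() stands in for the kernel's round_down(), and the HPAGE_PMD_NR value of 512 assumes 4KB base pages with 2MB PMDs.

/*
 * Stand-alone sketch (not kernel code): why one round_down() call can
 * replace the old PageTransHuge() branch when computing hindex.
 */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_NR 512UL	/* assumed: 2MB PMD / 4KB base pages */

/* stand-in for the kernel's round_down() for power-of-two alignments */
static unsigned long round_down_pow2(unsigned long x, unsigned long align)
{
	return x & ~(align - 1);
}

int main(void)
{
	unsigned long index = 1234;

	/* order-0 folio: folio_nr_pages() == 1, rounding is a no-op */
	assert(round_down_pow2(index, 1) == 1234);

	/* PMD-sized folio: folio_nr_pages() == HPAGE_PMD_NR, same as the old branch */
	assert(round_down_pow2(index, HPAGE_PMD_NR) == 1024);

	printf("one expression covers both folio sizes\n");
	return 0;
}
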
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/shmem.c | 95 ++++++++++++++++++++++++------------------------------ 1 file changed, 43 insertions(+), 52 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index 352137f0090a..236641c346e8 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1562,8 +1562,7 @@ static struct page *shmem_alloc_page(gfp_t gfp, return &shmem_alloc_folio(gfp, info, index)->page; } -static struct page *shmem_alloc_and_acct_page(gfp_t gfp, - struct inode *inode, +static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode, pgoff_t index, bool huge) { struct shmem_inode_info *info = SHMEM_I(inode); @@ -1585,7 +1584,7 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp, if (folio) { __folio_set_locked(folio); __folio_set_swapbacked(folio); - return &folio->page; + return folio; } err = -ENOMEM; @@ -1799,7 +1798,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, struct shmem_sb_info *sbinfo; struct mm_struct *charge_mm; struct folio *folio; - struct page *page; pgoff_t hindex = index; gfp_t huge_gfp; int error; @@ -1817,19 +1815,18 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, sbinfo = SHMEM_SB(inode->i_sb); charge_mm = vma ? vma->vm_mm : NULL; - page = pagecache_get_page(mapping, index, - FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0); - - if (page && vma && userfaultfd_minor(vma)) { - if (!xa_is_value(page)) { - unlock_page(page); - put_page(page); + folio = __filemap_get_folio(mapping, index, FGP_ENTRY | FGP_LOCK, 0); + if (folio && vma && userfaultfd_minor(vma)) { + if (!xa_is_value(folio)) { + folio_unlock(folio); + folio_put(folio); } *fault_type = handle_userfault(vmf, VM_UFFD_MINOR); return 0; } - if (xa_is_value(page)) { + if (xa_is_value(folio)) { + struct page *page = &folio->page; error = shmem_swapin_page(inode, index, &page, sgp, gfp, vma, fault_type); if (error == -EEXIST) @@ -1839,17 +1836,17 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, return error; } - if (page) { - hindex = page->index; + if (folio) { + hindex = folio->index; if (sgp == SGP_WRITE) - mark_page_accessed(page); - if (PageUptodate(page)) + folio_mark_accessed(folio); + if (folio_test_uptodate(folio)) goto out; /* fallocated page */ if (sgp != SGP_READ) goto clear; - unlock_page(page); - put_page(page); + folio_unlock(folio); + folio_put(folio); } /* @@ -1876,17 +1873,16 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, huge_gfp = vma_thp_gfp_mask(vma); huge_gfp = limit_gfp_mask(huge_gfp, gfp); - page = shmem_alloc_and_acct_page(huge_gfp, inode, index, true); - if (IS_ERR(page)) { + folio = shmem_alloc_and_acct_folio(huge_gfp, inode, index, true); + if (IS_ERR(folio)) { alloc_nohuge: - page = shmem_alloc_and_acct_page(gfp, inode, - index, false); + folio = shmem_alloc_and_acct_folio(gfp, inode, index, false); } - if (IS_ERR(page)) { + if (IS_ERR(folio)) { int retry = 5; - error = PTR_ERR(page); - page = NULL; + error = PTR_ERR(folio); + folio = NULL; if (error != -ENOSPC) goto unlock; /* @@ -1905,30 +1901,26 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, goto unlock; } - if (PageTransHuge(page)) - hindex = round_down(index, HPAGE_PMD_NR); - else - hindex = index; + hindex = round_down(index, folio_nr_pages(folio)); if (sgp == SGP_WRITE) - __SetPageReferenced(page); + __folio_set_referenced(folio); - folio = page_folio(page); error = shmem_add_to_page_cache(folio, mapping, hindex, NULL, gfp & GFP_RECLAIM_MASK, charge_mm); if (error) goto unacct; - 
lru_cache_add(page); + folio_add_lru(folio); spin_lock_irq(&info->lock); - info->alloced += compound_nr(page); - inode->i_blocks += BLOCKS_PER_PAGE << compound_order(page); + info->alloced += folio_nr_pages(folio); + inode->i_blocks += BLOCKS_PER_PAGE << folio_order(folio); shmem_recalc_inode(inode); spin_unlock_irq(&info->lock); alloced = true; - if (PageTransHuge(page) && + if (folio_test_pmd_mappable(folio) && DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) < hindex + HPAGE_PMD_NR - 1) { /* @@ -1959,22 +1951,21 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, * but SGP_FALLOC on a page fallocated earlier must initialize * it now, lest undo on failure cancel our earlier guarantee. */ - if (sgp != SGP_WRITE && !PageUptodate(page)) { - int i; + if (sgp != SGP_WRITE && !folio_test_uptodate(folio)) { + long i, n = folio_nr_pages(folio); - for (i = 0; i < compound_nr(page); i++) { - clear_highpage(page + i); - flush_dcache_page(page + i); - } - SetPageUptodate(page); + for (i = 0; i < n; i++) + clear_highpage(folio_page(folio, i)); + flush_dcache_folio(folio); + folio_mark_uptodate(folio); } /* Perhaps the file has been truncated since we checked */ if (sgp <= SGP_CACHE && ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode)) { if (alloced) { - ClearPageDirty(page); - delete_from_page_cache(page); + folio_clear_dirty(folio); + filemap_remove_folio(folio); spin_lock_irq(&info->lock); shmem_recalc_inode(inode); spin_unlock_irq(&info->lock); @@ -1983,24 +1974,24 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, goto unlock; } out: - *pagep = page + index - hindex; + *pagep = folio_page(folio, index - hindex); return 0; /* * Error recovery. */ unacct: - shmem_inode_unacct_blocks(inode, compound_nr(page)); + shmem_inode_unacct_blocks(inode, folio_nr_pages(folio)); - if (PageTransHuge(page)) { - unlock_page(page); - put_page(page); + if (folio_test_large(folio)) { + folio_unlock(folio); + folio_put(folio); goto alloc_nohuge; } unlock: - if (page) { - unlock_page(page); - put_page(page); + if (folio) { + folio_unlock(folio); + folio_put(folio); } if (error == -ENOSPC && !once++) { spin_lock_irq(&info->lock); From patchwork Wed May 4 18:28:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838372 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2AF2FC433F5 for ; Wed, 4 May 2022 18:29:32 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5D07E6B0081; Wed, 4 May 2022 14:29:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BD0016B008A; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6A5476B009C; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id F14106B0085 for ; Wed, 4 May 2022 14:29:06 -0400 (EDT) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id CDBC01653 for ; Wed, 4 May 2022 18:29:06 +0000 (UTC) X-FDA: 79428897492.19.2FA758B Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id 
56639C0080 for ; Wed, 4 May 2022 18:28:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=DiPrvSpg+to434uQqC3ER0kQQwijw/k13y162BktD/U=; b=HMEeQcu5gWw+RXcqBZIivDudqD DS7S/1+41/US8r/RLbo7xtGYMev5PRnf/ALc54hxbFKiNSPRggL91FTHUWU4huBMXKEW3GCFqHP2S a6ahH7jO+XQevJ8XVCEUFAnxP0dTadEiuk4nQ7tTUR7V3NmXHap5o1TETZLjNWTaP3UlFIwd0B4D2 JaBrdm9VE7KLalLRL77mJWqa6ve7NCo28WOOx7xlYdJHznYkjEmCu6rwgsxoWDyj2LLVzb+EQY4hg CX+M3uLcK1obORE0r4TMiRCdawdr7Nxd9SZg65yqxoQfMVRBy4jna+7Jo/aKlR7E9svAWlho3BZv+ 88gyL5ww==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkG-00Gq96-S8; Wed, 04 May 2022 18:29:04 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 23/26] mm/shmem: Convert shmem_swapin_page() to shmem_swapin_folio() Date: Wed, 4 May 2022 19:28:54 +0100 Message-Id: <20220504182857.4013401-24-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Stat-Signature: zf6r5r7bjz8sexeto1mefmkrehxoobh1 X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 56639C0080 X-Rspam-User: Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=HMEeQcu5; dmarc=none; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1651688932-577874 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: shmem_swapin_page() only brings in order-0 pages, which are folios by definition. 
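As an aside, not part of the patch itself: the shmem_find_swap_entries() hunk below also switches from a fixed-size pagevec with explicit nr_entries bookkeeping to a folio_batch, where the add operation itself reports whether there is still room, so the caller simply stops when the batch fills. The stand-alone sketch below models that pattern only; the batch type, its capacity of 15 (mirroring PAGEVEC_SIZE) and the helper names are illustrative assumptions, not the kernel API.

/*
 * Stand-alone sketch (not kernel code): a batch whose add operation
 * reports "still has room", removing the caller's entry counting.
 */
#include <stdbool.h>
#include <stdio.h>

#define BATCH_CAPACITY 15	/* assumed capacity, mirrors PAGEVEC_SIZE */

struct batch {
	unsigned int nr;
	void *items[BATCH_CAPACITY];
};

static void batch_init(struct batch *b)
{
	b->nr = 0;
}

/* add an item; returns false once the batch is full, so callers just stop */
static bool batch_add(struct batch *b, void *item)
{
	b->items[b->nr++] = item;
	return b->nr < BATCH_CAPACITY;
}

int main(void)
{
	struct batch b;
	int entries[20];

	batch_init(&b);
	for (int i = 0; i < 20; i++) {
		if (!batch_add(&b, &entries[i]))
			break;	/* the scan terminates itself when the batch fills */
	}
	printf("collected %u entries\n", b.nr);	/* prints 15 */
	return 0;
}
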
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- arch/arm64/include/asm/pgtable.h | 6 +- include/linux/pgtable.h | 2 +- mm/shmem.c | 110 ++++++++++++++----------------- 3 files changed, 55 insertions(+), 63 deletions(-) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index dff2b483ea50..27cb6a355fb0 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -964,10 +964,10 @@ static inline void arch_swap_invalidate_area(int type) } #define __HAVE_ARCH_SWAP_RESTORE -static inline void arch_swap_restore(swp_entry_t entry, struct page *page) +static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) { - if (system_supports_mte() && mte_restore_tags(entry, page)) - set_bit(PG_mte_tagged, &page->flags); + if (system_supports_mte() && mte_restore_tags(entry, &folio->page)) + set_bit(PG_mte_tagged, &folio->flags); } #endif /* CONFIG_ARM64_MTE */ diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index f4f4077b97aa..a1c44b015463 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -738,7 +738,7 @@ static inline void arch_swap_invalidate_area(int type) #endif #ifndef __HAVE_ARCH_SWAP_RESTORE -static inline void arch_swap_restore(swp_entry_t entry, struct page *page) +static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) { } #endif diff --git a/mm/shmem.c b/mm/shmem.c index 236641c346e8..b0bdfb8d4c15 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -134,8 +134,8 @@ static unsigned long shmem_default_max_inodes(void) } #endif -static int shmem_swapin_page(struct inode *inode, pgoff_t index, - struct page **pagep, enum sgp_type sgp, +static int shmem_swapin_folio(struct inode *inode, pgoff_t index, + struct folio **foliop, enum sgp_type sgp, gfp_t gfp, struct vm_area_struct *vma, vm_fault_t *fault_type); static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, @@ -1158,69 +1158,64 @@ static void shmem_evict_inode(struct inode *inode) } static int shmem_find_swap_entries(struct address_space *mapping, - pgoff_t start, unsigned int nr_entries, - struct page **entries, pgoff_t *indices, - unsigned int type) + pgoff_t start, struct folio_batch *fbatch, + pgoff_t *indices, unsigned int type) { XA_STATE(xas, &mapping->i_pages, start); - struct page *page; + struct folio *folio; swp_entry_t entry; unsigned int ret = 0; - if (!nr_entries) - return 0; - rcu_read_lock(); - xas_for_each(&xas, page, ULONG_MAX) { - if (xas_retry(&xas, page)) + xas_for_each(&xas, folio, ULONG_MAX) { + if (xas_retry(&xas, folio)) continue; - if (!xa_is_value(page)) + if (!xa_is_value(folio)) continue; - entry = radix_to_swp_entry(page); + entry = radix_to_swp_entry(folio); if (swp_type(entry) != type) continue; indices[ret] = xas.xa_index; - entries[ret] = page; + if (!folio_batch_add(fbatch, folio)) + break; if (need_resched()) { xas_pause(&xas); cond_resched_rcu(); } - if (++ret == nr_entries) - break; } rcu_read_unlock(); - return ret; + return xas.xa_index; } /* * Move the swapped pages for an inode to page cache. Returns the count * of pages swapped in, or the error in case of failure. 
*/ -static int shmem_unuse_swap_entries(struct inode *inode, struct pagevec pvec, - pgoff_t *indices) +static int shmem_unuse_swap_entries(struct inode *inode, + struct folio_batch *fbatch, pgoff_t *indices) { int i = 0; int ret = 0; int error = 0; struct address_space *mapping = inode->i_mapping; - for (i = 0; i < pvec.nr; i++) { - struct page *page = pvec.pages[i]; + for (i = 0; i < folio_batch_count(fbatch); i++) { + struct folio *folio = fbatch->folios[i]; - if (!xa_is_value(page)) + if (!xa_is_value(folio)) continue; - error = shmem_swapin_page(inode, indices[i], - &page, SGP_CACHE, + error = shmem_swapin_folio(inode, indices[i], + &folio, SGP_CACHE, mapping_gfp_mask(mapping), NULL, NULL); if (error == 0) { - unlock_page(page); - put_page(page); + folio_unlock(folio); + folio_put(folio); ret++; } if (error == -ENOMEM) @@ -1237,26 +1232,23 @@ static int shmem_unuse_inode(struct inode *inode, unsigned int type) { struct address_space *mapping = inode->i_mapping; pgoff_t start = 0; - struct pagevec pvec; + struct folio_batch fbatch; pgoff_t indices[PAGEVEC_SIZE]; int ret = 0; - pagevec_init(&pvec); do { - unsigned int nr_entries = PAGEVEC_SIZE; - - pvec.nr = shmem_find_swap_entries(mapping, start, nr_entries, - pvec.pages, indices, type); - if (pvec.nr == 0) { + folio_batch_init(&fbatch); + shmem_find_swap_entries(mapping, start, &fbatch, indices, type); + if (folio_batch_count(&fbatch) == 0) { ret = 0; break; } - ret = shmem_unuse_swap_entries(inode, pvec, indices); + ret = shmem_unuse_swap_entries(inode, &fbatch, indices); if (ret < 0) break; - start = indices[pvec.nr - 1]; + start = indices[folio_batch_count(&fbatch) - 1]; } while (true); return ret; @@ -1686,22 +1678,22 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp, * Returns 0 and the page in pagep if success. On failure, returns the * error code and NULL in *pagep. */ -static int shmem_swapin_page(struct inode *inode, pgoff_t index, - struct page **pagep, enum sgp_type sgp, +static int shmem_swapin_folio(struct inode *inode, pgoff_t index, + struct folio **foliop, enum sgp_type sgp, gfp_t gfp, struct vm_area_struct *vma, vm_fault_t *fault_type) { struct address_space *mapping = inode->i_mapping; struct shmem_inode_info *info = SHMEM_I(inode); struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL; - struct page *page = NULL; - struct folio *folio; + struct page *page; + struct folio *folio = NULL; swp_entry_t swap; int error; - VM_BUG_ON(!*pagep || !xa_is_value(*pagep)); - swap = radix_to_swp_entry(*pagep); - *pagep = NULL; + VM_BUG_ON(!*foliop || !xa_is_value(*foliop)); + swap = radix_to_swp_entry(*foliop); + *foliop = NULL; /* Look it up and read it in.. */ page = lookup_swap_cache(swap, NULL, 0); @@ -1719,27 +1711,28 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, goto failed; } } + folio = page_folio(page); /* We have to do this with page locked to prevent races */ - lock_page(page); - if (!PageSwapCache(page) || page_private(page) != swap.val || + folio_lock(folio); + if (!folio_test_swapcache(folio) || + folio_swap_entry(folio).val != swap.val || !shmem_confirm_swap(mapping, index, swap)) { error = -EEXIST; goto unlock; } - if (!PageUptodate(page)) { + if (!folio_test_uptodate(folio)) { error = -EIO; goto failed; } - wait_on_page_writeback(page); + folio_wait_writeback(folio); /* * Some architectures may have to restore extra metadata to the - * physical page after reading from swap. + * folio after reading from swap. 
*/ - arch_swap_restore(swap, page); + arch_swap_restore(swap, folio); - folio = page_folio(page); if (shmem_should_replace_folio(folio, gfp)) { error = shmem_replace_page(&page, gfp, info, index); if (error) @@ -1758,21 +1751,21 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, spin_unlock_irq(&info->lock); if (sgp == SGP_WRITE) - mark_page_accessed(page); + folio_mark_accessed(folio); - delete_from_swap_cache(page); - set_page_dirty(page); + delete_from_swap_cache(&folio->page); + folio_mark_dirty(folio); swap_free(swap); - *pagep = page; + *foliop = folio; return 0; failed: if (!shmem_confirm_swap(mapping, index, swap)) error = -EEXIST; unlock: - if (page) { - unlock_page(page); - put_page(page); + if (folio) { + folio_unlock(folio); + folio_put(folio); } return error; @@ -1826,13 +1819,12 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, } if (xa_is_value(folio)) { - struct page *page = &folio->page; - error = shmem_swapin_page(inode, index, &page, + error = shmem_swapin_folio(inode, index, &folio, sgp, gfp, vma, fault_type); if (error == -EEXIST) goto repeat; - *pagep = page; + *pagep = &folio->page; return error; } From patchwork Wed May 4 18:28:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7AB99C433EF for ; Wed, 4 May 2022 18:29:33 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8FCE86B008A; Wed, 4 May 2022 14:29:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 23A376B009B; Wed, 4 May 2022 14:29:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B5C836B0098; Wed, 4 May 2022 14:29:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 15DC86B008A for ; Wed, 4 May 2022 14:29:07 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id EBF6761730 for ; Wed, 4 May 2022 18:29:06 +0000 (UTC) X-FDA: 79428897492.05.F62A43D Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf30.hostedemail.com (Postfix) with ESMTP id 88CF080090 for ; Wed, 4 May 2022 18:28:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=LAMyJ4IE7ii6aITIzT2pAymK7IiskbwFe401xxeoRNk=; b=pqX9zb9DciMFfHF9PiapWB6vUl pdpt6cxf27ZegFJRKgQ3Z5MvTGWkRNPd0S3L3DtwWErS+YOJp8F4/XeqQVxkvru3jjXs2pE0UxdDE Z6HVxRlRZvpKDrgBMviAmUHCOVVp4pXycZLxpoIk4Opeb9eO1ZsxRZKU9aLRtLzgSOuwaF+UoGe52 2fP/B5HWAToXsnGPehe9Fs0omMrGosPFfrgkLHmqJ4DgRRhNEbltUKaj0N3IpG4xsXEqR/o7mw3o6 boPtsrEduuCZwVOw3lJMMe+NnUo2N7xuq+aSmPYAA7yVeBCopRxpiy7XYQDSdSuB+GF0LZkNTn4fY Z3VOS3AQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkH-00Gq9M-1N; Wed, 04 May 2022 18:29:05 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 
24/26] mm: Add folio_mapping_flags() Date: Wed, 4 May 2022 19:28:55 +0100 Message-Id: <20220504182857.4013401-25-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 88CF080090 X-Stat-Signature: 1x3aooz8nxinkfkjtqrsbdnac9i4r7ka Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=pqX9zb9D; dmarc=none; spf=none (imf30.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspam-User: X-HE-Tag: 1651688933-172086 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is the equivalent of PageMappingFlags and is needed for converting mm/migrate.c to folios. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- include/linux/page-flags.h | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 9d8eeaa67d05..6bfa9918321d 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -641,6 +641,11 @@ __PAGEFLAG(Reported, reported, PF_NO_COMPOUND) #define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE) #define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE) +static __always_inline bool folio_mapping_flags(struct folio *folio) +{ + return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0; +} + static __always_inline int PageMappingFlags(struct page *page) { return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0; From patchwork Wed May 4 18:28:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838376 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 947EDC433F5 for ; Wed, 4 May 2022 18:29:37 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 16DD66B009B; Wed, 4 May 2022 14:29:09 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 58ECD6B0087; Wed, 4 May 2022 14:29:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 282116B0098; Wed, 4 May 2022 14:29:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 3C9036B009E for ; Wed, 4 May 2022 14:29:07 -0400 (EDT) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 10D2926573 for ; Wed, 4 May 2022 18:29:07 +0000 (UTC) X-FDA: 79428897534.19.449F6B6 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf14.hostedemail.com (Postfix) with ESMTP id 11AC0100085 for ; Wed, 4 May 2022 18:29:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=nTEkyGMNV3YIsBhZJ3SG87n83LNasMy5ITvvN2uOSLM=; 
b=XHeL/IFPoKPKRXrq30EfqqF4m4 5EbzUbGNuP2d8gnsd5a5204y+jvEcKWZOXwGg71oR+6ftol6oDD1BI7TRH/mOjQHdh1dsi2oaaXvH iAWeBVDJKEPbQOyGzZWFaLopLPO3fiNNHR+OD08HVebl0QEbc5c65szYkPpbo+JCwzm5zPUZAhdcV E+psAz9wIEaWwS9vOIU/ZMxr6JNsMjo0u4OvTMOkpXHw+xHnJroofzyFUposXoHP8+ozbQwbOkwOI cTNFdJBwm20HhrDEQPuDTpOBNnDJAWEHJbVWCW5ylRcllYSMaSwGKXXAH+EYk1J8KviGrlzJ/qkhE 8Asqks8w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkH-00Gq9U-5S; Wed, 04 May 2022 18:29:05 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 25/26] mm: Add folio_test_movable() Date: Wed, 4 May 2022 19:28:56 +0100 Message-Id: <20220504182857.4013401-26-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Stat-Signature: yaz5xnem799win1gpspsrx8d6codoptm X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 11AC0100085 Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="XHeL/IFP"; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspam-User: X-HE-Tag: 1651688944-335127 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is the folio equivalent of PageMovable() which is needed to convert mm/migrate.c to folios. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- include/linux/migrate.h | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 90e75d5a54d6..8b9d1e3d243d 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -100,6 +100,11 @@ static inline void __ClearPageMovable(struct page *page) } #endif +static inline bool folio_test_movable(struct folio *folio) +{ + return PageMovable(&folio->page); +} + #ifdef CONFIG_NUMA_BALANCING extern int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma, int node); From patchwork Wed May 4 18:28:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12838377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0D99C433FE for ; Wed, 4 May 2022 18:29:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3C4746B0087; Wed, 4 May 2022 14:29:09 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9E4DC6B009E; Wed, 4 May 2022 14:29:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5265E6B009C; Wed, 4 May 2022 14:29:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 5EEC96B009F for ; Wed, 4 May 2022 14:29:07 -0400 (EDT) Received: from smtpin25.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 3F3F91653 for ; Wed, 4 May 2022 18:29:07 +0000 (UTC) X-FDA: 79428897534.25.EE88539 Received: from casper.infradead.org (casper.infradead.org 
[90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id 7B49F40087 for ; Wed, 4 May 2022 18:28:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ZiXX1ec3Aock3u4mY4UTS1lsOFn1Ilvwgy0/X5MGdzw=; b=biMOq83D8Zd1J6y3HAZlvk2Ymq 5bMubzw5T+70ibkDnd1/aw+mka2u/77GVsyMP0DfY45DPdrjaSwJb+CrNn049SxhEWvE45+BJVNvF svc2jocq1TtpFDZeqg1H2N/YjJpyBTxOiLqXP7IjGZun80TnmzKpxpFJLjaURQKqm8NDZVWwdORTP 7RJZgfkFyi9ponYwb3DwBdBz6dQ5E2xe2saInmYbxj1nCGKma5+N3flgZz238jH9M8ahz4bP69Np0 hwcc2kAZoFELnAiVtxw0oa3bgBtPwRwSCXjNtAMtRaHceml0DOdCs/Hk1HbZLgGJsRXSF0xLoObA0 IdVrGWdw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nmJkH-00Gq9b-8z; Wed, 04 May 2022 18:29:05 +0000 From: "Matthew Wilcox (Oracle)" To: akpm@linuxfoundation.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH v2 26/26] mm/migrate: Convert move_to_new_page() into move_to_new_folio() Date: Wed, 4 May 2022 19:28:57 +0100 Message-Id: <20220504182857.4013401-27-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220504182857.4013401-1-willy@infradead.org> References: <20220504182857.4013401-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 7B49F40087 X-Stat-Signature: fgwj4uxrdzmkxq7kgmkaapw7anowyc6j X-Rspam-User: Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=biMOq83D; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1651688939-186440 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pass in the folios that we already have in each caller. Saves a lot of calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/migrate.c | 58 ++++++++++++++++++++++++++-------------------------- 1 file changed, 29 insertions(+), 29 deletions(-) diff --git a/mm/migrate.c b/mm/migrate.c index 6c31ee1e1c9b..b664673d5833 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -836,21 +836,21 @@ static int fallback_migrate_page(struct address_space *mapping, * < 0 - error code * MIGRATEPAGE_SUCCESS - success */ -static int move_to_new_page(struct page *newpage, struct page *page, +static int move_to_new_folio(struct folio *dst, struct folio *src, enum migrate_mode mode) { struct address_space *mapping; int rc = -EAGAIN; - bool is_lru = !__PageMovable(page); + bool is_lru = !__PageMovable(&src->page); - VM_BUG_ON_PAGE(!PageLocked(page), page); - VM_BUG_ON_PAGE(!PageLocked(newpage), newpage); + VM_BUG_ON_FOLIO(!folio_test_locked(src), src); + VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst); - mapping = page_mapping(page); + mapping = folio_mapping(src); if (likely(is_lru)) { if (!mapping) - rc = migrate_page(mapping, newpage, page, mode); + rc = migrate_page(mapping, &dst->page, &src->page, mode); else if (mapping->a_ops->migratepage) /* * Most pages have a mapping and most filesystems @@ -859,54 +859,54 @@ static int move_to_new_page(struct page *newpage, struct page *page, * migratepage callback. This is the most common path * for page migration. 
*/ - rc = mapping->a_ops->migratepage(mapping, newpage, - page, mode); + rc = mapping->a_ops->migratepage(mapping, &dst->page, + &src->page, mode); else - rc = fallback_migrate_page(mapping, newpage, - page, mode); + rc = fallback_migrate_page(mapping, &dst->page, + &src->page, mode); } else { /* * In case of non-lru page, it could be released after * isolation step. In that case, we shouldn't try migration. */ - VM_BUG_ON_PAGE(!PageIsolated(page), page); - if (!PageMovable(page)) { + VM_BUG_ON_FOLIO(!folio_test_isolated(src), src); + if (!folio_test_movable(src)) { rc = MIGRATEPAGE_SUCCESS; - ClearPageIsolated(page); + folio_clear_isolated(src); goto out; } - rc = mapping->a_ops->migratepage(mapping, newpage, - page, mode); + rc = mapping->a_ops->migratepage(mapping, &dst->page, + &src->page, mode); WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS && - !PageIsolated(page)); + !folio_test_isolated(src)); } /* - * When successful, old pagecache page->mapping must be cleared before - * page is freed; but stats require that PageAnon be left as PageAnon. + * When successful, old pagecache src->mapping must be cleared before + * src is freed; but stats require that PageAnon be left as PageAnon. */ if (rc == MIGRATEPAGE_SUCCESS) { - if (__PageMovable(page)) { - VM_BUG_ON_PAGE(!PageIsolated(page), page); + if (__PageMovable(&src->page)) { + VM_BUG_ON_FOLIO(!folio_test_isolated(src), src); /* * We clear PG_movable under page_lock so any compactor * cannot try to migrate this page. */ - ClearPageIsolated(page); + folio_clear_isolated(src); } /* - * Anonymous and movable page->mapping will be cleared by + * Anonymous and movable src->mapping will be cleared by * free_pages_prepare so don't reset it here for keeping * the type to work PageAnon, for example. */ - if (!PageMappingFlags(page)) - page->mapping = NULL; + if (!folio_mapping_flags(src)) + src->mapping = NULL; - if (likely(!is_zone_device_page(newpage))) - flush_dcache_folio(page_folio(newpage)); + if (likely(!folio_is_zone_device(dst))) + flush_dcache_folio(dst); } out: return rc; @@ -994,7 +994,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage, goto out_unlock; if (unlikely(!is_lru)) { - rc = move_to_new_page(newpage, page, mode); + rc = move_to_new_folio(dst, folio, mode); goto out_unlock_both; } @@ -1025,7 +1025,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage, } if (!page_mapped(page)) - rc = move_to_new_page(newpage, page, mode); + rc = move_to_new_folio(dst, folio, mode); /* * When successful, push newpage to LRU immediately: so that if it @@ -1256,7 +1256,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page, } if (!page_mapped(hpage)) - rc = move_to_new_page(new_hpage, hpage, mode); + rc = move_to_new_folio(dst, src, mode); if (page_was_mapped) remove_migration_ptes(src,