From patchwork Tue Jul 30 06:46:55 2024
From: alexs@kernel.org
X-Patchwork-Id: 13746753
Subject: [RFC PATCH 01/18] mm/pgtable: use ptdesc in pte_free_now/pte_free_defer
Date: Tue, 30 Jul 2024 14:46:55 +0800
Message-ID: <20240730064712.3714387-2-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

The page table descriptor, struct ptdesc, has been split out of struct
page; use it in place of struct page wherever the object really is a
page table.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: David Hildenbrand
---
 mm/pgtable-generic.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 13a7705df3f8..2ce714f1dd15 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -238,18 +238,17 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 #ifndef pte_free_defer
 static void pte_free_now(struct rcu_head *head)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = container_of(head, struct page, rcu_head);
-	pte_free(NULL /* mm not passed and not used */, (pgtable_t)page);
+	ptdesc = container_of(head, struct ptdesc, pt_rcu_head);
+	pte_free(NULL /* mm not passed and not used */, (pgtable_t)ptdesc);
 }
 
 void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
 {
-	struct page *page;
+	struct ptdesc *ptdesc = page_ptdesc(pgtable);
 
-	page = pgtable;
-	call_rcu(&page->rcu_head, pte_free_now);
+	call_rcu(&ptdesc->pt_rcu_head, pte_free_now);
 }
 #endif /* pte_free_defer */
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -330,7 +329,7 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
 * kmapped if necessary (when CONFIG_HIGHPTE), and locked against concurrent
 * modification by software, with a pointer to that spinlock in ptlp (in some
 * configs mm->page_table_lock, in SPLIT_PTLOCK configs a spinlock in table's
-* struct page). pte_unmap_unlock(pte, ptl) to unlock and unmap afterwards.
+* struct ptdesc). pte_unmap_unlock(pte, ptl) to unlock and unmap afterwards.
 *
 * But it is unsuccessful, returning NULL with *ptlp unchanged, if there is no
 * page table at *pmd: if, for example, the page table has just been removed,
From patchwork Tue Jul 30 06:46:56 2024
From: alexs@kernel.org
X-Patchwork-Id: 13746754
Subject: [RFC PATCH 02/18] mm/pgtable: convert ptdesc.pmd_huge_pte to ptdesc pointer
Date: Tue, 30 Jul 2024 14:46:56 +0800
Message-ID: <20240730064712.3714387-3-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

folio/page.pmd_huge_pte is a pointer to a page table descriptor, typed
pgtable_t. On most architectures that is a typedef of 'struct page *',
but now that we have ptdesc it is better to give the field its real
type: a struct ptdesc pointer. Unlike the others, s390 and sparc
typedef pgtable_t as 'pte_t *', so they need their own casts in arch
code. Thanks to LKP for reporting the build issue on s390/sparc; it is
fixed now.

Signed-off-by: Alex Shi
Cc: linux-mm@kvack.org
Cc: sparclinux@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: Ryan Roberts
Cc: Andreas Larsson
Cc: David S. Miller
Cc: Sven Schnelle
Cc: Christian Borntraeger
Cc: Vasily Gorbik
Cc: Heiko Carstens
Cc: Gerald Schaefer
Cc: Alexander Gordeev
Cc: Zi Yan
Cc: Matthew Wilcox
Cc: Mike Rapoport
Cc: David Hildenbrand
---
 arch/s390/mm/pgtable.c   |  6 +++---
 arch/sparc/mm/tlb.c      |  6 +++---
 include/linux/mm_types.h |  4 ++--
 mm/pgtable-generic.c     | 16 ++++++++--------
 4 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 2c944bafb030..201d350abd1e 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -574,7 +574,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 		INIT_LIST_HEAD(lh);
 	else
 		list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
-	pmd_huge_pte(mm, pmdp) = pgtable;
+	pmd_huge_pte(mm, pmdp) = (struct ptdesc *)pgtable;
 }
 
 pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
@@ -586,12 +586,12 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 	assert_spin_locked(pmd_lockptr(mm, pmdp));
 
 	/* FIFO */
-	pgtable = pmd_huge_pte(mm, pmdp);
+	pgtable = (pte_t *)pmd_huge_pte(mm, pmdp);
 	lh = (struct list_head *) pgtable;
 	if (list_empty(lh))
 		pmd_huge_pte(mm, pmdp) = NULL;
 	else {
-		pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
+		pmd_huge_pte(mm, pmdp) = (struct ptdesc *) lh->next;
 		list_del(lh);
 	}
 	ptep = (pte_t *) pgtable;
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 8648a50afe88..903825b4c997 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -278,7 +278,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 		INIT_LIST_HEAD(lh);
 	else
 		list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
-	pmd_huge_pte(mm, pmdp) = pgtable;
+	pmd_huge_pte(mm, pmdp) = (struct ptdesc *)pgtable;
 }
 
 pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
@@ -289,12 +289,12 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 	assert_spin_locked(&mm->page_table_lock);
 
 	/* FIFO */
-	pgtable = pmd_huge_pte(mm, pmdp);
+	pgtable = (pte_t *)pmd_huge_pte(mm, pmdp);
 	lh = (struct list_head *) pgtable;
 	if (list_empty(lh))
 		pmd_huge_pte(mm, pmdp) = NULL;
 	else {
-		pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
+		pmd_huge_pte(mm, pmdp) = (struct ptdesc *) lh->next;
 		list_del(lh);
 	}
 	pte_val(pgtable[0]) = 0;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 485424979254..2e3eddf6edc9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -462,7 +462,7 @@ struct ptdesc {
 		struct list_head pt_list;
 		struct {
 			unsigned long _pt_pad_1;
-			pgtable_t pmd_huge_pte;
+			struct ptdesc *pmd_huge_pte;
 		};
 	};
 	unsigned long __page_mapping;
@@ -948,7 +948,7 @@ struct mm_struct {
 		struct mmu_notifier_subscriptions *notifier_subscriptions;
 #endif
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
-		pgtable_t pmd_huge_pte; /* protected by page_table_lock */
+		struct ptdesc *pmd_huge_pte; /* protected by page_table_lock */
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 		/*
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 2ce714f1dd15..f34a8d115f5b 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -171,8 +171,8 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 	if (!pmd_huge_pte(mm, pmdp))
 		INIT_LIST_HEAD(&pgtable->lru);
 	else
-		list_add(&pgtable->lru, &pmd_huge_pte(mm, pmdp)->lru);
-	pmd_huge_pte(mm, pmdp) = pgtable;
+		list_add(&pgtable->lru, &pmd_huge_pte(mm, pmdp)->pt_list);
+	pmd_huge_pte(mm, pmdp) = page_ptdesc(pgtable);
 }
 #endif
 
@@ -180,17 +180,17 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 /* no "address" argument so destroys page coloring of some arch */
 pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 {
-	pgtable_t pgtable;
+	struct ptdesc *ptdesc;
 
 	assert_spin_locked(pmd_lockptr(mm, pmdp));
 
 	/* FIFO */
-	pgtable = pmd_huge_pte(mm, pmdp);
-	pmd_huge_pte(mm, pmdp) = list_first_entry_or_null(&pgtable->lru,
-							  struct page, lru);
+	ptdesc = pmd_huge_pte(mm, pmdp);
+	pmd_huge_pte(mm, pmdp) = list_first_entry_or_null(&ptdesc->pt_list,
+							  struct ptdesc, pt_list);
 	if (pmd_huge_pte(mm, pmdp))
-		list_del(&pgtable->lru);
-	return pgtable;
+		list_del(&ptdesc->pt_list);
+	return ptdesc_page(ptdesc);
 }
 #endif
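The pmd_huge_pte field retyped above anchors the stack of deposited page
tables: the first deposited table's pt_list becomes the list head, later
deposits link onto it, and withdraw pops tables back off, moving the head
pointer as it goes. Below is a self-contained sketch of that
deposit/withdraw mechanics, with a minimal list implementation standing in
for the kernel's <linux/list.h>; only the mechanism is shown, locking and
real page tables are omitted.

#include <stddef.h>
#include <stdio.h>

/* Minimal circular doubly-linked list, in the style of include/linux/list.h. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

#define list_first_entry_or_null(head, type, member) \
	(list_empty(head) ? NULL : \
	 (type *)((char *)(head)->next - offsetof(type, member)))

struct ptdesc { int id; struct list_head pt_list; };

static struct ptdesc *pmd_huge_pte;	/* the field this patch retypes */

static void deposit(struct ptdesc *pt)
{
	if (!pmd_huge_pte)
		INIT_LIST_HEAD(&pt->pt_list);	/* first table is the list head */
	else
		list_add(&pt->pt_list, &pmd_huge_pte->pt_list);
	pmd_huge_pte = pt;
}

static struct ptdesc *withdraw(void)
{
	struct ptdesc *pt = pmd_huge_pte;

	pmd_huge_pte = list_first_entry_or_null(&pt->pt_list,
						struct ptdesc, pt_list);
	if (pmd_huge_pte)
		list_del(&pt->pt_list);
	return pt;
}

int main(void)
{
	struct ptdesc a = { .id = 1 }, b = { .id = 2 }, c = { .id = 3 };

	deposit(&a);
	deposit(&b);
	deposit(&c);
	while (pmd_huge_pte)
		printf("withdrew table %d\n", withdraw()->id);
	return 0;
}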
From patchwork Tue Jul 30 06:46:57 2024
From: alexs@kernel.org
X-Patchwork-Id: 13746755
Subject: [RFC PATCH 03/18] fs/dax: use ptdesc in dax_pmd_load_hole
Date: Tue, 30 Jul 2024 14:46:57 +0800
Message-ID: <20240730064712.3714387-4-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

Since we have the ptdesc struct now, better to use it to replace
pgtable_t, aka 'struct page *'. It is a preparation for returning a
ptdesc pointer from the pte_alloc_one series of functions.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: nvdimm@lists.linux.dev
Cc: linux-fsdevel@vger.kernel.org
Cc: Christian Brauner
Cc: Alexander Viro
Cc: Jan Kara
Cc: Matthew Wilcox
Cc: Dan Williams
---
 fs/dax.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index becb4a6920c6..6f7cea248206 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1206,7 +1206,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
 	struct vm_area_struct *vma = vmf->vma;
 	struct inode *inode = mapping->host;
-	pgtable_t pgtable = NULL;
+	struct ptdesc *ptdesc = NULL;
 	struct folio *zero_folio;
 	spinlock_t *ptl;
 	pmd_t pmd_entry;
@@ -1222,8 +1222,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 			DAX_PMD | DAX_ZERO_PAGE);
 
 	if (arch_needs_pgtable_deposit()) {
-		pgtable = pte_alloc_one(vma->vm_mm);
-		if (!pgtable)
+		ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+		if (!ptdesc)
 			return VM_FAULT_OOM;
 	}
 
@@ -1233,8 +1233,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 		goto fallback;
 	}
 
-	if (pgtable) {
-		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
+	if (ptdesc) {
+		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc_page(ptdesc));
 		mm_inc_nr_ptes(vma->vm_mm);
 	}
 	pmd_entry = mk_pmd(&zero_folio->page, vmf->vma->vm_page_prot);
@@ -1245,8 +1245,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 	return VM_FAULT_NOPAGE;
 
 fallback:
-	if (pgtable)
-		pte_free(vma->vm_mm, pgtable);
+	if (ptdesc)
+		pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 	trace_dax_pmd_load_hole_fallback(inode, vmf, zero_folio, *entry);
 	return VM_FAULT_FALLBACK;
 }
From patchwork Tue Jul 30 06:46:58 2024
From: alexs@kernel.org
X-Patchwork-Id: 13746756
Subject: [RFC PATCH 04/18] mm/thp: use ptdesc pointer in __do_huge_pmd_anonymous_page
Date: Tue, 30 Jul 2024 14:46:58 +0800
Message-ID: <20240730064712.3714387-5-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

Since we have the ptdesc struct now, better to use it to replace
pgtable_t, aka 'struct page *'. It is also a preparation for returning
a ptdesc pointer from the pte_alloc_one series of functions.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/huge_memory.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0167dc27e365..0ee104093121 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -943,7 +943,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct folio *folio = page_folio(page);
-	pgtable_t pgtable;
+	struct ptdesc *ptdesc;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	vm_fault_t ret = 0;
 
@@ -959,8 +959,8 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 	}
 	folio_throttle_swaprate(folio, gfp);
 
-	pgtable = pte_alloc_one(vma->vm_mm);
-	if (unlikely(!pgtable)) {
+	ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+	if (unlikely(!ptdesc)) {
 		ret = VM_FAULT_OOM;
 		goto release;
 	}
@@ -987,7 +987,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		if (userfaultfd_missing(vma)) {
 			spin_unlock(vmf->ptl);
 			folio_put(folio);
-			pte_free(vma->vm_mm, pgtable);
+			pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 			ret = handle_userfault(vmf, VM_UFFD_MISSING);
 			VM_BUG_ON(ret & VM_FAULT_FALLBACK);
 			return ret;
@@ -997,7 +997,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
 		folio_add_lru_vma(folio, vma);
-		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
+		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc_page(ptdesc));
 		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
 		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
@@ -1012,8 +1012,8 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 unlock_release:
 	spin_unlock(vmf->ptl);
 release:
-	if (pgtable)
-		pte_free(vma->vm_mm, pgtable);
+	if (ptdesc)
+		pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 	folio_put(folio);
 	return ret;
From patchwork Tue Jul 30 06:46:59 2024
From: alexs@kernel.org
X-Patchwork-Id: 13746757
Subject: [RFC PATCH 05/18] mm/thp: use ptdesc in do_huge_pmd_anonymous_page
Date: Tue, 30 Jul 2024 14:46:59 +0800
Message-ID: <20240730064712.3714387-6-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

Since we have the ptdesc struct now, better to use it to replace
pgtable_t, aka 'struct page *'. It is also a preparation for returning
a ptdesc pointer from the pte_alloc_one series of functions.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/huge_memory.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0ee104093121..d86108d81a99 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1087,16 +1087,16 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
 			!mm_forbids_zeropage(vma->vm_mm) &&
 			transparent_hugepage_use_zero_page()) {
-		pgtable_t pgtable;
+		struct ptdesc *ptdesc;
 		struct folio *zero_folio;
 		vm_fault_t ret;
 
-		pgtable = pte_alloc_one(vma->vm_mm);
-		if (unlikely(!pgtable))
+		ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+		if (unlikely(!ptdesc))
 			return VM_FAULT_OOM;
 		zero_folio = mm_get_huge_zero_folio(vma->vm_mm);
 		if (unlikely(!zero_folio)) {
-			pte_free(vma->vm_mm, pgtable);
+			pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 			count_vm_event(THP_FAULT_FALLBACK);
 			return VM_FAULT_FALLBACK;
 		}
@@ -1106,21 +1106,21 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 			ret = check_stable_address_space(vma->vm_mm);
 			if (ret) {
 				spin_unlock(vmf->ptl);
-				pte_free(vma->vm_mm, pgtable);
+				pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 			} else if (userfaultfd_missing(vma)) {
 				spin_unlock(vmf->ptl);
-				pte_free(vma->vm_mm, pgtable);
+				pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 				ret = handle_userfault(vmf, VM_UFFD_MISSING);
 				VM_BUG_ON(ret & VM_FAULT_FALLBACK);
 			} else {
-				set_huge_zero_folio(pgtable, vma->vm_mm, vma,
+				set_huge_zero_folio(ptdesc_page(ptdesc), vma->vm_mm, vma,
 						    haddr, vmf->pmd, zero_folio);
 				update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 				spin_unlock(vmf->ptl);
 			}
 		} else {
 			spin_unlock(vmf->ptl);
-			pte_free(vma->vm_mm, pgtable);
+			pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 		}
 		return ret;
 	}
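The unlikely() around the allocation-failure checks in this and the
neighbouring patches is a branch-prediction hint, not control flow; in the
kernel it expands (simplified) to the GCC/Clang __builtin_expect builtin.
A standalone illustration of the macro on an OOM path, with malloc()
standing in for pte_alloc_one():

#include <stdio.h>
#include <stdlib.h>

/* Simplified from include/linux/compiler.h: hint that the condition is
 * rarely true, so the error path is laid out out of line. */
#define unlikely(x)	__builtin_expect(!!(x), 0)

int main(void)
{
	void *ptdesc = malloc(64);	/* stand-in for pte_alloc_one() */

	if (unlikely(!ptdesc)) {
		fputs("out of memory\n", stderr);
		return 1;		/* the VM_FAULT_OOM analogue */
	}
	puts("page table allocated");
	free(ptdesc);
	return 0;
}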
From patchwork Tue Jul 30 06:47:00 2024
From: alexs@kernel.org
X-Patchwork-Id: 13746758
Subject: [RFC PATCH 06/18] mm/thp: convert insert_pfn_pmd and its caller to use ptdesc
Date: Tue, 30 Jul 2024 14:47:00 +0800
Message-ID: <20240730064712.3714387-7-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

Since we have the ptdesc struct now, better to use it to replace
pgtable_t, aka 'struct page *'. It is also a preparation for returning
a ptdesc pointer from the pte_alloc_one series of functions.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/huge_memory.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d86108d81a99..a331d4504d52 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1136,7 +1136,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 
 static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
-		pgtable_t pgtable)
+		struct ptdesc *ptdesc)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pmd_t entry;
@@ -1166,10 +1166,10 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 		entry = maybe_pmd_mkwrite(entry, vma);
 	}
 
-	if (pgtable) {
-		pgtable_trans_huge_deposit(mm, pmd, pgtable);
+	if (ptdesc) {
+		pgtable_trans_huge_deposit(mm, pmd, ptdesc_page(ptdesc));
 		mm_inc_nr_ptes(mm);
-		pgtable = NULL;
+		ptdesc = NULL;
 	}
 
 	set_pmd_at(mm, addr, pmd, entry);
@@ -1177,8 +1177,8 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 out_unlock:
 	spin_unlock(ptl);
-	if (pgtable)
-		pte_free(mm, pgtable);
+	if (ptdesc)
+		pte_free(mm, ptdesc_page(ptdesc));
 }
 
 /**
@@ -1196,7 +1196,7 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 	unsigned long addr = vmf->address & PMD_MASK;
 	struct vm_area_struct *vma = vmf->vma;
 	pgprot_t pgprot = vma->vm_page_prot;
-	pgtable_t pgtable = NULL;
+	struct ptdesc *ptdesc = NULL;
 
 	/*
 	 * If we had pmd_special, we could avoid all these restrictions,
@@ -1213,14 +1213,14 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 		return VM_FAULT_SIGBUS;
 
 	if (arch_needs_pgtable_deposit()) {
-		pgtable = pte_alloc_one(vma->vm_mm);
-		if (!pgtable)
+		ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+		if (!ptdesc)
 			return VM_FAULT_OOM;
 	}
 
 	track_pfn_insert(vma, &pgprot, pfn);
 
-	insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write, pgtable);
+	insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write, ptdesc);
 	return VM_FAULT_NOPAGE;
 }
 EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);
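insert_pfn_pmd() above keeps an ownership idiom the conversion must
preserve: once the preallocated table is deposited, the local pointer is
set to NULL, so the shared exit path frees the table only when it was
never consumed. A minimal sketch of that idiom with hypothetical stand-in
helpers (not the kernel's functions):

#include <stdio.h>
#include <stdlib.h>

struct ptdesc { char table[64]; };

static struct ptdesc *deposited;	/* stand-in for the deposited list */

static void deposit_table(struct ptdesc *pt)	/* hypothetical helper */
{
	deposited = pt;			/* ownership moves away from the caller */
}

static int insert(int fail_before_consume)
{
	struct ptdesc *ptdesc = calloc(1, sizeof(*ptdesc));

	if (!ptdesc)
		return -1;		/* VM_FAULT_OOM analogue */

	if (fail_before_consume)
		goto out;		/* bail with the table still ours */

	deposit_table(ptdesc);
	ptdesc = NULL;			/* consumed: must not free below */
out:
	if (ptdesc)
		free(ptdesc);		/* frees only an unconsumed table */
	return 0;
}

int main(void)
{
	printf("%d %d\n", insert(0), insert(1));
	free(deposited);
	return 0;
}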
From patchwork Tue Jul 30 06:47:01 2024
From: alexs@kernel.org
X-Patchwork-Id: 13746759
Subject: [RFC PATCH 07/18] mm/thp: use ptdesc in copy_huge_pmd
Date: Tue, 30 Jul 2024 14:47:01 +0800
Message-ID: <20240730064712.3714387-8-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

Since we have the ptdesc struct now, better to use it to replace
pgtable_t, aka 'struct page *'. It is also a preparation for returning
a ptdesc pointer from the pte_alloc_one series of functions.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/huge_memory.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a331d4504d52..236e1582d97e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1369,15 +1369,15 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	struct page *src_page;
 	struct folio *src_folio;
 	pmd_t pmd;
-	pgtable_t pgtable = NULL;
+	struct ptdesc *ptdesc = NULL;
 	int ret = -ENOMEM;
 
 	/* Skip if can be re-fill on fault */
 	if (!vma_is_anonymous(dst_vma))
 		return 0;
 
-	pgtable = pte_alloc_one(dst_mm);
-	if (unlikely(!pgtable))
+	ptdesc = page_ptdesc(pte_alloc_one(dst_mm));
+	if (unlikely(!ptdesc))
 		goto out;
 
 	dst_ptl = pmd_lock(dst_mm, dst_pmd);
@@ -1404,7 +1404,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		}
 		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(dst_mm);
-		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+		pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc));
 		if (!userfaultfd_wp(dst_vma))
 			pmd = pmd_swp_clear_uffd_wp(pmd);
 		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
@@ -1414,7 +1414,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 #endif
 
 	if (unlikely(!pmd_trans_huge(pmd))) {
-		pte_free(dst_mm, pgtable);
+		pte_free(dst_mm, ptdesc_page(ptdesc));
 		goto out_unlock;
 	}
 	/*
@@ -1440,7 +1440,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	if (unlikely(folio_try_dup_anon_rmap_pmd(src_folio, src_page, src_vma))) {
 		/* Page maybe pinned: split and retry the fault on PTEs. */
 		folio_put(src_folio);
-		pte_free(dst_mm, pgtable);
+		pte_free(dst_mm, ptdesc_page(ptdesc));
 		spin_unlock(src_ptl);
 		spin_unlock(dst_ptl);
 		__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
@@ -1449,7 +1449,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 out_zero_page:
 	mm_inc_nr_ptes(dst_mm);
-	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+	pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc));
 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
 	if (!userfaultfd_wp(dst_vma))
 		pmd = pmd_clear_uffd_wp(pmd);
From patchwork Tue Jul 30 06:47:02 2024
From: alexs@kernel.org
X-Patchwork-Id: 13746760
Subject: [RFC PATCH 08/18] mm/memory: use ptdesc in __pte_alloc
Date: Tue, 30 Jul 2024 14:47:02 +0800
Message-ID: <20240730064712.3714387-9-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

Replace pgtable_t with ptdesc in __pte_alloc(). We will eventually
remove pgtable_t everywhere.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/memory.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index b9f5cc0db3eb..5b01d94a0b5f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -445,13 +445,13 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
 
 int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 {
-	pgtable_t new = pte_alloc_one(mm);
-	if (!new)
+	struct ptdesc *ptdesc = page_ptdesc(pte_alloc_one(mm));
+	if (!ptdesc)
 		return -ENOMEM;
 
-	pmd_install(mm, pmd, &new);
-	if (new)
-		pte_free(mm, new);
+	pmd_install(mm, pmd, (pgtable_t *)&ptdesc);
+	if (ptdesc)
+		pte_free(mm, ptdesc_page(ptdesc));
 	return 0;
 }
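__pte_alloc() depends on pmd_install() taking a pointer to the caller's
pointer: when the table is actually installed the pointee is cleared, and
when another thread won the race the pointer survives and the caller frees
the spare. The cast added above keeps that contract across the type
change. A sketch of the hand-off with stand-in functions (not the
kernel's, and without the kernel's locking):

#include <stdio.h>
#include <stdlib.h>

struct ptdesc { char table[64]; };

static struct ptdesc *pmd_slot;		/* stand-in for the pmd entry */

/* Like pmd_install(): consume *pt only if the slot is still empty, and
 * signal consumption by NULLing the caller's pointer. */
static void pmd_install_stub(struct ptdesc **pt)
{
	if (!pmd_slot) {
		pmd_slot = *pt;
		*pt = NULL;
	}
}

static int pte_alloc_stub(void)
{
	struct ptdesc *ptdesc = calloc(1, sizeof(*ptdesc));

	if (!ptdesc)
		return -1;		/* -ENOMEM analogue */

	pmd_install_stub(&ptdesc);
	if (ptdesc)			/* slot already populated: free the spare */
		free(ptdesc);
	return 0;
}

int main(void)
{
	/* The first call installs its table; the second frees its spare. */
	printf("%d %d\n", pte_alloc_stub(), pte_alloc_stub());
	free(pmd_slot);
	return 0;
}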
Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/memory.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index b9f5cc0db3eb..5b01d94a0b5f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -445,13 +445,13 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
 
 int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 {
-	pgtable_t new = pte_alloc_one(mm);
-	if (!new)
+	struct ptdesc *ptdesc = page_ptdesc(pte_alloc_one(mm));
+	if (!ptdesc)
 		return -ENOMEM;
 
-	pmd_install(mm, pmd, &new);
-	if (new)
-		pte_free(mm, new);
+	pmd_install(mm, pmd, (pgtable_t *)&ptdesc);
+	if (ptdesc)
+		pte_free(mm, ptdesc_page(ptdesc));
 	return 0;
 }

From patchwork Tue Jul 30 06:47:03 2024

From: alexs@kernel.org
Subject: [RFC PATCH 09/18] mm/pgtable: fully use ptdesc in pte_alloc_one series functions
Date: Tue, 30 Jul 2024 14:47:03 +0800
Message-ID: <20240730064712.3714387-10-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

Replace pgtable_t and struct page with ptdesc in the pte_alloc_one()
series of functions.

Signed-off-by: Alex Shi Cc: linux-mm@kvack.org Cc: linux-arch@vger.kernel.org Cc: nvdimm@lists.linux.dev Cc: linux-fsdevel@vger.kernel.org Cc: sparclinux@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: Dawei Li Cc: Vishal Moola Cc: Arnd Bergmann Cc: Christian Brauner Cc: Alexander Viro Cc: Jan Kara Cc: Matthew Wilcox Cc: Dan Williams Cc: Max Filippov Cc: Chris Zankel Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: H. Peter Anvin Cc: x86@kernel.org Cc: Dave Hansen Cc: Borislav Petkov Cc: Thomas Gleixner Cc: David S. Miller Cc: Naveen N.
Rao Cc: Christophe Leroy Cc: Nicholas Piggin Cc: Michael Ellerman Cc: Russell King Cc: Breno Leitao Cc: Josh Poimboeuf Cc: Bjorn Helgaas Cc: Sam Ravnborg Cc: Peter Xu Cc: Jason Gunthorpe Cc: Mike Rapoport Cc: Hugh Dickins --- arch/arm/include/asm/pgalloc.h | 9 ++++----- arch/powerpc/include/asm/pgalloc.h | 4 ++-- arch/s390/include/asm/pgalloc.h | 2 +- arch/sparc/include/asm/pgalloc_32.h | 2 +- arch/sparc/include/asm/pgalloc_64.h | 2 +- arch/sparc/mm/init_64.c | 2 +- arch/sparc/mm/srmmu.c | 4 ++-- arch/x86/include/asm/pgalloc.h | 2 +- arch/x86/mm/pgtable.c | 2 +- arch/xtensa/include/asm/pgalloc.h | 12 ++++++------ fs/dax.c | 2 +- include/asm-generic/pgalloc.h | 6 +++--- mm/huge_memory.c | 8 ++++---- mm/memory.c | 8 ++++---- 14 files changed, 32 insertions(+), 33 deletions(-) diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h index a17f01235c29..e8501a6c3336 100644 --- a/arch/arm/include/asm/pgalloc.h +++ b/arch/arm/include/asm/pgalloc.h @@ -91,16 +91,15 @@ pte_alloc_one_kernel(struct mm_struct *mm) #define PGTABLE_HIGHMEM 0 #endif -static inline pgtable_t -pte_alloc_one(struct mm_struct *mm) +static inline struct ptdesc *pte_alloc_one(struct mm_struct *mm) { - struct page *pte; + struct ptdesc *pte; pte = __pte_alloc_one(mm, GFP_PGTABLE_USER | PGTABLE_HIGHMEM); if (!pte) return NULL; - if (!PageHighMem(pte)) - clean_pte_table(page_address(pte)); + if (!PageHighMem(ptdesc_page(pte))) + clean_pte_table(ptdesc_address(pte)); return pte; } diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h index 3a971e2a8c73..37512f344b37 100644 --- a/arch/powerpc/include/asm/pgalloc.h +++ b/arch/powerpc/include/asm/pgalloc.h @@ -27,9 +27,9 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) return (pte_t *)pte_fragment_alloc(mm, 1); } -static inline pgtable_t pte_alloc_one(struct mm_struct *mm) +static inline struct ptdesc *pte_alloc_one(struct mm_struct *mm) { - return (pgtable_t)pte_fragment_alloc(mm, 0); + return (struct ptdesc *)pte_fragment_alloc(mm, 0); } void pte_frag_destroy(void *pte_frag); diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h index 7b84ef6dc4b6..771494526f6e 100644 --- a/arch/s390/include/asm/pgalloc.h +++ b/arch/s390/include/asm/pgalloc.h @@ -137,7 +137,7 @@ static inline void pmd_populate(struct mm_struct *mm, * page table entry allocation/free routines. 
*/ #define pte_alloc_one_kernel(mm) ((pte_t *)page_table_alloc(mm)) -#define pte_alloc_one(mm) ((pte_t *)page_table_alloc(mm)) +#define pte_alloc_one(mm) ((struct ptdesc *)page_table_alloc(mm)) #define pte_free_kernel(mm, pte) page_table_free(mm, (unsigned long *) pte) #define pte_free(mm, pte) page_table_free(mm, (unsigned long *) pte) diff --git a/arch/sparc/include/asm/pgalloc_32.h b/arch/sparc/include/asm/pgalloc_32.h index 4f73e87b22a3..bc3ef54d9564 100644 --- a/arch/sparc/include/asm/pgalloc_32.h +++ b/arch/sparc/include/asm/pgalloc_32.h @@ -55,7 +55,7 @@ static inline void free_pmd_fast(pmd_t * pmd) void pmd_set(pmd_t *pmdp, pte_t *ptep); #define pmd_populate_kernel pmd_populate -pgtable_t pte_alloc_one(struct mm_struct *mm); +struct ptdesc *pte_alloc_one(struct mm_struct *mm); static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) { diff --git a/arch/sparc/include/asm/pgalloc_64.h b/arch/sparc/include/asm/pgalloc_64.h index caa7632be4c2..285aa7958912 100644 --- a/arch/sparc/include/asm/pgalloc_64.h +++ b/arch/sparc/include/asm/pgalloc_64.h @@ -61,7 +61,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) } pte_t *pte_alloc_one_kernel(struct mm_struct *mm); -pgtable_t pte_alloc_one(struct mm_struct *mm); +struct ptdesc *pte_alloc_one(struct mm_struct *mm); void pte_free_kernel(struct mm_struct *mm, pte_t *pte); void pte_free(struct mm_struct *mm, pgtable_t ptepage); diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 53d7cb5bbffe..e1b33f996469 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -2900,7 +2900,7 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm) return pte; } -pgtable_t pte_alloc_one(struct mm_struct *mm) +struct ptdesc *pte_alloc_one(struct mm_struct *mm) { struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0); diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c index 9df51a62333d..60bb8628bb1f 100644 --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -346,7 +346,7 @@ pgd_t *get_pgd_fast(void) * Alignments up to the page size are the same for physical and virtual * addresses of the nocache area. 
*/ -pgtable_t pte_alloc_one(struct mm_struct *mm) +struct ptdesc *pte_alloc_one(struct mm_struct *mm) { pte_t *ptep; struct page *page; @@ -362,7 +362,7 @@ pgtable_t pte_alloc_one(struct mm_struct *mm) } spin_unlock(&mm->page_table_lock); - return ptep; + return (struct ptdesc *)ptep; } void pte_free(struct mm_struct *mm, pgtable_t ptep) diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h index dcd836b59beb..497c757b5b98 100644 --- a/arch/x86/include/asm/pgalloc.h +++ b/arch/x86/include/asm/pgalloc.h @@ -51,7 +51,7 @@ extern gfp_t __userpte_alloc_gfp; extern pgd_t *pgd_alloc(struct mm_struct *); extern void pgd_free(struct mm_struct *mm, pgd_t *pgd); -extern pgtable_t pte_alloc_one(struct mm_struct *); +extern struct ptdesc *pte_alloc_one(struct mm_struct *); extern void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte); diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 93e54ba91fbf..c27d15cd01b9 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -28,7 +28,7 @@ void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table) gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM; -pgtable_t pte_alloc_one(struct mm_struct *mm) +struct ptdesc *pte_alloc_one(struct mm_struct *mm) { return __pte_alloc_one(mm, __userpte_alloc_gfp); } diff --git a/arch/xtensa/include/asm/pgalloc.h b/arch/xtensa/include/asm/pgalloc.h index 7fc0f9126dd3..a9206c02956e 100644 --- a/arch/xtensa/include/asm/pgalloc.h +++ b/arch/xtensa/include/asm/pgalloc.h @@ -51,15 +51,15 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) return ptep; } -static inline pgtable_t pte_alloc_one(struct mm_struct *mm) +static inline struct ptdesc *pte_alloc_one(struct mm_struct *mm) { - struct page *page; + struct ptdesc *ptdesc; - page = __pte_alloc_one(mm, GFP_PGTABLE_USER); - if (!page) + ptdesc = __pte_alloc_one(mm, GFP_PGTABLE_USER); + if (!ptdesc) return NULL; - ptes_clear(page_address(page)); - return page; + ptes_clear(ptdesc_address(ptdesc)); + return ptdesc; } #endif /* CONFIG_MMU */ diff --git a/fs/dax.c b/fs/dax.c index 6f7cea248206..51cbc08b22e7 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -1222,7 +1222,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf, DAX_PMD | DAX_ZERO_PAGE); if (arch_needs_pgtable_deposit()) { - ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm)); + ptdesc = pte_alloc_one(vma->vm_mm); if (!ptdesc) return VM_FAULT_OOM; } diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h index 7c48f5fbf8aa..1a4070f8d5dd 100644 --- a/include/asm-generic/pgalloc.h +++ b/include/asm-generic/pgalloc.h @@ -63,7 +63,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) * * Return: `struct page` referencing the ptdesc or %NULL on error */ -static inline pgtable_t __pte_alloc_one_noprof(struct mm_struct *mm, gfp_t gfp) +static inline struct ptdesc *__pte_alloc_one_noprof(struct mm_struct *mm, gfp_t gfp) { struct ptdesc *ptdesc; @@ -75,7 +75,7 @@ static inline pgtable_t __pte_alloc_one_noprof(struct mm_struct *mm, gfp_t gfp) return NULL; } - return ptdesc_page(ptdesc); + return ptdesc; } #define __pte_alloc_one(...) 
alloc_hooks(__pte_alloc_one_noprof(__VA_ARGS__)) @@ -88,7 +88,7 @@ static inline pgtable_t __pte_alloc_one_noprof(struct mm_struct *mm, gfp_t gfp) * * Return: `struct page` referencing the ptdesc or %NULL on error */ -static inline pgtable_t pte_alloc_one_noprof(struct mm_struct *mm) +static inline struct ptdesc *pte_alloc_one_noprof(struct mm_struct *mm) { return __pte_alloc_one_noprof(mm, GFP_PGTABLE_USER); } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 236e1582d97e..6274eb7559ac 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -959,7 +959,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, } folio_throttle_swaprate(folio, gfp); - ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm)); + ptdesc = pte_alloc_one(vma->vm_mm); if (unlikely(!ptdesc)) { ret = VM_FAULT_OOM; goto release; @@ -1091,7 +1091,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf) struct folio *zero_folio; vm_fault_t ret; - ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm)); + ptdesc = pte_alloc_one(vma->vm_mm); if (unlikely(!ptdesc)) return VM_FAULT_OOM; zero_folio = mm_get_huge_zero_folio(vma->vm_mm); @@ -1213,7 +1213,7 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write) return VM_FAULT_SIGBUS; if (arch_needs_pgtable_deposit()) { - ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm)); + ptdesc = pte_alloc_one(vma->vm_mm); if (!ptdesc) return VM_FAULT_OOM; } @@ -1376,7 +1376,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, if (!vma_is_anonymous(dst_vma)) return 0; - ptdesc = page_ptdesc(pte_alloc_one(dst_mm)); + ptdesc = pte_alloc_one(dst_mm); if (unlikely(!ptdesc)) goto out; diff --git a/mm/memory.c b/mm/memory.c index 5b01d94a0b5f..37529e0a9ce2 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -445,7 +445,7 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte) int __pte_alloc(struct mm_struct *mm, pmd_t *pmd) { - struct ptdesc *ptdesc = page_ptdesc(pte_alloc_one(mm)); + struct ptdesc *ptdesc = pte_alloc_one(mm); if (!ptdesc) return -ENOMEM; @@ -4647,7 +4647,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf) * # flush A, B to clear the writeback */ if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) { - vmf->prealloc_pte = pte_alloc_one(vma->vm_mm); + vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm)); if (!vmf->prealloc_pte) return VM_FAULT_OOM; } @@ -4725,7 +4725,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page) * related to pte entry. Use the preallocated table for that. 
*/ if (arch_needs_pgtable_deposit() && !vmf->prealloc_pte) { - vmf->prealloc_pte = pte_alloc_one(vma->vm_mm); + vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm)); if (!vmf->prealloc_pte) return VM_FAULT_OOM; } @@ -5010,7 +5010,7 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf) pte_off + vma_pages(vmf->vma) - vma_off) - 1; if (pmd_none(*vmf->pmd)) { - vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm); + vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vmf->vma->vm_mm)); if (!vmf->prealloc_pte) return VM_FAULT_OOM; }
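For context, a sketch of why the fault paths above still wrap the new
pte_alloc_one() with ptdesc_page(): vmf->prealloc_pte keeps its old
page-based pgtable_t type at this point in the series, so the conversion
happens at the assignment boundary (illustrative only):

	struct vm_fault {
		/* ... */
		pgtable_t prealloc_pte;	/* still a struct-page-based handle */
	};

	/* callers convert from the new ptdesc world at the boundary: */
	vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm));
	if (!vmf->prealloc_pte)
		return VM_FAULT_OOM;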
V" , Nick Piggin , Peter Zijlstra , Russell King , Catalin Marinas , Brian Cain , WANG Xuerui , Geert Uytterhoeven , Jonas Bonn , Stefan Kristiansson , Stafford Horne , Michael Ellerman , Naveen N Rao , Paul Walmsley , Albert Ou , Thomas Gleixner , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . Peter Anvin" , Andy Lutomirski , Bibo Mao , Baolin Wang , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Qi Zheng , Vishal Moola , "Aneesh Kumar K . V" , Kemeng Shi , Lance Yang , Peter Xu , Barry Song , linux-s390@vger.kernel.org Cc: Guo Ren , Christophe Leroy , Palmer Dabbelt , Mike Rapoport , Oscar Salvador , Alexandre Ghiti , Jisheng Zhang , Samuel Holland , Anup Patel , Josh Poimboeuf , Breno Leitao , Alexander Gordeev , Gerald Schaefer , Hugh Dickins , David Hildenbrand , Ryan Roberts , Matthew Wilcox , Alex Shi , nvdimm@lists.linux.dev, linux-fsdevel@vger.kernel.org, sparclinux@vger.kernel.org, Bjorn Helgaas , Arnd Bergmann , Christian Brauner , Alexander Viro , Jan Kara , Dan Williams , "David S . Miller" , "Naveen N . Rao" , Dawei Li Subject: [RFC PATCH 10/18] mm/pgtable: pass ptdesc to pte_free() Date: Tue, 30 Jul 2024 14:47:04 +0800 Message-ID: <20240730064712.3714387-11-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org> References: <20240730064712.3714387-1-alexs@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240729_234509_192603_9D8B2429 X-CRM114-Status: GOOD ( 19.41 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Alex Shi Now we could remove couple of page<->ptdesc converters now. Signed-off-by: Alex Shi Cc: linux-mm@kvack.org Cc: linux-arch@vger.kernel.org Cc: nvdimm@lists.linux.dev Cc: linux-fsdevel@vger.kernel.org Cc: sparclinux@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-m68k@lists.linux-m68k.org Cc: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: Bjorn Helgaas Cc: Vishal Moola Cc: Arnd Bergmann Cc: Christian Brauner Cc: Alexander Viro Cc: Jan Kara Cc: Matthew Wilcox Cc: Dan Williams Cc: David S. Miller Cc: Naveen N. 
Rao Cc: Christophe Leroy Cc: Nicholas Piggin Cc: Michael Ellerman Cc: Geert Uytterhoeven Cc: Russell King Cc: Mike Rapoport Cc: Dawei Li Cc: Hugh Dickins --- arch/arm/mm/pgd.c | 2 +- arch/m68k/include/asm/motorola_pgalloc.h | 4 ++-- arch/powerpc/include/asm/book3s/64/pgalloc.h | 2 +- arch/powerpc/include/asm/pgalloc.h | 2 +- arch/sparc/include/asm/pgalloc_32.h | 2 +- arch/sparc/mm/srmmu.c | 2 +- fs/dax.c | 2 +- include/asm-generic/pgalloc.h | 4 +--- mm/debug_vm_pgtable.c | 2 +- mm/huge_memory.c | 20 ++++++++++---------- mm/memory.c | 4 ++-- mm/pgtable-generic.c | 2 +- 12 files changed, 23 insertions(+), 25 deletions(-) diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c index f8e9bc58a84f..c384b734d752 100644 --- a/arch/arm/mm/pgd.c +++ b/arch/arm/mm/pgd.c @@ -168,7 +168,7 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd_base) pte = pmd_pgtable(*pmd); pmd_clear(pmd); - pte_free(mm, pte); + pte_free(mm, page_ptdesc(pte)); mm_dec_nr_ptes(mm); no_pmd: pud_clear(pud); diff --git a/arch/m68k/include/asm/motorola_pgalloc.h b/arch/m68k/include/asm/motorola_pgalloc.h index 74a817d9387f..f6bb375971dc 100644 --- a/arch/m68k/include/asm/motorola_pgalloc.h +++ b/arch/m68k/include/asm/motorola_pgalloc.h @@ -39,9 +39,9 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm) return get_pointer_table(TABLE_PTE); } -static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable) +static inline void pte_free(struct mm_struct *mm, struct ptdesc *ptdesc) { - free_pointer_table(pgtable, TABLE_PTE); + free_pointer_table(ptdesc_page(ptdesc), TABLE_PTE); } static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable, diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h index dd2cff53a111..eb7d2ca59f62 100644 --- a/arch/powerpc/include/asm/book3s/64/pgalloc.h +++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h @@ -162,7 +162,7 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, } static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, - pgtable_t pte_page) + struct ptdesc *pte_page) { *pmd = __pmd(__pgtable_ptr_val(pte_page) | PMD_VAL_BITS); } diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h index 37512f344b37..12520521163e 100644 --- a/arch/powerpc/include/asm/pgalloc.h +++ b/arch/powerpc/include/asm/pgalloc.h @@ -40,7 +40,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) pte_fragment_free((unsigned long *)pte, 1); } -static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage) +static inline void pte_free(struct mm_struct *mm, struct ptdesc *ptepage) { pte_fragment_free((unsigned long *)ptepage, 0); } diff --git a/arch/sparc/include/asm/pgalloc_32.h b/arch/sparc/include/asm/pgalloc_32.h index bc3ef54d9564..addaade56f21 100644 --- a/arch/sparc/include/asm/pgalloc_32.h +++ b/arch/sparc/include/asm/pgalloc_32.h @@ -71,7 +71,7 @@ static inline void free_pte_fast(pte_t *pte) #define pte_free_kernel(mm, pte) free_pte_fast(pte) -void pte_free(struct mm_struct * mm, pgtable_t pte); +void pte_free(struct mm_struct *mm, struct ptdesc *pte); #define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte) #endif /* _SPARC_PGALLOC_H */ diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c index 60bb8628bb1f..05be7d86eda3 100644 --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -365,7 +365,7 @@ struct ptdesc *pte_alloc_one(struct mm_struct *mm) return (struct ptdesc *)ptep; } -void pte_free(struct mm_struct *mm, pgtable_t ptep) +void 
pte_free(struct mm_struct *mm, struct ptdesc *ptep) { struct page *page; diff --git a/fs/dax.c b/fs/dax.c index 51cbc08b22e7..61b9bd5200da 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -1246,7 +1246,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf, fallback: if (ptdesc) - pte_free(vma->vm_mm, ptdesc_page(ptdesc)); + pte_free(vma->vm_mm, ptdesc); trace_dax_pmd_load_hole_fallback(inode, vmf, zero_folio, *entry); return VM_FAULT_FALLBACK; } diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h index 1a4070f8d5dd..5f249ec9d289 100644 --- a/include/asm-generic/pgalloc.h +++ b/include/asm-generic/pgalloc.h @@ -105,10 +105,8 @@ static inline struct ptdesc *pte_alloc_one_noprof(struct mm_struct *mm) * @mm: the mm_struct of the current context * @pte_page: the `struct page` referencing the ptdesc */ -static inline void pte_free(struct mm_struct *mm, struct page *pte_page) +static inline void pte_free(struct mm_struct *mm, struct ptdesc *ptdesc) { - struct ptdesc *ptdesc = page_ptdesc(pte_page); - pagetable_pte_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index e4969fb54da3..f256bc816744 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -1049,7 +1049,7 @@ static void __init destroy_args(struct pgtable_debug_args *args) /* Free page table entries */ if (args->start_ptep) { - pte_free(args->mm, args->start_ptep); + pte_free(args->mm, page_ptdesc(args->start_ptep)); mm_dec_nr_ptes(args->mm); } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 6274eb7559ac..dc323453fa02 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -987,7 +987,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, if (userfaultfd_missing(vma)) { spin_unlock(vmf->ptl); folio_put(folio); - pte_free(vma->vm_mm, ptdesc_page(ptdesc)); + pte_free(vma->vm_mm, ptdesc); ret = handle_userfault(vmf, VM_UFFD_MISSING); VM_BUG_ON(ret & VM_FAULT_FALLBACK); return ret; @@ -1013,7 +1013,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, spin_unlock(vmf->ptl); release: if (ptdesc) - pte_free(vma->vm_mm, ptdesc_page(ptdesc)); + pte_free(vma->vm_mm, ptdesc); folio_put(folio); return ret; @@ -1096,7 +1096,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf) return VM_FAULT_OOM; zero_folio = mm_get_huge_zero_folio(vma->vm_mm); if (unlikely(!zero_folio)) { - pte_free(vma->vm_mm, ptdesc_page(ptdesc)); + pte_free(vma->vm_mm, ptdesc); count_vm_event(THP_FAULT_FALLBACK); return VM_FAULT_FALLBACK; } @@ -1106,10 +1106,10 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf) ret = check_stable_address_space(vma->vm_mm); if (ret) { spin_unlock(vmf->ptl); - pte_free(vma->vm_mm, ptdesc_page(ptdesc)); + pte_free(vma->vm_mm, ptdesc); } else if (userfaultfd_missing(vma)) { spin_unlock(vmf->ptl); - pte_free(vma->vm_mm, ptdesc_page(ptdesc)); + pte_free(vma->vm_mm, ptdesc); ret = handle_userfault(vmf, VM_UFFD_MISSING); VM_BUG_ON(ret & VM_FAULT_FALLBACK); } else { @@ -1120,7 +1120,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf) } } else { spin_unlock(vmf->ptl); - pte_free(vma->vm_mm, ptdesc_page(ptdesc)); + pte_free(vma->vm_mm, ptdesc); } return ret; } @@ -1178,7 +1178,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr, out_unlock: spin_unlock(ptl); if (ptdesc) - pte_free(mm, ptdesc_page(ptdesc)); + pte_free(mm, ptdesc); } /** @@ -1414,7 +1414,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, #endif if 
(unlikely(!pmd_trans_huge(pmd))) { - pte_free(dst_mm, ptdesc_page(ptdesc)); + pte_free(dst_mm, ptdesc); goto out_unlock; } /* @@ -1440,7 +1440,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, if (unlikely(folio_try_dup_anon_rmap_pmd(src_folio, src_page, src_vma))) { /* Page maybe pinned: split and retry the fault on PTEs. */ folio_put(src_folio); - pte_free(dst_mm, ptdesc_page(ptdesc)); + pte_free(dst_mm, ptdesc); spin_unlock(src_ptl); spin_unlock(dst_ptl); __split_huge_pmd(src_vma, src_pmd, addr, false, NULL); @@ -1830,7 +1830,7 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd) pgtable_t pgtable; pgtable = pgtable_trans_huge_withdraw(mm, pmd); - pte_free(mm, pgtable); + pte_free(mm, page_ptdesc(pgtable)); mm_dec_nr_ptes(mm); } diff --git a/mm/memory.c b/mm/memory.c index 37529e0a9ce2..3014168e7296 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -451,7 +451,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd) pmd_install(mm, pmd, (pgtable_t *)&ptdesc); if (ptdesc) - pte_free(mm, ptdesc_page(ptdesc)); + pte_free(mm, ptdesc); return 0; } @@ -5196,7 +5196,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf) /* preallocated pagetable is unused: free it */ if (vmf->prealloc_pte) { - pte_free(vm_mm, vmf->prealloc_pte); + pte_free(vm_mm, page_ptdesc(vmf->prealloc_pte)); vmf->prealloc_pte = NULL; } return ret; } diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index f34a8d115f5b..92245a32656b 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -241,7 +241,7 @@ static void pte_free_now(struct rcu_head *head) struct ptdesc *ptdesc; ptdesc = container_of(head, struct ptdesc, pt_rcu_head); - pte_free(NULL /* mm not passed and not used */, (pgtable_t)ptdesc); + pte_free(NULL /* mm not passed and not used */, ptdesc); } void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
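With this change the user-PTE allocate/free pair is symmetric in ptdesc
terms. A minimal usage sketch (illustrative only, not from the patch):

	struct ptdesc *ptdesc = pte_alloc_one(mm);

	if (!ptdesc)
		return -ENOMEM;
	/*
	 * If the table ends up unused, free it directly: no
	 * page_ptdesc()/ptdesc_page() round trip is needed anymore.
	 */
	pte_free(mm, ptdesc);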
From patchwork Tue Jul 30 07:27:12 2024

From: alexs@kernel.org
Subject: [RFC PATCH 11/18] mm/pgtable: introduce ptdesc_pfn and use ptdesc in free_pte_range()
Date: Tue, 30 Jul 2024 15:27:12 +0800
Message-ID: <20240730072719.3715016-1-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>

From: Alex Shi

Replace pgtable_t with ptdesc in free_pte_range() and the pte_free_tlb()
series of functions it calls, which saves a few more converters. We have
to cast the result of pmd_pgtable() rather than use the page_ptdesc()
helper, since different architectures define pgtable_t as different
types. Note that we cannot simplify pmd_ptdesc() by replacing
pmd_pgtable_page() with pmd_page(), since some architectures do not
provide pmd_page() yet.
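To make the casting constraint concrete: pgtable_t is architecture-defined
(struct page * on most architectures, but e.g. pte_t * on powerpc and
sparc32), so page_ptdesc(), which takes a struct page *, cannot be applied
to a pmd_pgtable() result in generic code. For reference, pmd_ptdesc() is
currently defined in include/linux/mm.h (in the split-PMD-ptlock
configuration) along these lines:

	static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
	{
		return page_ptdesc(pmd_pgtable_page(pmd));
	}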
Signed-off-by: Alex Shi Cc: Anup Patel Cc: Samuel Holland Cc: Jisheng Zhang Cc: Alexandre Ghiti Cc: Oscar Salvador Cc: Palmer Dabbelt Cc: Guo Ren Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-openrisc@vger.kernel.org Cc: linux-m68k@lists.linux-m68k.org Cc: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mm@kvack.org Cc: linux-arch@vger.kernel.org Cc: Andy Lutomirski Cc: H. Peter Anvin Cc: x86@kernel.org Cc: Dave Hansen Cc: Borislav Petkov Cc: Thomas Gleixner Cc: Naveen N. Rao Cc: Christophe Leroy Cc: Michael Ellerman Cc: Stafford Horne Cc: Stefan Kristiansson Cc: Jonas Bonn Cc: Geert Uytterhoeven Cc: Catalin Marinas Cc: Russell King Cc: Peter Zijlstra Cc: Nick Piggin Cc: Aneesh Kumar K.V Cc: Will Deacon Cc: Breno Leitao Cc: Josh Poimboeuf Cc: Vishal Moola Cc: Mike Rapoport --- arch/arm/include/asm/tlb.h | 4 +--- arch/arm64/include/asm/tlb.h | 4 +--- arch/csky/include/asm/pgalloc.h | 4 ++-- arch/hexagon/include/asm/pgalloc.h | 4 ++-- arch/loongarch/include/asm/pgalloc.h | 4 ++-- arch/m68k/include/asm/motorola_pgalloc.h | 4 ++-- arch/openrisc/include/asm/pgalloc.h | 4 ++-- arch/powerpc/include/asm/book3s/32/pgalloc.h | 2 +- arch/powerpc/include/asm/book3s/64/pgalloc.h | 2 +- arch/riscv/include/asm/pgalloc.h | 8 +++----- arch/x86/include/asm/pgalloc.h | 4 ++-- arch/x86/mm/pgtable.c | 6 +++--- include/linux/mm.h | 14 ++++++++++++++ mm/memory.c | 3 ++- 14 files changed, 38 insertions(+), 29 deletions(-) diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index f40d06ad5d2a..ed6aa4255518 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -37,10 +37,8 @@ static inline void __tlb_remove_table(void *_table) #include static inline void -__pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) +__pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *ptdesc, unsigned long addr) { - struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_pte_dtor(ptdesc); #ifndef CONFIG_ARM_LPAE diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index a947c6e784ed..cee7234af6e7 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -77,11 +77,9 @@ static inline void tlb_flush(struct mmu_gather *tlb) last_level, tlb_level); } -static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, +static inline void __pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *ptdesc, unsigned long addr) { - struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_pte_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h index 9c84c9012e53..b24b4611436e 100644 --- a/arch/csky/include/asm/pgalloc.h +++ b/arch/csky/include/asm/pgalloc.h @@ -63,8 +63,8 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc(tlb, page_ptdesc(pte)); \ + pagetable_pte_dtor(pte); \ + tlb_remove_page_ptdesc(tlb, pte); \ } while (0) extern void pagetable_init(void); diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h index 55988625e6fb..a3e082e54b74 100644 --- a/arch/hexagon/include/asm/pgalloc.h +++ b/arch/hexagon/include/asm/pgalloc.h @@ -89,8 +89,8 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor((page_ptdesc(pte))); \ - tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ + pagetable_pte_dtor((pte)); \ + 
tlb_remove_page_ptdesc((tlb), (pte)); \ } while (0) #endif diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h index 4e2d6b7ca2ee..c96d7160babc 100644 --- a/arch/loongarch/include/asm/pgalloc.h +++ b/arch/loongarch/include/asm/pgalloc.h @@ -46,8 +46,8 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ + pagetable_pte_dtor(pte); \ + tlb_remove_page_ptdesc((tlb), pte); \ } while (0) #ifndef __PAGETABLE_PMD_FOLDED diff --git a/arch/m68k/include/asm/motorola_pgalloc.h b/arch/m68k/include/asm/motorola_pgalloc.h index f6bb375971dc..f9ee5ec4574d 100644 --- a/arch/m68k/include/asm/motorola_pgalloc.h +++ b/arch/m68k/include/asm/motorola_pgalloc.h @@ -44,10 +44,10 @@ static inline void pte_free(struct mm_struct *mm, struct ptdesc *ptdesc) free_pointer_table(ptdesc_page(ptdesc), TABLE_PTE); } -static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable, +static inline void __pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *ptdesc, unsigned long address) { - free_pointer_table(pgtable, TABLE_PTE); + free_pointer_table(ptdesc_page(ptdesc), TABLE_PTE); } diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h index c6a73772a546..2251d940c3d8 100644 --- a/arch/openrisc/include/asm/pgalloc.h +++ b/arch/openrisc/include/asm/pgalloc.h @@ -68,8 +68,8 @@ extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ + pagetable_pte_dtor(pte); \ + tlb_remove_page_ptdesc((tlb), (pte)); \ } while (0) #endif diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h index dd4eb3063175..a435c84d1f9a 100644 --- a/arch/powerpc/include/asm/book3s/32/pgalloc.h +++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h @@ -64,7 +64,7 @@ static inline void __tlb_remove_table(void *_table) pgtable_free(table, shift); } -static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table, +static inline void __pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *table, unsigned long address) { pgtable_free_tlb(tlb, table, 0); diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h index eb7d2ca59f62..675eca34fe40 100644 --- a/arch/powerpc/include/asm/book3s/64/pgalloc.h +++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h @@ -167,7 +167,7 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, *pmd = __pmd(__pgtable_ptr_val(pte_page) | PMD_VAL_BITS); } -static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table, +static inline void __pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *table, unsigned long address) { pgtable_free_tlb(tlb, table, PTE_INDEX); diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index f52264304f77..63596efcd528 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -183,13 +183,11 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, #endif /* __PAGETABLE_PMD_FOLDED */ -static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, +static inline void __pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *pte, unsigned long addr) { - struct ptdesc *ptdesc = page_ptdesc(pte); - - pagetable_pte_dtor(ptdesc); - 
riscv_tlb_remove_ptdesc(tlb, ptdesc); + pagetable_pte_dtor(pte); + riscv_tlb_remove_ptdesc(tlb, pte); } #endif /* CONFIG_MMU */ diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h index 497c757b5b98..06a9a5867a86 100644 --- a/arch/x86/include/asm/pgalloc.h +++ b/arch/x86/include/asm/pgalloc.h @@ -53,9 +53,9 @@ extern void pgd_free(struct mm_struct *mm, pgd_t *pgd); extern struct ptdesc *pte_alloc_one(struct mm_struct *); -extern void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte); +extern void ___pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *pte); -static inline void __pte_free_tlb(struct mmu_gather *tlb, struct page *pte, +static inline void __pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *pte, unsigned long address) { ___pte_free_tlb(tlb, pte); diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index c27d15cd01b9..3cf9c0d25dbd 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -50,10 +50,10 @@ static int __init setup_userpte(char *arg) } early_param("userpte", setup_userpte); -void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) +void ___pte_free_tlb(struct mmu_gather *tlb, struct ptdesc *pte) { - pagetable_pte_dtor(page_ptdesc(pte)); - paravirt_release_pte(page_to_pfn(pte)); + pagetable_pte_dtor(pte); + paravirt_release_pte(ptdesc_pfn(pte)); paravirt_tlb_remove_table(tlb, pte); } diff --git a/include/linux/mm.h b/include/linux/mm.h index 381750f41767..7424f964dff3 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2859,6 +2859,20 @@ static inline bool pagetable_is_reserved(struct ptdesc *pt) return folio_test_reserved(ptdesc_folio(pt)); } +/** + * ptdesc_pfn - Return the Page Frame Number of a ptdesc. + * @ptdesc: The ptdesc. + * + * A ptdesc may contain multiple pages. The pages have consecutive + * Page Frame Numbers. + * + * Return: The Page Frame Number of the first page in the ptdesc. 
+ */ +static inline unsigned long ptdesc_pfn(struct ptdesc *ptdesc) +{ + return page_to_pfn(ptdesc_page(ptdesc)); +} + /** * pagetable_alloc - Allocate pagetables * @gfp: GFP flags diff --git a/mm/memory.c b/mm/memory.c index 3014168e7296..27c2f63b7487 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -189,7 +189,8 @@ void mm_trace_rss_stat(struct mm_struct *mm, int member) static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr) { - pgtable_t token = pmd_pgtable(*pmd); + struct ptdesc *token = (struct ptdesc *)pmd_pgtable(*pmd); + pmd_clear(pmd); pte_free_tlb(tlb, token, addr); mm_dec_nr_ptes(tlb->mm);

From patchwork Tue Jul 30 07:27:13 2024
V" , Nick Piggin , Peter Zijlstra , Russell King , Catalin Marinas , Brian Cain , WANG Xuerui , Geert Uytterhoeven , Jonas Bonn , Stefan Kristiansson , Stafford Horne , Michael Ellerman , Naveen N Rao , Paul Walmsley , Albert Ou , Thomas Gleixner , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . Peter Anvin" , Andy Lutomirski , Bibo Mao , Baolin Wang , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Qi Zheng , Vishal Moola , "Aneesh Kumar K . V" , Kemeng Shi , Lance Yang , Peter Xu , Barry Song , linux-s390@vger.kernel.org Cc: Guo Ren , Christophe Leroy , Palmer Dabbelt , Mike Rapoport , Oscar Salvador , Alexandre Ghiti , Jisheng Zhang , Samuel Holland , Anup Patel , Josh Poimboeuf , Breno Leitao , Alexander Gordeev , Gerald Schaefer , Hugh Dickins , David Hildenbrand , Ryan Roberts , Matthew Wilcox , Alex Shi , "Naveen N . Rao" , Andrew Morton Subject: [RFC PATCH 12/18] mm/thp: pass ptdesc to set_huge_zero_folio function Date: Tue, 30 Jul 2024 15:27:13 +0800 Message-ID: <20240730072719.3715016-2-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240730072719.3715016-1-alexs@kernel.org> References: <20240730064712.3714387-1-alexs@kernel.org> <20240730072719.3715016-1-alexs@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240730_002240_639510_8DDC1D5B X-CRM114-Status: GOOD ( 11.34 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Alex Shi Aim is still replace struct page to ptdesc. Signed-off-by: Alex Shi Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org Cc: Andrew Morton --- mm/huge_memory.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index dc323453fa02..1c121ec85447 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1055,7 +1055,7 @@ gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma) } /* Caller must hold page table lock. 
Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/huge_memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index dc323453fa02..1c121ec85447 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1055,7 +1055,7 @@ gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma)
 }
 
 /* Caller must hold page table lock. */
-static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
+static void set_huge_zero_folio(struct ptdesc *ptdesc, struct mm_struct *mm,
 		struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
 		struct folio *zero_folio)
 {
@@ -1064,7 +1064,7 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
 		return;
 	entry = mk_pmd(&zero_folio->page, vma->vm_page_prot);
 	entry = pmd_mkhuge(entry);
-	pgtable_trans_huge_deposit(mm, pmd, pgtable);
+	pgtable_trans_huge_deposit(mm, pmd, ptdesc_page(ptdesc));
 	set_pmd_at(mm, haddr, pmd, entry);
 	mm_inc_nr_ptes(mm);
 }
@@ -1113,7 +1113,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		ret = handle_userfault(vmf, VM_UFFD_MISSING);
 		VM_BUG_ON(ret & VM_FAULT_FALLBACK);
 	} else {
-		set_huge_zero_folio(ptdesc_page(ptdesc), vma->vm_mm, vma,
+		set_huge_zero_folio(ptdesc, vma->vm_mm, vma,
 				haddr, vmf->pmd, zero_folio);
 		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		spin_unlock(vmf->ptl);

From patchwork Tue Jul 30 07:27:14 2024

From: alexs@kernel.org
Subject: [RFC PATCH 13/18] mm/pgtable: return ptdesc pointer in pgtable_trans_huge_withdraw
Date: Tue, 30 Jul 2024 15:27:14 +0800
Message-ID: <20240730072719.3715016-3-alexs@kernel.org>
In-Reply-To: <20240730072719.3715016-1-alexs@kernel.org>

From: Alex Shi

A step toward replacing pgtable_t (a struct page handle on most
architectures) with ptdesc.

Signed-off-by: Alex Shi Cc: linux-mm@kvack.org Cc: sparclinux@vger.kernel.org Cc: linux-s390@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: Barry Song Cc: Lance Yang Cc: Kinsey Ho Cc: Aneesh Kumar K.V Cc: Benjamin Gray Cc: Andreas Larsson Cc: David S. Miller Cc: Sven Schnelle Cc: Christian Borntraeger Cc: Vasily Gorbik Cc: Heiko Carstens Cc: Gerald Schaefer Cc: Alexander Gordeev Cc: Naveen N.
Rao Cc: Nicholas Piggin Cc: Ryan Roberts Cc: Matthew Wilcox Cc: David Hildenbrand Cc: Jason Gunthorpe Cc: Aneesh Kumar K.V Cc: Peter Xu Cc: Mike Rapoport Cc: Christophe Leroy Cc: Michael Ellerman --- arch/powerpc/include/asm/book3s/64/hash-4k.h | 4 +-- arch/powerpc/include/asm/book3s/64/hash-64k.h | 4 +-- arch/powerpc/include/asm/book3s/64/pgtable.h | 2 +- arch/powerpc/include/asm/book3s/64/radix.h | 4 +-- arch/powerpc/mm/book3s64/hash_pgtable.c | 4 +-- arch/powerpc/mm/book3s64/radix_pgtable.c | 4 +-- arch/s390/include/asm/pgtable.h | 2 +- arch/s390/mm/pgtable.c | 4 +-- arch/sparc/include/asm/pgtable_64.h | 2 +- arch/sparc/mm/tlb.c | 4 +-- include/linux/pgtable.h | 2 +- mm/huge_memory.c | 35 ++++++++++--------- mm/pgtable-generic.c | 4 +-- 13 files changed, 38 insertions(+), 37 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h index c654c376ef8b..3a99a0229c37 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h @@ -133,8 +133,8 @@ extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp); extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable); -extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); + struct ptdesc *ptdesc); +extern struct ptdesc *hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp); extern int hash__has_transparent_hugepage(void); diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h index 0bf6fd0bf42a..8f497e1617bd 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h @@ -274,8 +274,8 @@ extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp); extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable); -extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); + struct ptdesc *ptdesc); +extern struct ptdesc *hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp); extern int hash__has_transparent_hugepage(void); diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 519b1743a0f4..0ee440b819d7 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -1373,7 +1373,7 @@ static inline void pgtable_trans_huge_deposit(struct mm_struct *mm, } #define __HAVE_ARCH_PGTABLE_WITHDRAW -static inline pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, +static inline struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) { if (radix_enabled()) diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h index 8f55ff74bb68..a8630b249f4c 100644 --- a/arch/powerpc/include/asm/book3s/64/radix.h +++ b/arch/powerpc/include/asm/book3s/64/radix.h @@ -291,8 +291,8 @@ extern unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned l extern pmd_t radix__pmdp_collapse_flush(struct 
vm_area_struct *vma, unsigned long address, pmd_t *pmdp); extern void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable); -extern pgtable_t radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); + struct ptdesc *ptdesc); +extern struct ptdesc *radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); extern pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp); pud_t radix__pudp_huge_get_and_clear(struct mm_struct *mm, diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c index 988948d69bc1..35562d1f4267 100644 --- a/arch/powerpc/mm/book3s64/hash_pgtable.c +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c @@ -284,7 +284,7 @@ void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, smp_wmb(); } -pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) +struct ptdesc *hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) { pgtable_t pgtable; pgtable_t *pgtable_slot; @@ -302,7 +302,7 @@ pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) * zero out the content on withdraw. */ memset(pgtable, 0, PTE_FRAG_SIZE); - return pgtable; + return (struct ptdesc *)pgtable; } /* diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index b0d927009af8..3b9bb19510e3 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -1492,7 +1492,7 @@ void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, pmd_huge_pte(mm, pmdp) = pgtable; } -pgtable_t radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) +struct ptdesc *radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) { pte_t *ptep; pgtable_t pgtable; @@ -1513,7 +1513,7 @@ pgtable_t radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) *ptep = __pte(0); ptep++; *ptep = __pte(0); - return pgtable; + return (struct ptdesc *)pgtable; } pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm, diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index 3fa280d0672a..cf0baf4bfe5c 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -1738,7 +1738,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, pgtable_t pgtable); #define __HAVE_ARCH_PGTABLE_WITHDRAW -pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); +struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS static inline int pmdp_set_access_flags(struct vm_area_struct *vma, diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c index 201d350abd1e..b9016ee145cb 100644 --- a/arch/s390/mm/pgtable.c +++ b/arch/s390/mm/pgtable.c @@ -577,7 +577,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, pmd_huge_pte(mm, pmdp) = (struct ptdesc *)pgtable; } -pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) +struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) { struct list_head *lh; pgtable_t pgtable; @@ -598,7 +598,7 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) set_pte(ptep, __pte(_PAGE_INVALID)); ptep++; set_pte(ptep, __pte(_PAGE_INVALID)); - return pgtable; + return (struct ptdesc *)pgtable; } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ diff --git a/arch/sparc/include/asm/pgtable_64.h 
b/arch/sparc/include/asm/pgtable_64.h index 3fe429d73a65..bfefd678e220 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -998,7 +998,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, pgtable_t pgtable); #define __HAVE_ARCH_PGTABLE_WITHDRAW -pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); +struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); #endif /* diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c index 903825b4c997..bd2d3b1f6ba3 100644 --- a/arch/sparc/mm/tlb.c +++ b/arch/sparc/mm/tlb.c @@ -281,7 +281,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, pmd_huge_pte(mm, pmdp) = (struct ptdesc *)pgtable; } -pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) +struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) { struct list_head *lh; pgtable_t pgtable; @@ -300,6 +300,6 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) pte_val(pgtable[0]) = 0; pte_val(pgtable[1]) = 0; - return pgtable; + return (struct ptdesc *)pgtable; } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 2a6a3cccfc36..3fa7b93580a3 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -929,7 +929,7 @@ extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, #endif #ifndef __HAVE_ARCH_PGTABLE_WITHDRAW -extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); +extern struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); #endif #ifndef arch_needs_pgtable_deposit diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 1c121ec85447..4dc36910c8aa 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1827,10 +1827,10 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd) { - pgtable_t pgtable; + struct ptdesc *ptdesc; - pgtable = pgtable_trans_huge_withdraw(mm, pmd); - pte_free(mm, page_ptdesc(pgtable)); + ptdesc = pgtable_trans_huge_withdraw(mm, pmd); + pte_free(mm, ptdesc); mm_dec_nr_ptes(mm); } @@ -1959,9 +1959,10 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, VM_BUG_ON(!pmd_none(*new_pmd)); if (pmd_move_must_withdraw(new_ptl, old_ptl, vma)) { - pgtable_t pgtable; - pgtable = pgtable_trans_huge_withdraw(mm, old_pmd); - pgtable_trans_huge_deposit(mm, new_pmd, pgtable); + struct ptdesc *ptdesc; + + ptdesc = pgtable_trans_huge_withdraw(mm, old_pmd); + pgtable_trans_huge_deposit(mm, new_pmd, ptdesc_page(ptdesc)); } pmd = move_soft_dirty_pmd(pmd); set_pmd_at(mm, new_addr, new_pmd, pmd); @@ -2130,7 +2131,7 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm struct folio *src_folio; struct anon_vma *src_anon_vma; spinlock_t *src_ptl, *dst_ptl; - pgtable_t src_pgtable; + struct ptdesc *src_ptdesc; struct mmu_notifier_range range; int err = 0; @@ -2234,8 +2235,8 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm } set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd); - src_pgtable = pgtable_trans_huge_withdraw(mm, src_pmd); - pgtable_trans_huge_deposit(mm, dst_pmd, src_pgtable); + src_ptdesc = pgtable_trans_huge_withdraw(mm, src_pmd); + pgtable_trans_huge_deposit(mm, dst_pmd, ptdesc_page(src_ptdesc)); unlock_ptls: double_pt_unlock(src_ptl, dst_ptl); if (src_anon_vma) { @@ -2347,7 +2348,7 @@ static void 
__split_huge_zero_page_pmd(struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd) { struct mm_struct *mm = vma->vm_mm; - pgtable_t pgtable; + struct ptdesc *ptdesc; pmd_t _pmd, old_pmd; unsigned long addr; pte_t *pte; @@ -2363,8 +2364,8 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma, */ old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd); - pgtable = pgtable_trans_huge_withdraw(mm, pmd); - pmd_populate(mm, &_pmd, pgtable); + ptdesc = pgtable_trans_huge_withdraw(mm, pmd); + pmd_populate(mm, &_pmd, ptdesc_page(ptdesc)); pte = pte_offset_map(&_pmd, haddr); VM_BUG_ON(!pte); @@ -2381,7 +2382,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma, } pte_unmap(pte - 1); smp_wmb(); /* make pte visible before pmd */ - pmd_populate(mm, pmd, pgtable); + pmd_populate(mm, pmd, ptdesc_page(ptdesc)); } static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, @@ -2390,7 +2391,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, struct mm_struct *mm = vma->vm_mm; struct folio *folio; struct page *page; - pgtable_t pgtable; + struct ptdesc *ptdesc; pmd_t old_pmd, _pmd; bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false; bool anon_exclusive = false, dirty = false; @@ -2535,8 +2536,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, * Withdraw the table only after we mark the pmd entry invalid. * This's critical for some architectures (Power). */ - pgtable = pgtable_trans_huge_withdraw(mm, pmd); - pmd_populate(mm, &_pmd, pgtable); + ptdesc = pgtable_trans_huge_withdraw(mm, pmd); + pmd_populate(mm, &_pmd, ptdesc_page(ptdesc)); pte = pte_offset_map(&_pmd, haddr); VM_BUG_ON(!pte); @@ -2601,7 +2602,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, put_page(page); smp_wmb(); /* make pte visible before pmd */ - pmd_populate(mm, pmd, pgtable); + pmd_populate(mm, pmd, ptdesc_page(ptdesc)); } void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address, diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index 92245a32656b..de1ed30fea16 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -178,7 +178,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, #ifndef __HAVE_ARCH_PGTABLE_WITHDRAW /* no "address" argument so destroys page coloring of some arch */ -pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) +struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) { struct ptdesc *ptdesc; @@ -190,7 +190,7 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) struct ptdesc, pt_list); if (pmd_huge_pte(mm, pmdp)) list_del(&ptdesc->pt_list); - return ptdesc_page(ptdesc); + return ptdesc; } #endif

From patchwork Tue Jul 30 07:27:15 2024
From: alexs@kernel.org
Subject: [RFC PATCH 14/18] mm/pgtable: use ptdesc in pgtable_trans_huge_deposit
Date: Tue, 30 Jul 2024 15:27:15 +0800
Message-ID: <20240730072719.3715016-4-alexs@kernel.org>
In-Reply-To: <20240730072719.3715016-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org> <20240730072719.3715016-1-alexs@kernel.org>

From: Alex Shi

A step toward replacing pgtable_t with struct ptdesc.

Signed-off-by: Alex Shi Cc: linux-mm@kvack.org Cc: nvdimm@lists.linux.dev Cc: linux-fsdevel@vger.kernel.org Cc: sparclinux@vger.kernel.org Cc: linux-s390@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: Barry Song Cc: Lance Yang Cc: Hugh Dickins Cc: Kinsey Ho Cc: Ingo Molnar Cc: Aneesh Kumar K.V Cc: Christian Brauner Cc: Alexander Viro Cc: Jan Kara Cc: Dan Williams Cc: Andreas Larsson Cc: David S. Miller Cc: Sven Schnelle Cc: Christian Borntraeger Cc: Vasily Gorbik Cc: Heiko Carstens Cc: Gerald Schaefer Cc: Alexander Gordeev Cc: Naveen N. Rao Cc: Nicholas Piggin Cc: Ryan Roberts Cc: David Hildenbrand Cc: Jason Gunthorpe Cc: Aneesh Kumar K.V Cc: Mike Rapoport Cc: Peter Xu Cc: Matthew Wilcox Cc: Christophe Leroy Cc: Michael Ellerman --- arch/powerpc/include/asm/book3s/64/pgtable.h | 6 +++--- arch/powerpc/mm/book3s64/hash_pgtable.c | 6 +++--- arch/powerpc/mm/book3s64/radix_pgtable.c | 6 +++--- arch/s390/include/asm/pgtable.h | 2 +- arch/s390/mm/pgtable.c | 6 +++--- arch/sparc/include/asm/pgtable_64.h | 2 +- arch/sparc/mm/tlb.c | 6 +++--- fs/dax.c | 2 +- include/linux/pgtable.h | 2 +- mm/debug_vm_pgtable.c | 2 +- mm/huge_memory.c | 14 +++++++------- mm/khugepaged.c | 2 +- mm/memory.c | 2 +- mm/pgtable-generic.c | 8 ++++---- 14 files changed, 33 insertions(+), 33 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 0ee440b819d7..cf44e2440825 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -1365,11 +1365,11 @@ pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma, #define __HAVE_ARCH_PGTABLE_DEPOSIT static inline void pgtable_trans_huge_deposit(struct mm_struct *mm, - pmd_t *pmdp, pgtable_t pgtable) + pmd_t *pmdp, struct ptdesc *ptdesc) { if (radix_enabled()) - return radix__pgtable_trans_huge_deposit(mm, pmdp, pgtable); - return hash__pgtable_trans_huge_deposit(mm, pmdp, pgtable); + return radix__pgtable_trans_huge_deposit(mm, pmdp, ptdesc); + return hash__pgtable_trans_huge_deposit(mm, pmdp, ptdesc); } #define __HAVE_ARCH_PGTABLE_WITHDRAW diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c index 35562d1f4267..8fd2c833dc3d 100644 --- a/arch/powerpc/mm/book3s64/hash_pgtable.c +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c @@ -265,16 +265,16 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres * the base page size hptes */ void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable) + struct ptdesc *ptdesc) { - pgtable_t *pgtable_slot; + pte_t **pgtable_slot; assert_spin_locked(pmd_lockptr(mm, pmdp)); /* * we store the pgtable in the second half of PMD */ pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD; - *pgtable_slot = pgtable; + *pgtable_slot = (pte_t *)ptdesc; /* * expose the deposited pgtable to other cpus. * before we set the hugepage PTE at pmd level
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index 3b9bb19510e3..c33e860966ad 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -1478,9 +1478,9 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre * list_head memory area. */ void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable) + struct ptdesc *ptdesc) { - struct list_head *lh = (struct list_head *) pgtable; + struct list_head *lh = (struct list_head *)ptdesc; assert_spin_locked(pmd_lockptr(mm, pmdp)); @@ -1489,7 +1489,7 @@ void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, INIT_LIST_HEAD(lh); else list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp)); - pmd_huge_pte(mm, pmdp) = pgtable; + pmd_huge_pte(mm, pmdp) = ptdesc_page(ptdesc); } struct ptdesc *radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index cf0baf4bfe5c..d7b635f5e1e7 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -1735,7 +1735,7 @@ pud_t pudp_xchg_direct(struct mm_struct *, unsigned long, pud_t *, pud_t); #define __HAVE_ARCH_PGTABLE_DEPOSIT void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable); + struct ptdesc *ptdesc); #define __HAVE_ARCH_PGTABLE_WITHDRAW struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c index b9016ee145cb..cf1a6aeb66d4 100644 --- a/arch/s390/mm/pgtable.c +++ b/arch/s390/mm/pgtable.c @@ -563,9 +563,9 @@ EXPORT_SYMBOL(pudp_xchg_direct); #ifdef CONFIG_TRANSPARENT_HUGEPAGE void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable) + struct ptdesc *ptdesc) { - struct list_head *lh = (struct list_head *) pgtable; + struct list_head *lh = (struct list_head *)ptdesc; assert_spin_locked(pmd_lockptr(mm, pmdp)); @@ -574,7 +574,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, INIT_LIST_HEAD(lh); else list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp)); - pmd_huge_pte(mm, pmdp) = (struct ptdesc *)pgtable; + pmd_huge_pte(mm, pmdp) = ptdesc; } struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index bfefd678e220..c71be5ef8b06 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -995,7 +995,7 @@ extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, #define __HAVE_ARCH_PGTABLE_DEPOSIT void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable); + struct ptdesc *ptdesc); #define __HAVE_ARCH_PGTABLE_WITHDRAW struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c index bd2d3b1f6ba3..eeed4427f524 100644 --- a/arch/sparc/mm/tlb.c +++ b/arch/sparc/mm/tlb.c @@ -267,9 +267,9 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long
address, } void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable) + struct ptdesc *ptdesc) { - struct list_head *lh = (struct list_head *) pgtable; + struct list_head *lh = (struct list_head *)ptdesc; assert_spin_locked(&mm->page_table_lock); @@ -278,7 +278,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, INIT_LIST_HEAD(lh); else list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp)); - pmd_huge_pte(mm, pmdp) = (struct ptdesc *)pgtable; + pmd_huge_pte(mm, pmdp) = ptdesc; } struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) diff --git a/fs/dax.c b/fs/dax.c index 61b9bd5200da..4b4e6acb0efc 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -1234,7 +1234,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf, } if (ptdesc) { - pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc_page(ptdesc)); + pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc); mm_inc_nr_ptes(vma->vm_mm); } pmd_entry = mk_pmd(&zero_folio->page, vmf->vma->vm_page_prot); diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 3fa7b93580a3..9d256c548f5e 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -925,7 +925,7 @@ static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, #ifndef __HAVE_ARCH_PGTABLE_DEPOSIT extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable); + struct ptdesc *ptdesc); #endif #ifndef __HAVE_ARCH_PGTABLE_WITHDRAW diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index f256bc816744..8550eec32aba 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -225,7 +225,7 @@ static void __init pmd_advanced_tests(struct pgtable_debug_args *args) /* Align the address wrt HPAGE_PMD_SIZE */ vaddr &= HPAGE_PMD_MASK; - pgtable_trans_huge_deposit(args->mm, args->pmdp, args->start_ptep); + pgtable_trans_huge_deposit(args->mm, args->pmdp, page_ptdesc(args->start_ptep)); pmd = pfn_pmd(args->pmd_pfn, args->page_prot); set_pmd_at(args->mm, vaddr, args->pmdp, pmd); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 4dc36910c8aa..aac67e8a8cc8 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -997,7 +997,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE); folio_add_lru_vma(folio, vma); - pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc_page(ptdesc)); + pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc); set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry); update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR); @@ -1064,7 +1064,7 @@ static void set_huge_zero_folio(struct ptdesc *ptdesc, struct mm_struct *mm, return; entry = mk_pmd(&zero_folio->page, vma->vm_page_prot); entry = pmd_mkhuge(entry); - pgtable_trans_huge_deposit(mm, pmd, ptdesc_page(ptdesc)); + pgtable_trans_huge_deposit(mm, pmd, ptdesc); set_pmd_at(mm, haddr, pmd, entry); mm_inc_nr_ptes(mm); } @@ -1167,7 +1167,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr, } if (ptdesc) { - pgtable_trans_huge_deposit(mm, pmd, ptdesc_page(ptdesc)); + pgtable_trans_huge_deposit(mm, pmd, ptdesc); mm_inc_nr_ptes(mm); ptdesc = NULL; } @@ -1404,7 +1404,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, } add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR); mm_inc_nr_ptes(dst_mm); - pgtable_trans_huge_deposit(dst_mm, dst_pmd, 
ptdesc_page(ptdesc)); + pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc); if (!userfaultfd_wp(dst_vma)) pmd = pmd_swp_clear_uffd_wp(pmd); set_pmd_at(dst_mm, addr, dst_pmd, pmd); @@ -1449,7 +1449,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR); out_zero_page: mm_inc_nr_ptes(dst_mm); - pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc)); + pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc); pmdp_set_wrprotect(src_mm, addr, src_pmd); if (!userfaultfd_wp(dst_vma)) pmd = pmd_clear_uffd_wp(pmd); @@ -1962,7 +1962,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, struct ptdesc *ptdesc; ptdesc = pgtable_trans_huge_withdraw(mm, old_pmd); - pgtable_trans_huge_deposit(mm, new_pmd, ptdesc_page(ptdesc)); + pgtable_trans_huge_deposit(mm, new_pmd, ptdesc); } pmd = move_soft_dirty_pmd(pmd); set_pmd_at(mm, new_addr, new_pmd, pmd); @@ -2236,7 +2236,7 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd); src_ptdesc = pgtable_trans_huge_withdraw(mm, src_pmd); - pgtable_trans_huge_deposit(mm, dst_pmd, ptdesc_page(src_ptdesc)); + pgtable_trans_huge_deposit(mm, dst_pmd, src_ptdesc); unlock_ptls: double_pt_unlock(src_ptl, dst_ptl); if (src_anon_vma) { diff --git a/mm/khugepaged.c b/mm/khugepaged.c index f3b3db104615..48a54269472e 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1232,7 +1232,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, BUG_ON(!pmd_none(*pmd)); folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE); folio_add_lru_vma(folio, vma); - pgtable_trans_huge_deposit(mm, pmd, pgtable); + pgtable_trans_huge_deposit(mm, pmd, page_ptdesc(pgtable)); set_pmd_at(mm, address, pmd, _pmd); update_mmu_cache_pmd(vma, address, pmd); spin_unlock(pmd_ptl); diff --git a/mm/memory.c b/mm/memory.c index 27c2f63b7487..956cfe5f644d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4687,7 +4687,7 @@ static void deposit_prealloc_pte(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; - pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, vmf->prealloc_pte); + pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, page_ptdesc(vmf->prealloc_pte)); /* * We are going to consume the prealloc table, * count that as nr_ptes. 
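The remaining call sites funnel into the generic path in mm/pgtable-generic.c, converted next. For reference, here is a compilable model of the bookkeeping that path implements, a single pmd_huge_pte anchor plus the descriptors' intrusive pt_list linkage; the list helpers and struct ptdesc below are simplified stand-ins, not the kernel's:

#include <stddef.h>
#include <stdio.h>

/* Minimal circular doubly linked list, modeled on the kernel's list_head. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h; h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del(struct list_head *entry)
{
	entry->next->prev = entry->prev;
	entry->prev->next = entry->next;
}

#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Illustrative stand-ins for struct ptdesc and pmd_huge_pte(mm, pmdp). */
struct ptdesc { int id; struct list_head pt_list; };
static struct ptdesc *pmd_huge_pte;

/* Mirrors the generic pgtable_trans_huge_deposit(): the newest deposit
 * becomes the anchor, linked into the previous anchor's list. */
static void deposit(struct ptdesc *ptdesc)
{
	if (!pmd_huge_pte)
		INIT_LIST_HEAD(&ptdesc->pt_list);
	else
		list_add(&ptdesc->pt_list, &pmd_huge_pte->pt_list);
	pmd_huge_pte = ptdesc;
}

/* Mirrors the generic withdraw: return the anchor, advance it to the next
 * linked descriptor, and unlink the one returned. */
static struct ptdesc *withdraw(void)
{
	struct ptdesc *ptdesc = pmd_huge_pte;

	if (!ptdesc)
		return NULL;
	pmd_huge_pte = list_empty(&ptdesc->pt_list) ? NULL :
		list_entry(ptdesc->pt_list.next, struct ptdesc, pt_list);
	if (pmd_huge_pte)
		list_del(&ptdesc->pt_list);
	return ptdesc;
}

int main(void)
{
	struct ptdesc a = { .id = 1 }, b = { .id = 2 };
	struct ptdesc *pt;

	deposit(&a);
	deposit(&b);
	while ((pt = withdraw()))
		printf("withdrew table %d\n", pt->id);
	return 0;
}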
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index de1ed30fea16..5e763682941d 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -163,16 +163,16 @@ pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address, #ifndef __HAVE_ARCH_PGTABLE_DEPOSIT void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pgtable) + struct ptdesc *ptdesc) { assert_spin_locked(pmd_lockptr(mm, pmdp)); /* FIFO */ if (!pmd_huge_pte(mm, pmdp)) - INIT_LIST_HEAD(&pgtable->lru); + INIT_LIST_HEAD(&ptdesc->pt_list); else - list_add(&pgtable->lru, &pmd_huge_pte(mm, pmdp)->pt_list); - pmd_huge_pte(mm, pmdp) = page_ptdesc(pgtable); + list_add(&ptdesc->pt_list, &pmd_huge_pte(mm, pmdp)->pt_list); + pmd_huge_pte(mm, pmdp) = ptdesc; } #endif

From patchwork Tue Jul 30 07:27:16 2024
From: alexs@kernel.org
Subject: [RFC PATCH 15/18] mm/pgtable: pass ptdesc to pmd_populate
Date: Tue, 30 Jul 2024 15:27:16 +0800
Message-ID: <20240730072719.3715016-5-alexs@kernel.org>
In-Reply-To: <20240730072719.3715016-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org> <20240730072719.3715016-1-alexs@kernel.org>

From: Alex Shi

Pass struct ptdesc to pmd_populate to further replace pgtable_t. Type casts are used instead of the page_ptdesc() helper because different architectures define pgtable_t as different types. The ptdesc_pfn() helper is used for openrisc and hexagon.

Signed-off-by: Alex Shi Cc: linux-mm@kvack.org Cc: linux-parisc@vger.kernel.org Cc: linux-openrisc@vger.kernel.org Cc: linux-m68k@lists.linux-m68k.org Cc: loongarch@lists.linux.dev Cc: linux-arm-kernel@lists.infradead.org Cc: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org Cc: linux-alpha@vger.kernel.org Cc: H. Peter Anvin Cc: x86@kernel.org Cc: Dave Hansen Cc: Borislav Petkov Cc: Thomas Gleixner Cc: Helge Deller Cc: James E.J. Bottomley Cc: Stafford Horne Cc: Stefan Kristiansson Cc: Jonas Bonn Cc: Sam Creasey Cc: Geert Uytterhoeven Cc: WANG Xuerui Cc: Huacai Chen Cc: Will Deacon Cc: Russell King Cc: Vineet Gupta Cc: Matt Turner Cc: Ivan Kokshaysky Cc: Richard Henderson Cc: Breno Leitao Cc: Josh Poimboeuf Cc: Bibo Mao Cc: Baolin Wang Cc: Mike Rapoport Cc: Vishal Moola Cc: Ard Biesheuvel --- arch/alpha/include/asm/pgalloc.h | 4 ++-- arch/arc/include/asm/pgalloc.h | 4 ++-- arch/arm/include/asm/pgalloc.h | 4 ++-- arch/arm64/include/asm/pgalloc.h | 4 ++-- arch/hexagon/include/asm/pgalloc.h | 4 ++-- arch/loongarch/include/asm/pgalloc.h | 4 ++-- arch/m68k/include/asm/motorola_pgalloc.h | 4 ++-- arch/m68k/include/asm/sun3_pgalloc.h | 4 ++-- arch/microblaze/include/asm/pgalloc.h | 2 +- arch/mips/include/asm/pgalloc.h | 4 ++-- arch/nios2/include/asm/pgalloc.h | 4 ++-- arch/openrisc/include/asm/pgalloc.h | 4 ++-- arch/parisc/include/asm/pgalloc.h | 2 +- arch/powerpc/include/asm/book3s/32/pgalloc.h | 2 +- arch/sh/include/asm/pgalloc.h | 4 ++-- arch/sparc/include/asm/pgalloc_32.h | 2 +- arch/x86/include/asm/pgalloc.h | 4 ++-- mm/debug_vm_pgtable.c | 2 +- mm/huge_memory.c | 8 ++++---- mm/khugepaged.c | 4 ++-- mm/memory.c | 2 +- mm/mremap.c | 2 +- 22 files changed, 39 insertions(+), 39 deletions(-)
diff --git a/arch/alpha/include/asm/pgalloc.h b/arch/alpha/include/asm/pgalloc.h index 68be7adbfe58..ad62056059ac 100644 --- a/arch/alpha/include/asm/pgalloc.h +++ b/arch/alpha/include/asm/pgalloc.h @@ -14,9 +14,9 @@ */ static inline void -pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte) +pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct ptdesc *pte) { - pmd_set(pmd, (pte_t *)(page_to_pa(pte) + PAGE_OFFSET)); + pmd_set(pmd, (pte_t *)(page_to_pa(ptdesc_page(pte)) + PAGE_OFFSET)); } static inline void diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h index 096b8ef58edb..51233cfb1bad 100644 --- a/arch/arc/include/asm/pgalloc.h +++ b/arch/arc/include/asm/pgalloc.h @@ -46,9 +46,9 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) set_pmd(pmd, __pmd((unsigned long)pte)); } -static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte_page) +static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct ptdesc *pte) { - set_pmd(pmd, __pmd((unsigned long)page_address(pte_page))); + set_pmd(pmd, __pmd((unsigned long)ptdesc_address(pte))); } static inline pgd_t *pgd_alloc(struct mm_struct *mm) diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h index e8501a6c3336..37a15220fce7 100644 --- a/arch/arm/include/asm/pgalloc.h +++ b/arch/arm/include/asm/pgalloc.h @@ -130,7 +130,7 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep) } static inline void -pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep) +pmd_populate(struct mm_struct *mm, pmd_t *pmdp, struct ptdesc *ptep) { extern pmdval_t user_pmd_table; pmdval_t prot; @@ -140,7 +140,7 @@ pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep) else prot = _PAGE_USER_TABLE; - __pmd_populate(pmdp, page_to_phys(ptep), prot); + __pmd_populate(pmdp, page_to_phys(ptdesc_page(ptep)), prot); } #endif /* CONFIG_MMU */ diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h index 8ff5f2a2579e..d9074b5f9dfe 100644 --- a/arch/arm64/include/asm/pgalloc.h +++ b/arch/arm64/include/asm/pgalloc.h @@ -131,10 +131,10 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep) } static inline void -pmd_populate(struct mm_struct
*mm, pmd_t *pmdp, pgtable_t ptep) +pmd_populate(struct mm_struct *mm, pmd_t *pmdp, struct ptdesc *ptep) { VM_BUG_ON(mm == &init_mm); - __pmd_populate(pmdp, page_to_phys(ptep), PMD_TYPE_TABLE | PMD_TABLE_PXN); + __pmd_populate(pmdp, page_to_phys(ptdesc_page(ptep)), PMD_TYPE_TABLE | PMD_TABLE_PXN); } #endif diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h index a3e082e54b74..f34e9fcad066 100644 --- a/arch/hexagon/include/asm/pgalloc.h +++ b/arch/hexagon/include/asm/pgalloc.h @@ -42,13 +42,13 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) } static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, - pgtable_t pte) + struct ptdesc *pte) { /* * Conveniently, zero in 3 LSB means indirect 4K page table. * Not so convenient when you're trying to vary the page size. */ - set_pmd(pmd, __pmd(((unsigned long)page_to_pfn(pte) << PAGE_SHIFT) | + set_pmd(pmd, __pmd(((unsigned long)ptdesc_pfn(pte) << PAGE_SHIFT) | HEXAGON_L1_PTE_SIZE)); } diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h index c96d7160babc..3461da516ab9 100644 --- a/arch/loongarch/include/asm/pgalloc.h +++ b/arch/loongarch/include/asm/pgalloc.h @@ -18,9 +18,9 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, set_pmd(pmd, __pmd((unsigned long)pte)); } -static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte) +static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct ptdesc *pte) { - set_pmd(pmd, __pmd((unsigned long)page_address(pte))); + set_pmd(pmd, __pmd((unsigned long)ptdesc_address(pte))); } #ifndef __PAGETABLE_PMD_FOLDED diff --git a/arch/m68k/include/asm/motorola_pgalloc.h b/arch/m68k/include/asm/motorola_pgalloc.h index f9ee5ec4574d..a80c45b9d2a3 100644 --- a/arch/m68k/include/asm/motorola_pgalloc.h +++ b/arch/m68k/include/asm/motorola_pgalloc.h @@ -84,9 +84,9 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t * pmd_set(pmd, pte); } -static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t page) +static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct ptdesc *page) { - pmd_set(pmd, page); + pmd_set(pmd, ptdesc_page(page)); } static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd) diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h index 4a137eecb6fe..965f663a4797 100644 --- a/arch/m68k/include/asm/sun3_pgalloc.h +++ b/arch/m68k/include/asm/sun3_pgalloc.h @@ -28,9 +28,9 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t * pmd_val(*pmd) = __pa((unsigned long)pte); } -static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t page) +static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct ptdesc *ptdesc) { - pmd_val(*pmd) = __pa((unsigned long)page_address(page)); + pmd_val(*pmd) = __pa((unsigned long)ptdesc_address(ptdesc)); } /* diff --git a/arch/microblaze/include/asm/pgalloc.h b/arch/microblaze/include/asm/pgalloc.h index 6c33b05f730f..0f4a479e015e 100644 --- a/arch/microblaze/include/asm/pgalloc.h +++ b/arch/microblaze/include/asm/pgalloc.h @@ -33,7 +33,7 @@ extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, (pte)) #define pmd_populate(mm, pmd, pte) \ - (pmd_val(*(pmd)) = (unsigned long)page_address(pte)) + (pmd_val(*(pmd)) = (unsigned long)page_address(ptdesc_page(pte))) #define pmd_populate_kernel(mm, pmd, pte) \ (pmd_val(*(pmd)) = 
(unsigned long) (pte)) diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h index f4440edcd8fe..2ef868d93b6b 100644 --- a/arch/mips/include/asm/pgalloc.h +++ b/arch/mips/include/asm/pgalloc.h @@ -25,9 +25,9 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, } static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, - pgtable_t pte) + struct ptdesc *pte) { - set_pmd(pmd, __pmd((unsigned long)page_address(pte))); + set_pmd(pmd, __pmd((unsigned long)ptdesc_address(pte))); } /* diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h index ce6bb8e74271..420958d91a47 100644 --- a/arch/nios2/include/asm/pgalloc.h +++ b/arch/nios2/include/asm/pgalloc.h @@ -21,9 +21,9 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, } static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, - pgtable_t pte) + struct ptdesc *pte) { - set_pmd(pmd, __pmd((unsigned long)page_address(pte))); + set_pmd(pmd, __pmd((unsigned long)ptdesc_address(pte))); } extern pgd_t *pgd_alloc(struct mm_struct *mm); diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h index 2251d940c3d8..a9479d873dca 100644 --- a/arch/openrisc/include/asm/pgalloc.h +++ b/arch/openrisc/include/asm/pgalloc.h @@ -29,10 +29,10 @@ extern int mem_init_done; set_pmd(pmd, __pmd(_KERNPG_TABLE + __pa(pte))) static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, - struct page *pte) + struct ptdesc *pte) { set_pmd(pmd, __pmd(_KERNPG_TABLE + - ((unsigned long)page_to_pfn(pte) << + ((unsigned long)ptdesc_pfn(pte) << (unsigned long) PAGE_SHIFT))); } diff --git a/arch/parisc/include/asm/pgalloc.h b/arch/parisc/include/asm/pgalloc.h index e3e142b1c5c5..9fd06e2fef89 100644 --- a/arch/parisc/include/asm/pgalloc.h +++ b/arch/parisc/include/asm/pgalloc.h @@ -68,6 +68,6 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) } #define pmd_populate(mm, pmd, pte_page) \ - pmd_populate_kernel(mm, pmd, page_address(pte_page)) + pmd_populate_kernel(mm, pmd, page_address(ptdesc_page(pte_page))) #endif diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h index a435c84d1f9a..9971703d0566 100644 --- a/arch/powerpc/include/asm/book3s/32/pgalloc.h +++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h @@ -32,7 +32,7 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, } static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pte_page) + struct ptdesc *pte_page) { *pmdp = __pmd(__pa(pte_page) | _PMD_PRESENT); } diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h index 5d8577ab1591..095521089c20 100644 --- a/arch/sh/include/asm/pgalloc.h +++ b/arch/sh/include/asm/pgalloc.h @@ -27,9 +27,9 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, } static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, - pgtable_t pte) + struct ptdesc *pte) { - set_pmd(pmd, __pmd((unsigned long)page_address(pte))); + set_pmd(pmd, __pmd((unsigned long)ptdesc_address(pte))); } #define __pte_free_tlb(tlb, pte, addr) \ diff --git a/arch/sparc/include/asm/pgalloc_32.h b/arch/sparc/include/asm/pgalloc_32.h index addaade56f21..6f0f661a380f 100644 --- a/arch/sparc/include/asm/pgalloc_32.h +++ b/arch/sparc/include/asm/pgalloc_32.h @@ -50,7 +50,7 @@ static inline void free_pmd_fast(pmd_t * pmd) #define pmd_free(mm, pmd) free_pmd_fast(pmd) #define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, 
pmd) -#define pmd_populate(mm, pmd, pte) pmd_set(pmd, pte) +#define pmd_populate(mm, pmd, pte) pmd_set(pmd, (pte_t *)pte) void pmd_set(pmd_t *pmdp, pte_t *ptep); #define pmd_populate_kernel pmd_populate diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h index 06a9a5867a86..5ca8ac568768 100644 --- a/arch/x86/include/asm/pgalloc.h +++ b/arch/x86/include/asm/pgalloc.h @@ -76,9 +76,9 @@ static inline void pmd_populate_kernel_safe(struct mm_struct *mm, } static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, - struct page *pte) + struct ptdesc *pte) { - unsigned long pfn = page_to_pfn(pte); + unsigned long pfn = page_to_pfn(ptdesc_page(pte)); paravirt_alloc_pte(mm, pfn); set_pmd(pmd, __pmd(((pteval_t)pfn << PAGE_SHIFT) | _PAGE_TABLE)); diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index 8550eec32aba..bf9dc9c0a9bf 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -645,7 +645,7 @@ static void __init pmd_populate_tests(struct pgtable_debug_args *args) * This entry points to next level page table page. * Hence this must not qualify as pmd_bad(). */ - pmd_populate(args->mm, args->pmdp, args->start_ptep); + pmd_populate(args->mm, args->pmdp, page_ptdesc(args->start_ptep)); pmd = READ_ONCE(*args->pmdp); WARN_ON(pmd_bad(pmd)); } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index aac67e8a8cc8..665445706491 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2365,7 +2365,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma, old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd); ptdesc = pgtable_trans_huge_withdraw(mm, pmd); - pmd_populate(mm, &_pmd, ptdesc_page(ptdesc)); + pmd_populate(mm, &_pmd, ptdesc); pte = pte_offset_map(&_pmd, haddr); VM_BUG_ON(!pte); @@ -2382,7 +2382,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma, } pte_unmap(pte - 1); smp_wmb(); /* make pte visible before pmd */ - pmd_populate(mm, pmd, ptdesc_page(ptdesc)); + pmd_populate(mm, pmd, ptdesc); } static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, @@ -2537,7 +2537,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, * This's critical for some architectures (Power). */ ptdesc = pgtable_trans_huge_withdraw(mm, pmd); - pmd_populate(mm, &_pmd, ptdesc_page(ptdesc)); + pmd_populate(mm, &_pmd, ptdesc); pte = pte_offset_map(&_pmd, haddr); VM_BUG_ON(!pte); @@ -2602,7 +2602,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, put_page(page); smp_wmb(); /* make pte visible before pmd */ - pmd_populate(mm, pmd, ptdesc_page(ptdesc)); + pmd_populate(mm, pmd, ptdesc); } void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address, diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 48a54269472e..5b466a1c2136 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -769,7 +769,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte, * acquiring anon_vma_lock_write is unnecessary. */ pmd_ptl = pmd_lock(vma->vm_mm, pmd); - pmd_populate(vma->vm_mm, pmd, pmd_pgtable(orig_pmd)); + pmd_populate(vma->vm_mm, pmd, (struct ptdesc *)pmd_pgtable(orig_pmd)); spin_unlock(pmd_ptl); /* * Release both raw and compound pages isolated @@ -1198,7 +1198,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, * hugepmds and never for establishing regular pmds that * points to regular pagetables. 
Use pmd_populate for that */ - pmd_populate(mm, pmd, pmd_pgtable(_pmd)); + pmd_populate(mm, pmd, (struct ptdesc *)pmd_pgtable(_pmd)); spin_unlock(pmd_ptl); anon_vma_unlock_write(vma->anon_vma); goto out_up_write; diff --git a/mm/memory.c b/mm/memory.c index 956cfe5f644d..cbed8824059f 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -438,7 +438,7 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte) * smp_rmb() barriers in page table walking code. */ smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */ - pmd_populate(mm, pmd, *pte); + pmd_populate(mm, pmd, (struct ptdesc *)(*pte)); *pte = NULL; } spin_unlock(ptl); diff --git a/mm/mremap.c b/mm/mremap.c index e7ae140fc640..f32d35accd97 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -283,7 +283,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, VM_BUG_ON(!pmd_none(*new_pmd)); - pmd_populate(mm, new_pmd, pmd_pgtable(pmd)); + pmd_populate(mm, new_pmd, (struct ptdesc *)pmd_pgtable(pmd)); flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE); if (new_ptl != old_ptl) spin_unlock(new_ptl);

From patchwork Tue Jul 30 07:27:17 2024
From: alexs@kernel.org
Subject: [RFC PATCH 16/18] mm/pgtable: pass ptdesc to pmd_install
Date: Tue, 30 Jul 2024 15:27:17 +0800
Message-ID: <20240730072719.3715016-6-alexs@kernel.org>
In-Reply-To: <20240730072719.3715016-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org> <20240730072719.3715016-1-alexs@kernel.org>

From: Alex Shi

Another step in replacing pgtable_t with ptdesc, and also preparation for converting vmf.prealloc_pte to ptdesc.
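The contract of pmd_install() is untouched by the type switch: the caller passes a preallocated table by reference, and the function consumes it, clearing the pointer, only if the pmd is still empty under its lock; the caller frees whatever was not consumed. A standalone model of that ownership protocol, with the locking, barriers, and real mm types stubbed out:

#include <stdio.h>
#include <stdlib.h>

struct ptdesc { int dummy; };
typedef struct { struct ptdesc *table; } pmd_t;	/* toy pmd slot */

static int pmd_none(const pmd_t *pmd) { return pmd->table == NULL; }

/* Same shape as the converted kernel function: struct ptdesc **pte is
 * consumed (and cleared) only when the pmd slot is still empty. */
static void pmd_install(pmd_t *pmd, struct ptdesc **pte)
{
	/* the kernel takes pmd_lock() here and issues smp_wmb() so the
	 * table contents are visible before the pmd entry */
	if (pmd_none(pmd)) {
		pmd->table = *pte;
		*pte = NULL;	/* ownership transferred to the pmd */
	}
}

int main(void)
{
	pmd_t pmd = { NULL };
	struct ptdesc *ptdesc = malloc(sizeof(*ptdesc));

	if (!ptdesc)
		return 1;
	pmd_install(&pmd, &ptdesc);
	if (ptdesc)		/* not consumed: mirrors the pte_free() path */
		free(ptdesc);
	printf("pmd populated: %s\n", pmd.table ? "yes" : "no");
	free(pmd.table);	/* tear down the toy slot */
	return 0;
}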
Signed-off-by: Alex Shi Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org Cc: linux-fsdevel@vger.kernel.org Cc: Andrew Morton Cc: Matthew Wilcox --- mm/filemap.c | 2 +- mm/internal.h | 2 +- mm/memory.c | 8 ++++---- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index d62150418b91..3708ef71182e 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3453,7 +3453,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio, } if (pmd_none(*vmf->pmd) && vmf->prealloc_pte) - pmd_install(mm, vmf->pmd, &vmf->prealloc_pte); + pmd_install(mm, vmf->pmd, (struct ptdesc **)&vmf->prealloc_pte); return false; } diff --git a/mm/internal.h b/mm/internal.h index 7a3bcc6d95e7..e4bc64d5176a 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -320,7 +320,7 @@ void folio_activate(struct folio *folio); void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas, struct vm_area_struct *start_vma, unsigned long floor, unsigned long ceiling, bool mm_wr_locked); -void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte); +void pmd_install(struct mm_struct *mm, pmd_t *pmd, struct ptdesc **pte); struct zap_details; void unmap_page_range(struct mmu_gather *tlb, diff --git a/mm/memory.c b/mm/memory.c index cbed8824059f..79685600d23f 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -418,7 +418,7 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas, } while (vma); } -void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte) +void pmd_install(struct mm_struct *mm, pmd_t *pmd, struct ptdesc **pte) { spinlock_t *ptl = pmd_lock(mm, pmd); @@ -438,7 +438,7 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte) * smp_rmb() barriers in page table walking code. */ smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */ - pmd_populate(mm, pmd, (struct ptdesc *)(*pte)); + pmd_populate(mm, pmd, *pte); *pte = NULL; } spin_unlock(ptl); @@ -450,7 +450,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd) if (!ptdesc) return -ENOMEM; - pmd_install(mm, pmd, (pgtable_t *)&ptdesc); + pmd_install(mm, pmd, &ptdesc); if (ptdesc) pte_free(mm, ptdesc); return 0; @@ -4868,7 +4868,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf) } if (vmf->prealloc_pte) - pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte); + pmd_install(vma->vm_mm, vmf->pmd, (struct ptdesc **)&vmf->prealloc_pte); else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd))) return VM_FAULT_OOM; } }
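The two (struct ptdesc **)&vmf->prealloc_pte casts above are what the next patch removes by re-typing the field itself. The lifecycle behind that field is: allocate the table before taking locks, then either hand it off or give it back. A standalone model of that pattern, with malloc()/free() standing in for pte_alloc_one()/pte_free() and an invented vm_fault stand-in:

#include <stdio.h>
#include <stdlib.h>

struct ptdesc { int dummy; };

struct vm_fault_model {
	struct ptdesc *prealloc_pte;	/* the field the next patch converts */
};

static struct ptdesc *installed;	/* models the pmd slot that consumes it */

static void pmd_install_model(struct vm_fault_model *vmf)
{
	if (!installed && vmf->prealloc_pte) {
		installed = vmf->prealloc_pte;	/* ownership moves to the pmd */
		vmf->prealloc_pte = NULL;
	}
}

int main(void)
{
	struct vm_fault_model vmf = { .prealloc_pte = malloc(sizeof(struct ptdesc)) };

	if (!vmf.prealloc_pte)
		return 1;
	pmd_install_model(&vmf);
	/* finish_fault()-style cleanup: whatever was not consumed goes back */
	if (vmf.prealloc_pte)
		free(vmf.prealloc_pte);
	printf("table %s\n", installed ? "consumed by the pmd" : "returned");
	free(installed);	/* tear down the model's pmd slot */
	return 0;
}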
From patchwork Tue Jul 30 07:27:18 2024
From: alexs@kernel.org
Rao" , linux-fsdevel@vger.kernel.org, Andrew Morton Subject: [RFC PATCH 17/18] mm: convert vmf.prealloc_pte to struct ptdesc pointer Date: Tue, 30 Jul 2024 15:27:18 +0800 Message-ID: <20240730072719.3715016-7-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240730072719.3715016-1-alexs@kernel.org> References: <20240730064712.3714387-1-alexs@kernel.org> <20240730072719.3715016-1-alexs@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240730_002359_898224_CA08F43E X-CRM114-Status: GOOD ( 15.19 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Alex Shi vmfs.prealloc_pte is a pointer to page table memory, so converter it to struct ptdesc pointer. Signed-off-by: Alex Shi Cc: linux-fsdevel@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org Cc: Matthew Wilcox Cc: Andrew Morton --- include/linux/mm.h | 2 +- mm/filemap.c | 2 +- mm/memory.c | 12 ++++++------ 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 7424f964dff3..749d6dd311fa 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -567,7 +567,7 @@ struct vm_fault { * Protects pte page table if 'pte' * is not NULL, otherwise pmd. */ - pgtable_t prealloc_pte; /* Pre-allocated pte page table. + struct ptdesc *prealloc_pte; /* Pre-allocated pte page table. * vm_ops->map_pages() sets up a page * table from atomic context. * do_fault_around() pre-allocates diff --git a/mm/filemap.c b/mm/filemap.c index 3708ef71182e..d62150418b91 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3453,7 +3453,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio, } if (pmd_none(*vmf->pmd) && vmf->prealloc_pte) - pmd_install(mm, vmf->pmd, (struct ptdesc **)&vmf->prealloc_pte); + pmd_install(mm, vmf->pmd, &vmf->prealloc_pte); return false; } diff --git a/mm/memory.c b/mm/memory.c index 79685600d23f..1a5fb17ab045 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4648,7 +4648,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf) * # flush A, B to clear the writeback */ if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) { - vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm)); + vmf->prealloc_pte = pte_alloc_one(vma->vm_mm); if (!vmf->prealloc_pte) return VM_FAULT_OOM; } @@ -4687,7 +4687,7 @@ static void deposit_prealloc_pte(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; - pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, page_ptdesc(vmf->prealloc_pte)); + pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, vmf->prealloc_pte); /* * We are going to consume the prealloc table, * count that as nr_ptes. @@ -4726,7 +4726,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page) * related to pte entry. Use the preallocated table for that. 
 	 */
 	if (arch_needs_pgtable_deposit() && !vmf->prealloc_pte) {
-		vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm));
+		vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
 	}
@@ -4868,7 +4868,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 		}
 
 		if (vmf->prealloc_pte)
-			pmd_install(vma->vm_mm, vmf->pmd, (struct ptdesc **)&vmf->prealloc_pte);
+			pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);
 		else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd)))
 			return VM_FAULT_OOM;
 	}
@@ -5011,7 +5011,7 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 		      pte_off + vma_pages(vmf->vma) - vma_off) - 1;
 
 	if (pmd_none(*vmf->pmd)) {
-		vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vmf->vma->vm_mm));
+		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
 	}
@@ -5197,7 +5197,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
 
 	/* preallocated pagetable is unused: free it */
 	if (vmf->prealloc_pte) {
-		pte_free(vm_mm, page_ptdesc(vmf->prealloc_pte));
+		pte_free(vm_mm, vmf->prealloc_pte);
 		vmf->prealloc_pte = NULL;
 	}
 	return ret;
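Taken together with patch 16, the preallocation lifecycle in the fault path
now stays in struct ptdesc terms end to end. A minimal, hypothetical
fragment of that lifecycle under this series' types (not a real fault
handler; prealloc_lifecycle_sketch is an illustrative name):

/*
 * Sketch of the vmf->prealloc_pte lifecycle with this series applied:
 * allocate before atomic context, let pmd_install() consume the table,
 * and free it if it went unused. Note no page_ptdesc()/ptdesc_page()
 * casts remain anywhere in the path.
 */
#include <linux/mm.h>
#include <asm/pgalloc.h>

static vm_fault_t prealloc_lifecycle_sketch(struct vm_fault *vmf)
{
	struct mm_struct *mm = vmf->vma->vm_mm;

	/* Preallocate up front: map_pages() runs in atomic context. */
	if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
		vmf->prealloc_pte = pte_alloc_one(mm);
		if (!vmf->prealloc_pte)
			return VM_FAULT_OOM;
	}

	/*
	 * ... fault handling; pmd_install(mm, vmf->pmd,
	 * &vmf->prealloc_pte) NULLs the field if the table is installed.
	 */

	if (vmf->prealloc_pte) {	/* unused: give it back */
		pte_free(mm, vmf->prealloc_pte);
		vmf->prealloc_pte = NULL;
	}
	return 0;
}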
From patchwork Tue Jul 30 07:27:19 2024
From: alexs@kernel.org
Subject: [RFC PATCH 18/18] mm/pgtable: pass ptdesc in pte_free_defer
Date: Tue, 30 Jul 2024 15:27:19 +0800
Message-ID: <20240730072719.3715016-8-alexs@kernel.org>
In-Reply-To: <20240730072719.3715016-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org> <20240730072719.3715016-1-alexs@kernel.org>

From: Alex Shi

Pass a ptdesc into pte_free_defer() and use ptdesc in
collapse_huge_page().

This patch is still immature: the pmd_pgtable() conversion causes an
issue on a few architectures, which still needs a fix.
Signed-off-by: Alex Shi
Cc: linux-mm@kvack.org
Cc: linux-s390@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Mike Rapoport
Cc: Barry Song
Cc: Peter Xu
Cc: Lance Yang
Cc: Kemeng Shi
Cc: Aneesh Kumar K.V
Cc: Qi Zheng
Cc: Sven Schnelle
Cc: Christian Borntraeger
Cc: Vasily Gorbik
Cc: Heiko Carstens
Cc: Naveen N Rao
Cc: Nicholas Piggin
Cc: Michael Ellerman
Cc: Matthew Wilcox
Cc: Ryan Roberts
Cc: David Hildenbrand
Cc: Vishal Moola
Cc: Hugh Dickins
Cc: Gerald Schaefer
Cc: Alexander Gordeev
Cc: Christophe Leroy
---
 arch/powerpc/include/asm/pgalloc.h |  2 +-
 arch/s390/include/asm/pgalloc.h    |  2 +-
 arch/s390/mm/pgalloc.c             |  2 +-
 include/linux/pgtable.h            |  2 +-
 mm/khugepaged.c                    | 10 +++++-----
 mm/pgtable-generic.c               |  4 +---
 6 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
index 12520521163e..ca21b67c593f 100644
--- a/arch/powerpc/include/asm/pgalloc.h
+++ b/arch/powerpc/include/asm/pgalloc.h
@@ -47,7 +47,7 @@ static inline void pte_free(struct mm_struct *mm, struct ptdesc *ptepage)
 
 /* arch use pte_free_defer() implementation in arch/powerpc/mm/pgtable-frag.c */
 #define pte_free_defer pte_free_defer
-void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
+void pte_free_defer(struct mm_struct *mm, struct ptdesc *pgtable);
 
 /*
  * Functions that deal with pagetables that could be at any level of
diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 771494526f6e..a229cee11bbd 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -144,7 +144,7 @@ static inline void pmd_populate(struct mm_struct *mm,
 
 /* arch use pte_free_defer() implementation in arch/s390/mm/pgalloc.c */
 #define pte_free_defer pte_free_defer
-void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
+void pte_free_defer(struct mm_struct *mm, struct ptdesc *pgtable);
 
 void vmem_map_init(void);
 void *vmem_crst_alloc(unsigned long val);
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index f691e0fb66a2..c7bb38d85d81 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -214,7 +214,7 @@ static void pte_free_now(struct rcu_head *head)
 	pagetable_pte_dtor_free(ptdesc);
 }
 
-void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+void pte_free_defer(struct mm_struct *mm, struct ptdesc *pgtable)
 {
 	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
 
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 9d256c548f5e..e7b018de1d0f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -116,7 +116,7 @@ static inline void pte_unmap(pte_t *pte)
 }
 #endif
 
-void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
+void pte_free_defer(struct mm_struct *mm, struct ptdesc *ptdesc);
 
 /* Find an entry in the second-level page table.. */
 #ifndef pmd_offset
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5b466a1c2136..30cf61d02c1c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1094,7 +1094,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
 	pte_t *pte;
-	pgtable_t pgtable;
+	struct ptdesc *ptdesc;
 	struct folio *folio;
 	spinlock_t *pmd_ptl, *pte_ptl;
 	int result = SCAN_FAIL;
@@ -1223,7 +1223,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * write.
 	 */
 	__folio_mark_uptodate(folio);
-	pgtable = pmd_pgtable(_pmd);
+	ptdesc = pmd_ptdesc(&_pmd);
 
 	_pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot);
 	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
@@ -1232,7 +1232,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	BUG_ON(!pmd_none(*pmd));
 	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
 	folio_add_lru_vma(folio, vma);
-	pgtable_trans_huge_deposit(mm, pmd, page_ptdesc(pgtable));
+	pgtable_trans_huge_deposit(mm, pmd, ptdesc);
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
 	spin_unlock(pmd_ptl);
@@ -1664,7 +1664,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, haddr, pgt_pmd);
-	pte_free_defer(mm, pmd_pgtable(pgt_pmd));
+	pte_free_defer(mm, pmd_ptdesc(&pgt_pmd));
 
 maybe_install_pmd:
 	/* step 5: install pmd entry */
@@ -1777,7 +1777,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		if (retracted) {
 			mm_dec_nr_ptes(mm);
 			page_table_check_pte_clear_range(mm, addr, pgt_pmd);
-			pte_free_defer(mm, pmd_pgtable(pgt_pmd));
+			pte_free_defer(mm, pmd_ptdesc(&pgt_pmd));
 		}
 	}
 	i_mmap_unlock_read(mapping);
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 5e763682941d..f3bc2b17893a 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -244,10 +244,8 @@ static void pte_free_now(struct rcu_head *head)
 	pte_free(NULL /* mm not passed and not used */, ptdesc);
 }
 
-void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+void pte_free_defer(struct mm_struct *mm, struct ptdesc *ptdesc)
 {
-	struct ptdesc *ptdesc = page_ptdesc(pgtable);
-
 	call_rcu(&ptdesc->pt_rcu_head, pte_free_now);
 }
 #endif /* pte_free_defer */
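For context on why the free is deferred at all: collapse clears a pmd while
lockless walkers may still be traversing the old pte table, so the table
must only go back to the allocator after an RCU grace period. A minimal
caller-side sketch, assuming this series' struct ptdesc * signature for
pte_free_defer() (retire_pte_table_sketch is an illustrative name, not a
kernel function):

/*
 * Sketch only: the table is queued on ptdesc->pt_rcu_head via
 * call_rcu() inside pte_free_defer(), so the actual pte_free() runs
 * only after every pre-existing RCU reader of the old table is done.
 */
#include <linux/mm.h>
#include <linux/pgtable.h>

static void retire_pte_table_sketch(struct mm_struct *mm,
				    struct ptdesc *old_table)
{
	mm_dec_nr_ptes(mm);
	pte_free_defer(mm, old_table);	/* freed after an RCU grace period */
}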