From patchwork Tue Mar 24 05:22:53 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 11454475
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: christophe.leroy@c-s.fr, Anshuman Khandual, Andrew Morton, Mike Rapoport,
 Vineet Gupta, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
 Paul Mackerras, Michael Ellerman, Heiko Carstens, Vasily Gorbik,
 Christian Borntraeger, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 "H. Peter Anvin", "Kirill A.
Shutemov", Paul Walmsley, Palmer Dabbelt, linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, linux-riscv@lists.infradead.org, x86@kernel.org,
 linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 1/3] mm/debug: Add tests validating arch page table
 helpers for core features
Date: Tue, 24 Mar 2020 10:52:53 +0530
Message-Id: <1585027375-9997-2-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1585027375-9997-1-git-send-email-anshuman.khandual@arm.com>
References: <1585027375-9997-1-git-send-email-anshuman.khandual@arm.com>

This adds new tests validating arch page table helpers for the following
core memory features. These tests create and verify specific mapping types
at various page table levels.

1. SPECIAL mapping
2. PROTNONE mapping
3. DEVMAP mapping
4. SOFTDIRTY mapping
5. SWAP mapping
6. MIGRATION mapping
7. HUGETLB mapping
8. THP mapping

Cc: Andrew Morton
Cc: Mike Rapoport
Cc: Vineet Gupta
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: Kirill A.
Shutemov
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: x86@kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Catalin Marinas
Signed-off-by: Anshuman Khandual
Reviewed-by: Zi Yan
Reported-by: kernel test robot
---
 mm/debug_vm_pgtable.c | 291 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 290 insertions(+), 1 deletion(-)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 98990a515268..15055a8f6478 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -289,6 +289,267 @@ static void __init pmd_populate_tests(struct mm_struct *mm, pmd_t *pmdp,
 	WARN_ON(pmd_bad(pmd));
 }
 
+static void __init pte_special_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL))
+		return;
+
+	WARN_ON(!pte_special(pte_mkspecial(pte)));
+}
+
+static void __init pte_protnone_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
+		return;
+
+	WARN_ON(!pte_protnone(pte));
+	WARN_ON(!pte_present(pte));
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
+		return;
+
+	WARN_ON(!pmd_protnone(pmd));
+	WARN_ON(!pmd_present(pmd));
+}
+#else
+static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot) { }
+#endif
+
+#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
+static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	WARN_ON(!pte_devmap(pte_mkdevmap(pte)));
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
+}
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot)
+{
+	pud_t pud = pfn_pud(pfn, prot);
+
+	WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
+}
+#else
+static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+#endif
+#else
+static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+#endif
+#else
+static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+#endif
+
+static void __init pte_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_SOFT_DIRTY))
+		return;
+
+	WARN_ON(!pte_soft_dirty(pte_mksoft_dirty(pte)));
+	WARN_ON(pte_soft_dirty(pte_clear_soft_dirty(pte)));
+}
+
+static void __init pte_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_SOFT_DIRTY))
+		return;
+
+	WARN_ON(!pte_swp_soft_dirty(pte_swp_mksoft_dirty(pte)));
+	WARN_ON(pte_swp_soft_dirty(pte_swp_clear_soft_dirty(pte)));
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_SOFT_DIRTY))
+		return;
+
+	WARN_ON(!pmd_soft_dirty(pmd_mksoft_dirty(pmd)));
+	WARN_ON(pmd_soft_dirty(pmd_clear_soft_dirty(pmd)));
+}
+
+static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_SOFT_DIRTY) ||
+	    !IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
+		return;
+
+	WARN_ON(!pmd_swp_soft_dirty(pmd_swp_mksoft_dirty(pmd)));
+	WARN_ON(pmd_swp_soft_dirty(pmd_swp_clear_soft_dirty(pmd)));
+}
+#else
+static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+}
+#endif
+
+static void __init pte_swap_tests(unsigned long pfn, pgprot_t prot)
+{
+	swp_entry_t swp;
+	pte_t pte;
+
+	pte = pfn_pte(pfn, prot);
+	swp = __pte_to_swp_entry(pte);
+	WARN_ON(!pte_same(pte, __swp_entry_to_pte(swp)));
+}
+
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot)
+{
+	swp_entry_t swp;
+	pmd_t pmd;
+
+	pmd = pfn_pmd(pfn, prot);
+	swp = __pmd_to_swp_entry(pmd);
+	WARN_ON(!pmd_same(pmd, __swp_entry_to_pmd(swp)));
+}
+#else
+static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot) { }
+#endif
+
+static void __init swap_migration_tests(void)
+{
+	struct page *page;
+	swp_entry_t swp;
+
+	if (!IS_ENABLED(CONFIG_MIGRATION))
+		return;
+	/*
+	 * swap_migration_tests() requires a dedicated page as it needs to
+	 * be locked before creating a migration entry from it. Locking the
+	 * page that actually maps kernel text ('start_kernel') can be real
+	 * problematic. Lets allocate a dedicated page explicitly for this
+	 * purpose that will be freed subsequently.
+	 */
+	page = alloc_page(GFP_KERNEL);
+	if (!page) {
+		pr_err("page allocation failed\n");
+		return;
+	}
+
+	/*
+	 * make_migration_entry() expects given page to be
+	 * locked, otherwise it stumbles upon a BUG_ON().
+	 */
+	__SetPageLocked(page);
+	swp = make_migration_entry(page, 1);
+	WARN_ON(!is_migration_entry(swp));
+	WARN_ON(!is_write_migration_entry(swp));
+
+	make_migration_entry_read(&swp);
+	WARN_ON(!is_migration_entry(swp));
+	WARN_ON(is_write_migration_entry(swp));
+
+	swp = make_migration_entry(page, 0);
+	WARN_ON(!is_migration_entry(swp));
+	WARN_ON(is_write_migration_entry(swp));
+	__ClearPageLocked(page);
+	__free_page(page);
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
+{
+	struct page *page;
+	pte_t pte;
+
+	/*
+	 * Accessing the page associated with the pfn is safe here,
+	 * as it was previously derived from a real kernel symbol.
+	 */
+	page = pfn_to_page(pfn);
+	pte = mk_huge_pte(page, prot);
+
+	WARN_ON(!huge_pte_dirty(huge_pte_mkdirty(pte)));
+	WARN_ON(!huge_pte_write(huge_pte_mkwrite(huge_pte_wrprotect(pte))));
+	WARN_ON(huge_pte_write(huge_pte_wrprotect(huge_pte_mkwrite(pte))));
+
+#ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
+	pte = pfn_pte(pfn, prot);
+
+	WARN_ON(!pte_huge(pte_mkhuge(pte)));
+#endif
+}
+#else
+static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot) { }
+#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void __init pmd_thp_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd;
+
+	/*
+	 * pmd_trans_huge() and pmd_present() must return positive
+	 * after MMU invalidation with pmd_mknotpresent().
+	 */
+	pmd = pfn_pmd(pfn, prot);
+	WARN_ON(!pmd_trans_huge(pmd_mkhuge(pmd)));
+
+#ifndef __HAVE_ARCH_PMDP_INVALIDATE
+	WARN_ON(!pmd_trans_huge(pmd_mknotpresent(pmd_mkhuge(pmd))));
+	WARN_ON(!pmd_present(pmd_mknotpresent(pmd_mkhuge(pmd))));
+#endif
+}
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void __init pud_thp_tests(unsigned long pfn, pgprot_t prot)
+{
+	pud_t pud;
+
+	/*
+	 * pud_trans_huge() and pud_present() must return positive
+	 * after MMU invalidation with pud_mknotpresent().
+	 */
+	pud = pfn_pud(pfn, prot);
+	WARN_ON(!pud_trans_huge(pud_mkhuge(pud)));
+
+	/*
+	 * pud_mknotpresent() has been dropped for now. Enable back
+	 * these tests when it comes back with a modified pud_present().
+	 *
+	 * WARN_ON(!pud_trans_huge(pud_mknotpresent(pud_mkhuge(pud))));
+	 * WARN_ON(!pud_present(pud_mknotpresent(pud_mkhuge(pud))));
+	 */
+}
+#else
+static void __init pud_thp_tests(unsigned long pfn, pgprot_t prot) { }
+#endif
+#else
+static void __init pmd_thp_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_thp_tests(unsigned long pfn, pgprot_t prot) { }
+#endif
+
 static unsigned long __init get_random_vaddr(void)
 {
 	unsigned long random_vaddr, random_pages, total_user_pages;
@@ -310,7 +571,7 @@ void __init debug_vm_pgtable(void)
 	pmd_t *pmdp, *saved_pmdp, pmd;
 	pte_t *ptep;
 	pgtable_t saved_ptep;
-	pgprot_t prot;
+	pgprot_t prot, protnone;
 	phys_addr_t paddr;
 	unsigned long vaddr, pte_aligned, pmd_aligned;
 	unsigned long pud_aligned, p4d_aligned, pgd_aligned;
@@ -325,6 +586,12 @@ void __init debug_vm_pgtable(void)
 		return;
 	}
 
+	/*
+	 * __P000 (or even __S000) will help create page table entries with
+	 * PROT_NONE permission as required for pxx_protnone_tests().
+	 */
+	protnone = __P000;
+
 	/*
 	 * PFN for mapping at PTE level is determined from a standard kernel
 	 * text symbol. But pfns for higher page table levels are derived by
@@ -380,6 +647,28 @@ void __init debug_vm_pgtable(void)
 	p4d_populate_tests(mm, p4dp, saved_pudp);
 	pgd_populate_tests(mm, pgdp, saved_p4dp);
 
+	pte_special_tests(pte_aligned, prot);
+	pte_protnone_tests(pte_aligned, protnone);
+	pmd_protnone_tests(pmd_aligned, protnone);
+
+	pte_devmap_tests(pte_aligned, prot);
+	pmd_devmap_tests(pmd_aligned, prot);
+	pud_devmap_tests(pud_aligned, prot);
+
+	pte_soft_dirty_tests(pte_aligned, prot);
+	pmd_soft_dirty_tests(pmd_aligned, prot);
+	pte_swap_soft_dirty_tests(pte_aligned, prot);
+	pmd_swap_soft_dirty_tests(pmd_aligned, prot);
+
+	pte_swap_tests(pte_aligned, prot);
+	pmd_swap_tests(pmd_aligned, prot);
+
+	swap_migration_tests();
+	hugetlb_basic_tests(pte_aligned, prot);
+
+	pmd_thp_tests(pmd_aligned, prot);
+	pud_thp_tests(pud_aligned, prot);
+
 	p4d_free(mm, saved_p4dp);
 	pud_free(mm, saved_pudp);
 	pmd_free(mm, saved_pmdp);

From patchwork Tue Mar 24 05:22:54 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 11454479
From: Anshuman Khandual
To:
linux-mm@kvack.org
Cc: christophe.leroy@c-s.fr, Anshuman Khandual, Andrew Morton, Mike Rapoport,
 Vineet Gupta, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
 Paul Mackerras, Michael Ellerman, Heiko Carstens, Vasily Gorbik,
 Christian Borntraeger, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 "H. Peter Anvin", "Kirill A. Shutemov", Paul Walmsley, Palmer Dabbelt,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
 linux-riscv@lists.infradead.org, x86@kernel.org, linux-arch@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH V2 2/3] mm/debug: Add tests validating arch advanced page
 table helpers
Date: Tue, 24 Mar 2020 10:52:54 +0530
Message-Id: <1585027375-9997-3-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1585027375-9997-1-git-send-email-anshuman.khandual@arm.com>
References: <1585027375-9997-1-git-send-email-anshuman.khandual@arm.com>

This adds new tests validating the following arch advanced page table
helpers. These tests create and verify specific mapping types at various
page table levels.

1. pxxp_set_wrprotect()
2. pxxp_get_and_clear()
3. pxxp_set_access_flags()
4. pxxp_get_and_clear_full()
5. pxxp_test_and_clear_young()
6. pxx_leaf()
7. pxx_set_huge()
8. pxx_(clear|mk)_savedwrite()
9. huge_pxxp_xxx()

Cc: Andrew Morton
Cc: Mike Rapoport
Cc: Vineet Gupta
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: Kirill A.
Shutemov
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: x86@kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Catalin Marinas
Signed-off-by: Anshuman Khandual
Reported-by: kernel test robot
---
 mm/debug_vm_pgtable.c | 290 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 290 insertions(+)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 15055a8f6478..87b4b495333b 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * Basic operations
@@ -68,6 +69,54 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
 	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));
 }
 
+static void __init pte_advanced_tests(struct mm_struct *mm,
+			struct vm_area_struct *vma, pte_t *ptep,
+			unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	pte = pfn_pte(pfn, prot);
+	set_pte_at(mm, vaddr, ptep, pte);
+	ptep_set_wrprotect(mm, vaddr, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(pte_write(pte));
+
+	pte = pfn_pte(pfn, prot);
+	set_pte_at(mm, vaddr, ptep, pte);
+	ptep_get_and_clear(mm, vaddr, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!pte_none(pte));
+
+	pte = pfn_pte(pfn, prot);
+	pte = pte_wrprotect(pte);
+	pte = pte_mkclean(pte);
+	set_pte_at(mm, vaddr, ptep, pte);
+	pte = pte_mkwrite(pte);
+	pte = pte_mkdirty(pte);
+	ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!(pte_write(pte) && pte_dirty(pte)));
+
+	pte = pfn_pte(pfn, prot);
+	set_pte_at(mm, vaddr, ptep, pte);
+	ptep_get_and_clear_full(mm, vaddr, ptep, 1);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!pte_none(pte));
+
+	pte = pte_mkyoung(pte);
+	set_pte_at(mm, vaddr, ptep, pte);
+	ptep_test_and_clear_young(vma, vaddr, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(pte_young(pte));
+}
+
+static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
+	WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
 {
@@ -87,6 +136,83 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
 	WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
 }
 
+static void __init pmd_advanced_tests(struct mm_struct *mm,
+			struct vm_area_struct *vma, pmd_t *pmdp,
+			unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	/* Align the address wrt HPAGE_PMD_SIZE */
+	vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
+
+	pmd = pfn_pmd(pfn, prot);
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmdp_set_wrprotect(mm, vaddr, pmdp);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(pmd_write(pmd));
+
+	pmd = pfn_pmd(pfn, prot);
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmdp_huge_get_and_clear(mm, vaddr, pmdp);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!pmd_none(pmd));
+
+	pmd = pfn_pmd(pfn, prot);
+	pmd = pmd_wrprotect(pmd);
+	pmd = pmd_mkclean(pmd);
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmd = pmd_mkwrite(pmd);
+	pmd = pmd_mkdirty(pmd);
+	pmdp_set_access_flags(vma, vaddr, pmdp, pmd, 1);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!(pmd_write(pmd) && pmd_dirty(pmd)));
+
+	pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmdp_huge_get_and_clear_full(mm, vaddr, pmdp, 1);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!pmd_none(pmd));
+
+	pmd = pmd_mkyoung(pmd);
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmdp_test_and_clear_young(vma, vaddr, pmdp);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(pmd_young(pmd));
+}
+
+static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	/*
+	 * PMD based THP is a leaf entry.
+	 */
+	pmd = pmd_mkhuge(pmd);
+	WARN_ON(!pmd_leaf(pmd));
+}
+
+static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd;
+
+	/*
+	 * X86 defined pmd_set_huge() verifies that the given
+	 * PMD is not a populated non-leaf entry.
+	 */
+	WRITE_ONCE(*pmdp, __pmd(0));
+	WARN_ON(!pmd_set_huge(pmdp, __pfn_to_phys(pfn), prot));
+	WARN_ON(!pmd_clear_huge(pmdp));
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!pmd_none(pmd));
+}
+
+static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
+	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
+}
+
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
 {
@@ -107,12 +233,110 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
 	 */
 	WARN_ON(!pud_bad(pud_mkhuge(pud)));
 }
+
+static void pud_advanced_tests(struct mm_struct *mm,
+			struct vm_area_struct *vma, pud_t *pudp,
+			unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+	pud_t pud = pfn_pud(pfn, prot);
+
+	/* Align the address wrt HPAGE_PUD_SIZE */
+	vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
+
+	set_pud_at(mm, vaddr, pudp, pud);
+	pudp_set_wrprotect(mm, vaddr, pudp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(pud_write(pud));
+
+#ifndef __PAGETABLE_PMD_FOLDED
+	pud = pfn_pud(pfn, prot);
+	set_pud_at(mm, vaddr, pudp, pud);
+	pudp_huge_get_and_clear(mm, vaddr, pudp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!pud_none(pud));
+
+	pud = pfn_pud(pfn, prot);
+	set_pud_at(mm, vaddr, pudp, pud);
+	pudp_huge_get_and_clear_full(mm, vaddr, pudp, 1);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!pud_none(pud));
+#endif
+	pud = pfn_pud(pfn, prot);
+	pud = pud_wrprotect(pud);
+	pud = pud_mkclean(pud);
+	set_pud_at(mm, vaddr, pudp, pud);
+	pud = pud_mkwrite(pud);
+	pud = pud_mkdirty(pud);
+	pudp_set_access_flags(vma, vaddr, pudp, pud, 1);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!(pud_write(pud) && pud_dirty(pud)));
+
+	pud = pud_mkyoung(pud);
+	set_pud_at(mm, vaddr, pudp, pud);
+	pudp_test_and_clear_young(vma, vaddr, pudp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(pud_young(pud));
+}
+
+static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
+{
+	pud_t pud = pfn_pud(pfn, prot);
+
+	/*
+	 * PUD based THP is a leaf entry.
+	 */
+	pud = pud_mkhuge(pud);
+	WARN_ON(!pud_leaf(pud));
+}
+
+static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
+{
+	pud_t pud;
+
+	/*
+	 * X86 defined pud_set_huge() verifies that the given
+	 * PUD is not a populated non-leaf entry.
+	 */
+	WRITE_ONCE(*pudp, __pud(0));
+	WARN_ON(!pud_set_huge(pudp, __pfn_to_phys(pfn), prot));
+	WARN_ON(!pud_clear_huge(pudp));
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!pud_none(pud));
+}
 #else
 static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
+static void pud_advanced_tests(struct mm_struct *mm,
+			struct vm_area_struct *vma, pud_t *pudp,
+			unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+}
+static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
+{
+}
 #endif
 #else
 static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
 static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pmd_advanced_tests(struct mm_struct *mm,
+			struct vm_area_struct *vma, pmd_t *pmdp,
+			unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+}
+static void __init pud_advanced_tests(struct mm_struct *mm,
+			struct vm_area_struct *vma, pud_t *pudp,
+			unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+}
+static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
+{
+}
+static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
+{
+}
+static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot) { }
 #endif
 
 static void __init p4d_basic_tests(unsigned long pfn, pgprot_t prot)
@@ -500,8 +724,52 @@ static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
 	WARN_ON(!pte_huge(pte_mkhuge(pte)));
 #endif
 }
+
+static void __init hugetlb_advanced_tests(struct mm_struct *mm,
+					  struct vm_area_struct *vma,
+					  pte_t *ptep, unsigned long pfn,
+					  unsigned long vaddr, pgprot_t prot)
+{
+	struct page *page = pfn_to_page(pfn);
+	pte_t pte = READ_ONCE(*ptep);
+
+	pte = __pte(pte_val(pte) | RANDOM_ORVALUE);
+	set_huge_pte_at(mm, vaddr, ptep, pte);
+	barrier();
+	WARN_ON(!pte_same(pte, huge_ptep_get(ptep)));
+	huge_pte_clear(mm, vaddr, ptep, PMD_SIZE);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!huge_pte_none(pte));
+
+	pte = mk_huge_pte(page, prot);
+	set_huge_pte_at(mm, vaddr, ptep, pte);
+	huge_ptep_set_wrprotect(mm, vaddr, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(huge_pte_write(pte));
+
+	pte = mk_huge_pte(page, prot);
+	set_huge_pte_at(mm, vaddr, ptep, pte);
+	huge_ptep_get_and_clear(mm, vaddr, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!huge_pte_none(pte));
+
+	pte = mk_huge_pte(page, prot);
+	pte = huge_pte_wrprotect(pte);
+	set_huge_pte_at(mm, vaddr, ptep, pte);
+	pte = huge_pte_mkwrite(pte);
+	pte = huge_pte_mkdirty(pte);
+	huge_ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!(huge_pte_write(pte) && huge_pte_dirty(pte)));
+}
 #else
 static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init hugetlb_advanced_tests(struct mm_struct *mm,
+					  struct vm_area_struct *vma,
+					  pte_t *ptep, unsigned long pfn,
+					  unsigned long vaddr, pgprot_t prot)
+{
+}
 #endif
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -564,6 +832,7 @@ static unsigned long __init get_random_vaddr(void)
 
 void __init debug_vm_pgtable(void)
 {
+	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	pgd_t *pgdp;
 	p4d_t *p4dp, *saved_p4dp;
@@ -592,6 +861,12 @@ void __init debug_vm_pgtable(void)
 	 */
 	protnone = __P000;
 
+	vma = vm_area_alloc(mm);
+	if (!vma) {
+		pr_err("vma allocation failed\n");
+		return;
+	}
+
 	/*
 	 * PFN for mapping at PTE level is determined from a standard kernel
 	 * text symbol. But pfns for higher page table levels are derived by
@@ -640,6 +915,20 @@ void __init debug_vm_pgtable(void)
 	p4d_clear_tests(mm, p4dp);
 	pgd_clear_tests(mm, pgdp);
 
+	pte_advanced_tests(mm, vma, ptep, pte_aligned, vaddr, prot);
+	pmd_advanced_tests(mm, vma, pmdp, pmd_aligned, vaddr, prot);
+	pud_advanced_tests(mm, vma, pudp, pud_aligned, vaddr, prot);
+	hugetlb_advanced_tests(mm, vma, ptep, pte_aligned, vaddr, prot);
+
+	pmd_leaf_tests(pmd_aligned, prot);
+	pud_leaf_tests(pud_aligned, prot);
+
+	pmd_huge_tests(pmdp, pmd_aligned, prot);
+	pud_huge_tests(pudp, pud_aligned, prot);
+
+	pte_savedwrite_tests(pte_aligned, prot);
+	pmd_savedwrite_tests(pmd_aligned, prot);
+
 	pte_unmap_unlock(ptep, ptl);
 	pmd_populate_tests(mm, pmdp, saved_ptep);
@@ -674,6 +963,7 @@ void __init debug_vm_pgtable(void)
 	pmd_free(mm, saved_pmdp);
 	pte_free(mm, saved_ptep);
+	vm_area_free(vma);
 	mm_dec_nr_puds(mm);
 	mm_dec_nr_pmds(mm);
 	mm_dec_nr_ptes(mm);

From patchwork Tue Mar 24 05:22:55 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 11454481
usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CA3067FA; Mon, 23 Mar 2020 22:23:38 -0700 (PDT) Received: from p8cg001049571a15.arm.com (unknown [10.163.1.71]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id F2AC73F7C3; Mon, 23 Mar 2020 22:27:40 -0700 (PDT) From: Anshuman Khandual To: linux-mm@kvack.org Cc: christophe.leroy@c-s.fr, Anshuman Khandual , Jonathan Corbet , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH V2 3/3] Documentation/mm: Add descriptions for arch page table helpers Date: Tue, 24 Mar 2020 10:52:55 +0530 Message-Id: <1585027375-9997-4-git-send-email-anshuman.khandual@arm.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1585027375-9997-1-git-send-email-anshuman.khandual@arm.com> References: <1585027375-9997-1-git-send-email-anshuman.khandual@arm.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This removes existing description from the test and instead adds a specific description file for all arch page table helpers which is in sync with the semantics being tested via CONFIG_DEBUG_VM_PGTABLE. All future changes either to these descriptions here or the debug test should always remain in sync. Cc: Jonathan Corbet Cc: Andrew Morton Cc: linux-doc@vger.kernel.org Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org Suggested-by: Mike Rapoport Signed-off-by: Anshuman Khandual --- Documentation/vm/arch_pgtable_helpers.rst | 256 ++++++++++++++++++++++ mm/debug_vm_pgtable.c | 13 +- 2 files changed, 259 insertions(+), 10 deletions(-) create mode 100644 Documentation/vm/arch_pgtable_helpers.rst diff --git a/Documentation/vm/arch_pgtable_helpers.rst b/Documentation/vm/arch_pgtable_helpers.rst new file mode 100644 index 000000000000..1fc8a1d8932a --- /dev/null +++ b/Documentation/vm/arch_pgtable_helpers.rst @@ -0,0 +1,256 @@ +.. SPDX-License-Identifier: GPL-2.0 + +.. 
_arch_page_table_helpers: + +=============================== +Architecture Page Table Helpers +=============================== + +Generic MM expects architectures (with MMU) to provide helpers to create, access +and modify page table entries at various level for different memory functions. +These page table helpers need to conform to a common semantics across platforms. +Following tables describe the expected semantics which can also be tested during +boot via CONFIG_DEBUG_VM_PGTABLE option. All future changes in here or the debug +test need to be in sync. + +====================== +PTE Page Table Helpers +====================== + +-------------------------------------------------------------------------------- +| pte_same | Tests whether both PTE entries are the same | +-------------------------------------------------------------------------------- +| pte_bad | Tests a non-table mapped PTE | +-------------------------------------------------------------------------------- +| pte_present | Tests a valid mapped PTE | +-------------------------------------------------------------------------------- +| pte_young | Tests a young PTE | +-------------------------------------------------------------------------------- +| pte_dirty | Tests a dirty PTE | +-------------------------------------------------------------------------------- +| pte_write | Tests a writable PTE | +-------------------------------------------------------------------------------- +| pte_special | Tests a special PTE | +-------------------------------------------------------------------------------- +| pte_protnone | Tests a PROT_NONE PTE | +-------------------------------------------------------------------------------- +| pte_devmap | Tests a ZONE_DEVICE mapped PTE | +-------------------------------------------------------------------------------- +| pte_soft_dirty | Tests a soft dirty PTE | +-------------------------------------------------------------------------------- +| pte_swp_soft_dirty | 
Tests a soft dirty swapped PTE | +-------------------------------------------------------------------------------- +| pte_mkyoung | Creates a young PTE | +-------------------------------------------------------------------------------- +| pte_mkold | Creates an old PTE | +-------------------------------------------------------------------------------- +| pte_mkdirty | Creates a dirty PTE | +-------------------------------------------------------------------------------- +| pte_mkclean | Creates a clean PTE | +-------------------------------------------------------------------------------- +| pte_mkwrite | Creates a writable PTE | +-------------------------------------------------------------------------------- +| pte_mkwrprotect | Creates a write protected PTE | +-------------------------------------------------------------------------------- +| pte_mkspecial | Creates a special PTE | +-------------------------------------------------------------------------------- +| pte_mkdevmap | Creates a ZONE_DEVICE mapped PTE | +-------------------------------------------------------------------------------- +| pte_mksoft_dirty | Creates a soft dirty PTE | +-------------------------------------------------------------------------------- +| pte_clear_soft_dirty | Clears a soft dirty PTE | +-------------------------------------------------------------------------------- +| pte_swp_mksoft_dirty | Creates a soft dirty swapped PTE | +-------------------------------------------------------------------------------- +| pte_swp_clear_soft_dirty | Clears a soft dirty swapped PTE | +-------------------------------------------------------------------------------- +| pte_mknotpresent | Invalidates a mapped PTE | +-------------------------------------------------------------------------------- +| ptep_get_and_clear | Clears a PTE | +-------------------------------------------------------------------------------- +| ptep_get_and_clear_full | Clears a PTE | 
+--------------------------------------------------------------------------------
+| ptep_test_and_clear_young | Clears young from a PTE |
+--------------------------------------------------------------------------------
+| ptep_set_wrprotect | Converts into a write protected PTE |
+--------------------------------------------------------------------------------
+| ptep_set_access_flags | Converts into a more permissive PTE |
+--------------------------------------------------------------------------------
+
+======================
+PMD Page Table Helpers
+======================
+
+--------------------------------------------------------------------------------
+| pmd_same | Tests whether both PMD entries are the same |
+--------------------------------------------------------------------------------
+| pmd_bad | Tests a non-table mapped PMD |
+--------------------------------------------------------------------------------
+| pmd_leaf | Tests a leaf mapped PMD |
+--------------------------------------------------------------------------------
+| pmd_huge | Tests a HugeTLB mapped PMD |
+--------------------------------------------------------------------------------
+| pmd_trans_huge | Tests a Transparent Huge Page (THP) at PMD |
+--------------------------------------------------------------------------------
+| pmd_present | Tests a valid mapped PMD |
+--------------------------------------------------------------------------------
+| pmd_young | Tests a young PMD |
+--------------------------------------------------------------------------------
+| pmd_dirty | Tests a dirty PMD |
+--------------------------------------------------------------------------------
+| pmd_write | Tests a writable PMD |
+--------------------------------------------------------------------------------
+| pmd_special | Tests a special PMD |
+--------------------------------------------------------------------------------
+| pmd_protnone | Tests a PROT_NONE PMD |
+--------------------------------------------------------------------------------
+| pmd_devmap | Tests a ZONE_DEVICE mapped PMD |
+--------------------------------------------------------------------------------
+| pmd_soft_dirty | Tests a soft dirty PMD |
+--------------------------------------------------------------------------------
+| pmd_swp_soft_dirty | Tests a soft dirty swapped PMD |
+--------------------------------------------------------------------------------
+| pmd_mkyoung | Creates a young PMD |
+--------------------------------------------------------------------------------
+| pmd_mkold | Creates an old PMD |
+--------------------------------------------------------------------------------
+| pmd_mkdirty | Creates a dirty PMD |
+--------------------------------------------------------------------------------
+| pmd_mkclean | Creates a clean PMD |
+--------------------------------------------------------------------------------
+| pmd_mkwrite | Creates a writable PMD |
+--------------------------------------------------------------------------------
+| pmd_wrprotect | Creates a write protected PMD |
+--------------------------------------------------------------------------------
+| pmd_mkspecial | Creates a special PMD |
+--------------------------------------------------------------------------------
+| pmd_mkdevmap | Creates a ZONE_DEVICE mapped PMD |
+--------------------------------------------------------------------------------
+| pmd_mksoft_dirty | Creates a soft dirty PMD |
+--------------------------------------------------------------------------------
+| pmd_clear_soft_dirty | Clears a soft dirty PMD |
+--------------------------------------------------------------------------------
+| pmd_swp_mksoft_dirty | Creates a soft dirty swapped PMD |
+--------------------------------------------------------------------------------
+| pmd_swp_clear_soft_dirty | Clears a soft dirty swapped PMD |
+--------------------------------------------------------------------------------
+| pmd_mknotpresent | Invalidates a mapped PMD |
+--------------------------------------------------------------------------------
+| pmd_set_huge | Creates a PMD huge mapping |
+--------------------------------------------------------------------------------
+| pmd_clear_huge | Clears a PMD huge mapping |
+--------------------------------------------------------------------------------
+| pmdp_get_and_clear | Clears a PMD |
+--------------------------------------------------------------------------------
+| pmdp_get_and_clear_full | Clears a PMD |
+--------------------------------------------------------------------------------
+| pmdp_test_and_clear_young | Clears young from a PMD |
+--------------------------------------------------------------------------------
+| pmdp_set_wrprotect | Converts into a write protected PMD |
+--------------------------------------------------------------------------------
+| pmdp_set_access_flags | Converts into a more permissive PMD |
+--------------------------------------------------------------------------------
+
+======================
+PUD Page Table Helpers
+======================
+
+--------------------------------------------------------------------------------
+| pud_same | Tests whether both PUD entries are the same |
+--------------------------------------------------------------------------------
+| pud_bad | Tests a non-table mapped PUD |
+--------------------------------------------------------------------------------
+| pud_leaf | Tests a leaf mapped PUD |
+--------------------------------------------------------------------------------
+| pud_huge | Tests a HugeTLB mapped PUD |
+--------------------------------------------------------------------------------
+| pud_trans_huge | Tests a Transparent Huge Page (THP) at PUD |
+--------------------------------------------------------------------------------
+| pud_present | Tests a valid mapped PUD |
+--------------------------------------------------------------------------------
+| pud_young | Tests a young PUD |
+--------------------------------------------------------------------------------
+| pud_dirty | Tests a dirty PUD |
+--------------------------------------------------------------------------------
+| pud_write | Tests a writable PUD |
+--------------------------------------------------------------------------------
+| pud_devmap | Tests a ZONE_DEVICE mapped PUD |
+--------------------------------------------------------------------------------
+| pud_mkyoung | Creates a young PUD |
+--------------------------------------------------------------------------------
+| pud_mkold | Creates an old PUD |
+--------------------------------------------------------------------------------
+| pud_mkdirty | Creates a dirty PUD |
+--------------------------------------------------------------------------------
+| pud_mkclean | Creates a clean PUD |
+--------------------------------------------------------------------------------
+| pud_mkwrite | Creates a writable PUD |
+--------------------------------------------------------------------------------
+| pud_wrprotect | Creates a write protected PUD |
+--------------------------------------------------------------------------------
+| pud_mkdevmap | Creates a ZONE_DEVICE mapped PUD |
+--------------------------------------------------------------------------------
+| pud_mknotpresent | Invalidates a mapped PUD |
+--------------------------------------------------------------------------------
+| pud_set_huge | Creates a PUD huge mapping |
+--------------------------------------------------------------------------------
+| pud_clear_huge | Clears a PUD huge mapping |
+--------------------------------------------------------------------------------
+| pudp_get_and_clear | Clears a PUD |
+--------------------------------------------------------------------------------
+| pudp_get_and_clear_full | Clears a PUD |
+--------------------------------------------------------------------------------
+| pudp_test_and_clear_young | Clears young from a PUD |
+--------------------------------------------------------------------------------
+| pudp_set_wrprotect | Converts into a write protected PUD |
+--------------------------------------------------------------------------------
+| pudp_set_access_flags | Converts into a more permissive PUD |
+--------------------------------------------------------------------------------
+
+==========================
+HugeTLB Page Table Helpers
+==========================
+
+--------------------------------------------------------------------------------
+| pte_huge | Tests a HugeTLB |
+--------------------------------------------------------------------------------
+| pte_mkhuge | Creates a HugeTLB |
+--------------------------------------------------------------------------------
+| huge_pte_dirty | Tests a dirty HugeTLB |
+--------------------------------------------------------------------------------
+| huge_pte_write | Tests a writable HugeTLB |
+--------------------------------------------------------------------------------
+| huge_pte_mkdirty | Creates a dirty HugeTLB |
+--------------------------------------------------------------------------------
+| huge_pte_mkwrite | Creates a writable HugeTLB |
+--------------------------------------------------------------------------------
+| huge_pte_wrprotect | Creates a write protected HugeTLB |
+--------------------------------------------------------------------------------
+| huge_ptep_get_and_clear | Clears a HugeTLB |
+--------------------------------------------------------------------------------
+| huge_ptep_set_wrprotect | Converts into a write protected HugeTLB |
+--------------------------------------------------------------------------------
+| huge_ptep_set_access_flags | Converts into a more permissive HugeTLB |
+--------------------------------------------------------------------------------
+
+=======================
+SWAP Page Table Helpers
+=======================
+
+--------------------------------------------------------------------------------
+| __pte_to_swp_entry | Creates a swapped entry (arch) from a mapped PTE |
+--------------------------------------------------------------------------------
+| __swp_entry_to_pte | Creates a mapped PTE from a swapped entry (arch) |
+--------------------------------------------------------------------------------
+| __pmd_to_swp_entry | Creates a swapped entry (arch) from a mapped PMD |
+--------------------------------------------------------------------------------
+| __swp_entry_to_pmd | Creates a mapped PMD from a swapped entry (arch) |
+--------------------------------------------------------------------------------
+| is_migration_entry | Tests a migration (read or write) swapped entry |
+--------------------------------------------------------------------------------
+| is_write_migration_entry | Tests a write migration swapped entry |
+--------------------------------------------------------------------------------
+| make_migration_entry_read | Converts into read migration swapped entry |
+--------------------------------------------------------------------------------
+| make_migration_entry | Creates a migration swapped entry (read or write) |
+--------------------------------------------------------------------------------
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 87b4b495333b..7c210f99a812 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -32,16 +32,9 @@
 #include

 /*
- * Basic operations
- *
- * mkold(entry)			= An old and not a young entry
- * mkyoung(entry)		= A young and not an old entry
- * mkdirty(entry)		= A dirty and not a clean entry
- * mkclean(entry)		= A clean and not a dirty entry
- * mkwrite(entry)		= A write and not a write protected entry
- * wrprotect(entry)		= A write protected and not a write entry
- * pxx_bad(entry)		= A mapped and non-table entry
- * pxx_same(entry1, entry2)	= Both entries hold the exact same value
+ * Please refer to Documentation/vm/arch_pgtable_helpers.rst for the semantics
+ * expectations that are being validated here. All future changes here or in
+ * the documentation need to be in sync.
  */

 #define VMFLAGS	(VM_READ|VM_WRITE|VM_EXEC)