From patchwork Mon Oct 7 05:45:23 2019
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 11176821
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
 Thomas Gleixner, Mike Rapoport, Mike Kravetz, Jason Gunthorpe, Dan Williams,
 Peter Zijlstra,
 Michal Hocko, Mark Rutland, Mark Brown, Steven Price, Ard Biesheuvel,
 Masahiro Yamada, Kees Cook, Tetsuo Handa, Matthew Wilcox, Sri Krishna chowdary,
 Dave Hansen, Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
 Martin Schwidefsky, Heiko Carstens, "David S. Miller", Vineet Gupta,
 James Hogan, Paul Burton, Ralf Baechle, "Kirill A. Shutemov", Gerald Schaefer,
 Christophe Leroy, linux-snps-arc@lists.infradead.org,
 linux-mips@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V4 1/2] mm/hugetlb: Make alloc_gigantic_page() available for general use
Date: Mon, 7 Oct 2019 11:15:23 +0530
Message-Id: <1570427124-21887-2-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1570427124-21887-1-git-send-email-anshuman.khandual@arm.com>
References: <1570427124-21887-1-git-send-email-anshuman.khandual@arm.com>

alloc_gigantic_page() implements an allocation method that scans over various
zones looking for a large contiguous memory block which could not have been
allocated through the buddy allocator. A subsequent patch which tests arch
page table helpers needs such a method to allocate a PUD_SIZE sized memory
block. In the future such methods might have other use cases as well. So split
up alloc_gigantic_page(), carving out the actual allocation into a new helper,
alloc_gigantic_page_order(), which is made available for general use. A short
usage sketch follows the diff below.

Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: Greg Kroah-Hartman
Cc: Thomas Gleixner
Cc: Mike Rapoport
Cc: Mike Kravetz
Cc: Jason Gunthorpe
Cc: Dan Williams
Cc: Peter Zijlstra
Cc: Michal Hocko
Cc: Mark Rutland
Cc: Mark Brown
Cc: Steven Price
Cc: Ard Biesheuvel
Cc: Masahiro Yamada
Cc: Kees Cook
Cc: Tetsuo Handa
Cc: Matthew Wilcox
Cc: Sri Krishna chowdary
Cc: Dave Hansen
Cc: Russell King - ARM Linux
Cc: Michael Ellerman
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: "David S. Miller"
Cc: Vineet Gupta
Cc: James Hogan
Cc: Paul Burton
Cc: Ralf Baechle
Cc: Kirill A. Shutemov
Cc: Gerald Schaefer
Cc: Christophe Leroy
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-mips@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 include/linux/hugetlb.h |  9 +++++++++
 mm/hugetlb.c            | 24 ++++++++++++++++++++++--
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 53fc34f930d0..cc50d5ad4885 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -299,6 +299,9 @@ static inline bool is_file_hugepages(struct file *file)
 }
 
+struct page *
+alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+			  int nid, nodemask_t *nodemask);
 #else /* !CONFIG_HUGETLBFS */
 
 #define is_file_hugepages(file)			false
@@ -310,6 +313,12 @@ hugetlb_file_setup(const char *name, size_t size, vm_flags_t acctflag,
 	return ERR_PTR(-ENOSYS);
 }
 
+static inline struct page *
+alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+			  int nid, nodemask_t *nodemask)
+{
+	return NULL;
+}
 #endif /* !CONFIG_HUGETLBFS */
 
 #ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ef37c85423a5..3fb81252f52b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1112,10 +1112,9 @@ static bool zone_spans_last_pfn(const struct zone *zone,
 	return zone_spans_pfn(zone, last_pfn);
 }
 
-static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+struct page *alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
 		int nid, nodemask_t *nodemask)
 {
-	unsigned int order = huge_page_order(h);
 	unsigned long nr_pages = 1 << order;
 	unsigned long ret, pfn, flags;
 	struct zonelist *zonelist;
@@ -1151,6 +1150,14 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 	return NULL;
 }
 
+static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+		int nid, nodemask_t *nodemask)
+{
+	unsigned int order = huge_page_order(h);
+
+	return alloc_gigantic_page_order(order, gfp_mask, nid, nodemask);
+}
+
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
 static void prep_compound_gigantic_page(struct page *page, unsigned int order);
 #else /* !CONFIG_CONTIG_ALLOC */
@@ -1159,6 +1166,12 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	return NULL;
 }
+
+struct page *alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+		int nid, nodemask_t *nodemask)
+{
+	return NULL;
+}
 #endif /* CONFIG_CONTIG_ALLOC */
 
 #else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */
@@ -1167,6 +1180,13 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	return NULL;
 }
+
+struct page *alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+		int nid, nodemask_t *nodemask)
+{
+	return NULL;
+}
+
 static inline void free_gigantic_page(struct page *page, unsigned int order) { }
 static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
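
The following is a minimal caller sketch, not part of the patch, showing how
the new helper could be used to request a zeroed, naturally aligned PUD_SIZE
block. The wrapper name try_alloc_pud_sized_block() is hypothetical; the node
arguments mirror the ones used by the test in patch 2/2.

#include <linux/gfp.h>
#include <linux/hugetlb.h>
#include <linux/mm.h>
#include <linux/nodemask.h>

/* Hypothetical caller: request a PUD_SIZE contiguous block on any memory node. */
static struct page *try_alloc_pud_sized_block(void)
{
	gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO;

	/* Returns NULL when no suitable contiguous range can be isolated. */
	return alloc_gigantic_page_order(get_order(PUD_SIZE), gfp_mask,
					 first_memory_node,
					 &node_states[N_MEMORY]);
}

A successful allocation would later be released with
free_contig_range(page_to_pfn(page), 1UL << get_order(PUD_SIZE)), as
free_mapped_page() in patch 2/2 does.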
From patchwork Mon Oct 7 05:45:24 2019
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 11176831
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
 Thomas Gleixner, Mike Rapoport, Jason Gunthorpe, Dan Williams, Peter Zijlstra,
 Michal Hocko, Mark Rutland, Mark Brown, Steven Price, Ard Biesheuvel,
 Masahiro Yamada, Kees Cook, Tetsuo Handa, Matthew Wilcox, Sri Krishna chowdary,
 Dave Hansen, Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
 Martin Schwidefsky, Heiko Carstens, "David S. Miller", Vineet Gupta,
 James Hogan, Paul Burton, Ralf Baechle, "Kirill A. Shutemov",
 Gerald Schaefer, Christophe Leroy, linux-snps-arc@lists.infradead.org,
 linux-mips@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V4 2/2] mm/pgtable/debug: Add test validating architecture page table helpers
Date: Mon, 7 Oct 2019 11:15:24 +0530
Message-Id: <1570427124-21887-3-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1570427124-21887-1-git-send-email-anshuman.khandual@arm.com>
References: <1570427124-21887-1-git-send-email-anshuman.khandual@arm.com>

This adds a test module which validates architecture page table helpers and
accessors for compliance with the expected generic MM semantics. This will
help various architectures in validating changes to existing page table
helpers or the addition of new ones. The test page table, and the memory pages
used to create its entries at the various levels, are all allocated from
system memory with the required alignments. If memory pages with the required
size and alignment cannot be allocated, the individual tests depending on them
are skipped. An illustrative configuration fragment follows the patch.

Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: Greg Kroah-Hartman
Cc: Thomas Gleixner
Cc: Mike Rapoport
Cc: Jason Gunthorpe
Cc: Dan Williams
Cc: Peter Zijlstra
Cc: Michal Hocko
Cc: Mark Rutland
Cc: Mark Brown
Cc: Steven Price
Cc: Ard Biesheuvel
Cc: Masahiro Yamada
Cc: Kees Cook
Cc: Tetsuo Handa
Cc: Matthew Wilcox
Cc: Sri Krishna chowdary
Cc: Dave Hansen
Cc: Russell King - ARM Linux
Cc: Michael Ellerman
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: "David S. Miller"
Cc: Vineet Gupta
Cc: James Hogan
Cc: Paul Burton
Cc: Ralf Baechle
Cc: Kirill A. Shutemov
Cc: Gerald Schaefer
Cc: Christophe Leroy
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-mips@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Catalin Marinas
Signed-off-by: Christophe Leroy
Tested-by: Christophe Leroy #PPC32
Signed-off-by: Anshuman Khandual
---
 arch/x86/include/asm/pgtable_64_types.h |   2 +
 mm/Kconfig.debug                        |  15 +
 mm/Makefile                             |   1 +
 mm/arch_pgtable_test.c                  | 440 ++++++++++++++++++++++++
 4 files changed, 458 insertions(+)
 create mode 100644 mm/arch_pgtable_test.c

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d..b882792a3999 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -40,6 +40,8 @@ static inline bool pgtable_l5_enabled(void)
 #define pgtable_l5_enabled() 0
 #endif /* CONFIG_X86_5LEVEL */
 
+#define mm_p4d_folded(mm)	(!pgtable_l5_enabled())
+
 extern unsigned int pgdir_shift;
 extern unsigned int ptrs_per_p4d;
 
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 327b3ebf23bf..683131b1ee7d 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -117,3 +117,18 @@ config DEBUG_RODATA_TEST
 	depends on STRICT_KERNEL_RWX
 	---help---
 	  This option enables a testcase for the setting rodata read-only.
+
+config DEBUG_ARCH_PGTABLE_TEST
+	bool "Test arch page table helpers for semantics compliance"
+	depends on MMU
+	depends on DEBUG_KERNEL
+	depends on !(ARM || IA64)
+	help
+	  This option provides a kernel module which can be used to test
+	  architecture page table helper functions on various platforms,
+	  verifying that they comply with expected generic MM semantics.
+	  This will help architecture code in making sure that any changes
+	  or new additions of these helpers will still conform to expected
+	  generic MM semantics.
+
+	  If unsure, say N.
diff --git a/mm/Makefile b/mm/Makefile
index d996846697ef..bb572c5aa8c5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -86,6 +86,7 @@ obj-$(CONFIG_HWPOISON_INJECT) += hwpoison-inject.o
 obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
 obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
 obj-$(CONFIG_DEBUG_RODATA_TEST) += rodata_test.o
+obj-$(CONFIG_DEBUG_ARCH_PGTABLE_TEST) += arch_pgtable_test.o
 obj-$(CONFIG_PAGE_OWNER) += page_owner.o
 obj-$(CONFIG_CLEANCACHE) += cleancache.o
 obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
diff --git a/mm/arch_pgtable_test.c b/mm/arch_pgtable_test.c
new file mode 100644
index 000000000000..2942a0484d63
--- /dev/null
+++ b/mm/arch_pgtable_test.c
@@ -0,0 +1,440 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This kernel module validates architecture page table helpers &
+ * accessors and helps in verifying their continued compliance with
+ * generic MM semantics.
+ *
+ * Copyright (C) 2019 ARM Ltd.
+ *
+ * Author: Anshuman Khandual
+ */
+#define pr_fmt(fmt) "arch_pgtable_test: %s " fmt, __func__
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * Basic operations
+ *
+ * mkold(entry)			= An old and not a young entry
+ * mkyoung(entry)		= A young and not an old entry
+ * mkdirty(entry)		= A dirty and not a clean entry
+ * mkclean(entry)		= A clean and not a dirty entry
+ * mkwrite(entry)		= A write and not a write protected entry
+ * wrprotect(entry)		= A write protected and not a write entry
+ * pxx_bad(entry)		= A mapped and non-table entry
+ * pxx_same(entry1, entry2)	= Both entries hold the exact same value
+ */
+#define VMFLAGS	(VM_READ|VM_WRITE|VM_EXEC)
+
+/*
+ * On the s390 platform, the lower 12 bits are used to identify the given
+ * page table entry type and for other arch specific requirements. But these
+ * bits might affect the ability to clear entries with pxx_clear(). So while
+ * loading up the entries, skip all lower 12 bits in order to accommodate the
+ * s390 platform. It does not affect any other platform.
+ */
+#define RANDOM_ORVALUE	(0xfffffffffffff000UL)
+#define RANDOM_NZVALUE	(0xff)
+
+static bool pud_aligned __initdata;
+static bool pmd_aligned __initdata;
+
+static void __init pte_basic_tests(struct page *page, pgprot_t prot)
+{
+	pte_t pte = mk_pte(page, prot);
+
+	WARN_ON(!pte_same(pte, pte));
+	WARN_ON(!pte_young(pte_mkyoung(pte)));
+	WARN_ON(!pte_dirty(pte_mkdirty(pte)));
+	WARN_ON(!pte_write(pte_mkwrite(pte)));
+	WARN_ON(pte_young(pte_mkold(pte)));
+	WARN_ON(pte_dirty(pte_mkclean(pte)));
+	WARN_ON(pte_write(pte_wrprotect(pte)));
+}
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE
+static void __init pmd_basic_tests(struct page *page, pgprot_t prot)
+{
+	pmd_t pmd;
+
+	/*
+	 * Memory block here must be PMD_SIZE aligned. Abort this
+	 * test in case we could not allocate such a memory block.
+	 */
+	if (!pmd_aligned) {
+		pr_warn("Could not proceed with PMD tests\n");
+		return;
+	}
+
+	pmd = mk_pmd(page, prot);
+	WARN_ON(!pmd_same(pmd, pmd));
+	WARN_ON(!pmd_young(pmd_mkyoung(pmd)));
+	WARN_ON(!pmd_dirty(pmd_mkdirty(pmd)));
+	WARN_ON(!pmd_write(pmd_mkwrite(pmd)));
+	WARN_ON(pmd_young(pmd_mkold(pmd)));
+	WARN_ON(pmd_dirty(pmd_mkclean(pmd)));
+	WARN_ON(pmd_write(pmd_wrprotect(pmd)));
+	/*
+	 * A huge page does not point to next level page table
+	 * entry. Hence this must qualify as pmd_bad().
+	 */
+	WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
+}
+#else
+static void __init pmd_basic_tests(struct page *page, pgprot_t prot) { }
+#endif
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void __init pud_basic_tests(struct page *page, pgprot_t prot)
+{
+	pud_t pud;
+
+	/*
+	 * Memory block here must be PUD_SIZE aligned. Abort this
+	 * test in case we could not allocate such a memory block.
+	 */
+	if (!pud_aligned) {
+		pr_warn("Could not proceed with PUD tests\n");
+		return;
+	}
+
+	pud = pfn_pud(page_to_pfn(page), prot);
+	WARN_ON(!pud_same(pud, pud));
+	WARN_ON(!pud_young(pud_mkyoung(pud)));
+	WARN_ON(!pud_write(pud_mkwrite(pud)));
+	WARN_ON(pud_write(pud_wrprotect(pud)));
+	WARN_ON(pud_young(pud_mkold(pud)));
+
+	if (mm_pmd_folded(mm) || __is_defined(ARCH_HAS_4LEVEL_HACK))
+		return;
+
+	/*
+	 * A huge page does not point to next level page table
+	 * entry. Hence this must qualify as pud_bad().
+	 */
+	WARN_ON(!pud_bad(pud_mkhuge(pud)));
+}
+#else
+static void __init pud_basic_tests(struct page *page, pgprot_t prot) { }
+#endif
+
+static void __init p4d_basic_tests(struct page *page, pgprot_t prot)
+{
+	p4d_t p4d;
+
+	memset(&p4d, RANDOM_NZVALUE, sizeof(p4d_t));
+	WARN_ON(!p4d_same(p4d, p4d));
+}
+
+static void __init pgd_basic_tests(struct page *page, pgprot_t prot)
+{
+	pgd_t pgd;
+
+	memset(&pgd, RANDOM_NZVALUE, sizeof(pgd_t));
+	WARN_ON(!pgd_same(pgd, pgd));
+}
+
+#ifndef __ARCH_HAS_4LEVEL_HACK
+static void __init pud_clear_tests(struct mm_struct *mm, pud_t *pudp)
+{
+	pud_t pud = READ_ONCE(*pudp);
+
+	if (mm_pmd_folded(mm))
+		return;
+
+	pud = __pud(pud_val(pud) | RANDOM_ORVALUE);
+	WRITE_ONCE(*pudp, pud);
+	pud_clear(pudp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!pud_none(pud));
+}
+
+static void __init pud_populate_tests(struct mm_struct *mm, pud_t *pudp,
+				      pmd_t *pmdp)
+{
+	pud_t pud;
+
+	if (mm_pmd_folded(mm))
+		return;
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pud_bad().
+	 */
+	pmd_clear(pmdp);
+	pud_clear(pudp);
+	pud_populate(mm, pudp, pmdp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(pud_bad(pud));
+}
+#else
+static void __init pud_clear_tests(struct mm_struct *mm, pud_t *pudp) { }
+static void __init pud_populate_tests(struct mm_struct *mm, pud_t *pudp,
+				      pmd_t *pmdp)
+{
+}
+#endif
+
+#ifndef __ARCH_HAS_5LEVEL_HACK
+static void __init p4d_clear_tests(struct mm_struct *mm, p4d_t *p4dp)
+{
+	p4d_t p4d = READ_ONCE(*p4dp);
+
+	if (mm_pud_folded(mm))
+		return;
+
+	p4d = __p4d(p4d_val(p4d) | RANDOM_ORVALUE);
+	WRITE_ONCE(*p4dp, p4d);
+	p4d_clear(p4dp);
+	p4d = READ_ONCE(*p4dp);
+	WARN_ON(!p4d_none(p4d));
+}
+
+static void __init p4d_populate_tests(struct mm_struct *mm, p4d_t *p4dp,
+				      pud_t *pudp)
+{
+	p4d_t p4d;
+
+	if (mm_pud_folded(mm))
+		return;
+
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as p4d_bad().
+	 */
+	pud_clear(pudp);
+	p4d_clear(p4dp);
+	p4d_populate(mm, p4dp, pudp);
+	p4d = READ_ONCE(*p4dp);
+	WARN_ON(p4d_bad(p4d));
+}
+
+static void __init pgd_clear_tests(struct mm_struct *mm, pgd_t *pgdp)
+{
+	pgd_t pgd = READ_ONCE(*pgdp);
+
+	if (mm_p4d_folded(mm))
+		return;
+
+	pgd = __pgd(pgd_val(pgd) | RANDOM_ORVALUE);
+	WRITE_ONCE(*pgdp, pgd);
+	pgd_clear(pgdp);
+	pgd = READ_ONCE(*pgdp);
+	WARN_ON(!pgd_none(pgd));
+}
+
+static void __init pgd_populate_tests(struct mm_struct *mm, pgd_t *pgdp,
+				      p4d_t *p4dp)
+{
+	pgd_t pgd;
+
+	if (mm_p4d_folded(mm))
+		return;
+
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pgd_bad().
+	 */
+	p4d_clear(p4dp);
+	pgd_clear(pgdp);
+	pgd_populate(mm, pgdp, p4dp);
+	pgd = READ_ONCE(*pgdp);
+	WARN_ON(pgd_bad(pgd));
+}
+#else
+static void __init p4d_clear_tests(struct mm_struct *mm, p4d_t *p4dp) { }
+static void __init pgd_clear_tests(struct mm_struct *mm, pgd_t *pgdp) { }
+static void __init p4d_populate_tests(struct mm_struct *mm, p4d_t *p4dp,
+				      pud_t *pudp)
+{
+}
+static void __init pgd_populate_tests(struct mm_struct *mm, pgd_t *pgdp,
+				      p4d_t *p4dp)
+{
+}
+#endif
+
+static void __init pte_clear_tests(struct mm_struct *mm, pte_t *ptep)
+{
+	pte_t pte = READ_ONCE(*ptep);
+
+	pte = __pte(pte_val(pte) | RANDOM_ORVALUE);
+	WRITE_ONCE(*ptep, pte);
+	pte_clear(mm, 0, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!pte_none(pte));
+}
+
+static void __init pmd_clear_tests(struct mm_struct *mm, pmd_t *pmdp)
+{
+	pmd_t pmd = READ_ONCE(*pmdp);
+
+	pmd = __pmd(pmd_val(pmd) | RANDOM_ORVALUE);
+	WRITE_ONCE(*pmdp, pmd);
+	pmd_clear(pmdp);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!pmd_none(pmd));
+}
+
+static void __init pmd_populate_tests(struct mm_struct *mm, pmd_t *pmdp,
+				      pgtable_t pgtable)
+{
+	pmd_t pmd;
+
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pmd_bad().
+	 */
+	pmd_clear(pmdp);
+	pmd_populate(mm, pmdp, pgtable);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(pmd_bad(pmd));
+}
+
+static struct page * __init alloc_mapped_page(void)
+{
+	struct page *page;
+	gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO;
+
+	page = alloc_gigantic_page_order(get_order(PUD_SIZE), gfp_mask,
+				first_memory_node, &node_states[N_MEMORY]);
+	if (page) {
+		pud_aligned = true;
+		pmd_aligned = true;
+		return page;
+	}
+
+	page = alloc_pages(gfp_mask, get_order(PMD_SIZE));
+	if (page) {
+		pmd_aligned = true;
+		return page;
+	}
+	return alloc_page(gfp_mask);
+}
+
+static void __init free_mapped_page(struct page *page)
+{
+	if (pud_aligned) {
+		unsigned long pfn = page_to_pfn(page);
+
+		free_contig_range(pfn, 1ULL << get_order(PUD_SIZE));
+		return;
+	}
+
+	if (pmd_aligned) {
+		int order = get_order(PMD_SIZE);
+
+		free_pages((unsigned long)page_address(page), order);
+		return;
+	}
+	free_page((unsigned long)page_address(page));
+}
+
+static unsigned long __init get_random_vaddr(void)
+{
+	unsigned long random_vaddr, random_pages, total_user_pages;
+
+	total_user_pages = (TASK_SIZE - FIRST_USER_ADDRESS) / PAGE_SIZE;
+
+	random_pages = get_random_long() % total_user_pages;
+	random_vaddr = FIRST_USER_ADDRESS + random_pages * PAGE_SIZE;
+
+	WARN_ON(random_vaddr > TASK_SIZE);
+	WARN_ON(random_vaddr < FIRST_USER_ADDRESS);
+	return random_vaddr;
+}
+
+static int __init arch_pgtable_tests_init(void)
+{
+	struct mm_struct *mm;
+	struct page *page;
+	pgd_t *pgdp;
+	p4d_t *p4dp, *saved_p4dp;
+	pud_t *pudp, *saved_pudp;
+	pmd_t *pmdp, *saved_pmdp, pmd;
+	pte_t *ptep;
+	pgtable_t saved_ptep;
+	pgprot_t prot;
+	unsigned long vaddr;
+
+	prot = vm_get_page_prot(VMFLAGS);
+	vaddr = get_random_vaddr();
+	mm = mm_alloc();
+	if (!mm) {
+		pr_err("mm_struct allocation failed\n");
+		return 1;
+	}
+
+	page = alloc_mapped_page();
+	if (!page) {
+		pr_err("memory allocation failed\n");
+		return 1;
+	}
+
+	pgdp = pgd_offset(mm, vaddr);
+	p4dp = p4d_alloc(mm, pgdp, vaddr);
+	pudp = pud_alloc(mm, p4dp, vaddr);
+	pmdp = pmd_alloc(mm, pudp, vaddr);
+	ptep = pte_alloc_map(mm, pmdp, vaddr);
+
+	/*
+	 * Save all the page table page addresses as the page table
+	 * entries will be used for testing with random or garbage
+	 * values. These saved addresses will be used for freeing
+	 * page table pages.
+	 */
+	pmd = READ_ONCE(*pmdp);
+	saved_p4dp = p4d_offset(pgdp, 0UL);
+	saved_pudp = pud_offset(p4dp, 0UL);
+	saved_pmdp = pmd_offset(pudp, 0UL);
+	saved_ptep = pmd_pgtable(pmd);
+
+	pte_basic_tests(page, prot);
+	pmd_basic_tests(page, prot);
+	pud_basic_tests(page, prot);
+	p4d_basic_tests(page, prot);
+	pgd_basic_tests(page, prot);
+
+	pte_clear_tests(mm, ptep);
+	pmd_clear_tests(mm, pmdp);
+	pud_clear_tests(mm, pudp);
+	p4d_clear_tests(mm, p4dp);
+	pgd_clear_tests(mm, pgdp);
+
+	pte_unmap(ptep);
+
+	pmd_populate_tests(mm, pmdp, saved_ptep);
+	pud_populate_tests(mm, pudp, saved_pmdp);
+	p4d_populate_tests(mm, p4dp, saved_pudp);
+	pgd_populate_tests(mm, pgdp, saved_p4dp);
+
+	p4d_free(mm, saved_p4dp);
+	pud_free(mm, saved_pudp);
+	pmd_free(mm, saved_pmdp);
+	pte_free(mm, saved_ptep);
+
+	mm_dec_nr_puds(mm);
+	mm_dec_nr_pmds(mm);
+	mm_dec_nr_ptes(mm);
+	__mmdrop(mm);
+
+	free_mapped_page(page);
+	return 0;
+}
+late_initcall(arch_pgtable_tests_init);
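
As an illustration only, not part of the patch, enabling the test on a
supported platform reduces to setting the new Kconfig option; the fragment
below assumes a configuration that already satisfies the MMU, DEBUG_KERNEL
and !(ARM || IA64) dependencies listed above.

# Illustrative .config fragment for building the boot-time test
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_ARCH_PGTABLE_TEST=y

Because the test is registered through late_initcall(), it runs once towards
the end of boot and reports non-compliant helpers via WARN_ON() backtraces;
when a suitably aligned memory block cannot be allocated, the dependent
PMD/PUD tests are skipped with a pr_warn() message instead.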