From patchwork Wed Aug 11 00:42:55 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429685
From: Vineet Gupta <vgupta@kernel.org>
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 15/18] ARC: mm: support 3 levels of page tables
Date: Tue, 10 Aug 2021 17:42:55 -0700
Message-Id: <20210811004258.138075-16-vgupta@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

ARCv2 MMU is software walked and Linux currently implements 2 levels of
paging: pgd/pte. Forthcoming hardware will have multiple levels, so this
change preps the mm code for the same. It is also fun to try multiple levels
even on software-walked code, to ensure the generic mm code is robust enough
to handle it.

overview
________

2 levels {pgd, pte}: pmd is folded, but the pmd_* macros remain valid and
operate on the pgd.

3 levels {pgd, pmd, pte}:
  - pud is folded and the pud_* macros point to the pgd
  - the pmd_* macros operate on the actual pmd

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 59d5b2a179f6..43cb8aaf57a2 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -314,6 +314,10 @@ config ARC_HUGEPAGE_16M
 
 endchoice
 
+config PGTABLE_LEVELS
+	int "Number of Page table levels"
+	default 2
+
 config ARC_COMPACT_IRQ_LEVELS
 	depends on ISA_ARCOMPACT
 	bool "Setup Timer IRQ as high Priority"
diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index 313e6f543d2d..df3cc154ae4a 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -41,6 +41,17 @@ typedef struct {
 #define pgd_val(x)	((x).pgd)
 #define __pgd(x)	((pgd_t) { (x) })
 
+#if CONFIG_PGTABLE_LEVELS > 2
+
+typedef struct {
+	unsigned long pmd;
+} pmd_t;
+
+#define pmd_val(x)	((x).pmd)
+#define __pmd(x)	((pmd_t) { (x) })
+
+#endif
+
 typedef struct {
 #ifdef CONFIG_ARC_HAS_PAE40
 	unsigned long long pte;
diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h
index 0cf73431eb89..01c2d84418ed 100644
--- a/arch/arc/include/asm/pgalloc.h
+++ b/arch/arc/include/asm/pgalloc.h
@@ -86,6 +86,28 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 
 }
 
+#if CONFIG_PGTABLE_LEVELS > 2
+
+static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
+{
+	set_pud(pudp, __pud((unsigned long)pmdp));
+}
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return (pmd_t *)__get_free_page(
+		GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_ZERO);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	free_page((unsigned long)pmd);
+}
+
+#define __pmd_free_tlb(tlb, pmd, addr)	pmd_free((tlb)->mm, pmd)
+
+#endif
+
 /*
  * With software-only page-tables, addr-split for traversal is tweakable and
  * that directly governs how big tables would be at each level.
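Not part of the patch, but to make the new <4> : <7> : <8> : <13> split
concrete before the pgtable-levels.h hunk below: a minimal user-space sketch
of how a 32-bit virtual address decomposes under the 3 level, 8K-page layout.
The shift constants mirror the patch; the program, variable names and example
address are illustrative only.

#include <stdio.h>

/* 3 level, 8K-page split from pgtable-levels.h: <4> : <7> : <8> : <13> */
#define PAGE_SHIFT	13
#define PMD_SHIFT	21
#define PGDIR_SHIFT	28

#define PTRS_PER_PGD	(1U << (32 - PGDIR_SHIFT))	  /* 16 pgd entries  */
#define PTRS_PER_PMD	(1U << (PGDIR_SHIFT - PMD_SHIFT)) /* 128 pmd entries */
#define PTRS_PER_PTE	(1U << (PMD_SHIFT - PAGE_SHIFT))  /* 256 pte entries */

int main(void)
{
	unsigned long vaddr = 0x70002a38;	/* arbitrary example address */
	unsigned long pgd_idx = (vaddr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
	unsigned long pmd_idx = (vaddr >> PMD_SHIFT)   & (PTRS_PER_PMD - 1);
	unsigned long pte_idx = (vaddr >> PAGE_SHIFT)  & (PTRS_PER_PTE - 1);
	unsigned long offset  = vaddr & ((1UL << PAGE_SHIFT) - 1);

	/* prints: pgd 7, pmd 0, pte 1, offset 0xa38 */
	printf("pgd %lu, pmd %lu, pte %lu, offset 0x%lx\n",
	       pgd_idx, pmd_idx, pte_idx, offset);
	return 0;
}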
diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h
index 8ece75335bb5..1c2f022d4ad0 100644
--- a/arch/arc/include/asm/pgtable-levels.h
+++ b/arch/arc/include/asm/pgtable-levels.h
@@ -10,6 +10,8 @@
 #ifndef _ASM_ARC_PGTABLE_LEVELS_H
 #define _ASM_ARC_PGTABLE_LEVELS_H
 
+#if CONFIG_PGTABLE_LEVELS == 2
+
 /*
  * 2 level paging setup for software walked MMUv3 (ARC700) and MMUv4 (HS)
  *
@@ -37,16 +39,38 @@
 #define PGDIR_SHIFT		21
 #endif
 
-#define PGDIR_SIZE		BIT(PGDIR_SHIFT)	/* vaddr span, not PDG sz */
-#define PGDIR_MASK		(~(PGDIR_SIZE - 1))
+#else
+
+/*
+ * A default 3 level paging testing setup in software walked MMU
+ *   MMUv4 (8K page): <4> : <7> : <8> : <13>
+ */
+#define PGDIR_SHIFT		28
+#if CONFIG_PGTABLE_LEVELS > 2
+#define PMD_SHIFT		21
+#endif
+
+#endif
+
+#define PGDIR_SIZE		BIT(PGDIR_SHIFT)
+#define PGDIR_MASK		(~(PGDIR_SIZE - 1))
 #define PTRS_PER_PGD		BIT(32 - PGDIR_SHIFT)
 
-#define PTRS_PER_PTE		BIT(PGDIR_SHIFT - PAGE_SHIFT)
+#if CONFIG_PGTABLE_LEVELS > 2
+#define PMD_SIZE		BIT(PMD_SHIFT)
+#define PMD_MASK		(~(PMD_SIZE - 1))
+#define PTRS_PER_PMD		BIT(PGDIR_SHIFT - PMD_SHIFT)
+#endif
+
+#define PTRS_PER_PTE		BIT(PMD_SHIFT - PAGE_SHIFT)
 
 #ifndef __ASSEMBLY__
 
+#if CONFIG_PGTABLE_LEVELS > 2
+#include <asm-generic/pgtable-nopud.h>
+#else
 #include <asm-generic/pgtable-nopmd.h>
+#endif
 
 /*
  * 1st level paging: pgd
@@ -57,9 +81,35 @@
 #define pgd_ERROR(e) \
 	pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
 
+#if CONFIG_PGTABLE_LEVELS > 2
+
+/* In 3 level paging, pud_* macros work on pgd */
+#define pud_none(x)		(!pud_val(x))
+#define pud_bad(x)		((pud_val(x) & ~PAGE_MASK))
+#define pud_present(x)		(pud_val(x))
+#define pud_clear(xp)		do { pud_val(*(xp)) = 0; } while (0)
+#define pud_pgtable(pud)	((pmd_t *)(pud_val(pud) & PAGE_MASK))
+#define pud_page(pud)		virt_to_page(pud_pgtable(pud))
+#define set_pud(pudp, pud)	(*(pudp) = pud)
+
+/*
+ * 2nd level paging: pmd
+ */
+#define pmd_ERROR(e) \
+	pr_crit("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
+
+#define pmd_pfn(pmd)		((pmd_val(pmd) & PMD_MASK) >> PAGE_SHIFT)
+#define pfn_pmd(pfn,prot)	__pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot)
+
+#endif
+
 /*
- * Due to the strange way generic pgtable level folding works, in a 2 level
- * setup, pmd_val() returns pgd, so these pmd_* macros actually work on pgd
+ * Due to the strange way generic pgtable level folding works, the pmd_* macros
+ * - are valid even for 2 levels (which supposedly only has pgd - pte)
+ * - behave differently for 2 vs. 3
+ * In 2 level paging (pgd -> pte), pmd_* macros work on pgd
+ * In 3+ level paging (pgd -> pmd -> pte), pmd_* macros work on pmd
  */
 #define pmd_none(x)		(!pmd_val(x))
 #define pmd_bad(x)		((pmd_val(x) & ~PAGE_MASK))
@@ -70,6 +120,9 @@
 #define set_pmd(pmdp, pmd)	(*(pmdp) = pmd)
 #define pmd_pgtable(pmd)	((pgtable_t) pmd_page_vaddr(pmd))
 
+/*
+ * 3rd level paging: pte
+ */
 #define pte_ERROR(e) \
 	pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
 
diff --git a/arch/arc/include/asm/processor.h b/arch/arc/include/asm/processor.h
index e4031ecd3c8c..f28afcf5c6d1 100644
--- a/arch/arc/include/asm/processor.h
+++ b/arch/arc/include/asm/processor.h
@@ -93,7 +93,7 @@ extern unsigned int get_wchan(struct task_struct *p);
 #define VMALLOC_START	(PAGE_OFFSET - (CONFIG_ARC_KVADDR_SIZE << 20))
 
 /* 1 PGDIR_SIZE each for fixmap/pkmap, 2 PGDIR_SIZE gutter (see asm/highmem.h) */
-#define VMALLOC_SIZE	((CONFIG_ARC_KVADDR_SIZE << 20) - PGDIR_SIZE * 4)
+#define VMALLOC_SIZE	((CONFIG_ARC_KVADDR_SIZE << 20) - PMD_SIZE * 4)
 
 #define VMALLOC_END	(VMALLOC_START + VMALLOC_SIZE)
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 41f154320964..8da2f0ad8c69 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -39,6 +39,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
 	if (!pgd_present(*pgd_k))
 		goto bad_area;
 
+	set_pgd(pgd, *pgd_k);
+
 	p4d = p4d_offset(pgd, address);
 	p4d_k = p4d_offset(pgd_k, address);
 	if (!p4d_present(*p4d_k))
@@ -49,6 +51,8 @@
 	if (!pud_present(*pud_k))
 		goto bad_area;
 
+	set_pud(pud, *pud_k);
+
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
 	if (!pmd_present(*pmd_k))
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 34f16e0b41e6..77da83569b36 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -658,8 +658,8 @@ char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len)
 			  IS_USED_CFG(CONFIG_TRANSPARENT_HUGEPAGE));
 
 	n += scnprintf(buf + n, len - n,
-		      "MMU [v%x]\t: %dk PAGE, %sJTLB %d (%dx%d), uDTLB %d, uITLB %d%s%s\n",
-		       p_mmu->ver, p_mmu->pg_sz_k, super_pg,
+		      "MMU [v%x]\t: %dk PAGE, %s, swalk %d lvl, JTLB %d (%dx%d), uDTLB %d, uITLB %d%s%s\n",
+		       p_mmu->ver, p_mmu->pg_sz_k, super_pg, CONFIG_PGTABLE_LEVELS,
 		       p_mmu->sets * p_mmu->ways, p_mmu->sets, p_mmu->ways,
 		       p_mmu->u_dtlb, p_mmu->u_itlb,
 		       IS_AVAIL2(p_mmu->pae, ", PAE40 ", CONFIG_ARC_HAS_PAE40));
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index d08bd09a0afc..5f6bfdfda1be 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -173,6 +173,15 @@ ex_saved_reg1:
 	tst	r3, r3
 	bz	do_slow_path_pf		; if no Page Table, do page fault
 
+#if CONFIG_PGTABLE_LEVELS > 2
+	lsr	r0, r2, PMD_SHIFT	; Bits for indexing into PMD
+	and	r0, r0, (PTRS_PER_PMD - 1)
+	ld.as	r1, [r3, r0]		; PMD entry
+	tst	r1, r1
+	bz	do_slow_path_pf
+	mov	r3, r1
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	and.f	0, r3, _PAGE_HW_SZ	; Is this Huge PMD (thp)
 	add2.nz	r1, r1, r0
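For readers not fluent in ARC assembly, here is a hedged C rendition (not part
of the patch) of the '#if CONFIG_PGTABLE_LEVELS > 2' block added to the
TLB-miss fast path above. It shows the whole cost of the extra level: one
shift, one mask, one scaled load, and a branch to the slow path on an empty
entry. 'table' plays the role of r3 (current table pointer) and 'vaddr' of r2
(faulting address); the function name and types are illustrative, not kernel
API.

/* Constants as established by pgtable-levels.h for the 3 level split */
#define PMD_SHIFT	21
#define PTRS_PER_PMD	(1U << (28 - PMD_SHIFT))  /* PGDIR_SHIFT - PMD_SHIFT */

/* Returns the next-level table, or NULL to take do_slow_path_pf */
static unsigned long *pmd_step(unsigned long *table, unsigned long vaddr)
{
	unsigned long idx   = (vaddr >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
							/* lsr; and        */
	unsigned long entry = table[idx];		/* ld.as r1,[r3,r0] */

	if (!entry)					/* tst; bz          */
		return NULL;
	return (unsigned long *)entry;			/* mov r3, r1       */
}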