From patchwork Mon May 31 08:45:40 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 12288915
From: Pingfan Liu <kernelfans@gmail.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Ard Biesheuvel,
    Marc Zyngier, Kristina Martsenko, James Morse, Steven Price,
    Jonathan Cameron, Pavel Tatashin, Anshuman Khandual, Atish Patra,
    Mike Rapoport, Logan Gunthorpe, Mark Brown
Subject: [PATCHv3 5/5] arm64/mm: use __create_pgd_mapping() to create pgtable for idmap_pg_dir and init_pg_dir
Date: Mon, 31 May 2021 04:45:40 -0400
Message-Id: <20210531084540.78546-6-kernelfans@gmail.com>
In-Reply-To: <20210531084540.78546-1-kernelfans@gmail.com>
References: <20210531084540.78546-1-kernelfans@gmail.com>

Now everything is ready for calling __create_pgd_mapping() from head.S.
Switch to these C routines and remove the asm counterparts.
This patch has been successfully tested with the following config values:

  PAGE_SIZE  VA  PA  PGTABLE_LEVEL
  4K         48  48  4
  4K         39  48  3
  16K        48  48  4
  16K        47  48  3
  64K        52  52  3
  64K        48  52  3
  64K        42  52  2

Signed-off-by: Pingfan Liu
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Cc: Marc Zyngier
Cc: Kristina Martsenko
Cc: James Morse
Cc: Steven Price
Cc: Jonathan Cameron
Cc: Pavel Tatashin
Cc: Anshuman Khandual
Cc: Atish Patra
Cc: Mike Rapoport
Cc: Logan Gunthorpe
Cc: Mark Brown
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/kernel-pgtable.h |  12 +-
 arch/arm64/include/asm/pgalloc.h        |   9 ++
 arch/arm64/kernel/head.S                | 164 +++++++-----------------
 arch/arm64/mm/mmu.c                     |   7 +-
 4 files changed, 60 insertions(+), 132 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 249ab132fced..121856008a0e 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -108,15 +108,11 @@
 /*
  * Initial memory map attributes.
+ * When using ARM64_SWAPPER_USES_SECTION_MAPS, init_pmd()->pmd_set_huge()
+ * sets up section mapping.
  */
-#define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
-
-#if ARM64_SWAPPER_USES_SECTION_MAPS
-#define SWAPPER_MM_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
-#else
-#define SWAPPER_MM_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
-#endif
+#define SWAPPER_PAGE_FLAGS	((PTE_TYPE_PAGE | PTE_AF | PTE_SHARED) | \
+				 PTE_ATTRINDX(MT_NORMAL))
 
 /*
  * To make optimal use of block mappings when laying out the linear
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 31fbab3d6f99..9a6fb96ff291 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -8,6 +8,8 @@
 #ifndef __ASM_PGALLOC_H
 #define __ASM_PGALLOC_H
 
+#ifndef __ASSEMBLY__
+
 #include
 #include
 #include
@@ -89,3 +91,10 @@ pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
 #endif
+
+#define NO_BLOCK_MAPPINGS	BIT(0)
+#define NO_CONT_MAPPINGS	BIT(1)
+#define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
+#define BOOT_HEAD		BIT(3)
+
+#endif
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 96873dfa67fd..7158987f52b1 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -93,6 +94,8 @@ SYM_CODE_START(primary_entry)
 	adrp	x23, __PHYS_OFFSET
 	and	x23, x23, MIN_KIMG_ALIGN - 1	// KASLR offset, defaults to 0
 	bl	set_cpu_boot_mode_flag
+	adrp	x4, init_thread_union
+	add	sp, x4, #THREAD_SIZE
 	bl	__create_page_tables
 	/*
	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -144,112 +147,6 @@ SYM_CODE_END(preserve_boot_args)
 	add	\tbl, \tbl, #PAGE_SIZE		// next level table page
 	.endm
 
-/*
- * Macro to populate page table entries, these entries can be pointers to the next level
- * or last level entries pointing to physical memory.
- *
- * tbl:	page table address
- * rtbl:	pointer to page table or physical memory
- * index:	start index to write
- * eindex:	end index to write - [index, eindex] written to
- * flags:	flags for pagetable entry to or in
- * inc:	increment to rtbl between each entry
- * tmp1:	temporary variable
- *
- * Preserves:	tbl, eindex, flags, inc
- * Corrupts:	index, tmp1
- * Returns:	rtbl
- */
-	.macro populate_entries, tbl, rtbl, index, eindex, flags, inc, tmp1
-.Lpe\@:	phys_to_pte \tmp1, \rtbl
-	orr	\tmp1, \tmp1, \flags	// tmp1 = table entry
-	str	\tmp1, [\tbl, \index, lsl #3]
-	add	\rtbl, \rtbl, \inc	// rtbl = pa next level
-	add	\index, \index, #1
-	cmp	\index, \eindex
-	b.ls	.Lpe\@
-	.endm
-
-/*
- * Compute indices of table entries from virtual address range. If multiple entries
- * were needed in the previous page table level then the next page table level is assumed
- * to be composed of multiple pages. (This effectively scales the end index).
- *
- * vstart:	virtual address of start of range
- * vend:	virtual address of end of range
- * shift:	shift used to transform virtual address into index
- * ptrs:	number of entries in page table
- * istart:	index in table corresponding to vstart
- * iend:	index in table corresponding to vend
- * count:	On entry: how many extra entries were required in previous level, scales
- *		our end index.
- *		On exit: returns how many extra entries required for next page table level
- *
- * Preserves:	vstart, vend, shift, ptrs
- * Returns:	istart, iend, count
- */
-	.macro compute_indices, vstart, vend, shift, ptrs, istart, iend, count
-	lsr	\iend, \vend, \shift
-	mov	\istart, \ptrs
-	sub	\istart, \istart, #1
-	and	\iend, \iend, \istart	// iend = (vend >> shift) & (ptrs - 1)
-	mov	\istart, \ptrs
-	mul	\istart, \istart, \count
-	add	\iend, \iend, \istart	// iend += (count - 1) * ptrs
-					// our entries span multiple tables
-
-	lsr	\istart, \vstart, \shift
-	mov	\count, \ptrs
-	sub	\count, \count, #1
-	and	\istart, \istart, \count
-
-	sub	\count, \iend, \istart
-	.endm
-
-/*
- * Map memory for specified virtual address range. Each level of page table needed supports
- * multiple entries. If a level requires n entries the next page table level is assumed to be
- * formed from n pages.
- *
- * tbl:	location of page table
- * rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
- * vstart:	start address to map
- * vend:	end address to map - we map [vstart, vend]
- * flags:	flags to use to map last level entries
- * phys:	physical address corresponding to vstart - physical memory is contiguous
- * pgds:	the number of pgd entries
- *
- * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
- * Preserves:	vstart, vend, flags
- * Corrupts:	tbl, rtbl, istart, iend, tmp, count, sv
- */
-	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
-	add	\rtbl, \tbl, #PAGE_SIZE
-	mov	\sv, \rtbl
-	mov	\count, #0
-	compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count
-	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
-	mov	\tbl, \sv
-	mov	\sv, \rtbl
-
-#if SWAPPER_PGTABLE_LEVELS > 3
-	compute_indices \vstart, \vend, #PUD_SHIFT, #PTRS_PER_PUD, \istart, \iend, \count
-	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
-	mov	\tbl, \sv
-	mov	\sv, \rtbl
-#endif
-
-#if SWAPPER_PGTABLE_LEVELS > 2
-	compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #PTRS_PER_PMD, \istart, \iend, \count
-	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
-	mov	\tbl, \sv
-#endif
-
-	compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count
-	bic	\count, \phys, #SWAPPER_BLOCK_SIZE - 1
-	populate_entries \tbl, \count, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
-	.endm
-
 /*
  * Setup the initial page tables. We only setup the barest amount which is
  * required to get the kernel running. The following sections are required:
@@ -284,8 +181,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	subs	x1, x1, #64
 	b.ne	1b
 
-	mov	x7, SWAPPER_MM_MMUFLAGS
-
 	/*
	 * Create the identity mapping.
	 */
@@ -357,21 +252,54 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	mov	x5, x3				// __pa(__idmap_text_start)
 	adr_l	x6, __idmap_text_end		// __pa(__idmap_text_end)
 
-	map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14
+	/*
+	 * x0 points to either idmap_pg_dir or idmap_pg_dir + PAGE_SIZE
+	 */
+	stp	x0, x1, [sp, #-64]!
+	stp	x2, x3, [sp, #48]
+	stp	x4, x5, [sp, #32]
+	stp	x6, x7, [sp, #16]
+
+	adrp	x1, idmap_pg_end
+	sub	x1, x1, x0
+	bl	set_cur_headpool
+	mov	x0, #0
+	bl	head_pgtable_alloc	// return x0, containing the appropriate pgtable level
+
+	adrp	x1, __idmap_text_start
+	adrp	x2, __idmap_text_start	// va = pa for idmap
+	adr_l	x3, __idmap_text_end
+	sub	x3, x3, x1
+	ldr	x4, =SWAPPER_PAGE_FLAGS
+	adr_l	x5, head_pgtable_alloc
+	mov	x6, #BOOT_HEAD
+	bl	__create_pgd_mapping
 
 	/*
	 * Map the kernel image (starting with PHYS_OFFSET).
	 */
 	adrp	x0, init_pg_dir
-	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
-	add	x5, x5, x23			// add KASLR displacement
-	mov	x4, PTRS_PER_PGD
-	adrp	x6, _end			// runtime __pa(_end)
-	adrp	x3, _text			// runtime __pa(_text)
-	sub	x6, x6, x3			// _end - _text
-	add	x6, x6, x5			// runtime __va(_end)
-
-	map_memory x0, x1, x5, x6, x7, x3, x4, x10, x11, x12, x13, x14
+	adrp	x1, init_pg_end
+	sub	x1, x1, x0
+	bl	set_cur_headpool
+	mov	x0, #0
+	bl	head_pgtable_alloc		// return x0, containing init_pg_dir
+
+	adrp	x1, _text			// runtime __pa(_text)
+	mov_q	x2, KIMAGE_VADDR		// compile time __va(_text)
+	add	x2, x2, x23			// add KASLR displacement
+	adrp	x3, _end			// runtime __pa(_end)
+	sub	x3, x3, x1			// _end - _text
+
+	ldr	x4, =SWAPPER_PAGE_FLAGS
+	adr_l	x5, head_pgtable_alloc
+	mov	x6, #BOOT_HEAD
+	bl	__create_pgd_mapping
+
+	ldp	x6, x7, [sp, #16]
+	ldp	x4, x5, [sp, #32]
+	ldp	x2, x3, [sp, #48]
+	ldp	x0, x1, [sp], #64
 
 	/*
	 * Since the page tables have been populated with non-cacheable
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5f717552b524..b3295523607e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -37,11 +37,6 @@
 #include
 #include
 
-#define NO_BLOCK_MAPPINGS	BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-#define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
-#define BOOT_HEAD		BIT(3)
-
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 
@@ -420,7 +415,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 	pud_clear_fixmap();
 }
 
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 			 unsigned long virt, phys_addr_t size,
 			 pgprot_t prot,
 			 phys_addr_t (*pgtable_alloc)(int),