From patchwork Thu Nov 24 12:39:19 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054932
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
	Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
	Ryan Roberts
Subject: [PATCH v2 06/19] arm64: head: remove order argument from early mapping routine
Date: Thu, 24 Nov 2022 13:39:19 +0100
Message-Id: <20221124123932.2648991-7-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

When creating mappings in the upper region of the address space, it is
important to know the order of the table being created, i.e., the
number of bits that are being translated at the level in question. Bits
beyond that number do not contribute to the virtual address, and need
to be masked out.

Now that we no longer use the asm kernel page creation code for
mappings in the upper region, those bits are guaranteed to be zero
anyway, so we don't have to account for them in the masking. This means
we can simply use the maximum order for all tables, including the root
level table. Doing so will also allow us to transparently use the same
routines for creating the initial ID map covering 4 levels when the VA
space is configured for 5.

Note that the root level tables are always statically allocated as full
pages regardless of how many VA bits they translate.
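For illustration, here is a minimal C sketch (not part of the patch
itself) of what the reworked compute_indices macro computes: each level
now extracts the maximum of PAGE_SHIFT - 3 index bits, i.e., 9 bits
(512 entries per table) for 4K pages or 13 bits (8192 entries) for 64K
pages. The TABLE_ORDER name and the function signature below are
illustrative only, not taken from head.S.

#define TABLE_ORDER	(PAGE_SHIFT - 3)	/* log2(max entries per table) */

static void compute_indices(unsigned long vstart, unsigned long vend,
			    unsigned int shift, unsigned long *istart,
			    unsigned long *iend, unsigned long *count)
{
	unsigned long mask = (1UL << TABLE_ORDER) - 1;

	*istart = (vstart >> shift) & mask;	/* ubfx \istart, \vstart, ... */
	*iend = (vend >> shift) & mask;		/* ubfx \iend, \vend, ... */
	*iend += *count << TABLE_ORDER;		/* add \iend, \iend, \count, lsl */
	*count = *iend - *istart;		/* sub \count, \iend, \istart */
}
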
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3b3c5e8e84af..a37525a5ee34 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -158,7 +158,6 @@ SYM_CODE_END(preserve_boot_args)
  * vstart: virtual address of start of range
  * vend:   virtual address of end of range - we map [vstart, vend]
  * shift:  shift used to transform virtual address into index
- * order:  #imm 2log(number of entries in page table)
  * istart: index in table corresponding to vstart
  * iend:   index in table corresponding to vend
  * count:  On entry: how many extra entries were required in previous level, scales
@@ -168,10 +167,10 @@ SYM_CODE_END(preserve_boot_args)
  * Preserves: vstart, vend
  * Returns:   istart, iend, count
  */
-	.macro compute_indices, vstart, vend, shift, order, istart, iend, count
-	ubfx	\istart, \vstart, \shift, \order
-	ubfx	\iend, \vend, \shift, \order
-	add	\iend, \iend, \count, lsl \order
+	.macro compute_indices, vstart, vend, shift, istart, iend, count
+	ubfx	\istart, \vstart, \shift, #PAGE_SHIFT - 3
+	ubfx	\iend, \vend, \shift, #PAGE_SHIFT - 3
+	add	\iend, \iend, \count, lsl #PAGE_SHIFT - 3
 	sub	\count, \iend, \istart
 	.endm
 
@@ -186,7 +185,6 @@ SYM_CODE_END(preserve_boot_args)
  * vend:  virtual address of end of range - we map [vstart, vend - 1]
  * flags: flags to use to map last level entries
  * phys:  physical address corresponding to vstart - physical memory is contiguous
- * order: #imm 2log(number of entries in PGD table)
  *
  * If extra_shift is set, an extra level will be populated if the end address does
  * not fit in 'extra_shift' bits. This assumes vend is in the TTBR0 range.
@@ -195,7 +193,7 @@ SYM_CODE_END(preserve_boot_args)
  * Preserves: vstart, flags
  * Corrupts:  tbl, rtbl, vend, istart, iend, tmp, count, sv
  */
-	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, order, istart, iend, tmp, count, sv, extra_shift
+	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, istart, iend, tmp, count, sv, extra_shift
 	sub	\vend, \vend, #1
 	add	\rtbl, \tbl, #PAGE_SIZE
 	mov	\count, #0
@@ -203,32 +201,32 @@ SYM_CODE_END(preserve_boot_args)
 	.ifnb	\extra_shift
 	tst	\vend, #~((1 << (\extra_shift)) - 1)
 	b.eq	.L_\@
-	compute_indices \vstart, \vend, #\extra_shift, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	compute_indices \vstart, \vend, #\extra_shift, \istart, \iend, \count
 	mov	\sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov	\tbl, \sv
 	.endif
 .L_\@:
-	compute_indices \vstart, \vend, #PGDIR_SHIFT, #\order, \istart, \iend, \count
+	compute_indices \vstart, \vend, #PGDIR_SHIFT, \istart, \iend, \count
 	mov	\sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov	\tbl, \sv
 
 #if INIT_IDMAP_TABLE_LEVELS > 3
-	compute_indices \vstart, \vend, #PUD_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	compute_indices \vstart, \vend, #PUD_SHIFT, \istart, \iend, \count
 	mov	\sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov	\tbl, \sv
 #endif
 
 #if INIT_IDMAP_TABLE_LEVELS > 2
-	compute_indices \vstart, \vend, #INIT_IDMAP_TABLE_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	compute_indices \vstart, \vend, #INIT_IDMAP_TABLE_SHIFT, \istart, \iend, \count
 	mov	\sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov	\tbl, \sv
 #endif
 
-	compute_indices \vstart, \vend, #INIT_IDMAP_BLOCK_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	compute_indices \vstart, \vend, #INIT_IDMAP_BLOCK_SHIFT, \istart, \iend, \count
 	bic	\rtbl, \phys, #INIT_IDMAP_BLOCK_SIZE - 1
 	populate_entries \tbl, \rtbl, \istart, \iend, \flags, #INIT_IDMAP_BLOCK_SIZE, \tmp
 	.endm
@@ -294,7 +292,6 @@ SYM_FUNC_START_LOCAL(create_idmap)
  * requires more than 47 or 48 bits, respectively.
  */
 #if (VA_BITS < 48)
-#define IDMAP_PGD_ORDER	(VA_BITS - PGDIR_SHIFT)
 #define EXTRA_SHIFT	(PGDIR_SHIFT + PAGE_SHIFT - 3)
 
 /*
@@ -308,7 +305,6 @@ SYM_FUNC_START_LOCAL(create_idmap)
 #error "Mismatch between VA_BITS and page size/number of translation levels"
 #endif
 #else
-#define IDMAP_PGD_ORDER	(PHYS_MASK_SHIFT - PGDIR_SHIFT)
 #define EXTRA_SHIFT
 /*
  * If VA_BITS == 48, we don't have to configure an additional
@@ -320,7 +316,7 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	adrp	x6, _end + MAX_FDT_SIZE + INIT_IDMAP_BLOCK_SIZE
 	mov	x7, INIT_IDMAP_RX_MMUFLAGS
 
-	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
+	map_memory x0, x1, x3, x6, x7, x3, x10, x11, x12, x13, x14, EXTRA_SHIFT
 
 	/* Remap BSS and the kernel page tables r/w in the ID map */
 	adrp	x1, _text