From patchwork Mon Dec 18 21:47:36 2017
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10121913
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Subject: [PATCH V2 7/7] arm64: mm: Add 48/52-bit kernel VA support
Date: Mon, 18 Dec 2017 21:47:36 +0000
Message-Id: <20171218214736.13761-8-steve.capper@arm.com>
In-Reply-To: <20171218214736.13761-1-steve.capper@arm.com>
References: <20171218214736.13761-1-steve.capper@arm.com>
Cc: catalin.marinas@arm.com, Steve Capper, ard.biesheuvel@linaro.org

Add the option to use 52-bit VA support upon availability at boot. We use
the same KASAN_SHADOW_OFFSET for both 48 and 52 bit VA spaces as in both
cases the start and end of the KASAN shadow region are PGD aligned.

From ID_AA64MMFR2, we check the LVA field on very early boot and set the
VA size, PGDIR_SHIFT and TCR.T[01]SZ values, which then influence how the
rest of the memory system behaves.

Note that userspace addresses will still be capped at 48 bits. More
patches are needed to deal with scenarios where the user provides a
MAP_FIXED hint and a high address to mmap.
Signed-off-by: Steve Capper
---
 arch/arm64/Kconfig              |  8 ++++++++
 arch/arm64/include/asm/memory.h |  4 ++++
 arch/arm64/mm/proc.S            | 13 +++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a42edc18718..3fa5342849dc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -262,6 +262,7 @@ config PGTABLE_LEVELS
 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
 	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
 	default 3 if ARM64_64K_PAGES && ARM64_VA_BITS_48
+	default 3 if ARM64_64K_PAGES && ARM64_VA_BITS_48_52
 	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
@@ -275,6 +276,7 @@ config ARCH_PROC_KCORE_TEXT
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
+	default 0xdfffa00000000000 if ARM64_VA_BITS_48_52
 	default 0xdfffa00000000000 if ARM64_VA_BITS_48
 	default 0xdfffd00000000000 if ARM64_VA_BITS_47
 	default 0xdffffe8000000000 if ARM64_VA_BITS_42
@@ -656,6 +658,10 @@ config ARM64_VA_BITS_47
 config ARM64_VA_BITS_48
 	bool "48-bit"
 
+config ARM64_VA_BITS_48_52
+	bool "48 or 52-bit (decided at boot time)"
+	depends on ARM64_64K_PAGES
+
 endchoice
 
 config ARM64_VA_BITS
@@ -665,9 +671,11 @@ config ARM64_VA_BITS
 	default 42 if ARM64_VA_BITS_42
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48
+	default 48 if ARM64_VA_BITS_48_52
 
 config ARM64_VA_BITS_ALT
 	bool
+	default y if ARM64_VA_BITS_48_52
 	default n
 
 config CPU_BIG_ENDIAN
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 2c11df336109..417b70bb50be 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -75,6 +75,10 @@
 #define _VA_START(va)	(UL(0xffffffffffffffff) - \
 			(UL(1) << ((va) - 1)) + 1)
 
+#ifdef CONFIG_ARM64_VA_BITS_48_52
+#define VA_BITS_ALT	(52)
+#endif
+
 #define KERNEL_START	_text
 #define KERNEL_END	_end
 
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 16564324c957..42a91a4a1126 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -259,9 +259,22 @@ ENTRY(__cpu_setup)
 ENDPROC(__cpu_setup)
 
 ENTRY(__setup_va_constants)
+#ifdef CONFIG_ARM64_VA_BITS_48_52
+	mrs_s	x5, SYS_ID_AA64MMFR2_EL1
+	and	x5, x5, #0xf << ID_AA64MMFR2_LVA_SHIFT
+	cmp	x5, #1 << ID_AA64MMFR2_LVA_SHIFT
+	b.ne	1f
+	mov	x0, #VA_BITS_ALT
+	mov	x1, TCR_T0SZ(VA_BITS_ALT)
+	mov	x2, #1 << (VA_BITS_ALT - PGDIR_SHIFT)
+	b	2f
+#endif
+
+1:
 	mov	x0, #VA_BITS_MIN
 	mov	x1, TCR_T0SZ(VA_BITS_MIN)
 	mov	x2, #1 << (VA_BITS_MIN - PGDIR_SHIFT)
+2:
 	str_l	x0, vabits_actual, x5
 	str_l	x1, idmap_t0sz, x5
 	str_l	x2, ptrs_per_pgd, x5