From patchwork Wed Aug 29 12:45:39 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10580289
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, ard.biesheuvel@linaro.org
Subject: [PATCH 1/5] arm64: mm: Introduce DEFAULT_MAP_WINDOW
Date: Wed, 29 Aug 2018 13:45:39 +0100
Message-Id: <20180829124543.25314-2-steve.capper@arm.com>
In-Reply-To: <20180829124543.25314-1-steve.capper@arm.com>
References: <20180829124543.25314-1-steve.capper@arm.com>

We wish to introduce a 52-bit virtual address space for userspace but
maintain compatibility with software that assumes the maximum VA space
size is 48 bits.

In order to achieve this, on 52-bit VA systems, we make mmap behave as
if it were running on a 48-bit VA system (unless userspace explicitly
requests a VA where addr[51:48] != 0).

On a system running a 52-bit userspace we need TASK_SIZE to represent
the 52-bit limit as it is used in various places to distinguish between
kernelspace and userspace addresses.

Thus we need a new limit for mmap, stack, ELF loader and EFI (which uses
TTBR0) to represent the non-extended VA space.

This patch introduces DEFAULT_MAP_WINDOW and DEFAULT_MAP_WINDOW_64 and
switches the appropriate logic to use that instead of TASK_SIZE.

Signed-off-by: Steve Capper
---
 arch/arm64/include/asm/elf.h            | 2 +-
 arch/arm64/include/asm/processor.h      | 9 +++++++--
 drivers/firmware/efi/arm-runtime.c      | 2 +-
 drivers/firmware/efi/libstub/arm-stub.c | 2 +-
 4 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 433b9554c6a1..bc9bd9e77d9d 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -117,7 +117,7 @@
  * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
-#define ELF_ET_DYN_BASE		(2 * TASK_SIZE_64 / 3)
+#define ELF_ET_DYN_BASE		(2 * DEFAULT_MAP_WINDOW_64 / 3)
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 79657ad91397..46c9d9ff028c 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -26,6 +26,8 @@
 
 #ifndef __ASSEMBLY__
 
+#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
+
 /*
  * Default implementation of macro that returns current
  * instruction pointer ("program counter").
@@ -58,13 +60,16 @@
 			TASK_SIZE_32 : TASK_SIZE_64)
 #define TASK_SIZE_OF(tsk)	(test_tsk_thread_flag(tsk, TIF_32BIT) ? \
 				TASK_SIZE_32 : TASK_SIZE_64)
+#define DEFAULT_MAP_WINDOW	(test_tsk_thread_flag(tsk, TIF_32BIT) ? \
+				TASK_SIZE_32 : DEFAULT_MAP_WINDOW_64)
 #else
 #define TASK_SIZE		TASK_SIZE_64
+#define DEFAULT_MAP_WINDOW	DEFAULT_MAP_WINDOW_64
 #endif /* CONFIG_COMPAT */
 
-#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(TASK_SIZE / 4))
+#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4))
+#define STACK_TOP_MAX		DEFAULT_MAP_WINDOW_64
 
-#define STACK_TOP_MAX		TASK_SIZE_64
 #ifdef CONFIG_COMPAT
 #define AARCH32_VECTORS_BASE	0xffff0000
 #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
index 922cfb813109..952cec5b611a 100644
--- a/drivers/firmware/efi/arm-runtime.c
+++ b/drivers/firmware/efi/arm-runtime.c
@@ -38,7 +38,7 @@ static struct ptdump_info efi_ptdump_info = {
 	.mm		= &efi_mm,
 	.markers	= (struct addr_marker[]){
 		{ 0,		"UEFI runtime start" },
-		{ TASK_SIZE_64,	"UEFI runtime end" }
+		{ DEFAULT_MAP_WINDOW_64, "UEFI runtime end" }
 	},
 	.base_addr	= 0,
 };
diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
index 6920033de6d4..ac297c20ab1e 100644
--- a/drivers/firmware/efi/libstub/arm-stub.c
+++ b/drivers/firmware/efi/libstub/arm-stub.c
@@ -33,7 +33,7 @@
 
 #define EFI_RT_VIRTUAL_SIZE	SZ_512M
 
 #ifdef CONFIG_ARM64
-# define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE_64
+# define EFI_RT_VIRTUAL_LIMIT	DEFAULT_MAP_WINDOW_64
 #else
 # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE
 #endif
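
[Editor's note, not part of the original posting: the split introduced above
can be illustrated numerically. The short userspace C program below is an
illustration only, assuming VA_BITS = 48 plus the 52-bit extension; the names
are local to the example, not kernel code.]

#include <stdio.h>

/* Values behind DEFAULT_MAP_WINDOW_64, the eventual 52-bit TASK_SIZE_64
 * and ELF_ET_DYN_BASE, for a 48-bit default window. */
int main(void)
{
	unsigned long default_map_window_64 = 1UL << 48;
	unsigned long task_size_52bit       = 1UL << 52;
	unsigned long elf_et_dyn_base       = 2 * default_map_window_64 / 3;

	printf("DEFAULT_MAP_WINDOW_64 = 0x%lx\n", default_map_window_64);
	printf("52-bit TASK_SIZE_64   = 0x%lx\n", task_size_52bit);
	printf("ELF_ET_DYN_BASE       = 0x%lx\n", elf_et_dyn_base);
	return 0;
}
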
From patchwork Wed Aug 29 12:45:40 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10580293
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, ard.biesheuvel@linaro.org
Subject: [PATCH 2/5] arm64: mm: introduce 52-bit userspace support
Date: Wed, 29 Aug 2018 13:45:40 +0100
Message-Id: <20180829124543.25314-3-steve.capper@arm.com>
In-Reply-To: <20180829124543.25314-1-steve.capper@arm.com>
References: <20180829124543.25314-1-steve.capper@arm.com>

On arm64 there is optional support for a 52-bit virtual address space.
To use it, the kernel must be running with a 64KB page size on hardware
that supports the feature.

For an arm64 kernel supporting a 48-bit VA with a 64KB page size, a few
changes are needed to support a 52-bit userspace:
 * TCR_EL1.T0SZ needs to be 12 instead of 16,
 * pgd_offset needs to work with a different PTRS_PER_PGD,
 * PGD_SIZE needs to be increased,
 * TASK_SIZE needs to reflect the new size.

This patch implements the above when support for 52-bit VAs is detected
at early boot time.

On arm64, userspace address translation is controlled by TTBR0_EL1. As
well as userspace, TTBR0_EL1 controls:
 * the identity mapping,
 * EFI runtime code.

It is possible to run a kernel with an identity mapping that has a
larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
would set TCR_EL1.T0SZ as appropriate). However, when the conditions for
a 52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at
12. Thus, in this patch, the TCR_EL1.T0SZ size changing logic is
disabled.

A future patch enables CONFIG_ARM64_TRY_52BIT_VA (which essentially
activates this patch), as we also need to change the mmap logic to
maintain compatibility with a userspace that expects at most 48 bits
of VA.
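
[Editor's note, not part of the original posting: the T0SZ values above follow
from T0SZ = 64 - (number of VA bits), which is also what the __cpu_setup change
later in this patch computes with sub/add. A minimal stand-alone sketch of that
relationship:]

#include <stdio.h>

/* TCR_EL1.T0SZ encodes the TTBR0 region size as 64 minus the VA width. */
static unsigned int t0sz(unsigned int va_bits)
{
	return 64 - va_bits;
}

int main(void)
{
	printf("48-bit VA -> T0SZ = %u\n", t0sz(48));	/* 16 */
	printf("52-bit VA -> T0SZ = %u\n", t0sz(52));	/* 12 */
	return 0;
}
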
Signed-off-by: Steve Capper
---
 arch/arm64/include/asm/assembler.h   |  7 +++----
 arch/arm64/include/asm/cpucaps.h     |  3 ++-
 arch/arm64/include/asm/mmu_context.h |  3 +++
 arch/arm64/include/asm/pgalloc.h     |  4 ++++
 arch/arm64/include/asm/pgtable.h     | 16 +++++++++++++---
 arch/arm64/include/asm/processor.h   | 10 +++++-----
 arch/arm64/kernel/cpufeature.c       | 16 ++++++++++++++++
 arch/arm64/kernel/entry.S            |  6 +++++-
 arch/arm64/kernel/head.S             | 13 +++++++++++++
 arch/arm64/mm/fault.c                |  2 +-
 arch/arm64/mm/mmu.c                  |  1 +
 arch/arm64/mm/proc.S                 | 10 +++++++++-
 12 files changed, 75 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 0bcc98dbba56..8c8ed20beca9 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -343,11 +343,10 @@ alternative_endif
 	.endm
 
 /*
- * tcr_set_idmap_t0sz - update TCR.T0SZ so that we can load the ID map
+ * tcr_set_t0sz - update TCR.T0SZ so that we can load the ID map
 */
-	.macro	tcr_set_idmap_t0sz, valreg, tmpreg
-	ldr_l	\tmpreg, idmap_t0sz
-	bfi	\valreg, \tmpreg, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
+	.macro	tcr_set_t0sz, valreg, t0sz
+	bfi	\valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
 	.endm
 
 /*
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index ae1f70450fb2..0cf286307b79 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -51,7 +51,8 @@
 #define ARM64_SSBD			30
 #define ARM64_MISMATCHED_CACHE_TYPE	31
 #define ARM64_HAS_STAGE2_FWB		32
+#define ARM64_HAS_52BIT_VA		33
 
-#define ARM64_NCAPS			33
+#define ARM64_NCAPS			34
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 39ec0b8a689e..70598750a21e 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -72,6 +72,9 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
+	if (IS_ENABLED(CONFIG_ARM64_TRY_52BIT_VA))
+		return false;
+
 	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
 }
 
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 2e05bcd944c8..7f50804a677e 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -27,7 +27,11 @@
 #define check_pgt_cache()		do { } while (0)
 
 #define PGALLOC_GFP	(GFP_KERNEL | __GFP_ZERO)
+#ifdef CONFIG_ARM64_TRY_52BIT_VA
+#define PGD_SIZE	((1 << (52 - PGDIR_SHIFT)) * sizeof(pgd_t))
+#else
 #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
+#endif
 
 #if CONFIG_PGTABLE_LEVELS > 2
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 1bdeca8918a6..8449e266cd46 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -577,11 +577,21 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
 #define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd_val(pgd))
 
 /* to find an entry in a page-table-directory */
-#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+#define pgd_index(addr, ptrs)	(((addr) >> PGDIR_SHIFT) & ((ptrs) - 1))
+#define _pgd_offset_raw(pgd, addr, ptrs) ((pgd) + pgd_index(addr, ptrs))
+#define pgd_offset_raw(pgd, addr) (_pgd_offset_raw(pgd, addr, PTRS_PER_PGD))
 
-#define pgd_offset_raw(pgd, addr) ((pgd) + pgd_index(addr))
+static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *ret;
+
+	if (IS_ENABLED(CONFIG_ARM64_TRY_52BIT_VA) && (addr < TASK_SIZE))
+		ret = _pgd_offset_raw(mm->pgd, addr, 1ULL << (vabits_user - PGDIR_SHIFT));
+	else
+		ret = pgd_offset_raw(mm->pgd, addr);
 
-#define pgd_offset(mm, addr)	(pgd_offset_raw((mm)->pgd, (addr)))
+	return ret;
+}
 
 /* to find an entry in a kernel page-table-directory */
 #define pgd_offset_k(addr)	pgd_offset(&init_mm, addr)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 46c9d9ff028c..ba63b8a8dac1 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -19,13 +19,13 @@
 #ifndef __ASM_PROCESSOR_H
 #define __ASM_PROCESSOR_H
 
-#define TASK_SIZE_64		(UL(1) << VA_BITS)
-
-#define KERNEL_DS	UL(-1)
-#define USER_DS		(TASK_SIZE_64 - 1)
-
+#define KERNEL_DS	UL(-1)
+#define _USER_DS(vabits)	((UL(1) << (vabits)) - 1)
 #ifndef __ASSEMBLY__
 
+extern u64 vabits_user;
+#define TASK_SIZE_64		(UL(1) << vabits_user)
+#define USER_DS			_USER_DS(vabits_user)
 #define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
 
 /*
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e238b7932096..4807fee515b6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1006,6 +1006,14 @@ static bool has_hw_dbm(const struct arm64_cpu_capabilities *cap,
 
 #endif
 
+#ifdef CONFIG_ARM64_TRY_52BIT_VA
+extern u64 vabits_user;
+static bool has_52bit_vas(const struct arm64_cpu_capabilities *entry, int __unused)
+{
+	return vabits_user == 52;
+}
+#endif
+
 #ifdef CONFIG_ARM64_VHE
 static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused)
 {
@@ -1222,6 +1230,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.cpu_enable = cpu_enable_hw_dbm,
 	},
 #endif
+#ifdef CONFIG_ARM64_TRY_52BIT_VA
+	{
+		.desc = "52-bit Virtual Addresses",
+		.capability = ARM64_HAS_52BIT_VA,
+		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
+		.matches = has_52bit_vas,
+	},
+#endif /* CONFIG_ARM64_TRY_52BIT_VA */
 	{},
 };
 
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 09dbea221a27..72b4b0b069b9 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -189,7 +189,11 @@ alternative_cb_end
 	/* Save the task's original addr_limit and set USER_DS */
 	ldr	x20, [tsk, #TSK_TI_ADDR_LIMIT]
 	str	x20, [sp, #S_ORIG_ADDR_LIMIT]
-	mov	x20, #USER_DS
+alternative_if ARM64_HAS_52BIT_VA
+	mov	x20, #_USER_DS(52)
+alternative_else
+	mov	x20, #_USER_DS(VA_BITS)
+alternative_endif
 	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]
 	/* No need to reset PSTATE.UAO, hardware's already set it to 0 for us */
 	.endif /* \el == 0 */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b0853069702f..6b7d32990b25 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -316,6 +316,19 @@ __create_page_tables:
 	adrp	x0, idmap_pg_dir
 	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
 
+#ifdef CONFIG_ARM64_TRY_52BIT_VA
+	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
+	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
+	mov	x5, #52
+	cbnz	x6, 1f
+#endif
+	mov	x5, #VA_BITS
+1:
+	adr_l	x6, vabits_user
+	str	x5, [x6]
+	dmb	sy
+	dc	ivac, x6		// Invalidate potentially stale cache line
+
 	/*
 	 * VA_BITS may be too small to allow for an ID mapping to be created
 	 * that covers system RAM if that is located sufficiently high in the
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 50b30ff30de4..4a9b86152b9a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -153,7 +153,7 @@ void show_pte(unsigned long addr)
 	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp = %p\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 VA_BITS, mm->pgd);
+		 (addr >= TASK_SIZE) ? VA_BITS : (int) vabits_user, mm->pgd);
 
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
 	pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 65f86271f02b..f5e37705b399 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -52,6 +52,7 @@
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
+u64 vabits_user __ro_after_init;
 
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
 
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 03646e6a2ef4..4834bd434143 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -441,7 +441,15 @@ ENTRY(__cpu_setup)
 	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
 			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
 			TCR_TBI0 | TCR_A1
-	tcr_set_idmap_t0sz	x10, x9
+
+#ifdef CONFIG_ARM64_TRY_52BIT_VA
+	ldr_l		x9, vabits_user
+	sub		x9, xzr, x9
+	add		x9, x9, #64
+#else
+	ldr_l		x9, idmap_t0sz
+#endif
+	tcr_set_t0sz	x10, x9
 
 	/*
 	 * Set the IPS bits in TCR_EL1.
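
[Editor's note, not part of the original posting: the PGD_SIZE change above can
be checked against the 64KB-granule geometry this series targets. With 64KB
pages and three translation levels, PGDIR_SHIFT is 42, so the PGD resolves bits
[47:42] (64 entries) for a 48-bit VA and bits [51:42] (1024 entries) for a
52-bit VA. A small stand-alone sketch of that arithmetic:]

#include <stdio.h>

#define PGDIR_SHIFT	42	/* 64KB pages, 3 levels */
#define PGD_ENTRY_SIZE	8	/* sizeof(pgd_t) */

static void pgd_geometry(unsigned int va_bits)
{
	unsigned long ptrs = 1UL << (va_bits - PGDIR_SHIFT);

	printf("%u-bit VA: %4lu PGD entries, PGD_SIZE = %lu bytes\n",
	       va_bits, ptrs, ptrs * PGD_ENTRY_SIZE);
}

int main(void)
{
	pgd_geometry(48);	/* 64 entries, 512 bytes */
	pgd_geometry(52);	/* 1024 entries, 8192 bytes */
	return 0;
}
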
From patchwork Wed Aug 29 12:45:41 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10580297
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, ard.biesheuvel@linaro.org
Subject: [PATCH 3/5] arm64: mm: Copy over arch_get_unmapped_area
Date: Wed, 29 Aug 2018 13:45:41 +0100
Message-Id: <20180829124543.25314-4-steve.capper@arm.com>
In-Reply-To: <20180829124543.25314-1-steve.capper@arm.com>
References: <20180829124543.25314-1-steve.capper@arm.com>

In order to support 52-bit VAs for userspace we need to alter the mmap
area selection logic to give out 52-bit VAs where "high" addresses are
requested.

This patch copies the arch_get_unmapped_area and
arch_get_unmapped_area_topdown routines over from common code so that
we can modify their logic in a future patch.

Signed-off-by: Steve Capper
---
 arch/arm64/include/asm/pgtable.h |  7 ++++
 arch/arm64/mm/mmap.c             | 84 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 8449e266cd46..8d4175cde295 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -785,6 +785,13 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define phys_to_ttbr(addr)	(addr)
 #endif
 
+/*
+ * On arm64 we can have larger VA spaces for userspace, we define our own
+ * arch_get_unmapped_area_ routines to allow for hinting from userspace.
+ */
+#define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 842c8a5fcd53..b516e0bfdb71 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -79,6 +79,90 @@ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
 	return PAGE_ALIGN(STACK_TOP - gap - rnd);
 }
 
+extern unsigned long mmap_min_addr;
+
+unsigned long
+arch_get_unmapped_area(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma, *prev;
+	struct vm_unmapped_area_info info;
+
+	if (len > TASK_SIZE - mmap_min_addr)
+		return -ENOMEM;
+
+	if (flags & MAP_FIXED)
+		return addr;
+
+	if (addr) {
+		addr = PAGE_ALIGN(addr);
+		vma = find_vma_prev(mm, addr, &prev);
+		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+		    (!vma || addr + len <= vm_start_gap(vma)) &&
+		    (!prev || addr >= vm_end_gap(prev)))
+			return addr;
+	}
+
+	info.flags = 0;
+	info.length = len;
+	info.low_limit = mm->mmap_base;
+	info.high_limit = TASK_SIZE;
+	info.align_mask = 0;
+	return vm_unmapped_area(&info);
+}
+
+unsigned long
+arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
+		const unsigned long len, const unsigned long pgoff,
+		const unsigned long flags)
+{
+	struct vm_area_struct *vma, *prev;
+	struct mm_struct *mm = current->mm;
+	unsigned long addr = addr0;
+	struct vm_unmapped_area_info info;
+
+	/* requested length too big for entire address space */
+	if (len > TASK_SIZE - mmap_min_addr)
+		return -ENOMEM;
+
+	if (flags & MAP_FIXED)
+		return addr;
+
+	/* requesting a specific address */
+	if (addr) {
+		addr = PAGE_ALIGN(addr);
+		vma = find_vma_prev(mm, addr, &prev);
+		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+		    (!vma || addr + len <= vm_start_gap(vma)) &&
+		    (!prev || addr >= vm_end_gap(prev)))
+			return addr;
+	}
+
+	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+	info.length = len;
+	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+	info.high_limit = mm->mmap_base;
+	info.align_mask = 0;
+	addr = vm_unmapped_area(&info);
+
+	/*
+	 * A failed mmap() very likely causes application failure,
+	 * so fall back to the bottom-up function here. This scenario
+	 * can happen with large stack limits and large mmap()
+	 * allocations.
+	 */
+	if (offset_in_page(addr)) {
+		VM_BUG_ON(addr != -ENOMEM);
+		info.flags = 0;
+		info.low_limit = TASK_UNMAPPED_BASE;
+		info.high_limit = TASK_SIZE;
+		addr = vm_unmapped_area(&info);
+	}
+
+	return addr;
+}
+
 /*
  * This function, called very early during the creation of a new process VM
  * image, sets up which VM layout function to use:
From patchwork Wed Aug 29 12:45:42 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10580281
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, ard.biesheuvel@linaro.org
Subject: [PATCH 4/5] arm64: mmap: Allow for "high" 52-bit VA allocations
Date: Wed, 29 Aug 2018 13:45:42 +0100
Message-Id: <20180829124543.25314-5-steve.capper@arm.com>
In-Reply-To: <20180829124543.25314-1-steve.capper@arm.com>
References: <20180829124543.25314-1-steve.capper@arm.com>

This patch alters arch_get_unmapped_area and
arch_get_unmapped_area_topdown such that mmap calls with an addr
parameter that lies above the 48-bit limit will receive a VA within the
52-bit VA space, on systems that support it.

Signed-off-by: Steve Capper
---
 arch/arm64/mm/mmap.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index b516e0bfdb71..827414b69866 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -81,6 +81,15 @@ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
 
 extern unsigned long mmap_min_addr;
 
+static unsigned long get_end_address(unsigned long addr)
+{
+	if (IS_ENABLED(CONFIG_ARM64_TRY_52BIT_VA) &&
+			(addr > DEFAULT_MAP_WINDOW))
+		return TASK_SIZE;
+	else
+		return DEFAULT_MAP_WINDOW;
+}
+
 unsigned long
 arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags)
@@ -88,8 +97,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma, *prev;
 	struct vm_unmapped_area_info info;
+	unsigned long end = get_end_address(addr);
 
-	if (len > TASK_SIZE - mmap_min_addr)
+	if (len > end - mmap_min_addr)
 		return -ENOMEM;
 
 	if (flags & MAP_FIXED)
@@ -98,7 +108,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (addr) {
 		addr = PAGE_ALIGN(addr);
 		vma = find_vma_prev(mm, addr, &prev);
-		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+		if (end - len >= addr && addr >= mmap_min_addr &&
 		    (!vma || addr + len <= vm_start_gap(vma)) &&
 		    (!prev || addr >= vm_end_gap(prev)))
 			return addr;
@@ -107,7 +117,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
-	info.high_limit = TASK_SIZE;
+	info.high_limit = end;
 	info.align_mask = 0;
 	return vm_unmapped_area(&info);
 }
@@ -121,9 +131,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	struct vm_unmapped_area_info info;
+	unsigned long end = get_end_address(addr);
 
 	/* requested length too big for entire address space */
-	if (len > TASK_SIZE - mmap_min_addr)
+	if (len > end - mmap_min_addr)
 		return -ENOMEM;
 
 	if (flags & MAP_FIXED)
@@ -133,7 +144,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	if (addr) {
 		addr = PAGE_ALIGN(addr);
 		vma = find_vma_prev(mm, addr, &prev);
-		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+		if (end - len >= addr && addr >= mmap_min_addr &&
 		    (!vma || addr + len <= vm_start_gap(vma)) &&
 		    (!prev || addr >= vm_end_gap(prev)))
 			return addr;
@@ -143,6 +154,9 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	info.length = len;
 	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
 	info.high_limit = mm->mmap_base;
+	if (IS_ENABLED(CONFIG_ARM64_TRY_52BIT_VA) && (addr > DEFAULT_MAP_WINDOW))
+		info.high_limit += TASK_SIZE - DEFAULT_MAP_WINDOW;
+
 	info.align_mask = 0;
 	addr = vm_unmapped_area(&info);
 
@@ -156,7 +170,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		VM_BUG_ON(addr != -ENOMEM);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
-		info.high_limit = TASK_SIZE;
+		info.high_limit = end;
 		addr = vm_unmapped_area(&info);
 	}
 
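
[Editor's note, not part of the original posting: from userspace, the effect of
this patch is that a non-NULL mmap hint with addr[51:48] != 0 opts the call
into the extended range. The example below assumes a kernel built with this
series running on LVA-capable hardware; on anything else it simply returns an
address inside the default 48-bit window.]

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	void *hint = (void *)(1UL << 50);	/* hint above the 48-bit window */
	size_t len = 1UL << 20;
	void *p;

	/* Without a hint the kernel keeps the mapping below 2^48; a hint with
	 * bits [51:48] set asks for the extended VA range. */
	p = mmap(hint, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("got %p (%s the 48-bit window)\n", p,
	       (unsigned long)p >= (1UL << 48) ? "above" : "within");
	munmap(p, len);
	return 0;
}
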
From patchwork Wed Aug 29 12:45:43 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10580299
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, ard.biesheuvel@linaro.org
Subject: [PATCH 5/5] arm64: mm: Activate 52-bit userspace VA support
Date: Wed, 29 Aug 2018 13:45:43 +0100
Message-Id: <20180829124543.25314-6-steve.capper@arm.com>
In-Reply-To: <20180829124543.25314-1-steve.capper@arm.com>
References: <20180829124543.25314-1-steve.capper@arm.com>

We now have all the pieces in place to support a 52-bit userspace VA.
This patch enables that logic for systems running with a 48-bit VA and
a 64KB PAGE_SIZE.

Signed-off-by: Steve Capper
---
 arch/arm64/Kconfig | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 29e75b47becd..2561b541c9df 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -691,6 +691,10 @@ config ARM64_PA_BITS_52
 
 endchoice
 
+config ARM64_TRY_52BIT_VA
+	def_bool y
+	depends on ARM64_VA_BITS_48 && ARM64_64K_PAGES
+
 config ARM64_PA_BITS
 	int
 	default 48 if ARM64_PA_BITS_48