From patchwork Wed Dec 5 16:41:40 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10714609
From: Steve Capper
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V4 1/6] mm: mmap: Allow for "high" userspace addresses
Date: Wed, 5 Dec 2018 16:41:40 +0000
Message-Id: <20181205164145.24568-2-steve.capper@arm.com>
In-Reply-To: <20181205164145.24568-1-steve.capper@arm.com>
References: <20181205164145.24568-1-steve.capper@arm.com>
2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Steve Capper , catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com, jcm@redhat.com, Andrew Morton Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP This patch adds support for "high" userspace addresses that are optionally supported on the system and have to be requested via a hint mechanism ("high" addr parameter to mmap). Architectures such as powerpc and x86 achieve this by making changes to their architectural versions of arch_get_unmapped_* functions. However, on arm64 we use the generic versions of these functions. Rather than duplicate the generic arch_get_unmapped_* implementations for arm64, this patch instead introduces two architectural helper macros and applies them to arch_get_unmapped_*: arch_get_mmap_end(addr) - get mmap upper limit depending on addr hint arch_get_mmap_base(addr, base) - get mmap_base depending on addr hint If these macros are not defined in architectural code then they default to (TASK_SIZE) and (base) so should not introduce any behavioural changes to architectures that do not define them. Signed-off-by: Steve Capper Reviewed-by: Catalin Marinas Cc: Andrew Morton --- Changed in V4, added Catalin's reviewed by Changed in V3, commit log cleared up, explanation given for why core code change over just architectural change --- mm/mmap.c | 25 ++++++++++++++++++------- 1 file changed, 18 insertions(+), 7 deletions(-) diff --git a/mm/mmap.c b/mm/mmap.c index 6c04292e16a7..7bb64381e77c 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -2066,6 +2066,15 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info) return gap_end; } + +#ifndef arch_get_mmap_end +#define arch_get_mmap_end(addr) (TASK_SIZE) +#endif + +#ifndef arch_get_mmap_base +#define arch_get_mmap_base(addr, base) (base) +#endif + /* Get an address range which is currently unmapped. * For shmat() with addr=0. 
* @@ -2085,8 +2094,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr, struct mm_struct *mm = current->mm; struct vm_area_struct *vma, *prev; struct vm_unmapped_area_info info; + const unsigned long mmap_end = arch_get_mmap_end(addr); - if (len > TASK_SIZE - mmap_min_addr) + if (len > mmap_end - mmap_min_addr) return -ENOMEM; if (flags & MAP_FIXED) @@ -2095,7 +2105,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr, if (addr) { addr = PAGE_ALIGN(addr); vma = find_vma_prev(mm, addr, &prev); - if (TASK_SIZE - len >= addr && addr >= mmap_min_addr && + if (mmap_end - len >= addr && addr >= mmap_min_addr && (!vma || addr + len <= vm_start_gap(vma)) && (!prev || addr >= vm_end_gap(prev))) return addr; @@ -2104,7 +2114,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr, info.flags = 0; info.length = len; info.low_limit = mm->mmap_base; - info.high_limit = TASK_SIZE; + info.high_limit = mmap_end; info.align_mask = 0; return vm_unmapped_area(&info); } @@ -2124,9 +2134,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0, struct mm_struct *mm = current->mm; unsigned long addr = addr0; struct vm_unmapped_area_info info; + const unsigned long mmap_end = arch_get_mmap_end(addr); /* requested length too big for entire address space */ - if (len > TASK_SIZE - mmap_min_addr) + if (len > mmap_end - mmap_min_addr) return -ENOMEM; if (flags & MAP_FIXED) @@ -2136,7 +2147,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0, if (addr) { addr = PAGE_ALIGN(addr); vma = find_vma_prev(mm, addr, &prev); - if (TASK_SIZE - len >= addr && addr >= mmap_min_addr && + if (mmap_end - len >= addr && addr >= mmap_min_addr && (!vma || addr + len <= vm_start_gap(vma)) && (!prev || addr >= vm_end_gap(prev))) return addr; @@ -2145,7 +2156,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0, info.flags = VM_UNMAPPED_AREA_TOPDOWN; info.length = len; info.low_limit = max(PAGE_SIZE, mmap_min_addr); - info.high_limit = mm->mmap_base; + info.high_limit = arch_get_mmap_base(addr, mm->mmap_base); info.align_mask = 0; addr = vm_unmapped_area(&info); @@ -2159,7 +2170,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0, VM_BUG_ON(addr != -ENOMEM); info.flags = 0; info.low_limit = TASK_UNMAPPED_BASE; - info.high_limit = TASK_SIZE; + info.high_limit = mmap_end; addr = vm_unmapped_area(&info); } From patchwork Wed Dec 5 16:41:41 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Capper X-Patchwork-Id: 10714611 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 392BC109C for ; Wed, 5 Dec 2018 16:42:43 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2873A2DE68 for ; Wed, 5 Dec 2018 16:42:43 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 265582DE55; Wed, 5 Dec 2018 16:42:43 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 
From: Steve Capper
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V4 2/6] arm64: mm: Introduce DEFAULT_MAP_WINDOW
Date: Wed, 5 Dec 2018 16:41:41 +0000
Message-Id: <20181205164145.24568-3-steve.capper@arm.com>
In-Reply-To: <20181205164145.24568-1-steve.capper@arm.com>
References: <20181205164145.24568-1-steve.capper@arm.com>

We wish to introduce a 52-bit virtual address space for userspace but maintain compatibility with software that assumes the maximum VA space size is 48 bit.

In order to achieve this, on 52-bit VA systems, we make mmap behave as if it were running on a 48-bit VA system (unless userspace explicitly requests a VA where addr[51:48] != 0).

On a system running a 52-bit userspace we need TASK_SIZE to represent the 52-bit limit as it is used in various places to distinguish between kernelspace and userspace addresses.

Thus we need a new limit for mmap, stack, ELF loader and EFI (which uses TTBR0) to represent the non-extended VA space.
This patch introduces DEFAULT_MAP_WINDOW and DEFAULT_MAP_WINDOW_64 and switches the appropriate logic to use that instead of TASK_SIZE. Signed-off-by: Steve Capper Reviewed-by: Catalin Marinas --- Changed in V3: corrections to allow COMPAT 32-bit EL0 mode to work --- arch/arm64/include/asm/elf.h | 2 +- arch/arm64/include/asm/processor.h | 10 ++++++++-- arch/arm64/mm/init.c | 2 +- drivers/firmware/efi/arm-runtime.c | 2 +- drivers/firmware/efi/libstub/arm-stub.c | 2 +- 5 files changed, 12 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h index 433b9554c6a1..bc9bd9e77d9d 100644 --- a/arch/arm64/include/asm/elf.h +++ b/arch/arm64/include/asm/elf.h @@ -117,7 +117,7 @@ * 64-bit, this is above 4GB to leave the entire 32-bit address * space open for things that want to use the area for 32-bit pointers. */ -#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3) +#define ELF_ET_DYN_BASE (2 * DEFAULT_MAP_WINDOW_64 / 3) #ifndef __ASSEMBLY__ diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 3e2091708b8e..50586ca6bacb 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -45,19 +45,25 @@ * TASK_SIZE - the maximum size of a user space task. * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area. */ + +#define DEFAULT_MAP_WINDOW_64 (UL(1) << VA_BITS) + #ifdef CONFIG_COMPAT #define TASK_SIZE_32 UL(0x100000000) #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ TASK_SIZE_32 : TASK_SIZE_64) #define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \ TASK_SIZE_32 : TASK_SIZE_64) +#define DEFAULT_MAP_WINDOW (test_thread_flag(TIF_32BIT) ? \ + TASK_SIZE_32 : DEFAULT_MAP_WINDOW_64) #else #define TASK_SIZE TASK_SIZE_64 +#define DEFAULT_MAP_WINDOW DEFAULT_MAP_WINDOW_64 #endif /* CONFIG_COMPAT */ -#define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 4)) +#define TASK_UNMAPPED_BASE (PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4)) +#define STACK_TOP_MAX DEFAULT_MAP_WINDOW_64 -#define STACK_TOP_MAX TASK_SIZE_64 #ifdef CONFIG_COMPAT #define AARCH32_VECTORS_BASE 0xffff0000 #define STACK_TOP (test_thread_flag(TIF_32BIT) ? \ diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 9d9582cac6c4..7239c103be06 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -609,7 +609,7 @@ void __init mem_init(void) * detected at build time already. 
 */
 #ifdef CONFIG_COMPAT
-	BUILD_BUG_ON(TASK_SIZE_32 > TASK_SIZE_64);
+	BUILD_BUG_ON(TASK_SIZE_32 > DEFAULT_MAP_WINDOW_64);
 #endif
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
index 922cfb813109..952cec5b611a 100644
--- a/drivers/firmware/efi/arm-runtime.c
+++ b/drivers/firmware/efi/arm-runtime.c
@@ -38,7 +38,7 @@ static struct ptdump_info efi_ptdump_info = {
 	.mm = &efi_mm,
 	.markers = (struct addr_marker[]){
 		{ 0, "UEFI runtime start" },
-		{ TASK_SIZE_64, "UEFI runtime end" }
+		{ DEFAULT_MAP_WINDOW_64, "UEFI runtime end" }
 	},
 	.base_addr = 0,
 };
diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
index 30ac0c975f8a..d1ec7136e3e1 100644
--- a/drivers/firmware/efi/libstub/arm-stub.c
+++ b/drivers/firmware/efi/libstub/arm-stub.c
@@ -33,7 +33,7 @@
 #define EFI_RT_VIRTUAL_SIZE	SZ_512M
 #ifdef CONFIG_ARM64
-# define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE_64
+# define EFI_RT_VIRTUAL_LIMIT	DEFAULT_MAP_WINDOW_64
 #else
 # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE
 #endif
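[Editor's note] To make the new limits concrete, the sketch below evaluates the constants this patch derives from DEFAULT_MAP_WINDOW for a 48-bit VA, 64KB page configuration. It is a standalone illustration rather than kernel code: VA_BITS, PAGE_SIZE and the PAGE_ALIGN helper are redefined locally under those assumed configuration values.

```c
/* Illustrative only: recomputes the derived limits for VA_BITS = 48. */
#include <stdio.h>

#define VA_BITS			48
#define PAGE_SIZE		(64UL * 1024)	/* 64KB pages assumed */
#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

#define DEFAULT_MAP_WINDOW_64	(1UL << VA_BITS)
#define TASK_UNMAPPED_BASE	PAGE_ALIGN(DEFAULT_MAP_WINDOW_64 / 4)
#define ELF_ET_DYN_BASE		(2 * DEFAULT_MAP_WINDOW_64 / 3)

int main(void)
{
	printf("DEFAULT_MAP_WINDOW_64 = 0x%lx\n", DEFAULT_MAP_WINDOW_64);
	printf("TASK_UNMAPPED_BASE    = 0x%lx\n", TASK_UNMAPPED_BASE);
	printf("ELF_ET_DYN_BASE       = 0x%lx\n", ELF_ET_DYN_BASE);
	return 0;
}
```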
From patchwork Wed Dec 5 16:41:42 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10714613
From: Steve Capper
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V4 3/6] arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base
Date: Wed, 5 Dec 2018 16:41:42 +0000
Message-Id: <20181205164145.24568-4-steve.capper@arm.com>
In-Reply-To: <20181205164145.24568-1-steve.capper@arm.com>
References: <20181205164145.24568-1-steve.capper@arm.com>

Now that we have DEFAULT_MAP_WINDOW defined, we can define the arch_get_mmap_end and arch_get_mmap_base helpers to allow for high addresses in mmap.

Signed-off-by: Steve Capper
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/processor.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 50586ca6bacb..fe95fd8b065e 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -72,6 +72,13 @@
 #define STACK_TOP		STACK_TOP_MAX
 #endif /* CONFIG_COMPAT */
+#define arch_get_mmap_end(addr) ((addr > DEFAULT_MAP_WINDOW) ? TASK_SIZE :\
+				 DEFAULT_MAP_WINDOW)
+
+#define arch_get_mmap_base(addr, base) ((addr > DEFAULT_MAP_WINDOW) ? \
+				 base + TASK_SIZE - DEFAULT_MAP_WINDOW :\
+				 base)
+
 extern phys_addr_t arm64_dma_phys_limit;
 #define ARCH_LOW_ADDRESS_LIMIT	(arm64_dma_phys_limit - 1)

From patchwork Wed Dec 5 16:41:43 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10714615
From: Steve Capper
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V4 4/6] arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD
Date: Wed, 5 Dec 2018 16:41:43 +0000
Message-Id: <20181205164145.24568-5-steve.capper@arm.com>
In-Reply-To: <20181205164145.24568-1-steve.capper@arm.com>
References: <20181205164145.24568-1-steve.capper@arm.com>
Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64 entries (for the 48-bit case) to 1024 entries. This quantity, PTRS_PER_PGD, is used as follows to compute which PGD entry corresponds to a given virtual address, addr:

  pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)

Userspace addresses are prefixed by 0's, so for a 48-bit userspace address, uva, the following is true:

  (uva >> PGDIR_SHIFT) & (1024 - 1) == (uva >> PGDIR_SHIFT) & (64 - 1)

In other words, a 48-bit userspace address will have the same pgd_index when using PTRS_PER_PGD = 64 and 1024.

Kernel addresses are prefixed by 1's so, given a 48-bit kernel address, kva, we have the following inequality:

  (kva >> PGDIR_SHIFT) & (1024 - 1) != (kva >> PGDIR_SHIFT) & (64 - 1)

In other words, a 48-bit kernel virtual address will have a different pgd_index when using PTRS_PER_PGD = 64 and 1024.

If, however, we note that:

  kva = 0xFFFF << 48 + lower (where lower[63:48] == 0b)

and PGDIR_SHIFT = 42 (as we are dealing with 64KB PAGE_SIZE), we can consider:

  (kva >> PGDIR_SHIFT) & (1024 - 1) - (kva >> PGDIR_SHIFT) & (64 - 1)
  = (0xFFFF << 6) & 0x3FF - (0xFFFF << 6) & 0x3F	// "lower" cancels out
  = 0x3C0

In other words, one can switch PTRS_PER_PGD to the 52-bit value globally provided that they increment ttbr1_el1 by 0x3C0 * 8 = 0x1E00 bytes when running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16).

For kernel configurations where 52-bit userspace VAs are possible, this patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the 52-bit value.
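[Editor's note] The derivation above is easy to sanity-check numerically. The fragment below is a hypothetical standalone check, not part of the patch: it recomputes the pgd_index difference for a representative 48-bit kernel VA with PGDIR_SHIFT = 42 and confirms it matches the 0x1E00-byte TTBR1_BADDR_4852_OFFSET value defined later in pgtable-hwdef.h.

```c
/* Standalone sanity check of the TTBR1 offset arithmetic (illustrative only). */
#include <assert.h>
#include <stdio.h>

#define PGDIR_SHIFT		42				/* 64KB pages */
#define PTRS_PER_PGD_48		(1UL << (48 - PGDIR_SHIFT))	/* 64 entries  */
#define PTRS_PER_PGD_52		(1UL << (52 - PGDIR_SHIFT))	/* 1024 entries */

int main(void)
{
	/* Any 48-bit kernel VA: top 16 bits set, lower bits arbitrary. */
	unsigned long kva = 0xffff000012345678UL;

	unsigned long idx_52 = (kva >> PGDIR_SHIFT) & (PTRS_PER_PGD_52 - 1);
	unsigned long idx_48 = (kva >> PGDIR_SHIFT) & (PTRS_PER_PGD_48 - 1);

	/* Difference in entries, and in bytes (8 bytes per pgd entry). */
	printf("idx_52 - idx_48 = 0x%lx entries\n", idx_52 - idx_48);
	printf("ttbr1 offset    = 0x%lx bytes\n", (idx_52 - idx_48) * 8);

	assert(idx_52 - idx_48 == 0x3c0);
	assert((idx_52 - idx_48) * 8 == 0x1e00);
	return 0;
}
```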
Suggested-by: Catalin Marinas Signed-off-by: Steve Capper --- This patch is new in V4 of the series --- arch/arm64/include/asm/asm-uaccess.h | 4 ++++ arch/arm64/include/asm/assembler.h | 23 +++++++++++++++++++++++ arch/arm64/include/asm/pgtable-hwdef.h | 9 +++++++++ arch/arm64/include/asm/uaccess.h | 4 ++++ arch/arm64/kernel/head.S | 1 + arch/arm64/kernel/hibernate-asm.S | 1 + arch/arm64/mm/proc.S | 4 ++++ 7 files changed, 46 insertions(+) diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h index 4128bec033f6..cd361dd16b12 100644 --- a/arch/arm64/include/asm/asm-uaccess.h +++ b/arch/arm64/include/asm/asm-uaccess.h @@ -14,11 +14,13 @@ #ifdef CONFIG_ARM64_SW_TTBR0_PAN .macro __uaccess_ttbr0_disable, tmp1 mrs \tmp1, ttbr1_el1 // swapper_pg_dir + restore_ttbr1 \tmp1 bic \tmp1, \tmp1, #TTBR_ASID_MASK sub \tmp1, \tmp1, #RESERVED_TTBR0_SIZE // reserved_ttbr0 just before swapper_pg_dir msr ttbr0_el1, \tmp1 // set reserved TTBR0_EL1 isb add \tmp1, \tmp1, #RESERVED_TTBR0_SIZE + offset_ttbr1 \tmp1 msr ttbr1_el1, \tmp1 // set reserved ASID isb .endm @@ -27,8 +29,10 @@ get_thread_info \tmp1 ldr \tmp1, [\tmp1, #TSK_TI_TTBR0] // load saved TTBR0_EL1 mrs \tmp2, ttbr1_el1 + restore_ttbr1 \tmp2 extr \tmp2, \tmp2, \tmp1, #48 ror \tmp2, \tmp2, #16 + offset_ttbr1 \tmp2 msr ttbr1_el1, \tmp2 // set the active ASID isb msr ttbr0_el1, \tmp1 // set the non-PAN TTBR0_EL1 diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 6142402c2eb4..e2fe378d2a63 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -515,6 +515,29 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU mrs \rd, sp_el0 .endm +/* + * Offset ttbr1 to allow for 48-bit kernel VAs set with 52-bit PTRS_PER_PGD. + * orr is used as it can cover the immediate value (and is idempotent). + * In future this may be nop'ed out when dealing with 52-bit kernel VAs. + * ttbr: Value of ttbr to set, modified. + */ + .macro offset_ttbr1, ttbr +#ifdef CONFIG_ARM64_52BIT_VA + orr \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET +#endif + .endm + +/* + * Perform the reverse of offset_ttbr1. + * bic is used as it can cover the immediate value and, in future, won't need + * to be nop'ed out when dealing with 52-bit kernel VAs. + */ + .macro restore_ttbr1, ttbr +#ifdef CONFIG_ARM64_52BIT_VA + bic \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET +#endif + .endm + /* * Arrange a physical address in a TTBR register, taking care of 52-bit * addresses. diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index 1d7d8da2ef9b..4a29c7e03ae4 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -80,7 +80,11 @@ #define PGDIR_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - CONFIG_PGTABLE_LEVELS) #define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT) #define PGDIR_MASK (~(PGDIR_SIZE-1)) +#ifdef CONFIG_ARM64_52BIT_VA +#define PTRS_PER_PGD (1 << (52 - PGDIR_SHIFT)) +#else #define PTRS_PER_PGD (1 << (VA_BITS - PGDIR_SHIFT)) +#endif /* * Section address mask and size definitions. 
@@ -306,4 +310,9 @@ #define TTBR_BADDR_MASK_52 (((UL(1) << 46) - 1) << 2) #endif +#ifdef CONFIG_ARM64_52BIT_VA +#define TTBR1_BADDR_4852_OFFSET (((UL(1) << (52 - PGDIR_SHIFT)) - \ + (UL(1) << (48 - PGDIR_SHIFT))) * 8) +#endif + #endif diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index 07c34087bd5e..df60b3978568 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -124,7 +124,11 @@ static inline void __uaccess_ttbr0_disable(void) ttbr = read_sysreg(ttbr1_el1); ttbr &= ~TTBR_ASID_MASK; /* reserved_ttbr0 placed before swapper_pg_dir */ +#ifdef CONFIG_ARM64_52BIT_VA + write_sysreg((ttbr & ~TTBR1_BADDR_4852_OFFSET) - RESERVED_TTBR0_SIZE, ttbr0_el1); +#else write_sysreg(ttbr - RESERVED_TTBR0_SIZE, ttbr0_el1); +#endif isb(); /* Set reserved ASID */ write_sysreg(ttbr, ttbr1_el1); diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 4471f570a295..f60081be9a1b 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -769,6 +769,7 @@ ENTRY(__enable_mmu) phys_to_ttbr x1, x1 phys_to_ttbr x2, x2 msr ttbr0_el1, x2 // load TTBR0 + offset_ttbr1 x1 msr ttbr1_el1, x1 // load TTBR1 isb msr sctlr_el1, x0 diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S index dd14ab8c9f72..fe36d85c60bd 100644 --- a/arch/arm64/kernel/hibernate-asm.S +++ b/arch/arm64/kernel/hibernate-asm.S @@ -40,6 +40,7 @@ tlbi vmalle1 dsb nsh phys_to_ttbr \tmp, \page_table + offset_ttbr1 \tmp msr ttbr1_el1, \tmp isb .endm diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 2c75b0b903ae..2db1c491d45d 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -182,6 +182,7 @@ ENDPROC(cpu_do_switch_mm) .macro __idmap_cpu_set_reserved_ttbr1, tmp1, tmp2 adrp \tmp1, empty_zero_page phys_to_ttbr \tmp2, \tmp1 + offset_ttbr1 \tmp2 msr ttbr1_el1, \tmp2 isb tlbi vmalle1 @@ -200,6 +201,7 @@ ENTRY(idmap_cpu_replace_ttbr1) __idmap_cpu_set_reserved_ttbr1 x1, x3 + offset_ttbr1 x0 msr ttbr1_el1, x0 isb @@ -254,6 +256,7 @@ ENTRY(idmap_kpti_install_ng_mappings) pte .req x16 mrs swapper_ttb, ttbr1_el1 + restore_ttbr1 swapper_ttb adr flag_ptr, __idmap_kpti_flag cbnz cpu, __idmap_kpti_secondary @@ -373,6 +376,7 @@ __idmap_kpti_secondary: cbnz w18, 1b /* All done, act like nothing happened */ + offset_ttbr1 swapper_ttb msr ttbr1_el1, swapper_ttb isb ret From patchwork Wed Dec 5 16:41:44 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Capper X-Patchwork-Id: 10714617 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CF07D109C for ; Wed, 5 Dec 2018 16:43:32 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BDCF12DE5C for ; Wed, 5 Dec 2018 16:43:32 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id BC0BE2DE7C; Wed, 5 Dec 2018 16:43:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org 
From: Steve Capper
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V4 5/6] arm64: mm: introduce 52-bit userspace support
Date: Wed, 5 Dec 2018 16:41:44 +0000
Message-Id: <20181205164145.24568-6-steve.capper@arm.com>
In-Reply-To: <20181205164145.24568-1-steve.capper@arm.com>
References: <20181205164145.24568-1-steve.capper@arm.com>

On arm64 there is optional support for a 52-bit virtual address space. To exploit this, one has to be running with a 64KB page size on hardware that supports it.

For an arm64 kernel supporting a 48-bit VA with a 64KB page size, some changes are needed to support a 52-bit userspace:
 * TCR_EL1.T0SZ needs to be 12 instead of 16,
 * TASK_SIZE needs to reflect the new size.

This patch implements the above when support for 52-bit VAs is detected at early boot time.

On arm64, userspace address translation is controlled by TTBR0_EL1. As well as userspace, TTBR0_EL1 controls:
 * the identity mapping,
 * EFI runtime code.
It is possible to run a kernel with an identity mapping that has a larger VA size than userspace (and for this case __cpu_set_tcr_t0sz() would set TCR_EL1.T0SZ as appropriate). However, when the conditions for 52-bit userspace are met; it is possible to keep TCR_EL1.T0SZ fixed at 12. Thus in this patch, the TCR_EL1.T0SZ size changing logic is disabled. Signed-off-by: Steve Capper --- Changed in V4, pgd_index logic removed as we offset ttbr1 instead --- arch/arm64/Kconfig | 4 ++++ arch/arm64/include/asm/assembler.h | 7 +++---- arch/arm64/include/asm/mmu_context.h | 3 +++ arch/arm64/include/asm/processor.h | 14 +++++++++----- arch/arm64/kernel/head.S | 13 +++++++++++++ arch/arm64/mm/fault.c | 2 +- arch/arm64/mm/mmu.c | 1 + arch/arm64/mm/proc.S | 10 +++++++++- 8 files changed, 43 insertions(+), 11 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 787d7850e064..eab02d24f5d1 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -709,6 +709,10 @@ config ARM64_PA_BITS_52 endchoice +config ARM64_52BIT_VA + def_bool y + depends on ARM64_VA_BITS_48 && ARM64_64K_PAGES + config ARM64_PA_BITS int default 48 if ARM64_PA_BITS_48 diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index e2fe378d2a63..243ec4f0c00f 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -342,11 +342,10 @@ alternative_endif .endm /* - * tcr_set_idmap_t0sz - update TCR.T0SZ so that we can load the ID map + * tcr_set_t0sz - update TCR.T0SZ so that we can load the ID map */ - .macro tcr_set_idmap_t0sz, valreg, tmpreg - ldr_l \tmpreg, idmap_t0sz - bfi \valreg, \tmpreg, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH + .macro tcr_set_t0sz, valreg, t0sz + bfi \valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH .endm /* diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index 1e58bf58c22b..b125fafc611b 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -72,6 +72,9 @@ extern u64 idmap_ptrs_per_pgd; static inline bool __cpu_uses_extended_idmap(void) { + if (IS_ENABLED(CONFIG_ARM64_52BIT_VA)) + return false; + return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS)); } diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index fe95fd8b065e..b363fc705be4 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -19,11 +19,12 @@ #ifndef __ASM_PROCESSOR_H #define __ASM_PROCESSOR_H -#define TASK_SIZE_64 (UL(1) << VA_BITS) - -#define KERNEL_DS UL(-1) -#define USER_DS (TASK_SIZE_64 - 1) - +#define KERNEL_DS UL(-1) +#ifdef CONFIG_ARM64_52BIT_VA +#define USER_DS ((UL(1) << 52) - 1) +#else +#define USER_DS ((UL(1) << VA_BITS) - 1) +#endif /* CONFIG_ARM64_52IT_VA */ #ifndef __ASSEMBLY__ #ifdef __KERNEL__ @@ -48,6 +49,9 @@ #define DEFAULT_MAP_WINDOW_64 (UL(1) << VA_BITS) +extern u64 vabits_user; +#define TASK_SIZE_64 (UL(1) << vabits_user) + #ifdef CONFIG_COMPAT #define TASK_SIZE_32 UL(0x100000000) #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? 
\ diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index f60081be9a1b..5bc776b8ee5e 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -318,6 +318,19 @@ __create_page_tables: adrp x0, idmap_pg_dir adrp x3, __idmap_text_start // __pa(__idmap_text_start) +#ifdef CONFIG_ARM64_52BIT_VA + mrs_s x6, SYS_ID_AA64MMFR2_EL1 + and x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT) + mov x5, #52 + cbnz x6, 1f +#endif + mov x5, #VA_BITS +1: + adr_l x6, vabits_user + str x5, [x6] + dmb sy + dc ivac, x6 // Invalidate potentially stale cache line + /* * VA_BITS may be too small to allow for an ID mapping to be created * that covers system RAM if that is located sufficiently high in the diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 7d9571f4ae3d..5fe6d2e40e9b 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -160,7 +160,7 @@ void show_pte(unsigned long addr) pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp = %p\n", mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K, - VA_BITS, mm->pgd); + mm == &init_mm ? VA_BITS : (int) vabits_user, mm->pgd); pgdp = pgd_offset(mm, addr); pgd = READ_ONCE(*pgdp); pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd)); diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 394b8d554def..f8fc393143ea 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -52,6 +52,7 @@ u64 idmap_t0sz = TCR_T0SZ(VA_BITS); u64 idmap_ptrs_per_pgd = PTRS_PER_PGD; +u64 vabits_user __ro_after_init; u64 kimage_voffset __ro_after_init; EXPORT_SYMBOL(kimage_voffset); diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 2db1c491d45d..0cf86b17714c 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -450,7 +450,15 @@ ENTRY(__cpu_setup) ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \ TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \ TCR_TBI0 | TCR_A1 - tcr_set_idmap_t0sz x10, x9 + +#ifdef CONFIG_ARM64_52BIT_VA + ldr_l x9, vabits_user + sub x9, xzr, x9 + add x9, x9, #64 +#else + ldr_l x9, idmap_t0sz +#endif + tcr_set_t0sz x10, x9 /* * Set the IPS bits in TCR_EL1. 
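[Editor's note] For reference, the T0SZ value programmed by the modified __cpu_setup above is simply 64 minus the number of user VA bits (the asm computes x9 = 64 - vabits_user). The snippet below is an illustrative sketch of that relationship, not kernel code, for the two cases the commit message mentions.

```c
/* Illustrative: TCR_EL1.T0SZ as derived from the user VA width. */
#include <stdio.h>

static unsigned int t0sz(unsigned int vabits_user)
{
	/* Mirrors the asm sequence: sub x9, xzr, x9; add x9, x9, #64. */
	return 64 - vabits_user;
}

int main(void)
{
	printf("48-bit user VA -> T0SZ = %u\n", t0sz(48));	/* 16 */
	printf("52-bit user VA -> T0SZ = %u\n", t0sz(52));	/* 12 */
	return 0;
}
```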
From patchwork Wed Dec 5 16:41:45 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10714619
From: Steve Capper
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V4 6/6] arm64: mm: Allow forcing all userspace addresses to 52-bit
Date: Wed, 5 Dec 2018 16:41:45 +0000
Message-Id: <20181205164145.24568-7-steve.capper@arm.com>
In-Reply-To: <20181205164145.24568-1-steve.capper@arm.com>
References: <20181205164145.24568-1-steve.capper@arm.com>
X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: catalin.marinas@arm.com, Steve Capper , will.deacon@arm.com, jcm@redhat.com, ard.biesheuvel@linaro.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP On arm64 52-bit VAs are provided to userspace when a hint is supplied to mmap. This helps maintain compatibility with software that expects at most 48-bit VAs to be returned. In order to help identify software that has 48-bit VA assumptions, this patch allows one to compile a kernel where 52-bit VAs are returned by default on HW that supports it. This feature is intended to be for development systems only. Signed-off-by: Steve Capper Acked-by: Catalin Marinas --- arch/arm64/Kconfig | 13 +++++++++++++ arch/arm64/include/asm/elf.h | 4 ++++ arch/arm64/include/asm/processor.h | 9 ++++++++- 3 files changed, 25 insertions(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index eab02d24f5d1..9f50dc8af110 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1165,6 +1165,19 @@ config ARM64_CNP at runtime, and does not affect PEs that do not implement this feature. +config ARM64_FORCE_52BIT + bool "Force 52-bit virtual addresses for userspace" + depends on ARM64_52BIT_VA && EXPERT + help + For systems with 52-bit userspace VAs enabled, the kernel will attempt + to maintain compatibility with older software by providing 48-bit VAs + unless a hint is supplied to mmap. + + This configuration option disables the 48-bit compatibility logic, and + forces all userspace addresses to be 52-bit on HW that supports it. One + should only enable this configuration option for stress testing userspace + memory management code. If unsure say N here. + endmenu config ARM64_SVE diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h index bc9bd9e77d9d..6adc1a90e7e6 100644 --- a/arch/arm64/include/asm/elf.h +++ b/arch/arm64/include/asm/elf.h @@ -117,7 +117,11 @@ * 64-bit, this is above 4GB to leave the entire 32-bit address * space open for things that want to use the area for 32-bit pointers. */ +#ifdef CONFIG_ARM64_FORCE_52BIT +#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3) +#else #define ELF_ET_DYN_BASE (2 * DEFAULT_MAP_WINDOW_64 / 3) +#endif /* CONFIG_ARM64_FORCE_52BIT */ #ifndef __ASSEMBLY__ diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index b363fc705be4..9abd91570b5b 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -65,8 +65,13 @@ extern u64 vabits_user; #define DEFAULT_MAP_WINDOW DEFAULT_MAP_WINDOW_64 #endif /* CONFIG_COMPAT */ -#define TASK_UNMAPPED_BASE (PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4)) +#ifdef CONFIG_ARM64_FORCE_52BIT +#define STACK_TOP_MAX TASK_SIZE_64 +#define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 4)) +#else #define STACK_TOP_MAX DEFAULT_MAP_WINDOW_64 +#define TASK_UNMAPPED_BASE (PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4)) +#endif /* CONFIG_ARM64_FORCE_52BIT */ #ifdef CONFIG_COMPAT #define AARCH32_VECTORS_BASE 0xffff0000 @@ -76,12 +81,14 @@ extern u64 vabits_user; #define STACK_TOP STACK_TOP_MAX #endif /* CONFIG_COMPAT */ +#ifndef CONFIG_ARM64_FORCE_52BIT #define arch_get_mmap_end(addr) ((addr > DEFAULT_MAP_WINDOW) ? TASK_SIZE :\ DEFAULT_MAP_WINDOW) #define arch_get_mmap_base(addr, base) ((addr > DEFAULT_MAP_WINDOW) ? 
\ base + TASK_SIZE - DEFAULT_MAP_WINDOW :\ base) +#endif /* CONFIG_ARM64_FORCE_52BIT */ extern phys_addr_t arm64_dma_phys_limit; #define ARCH_LOW_ADDRESS_LIMIT (arm64_dma_phys_limit - 1)
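[Editor's note] As a usage note for the series as a whole: with these patches applied, a process on 52-bit-capable hardware still receives sub-48-bit addresses by default and must pass a high hint to mmap to reach the extended range (unless ARM64_FORCE_52BIT is enabled). The program below is an illustrative sketch of that hint mechanism; the exact addresses returned depend on the kernel configuration and hardware.

```c
/* Illustrative: requesting a VA above the default 48-bit map window. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL * 1024 * 1024;

	/* No hint: the kernel allocates below the 48-bit DEFAULT_MAP_WINDOW. */
	void *low = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Hint above 2^48: on a 52-bit VA kernel and HW this may return a high VA. */
	void *hint = (void *)(1UL << 48);
	void *high = mmap(hint, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	printf("default mapping at %p\n", low);
	printf("hinted  mapping at %p\n", high);
	return 0;
}
```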