From patchwork Tue Dec 6 16:03:44 2016
X-Patchwork-Submitter: Kevin Brodsky
X-Patchwork-Id: 9462873
From: Kevin Brodsky
To: linux-arm-kernel@lists.infradead.org
Cc: Jisheng Zhang, Catalin Marinas, Kevin Brodsky, Nathan Lynch, Will Deacon,
 Christopher Covington, Dmitry Safonov
Subject: [RFC PATCH v3 02/11] arm64: compat: Split the sigreturn trampolines and kuser helpers
Date: Tue, 6 Dec 2016 16:03:44 +0000
Message-Id: <20161206160353.14581-3-kevin.brodsky@arm.com>
X-Mailer: git-send-email 2.10.2
In-Reply-To: <20161206160353.14581-1-kevin.brodsky@arm.com>
References: <20161206160353.14581-1-kevin.brodsky@arm.com>

AArch32 processes currently have a special [vectors] page installed,
containing the sigreturn trampolines and the kuser helpers at the fixed
address mandated by the kuser helpers ABI.

Having both in the same page is becoming problematic, because:

* It makes it impossible to disable the kuser helpers (since the
  sigreturn trampolines cannot be removed), which is possible on arm.

* A future 32-bit vDSO would provide the sigreturn trampolines itself,
  making those in [vectors] redundant.
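For reference, the kuser helpers ABI (Documentation/arm/kernel_user_helpers.txt)
exposes its entry points at fixed addresses just below 0xffff1000; user space
branches to those addresses directly, which is why the page providing them can
never move. A minimal sketch of such a call from AArch32 user space (an
illustration only, assuming an AArch32 binary running on a kernel that provides
the helpers):

  /*
   * Illustration: call __kuser_get_tls at its ABI-mandated address,
   * following the usage pattern documented in
   * Documentation/arm/kernel_user_helpers.txt. The fixed 0xffff0fe0
   * entry point is what ties the helpers to the top of the compat
   * address space.
   */
  #include <stdio.h>

  typedef void *(__kuser_get_tls_t)(void);
  #define __kuser_get_tls (*(__kuser_get_tls_t *)0xffff0fe0)

  int main(void)
  {
          printf("TLS pointer: %p\n", __kuser_get_tls());
          return 0;
  }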
This patch addresses the problem by moving the sigreturn trampolines to
a separate [sigreturn] page, in a similar fashion to [sigpage] on arm.

[vectors] has always been a misnomer on arm64/compat, as there are no
AArch32 vectors there. Now that only the kuser helpers are left in it,
we can rename it to [kuserhelpers].

mm->context.vdso used to point to the [vectors] page, which is
unnecessary (as its address is fixed). It now points to the [sigreturn]
page (whose address is randomized like that of a vDSO).

Finally, aarch32_setup_vectors_page() has been renamed to the more
generic aarch32_setup_additional_pages().

Cc: Will Deacon
Cc: Catalin Marinas
Cc: Nathan Lynch
Cc: Christopher Covington
Cc: Dmitry Safonov
Cc: Jisheng Zhang
Signed-off-by: Kevin Brodsky
---
Note: an illustrative sketch of the resulting compat mappings is
appended after the diff.

 arch/arm64/include/asm/elf.h       |   6 +-
 arch/arm64/include/asm/processor.h |   4 +-
 arch/arm64/include/asm/signal32.h  |   2 -
 arch/arm64/kernel/signal32.c       |   5 +-
 arch/arm64/kernel/vdso.c           | 135 ++++++++++++++++++++++++++-----------
 5 files changed, 104 insertions(+), 48 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index a55384f4a5d7..7da9452596ad 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -185,10 +185,10 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
 #define compat_start_thread		compat_start_thread
 #define COMPAT_SET_PERSONALITY(ex)	set_thread_flag(TIF_32BIT);
 #define COMPAT_ARCH_DLINFO
-extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
-				      int uses_interp);
+extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
+					  int uses_interp);
 #define compat_arch_setup_additional_pages \
-					aarch32_setup_vectors_page
+					aarch32_setup_additional_pages
 
 #endif /* CONFIG_COMPAT */
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 60e34824e18c..b976060b1113 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -39,9 +39,9 @@
 #define STACK_TOP_MAX		TASK_SIZE_64
 
 #ifdef CONFIG_COMPAT
-#define AARCH32_VECTORS_BASE	0xffff0000
+#define AARCH32_KUSER_HELPERS_BASE	0xffff0000
 #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
-				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
+				AARCH32_KUSER_HELPERS_BASE : STACK_TOP_MAX)
 #else
 #define STACK_TOP		STACK_TOP_MAX
 #endif /* CONFIG_COMPAT */
diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index 81abea0b7650..58e288aaf0ba 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -20,8 +20,6 @@
 #ifdef CONFIG_COMPAT
 #include <linux/compat.h>
 
-#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
-
 int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
 		       struct pt_regs *regs);
 int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index b7063de792f7..49396a2b6e11 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -484,14 +484,13 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 		retcode = ptr_to_compat(ka->sa.sa_restorer);
 	} else {
 		/* Set up sigreturn pointer */
+		void *sigreturn_base = current->mm->context.vdso;
 		unsigned int idx = thumb << 1;
 
 		if (ka->sa.sa_flags & SA_SIGINFO)
 			idx += 3;
 
-		retcode = AARCH32_VECTORS_BASE +
-			  AARCH32_KERN_SIGRET_CODE_OFFSET +
-			  (idx << 2) + thumb;
+		retcode = ptr_to_compat(sigreturn_base) + (idx << 2) + thumb;
 	}
 
 	regs->regs[0] = usig;
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index a2c2478e7d78..6208b7ba4593 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -1,5 +1,7 @@
 /*
- * VDSO implementation for AArch64 and vector page setup for AArch32.
+ * Additional userspace pages setup for AArch64 and AArch32.
+ * - AArch64: vDSO pages setup, vDSO data page update.
+ * - AArch32: sigreturn and kuser helpers pages setup.
  *
  * Copyright (C) 2012 ARM Limited
  *
@@ -50,64 +52,121 @@ static union {
 struct vdso_data *vdso_data = &vdso_data_store.data;
 
 #ifdef CONFIG_COMPAT
-/*
- * Create and map the vectors page for AArch32 tasks.
- */
-static struct page *vectors_page[1] __ro_after_init;
-static int __init alloc_vectors_page(void)
-{
-	extern char __kuser_helper_start[], __kuser_helper_end[];
-	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
+/* sigreturn trampolines page */
+static struct page *sigreturn_page __ro_after_init;
+static const struct vm_special_mapping sigreturn_spec = {
+	.name = "[sigreturn]",
+	.pages = &sigreturn_page,
+};
 
-	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
-	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
-	unsigned long vpage;
+static int __init aarch32_sigreturn_init(void)
+{
+	extern char __aarch32_sigret_code_start, __aarch32_sigret_code_end;
 
-	vpage = get_zeroed_page(GFP_ATOMIC);
+	size_t sigret_sz =
+		&__aarch32_sigret_code_end - &__aarch32_sigret_code_start;
+	struct page *page;
+	unsigned long page_addr;
 
-	if (!vpage)
+	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!page)
 		return -ENOMEM;
+	page_addr = (unsigned long)page_address(page);
 
-	/* kuser helpers */
-	memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
-	       kuser_sz);
-
-	/* sigreturn code */
-	memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
-	       __aarch32_sigret_code_start, sigret_sz);
+	memcpy((void *)page_addr, &__aarch32_sigret_code_start, sigret_sz);
 
-	flush_icache_range(vpage, vpage + PAGE_SIZE);
-	vectors_page[0] = virt_to_page(vpage);
+	flush_icache_range(page_addr, page_addr + PAGE_SIZE);
+	sigreturn_page = page;
 	return 0;
 }
-arch_initcall(alloc_vectors_page);
+arch_initcall(aarch32_sigreturn_init);
 
-int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
+static int sigreturn_setup(struct mm_struct *mm)
 {
-	struct mm_struct *mm = current->mm;
-	unsigned long addr = AARCH32_VECTORS_BASE;
-	static const struct vm_special_mapping spec = {
-		.name	= "[vectors]",
-		.pages	= vectors_page,
+	unsigned long addr;
+	void *ret;
+
+	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
+	if (IS_ERR_VALUE(addr)) {
+		ret = ERR_PTR(addr);
+		goto out;
+	}
+
+	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
+				       VM_READ|VM_EXEC|
+				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
+				       &sigreturn_spec);
+	if (IS_ERR(ret))
+		goto out;
+
+	mm->context.vdso = (void *)addr;
+
+out:
+	return PTR_ERR_OR_ZERO(ret);
+}
+
+/* kuser helpers page */
+static struct page *kuser_helpers_page __ro_after_init;
+static const struct vm_special_mapping kuser_helpers_spec = {
+	.name = "[kuserhelpers]",
+	.pages = &kuser_helpers_page,
+};
+
+static int __init aarch32_kuser_helpers_init(void)
+{
+	extern char __kuser_helper_start, __kuser_helper_end;
 
-	};
+	size_t kuser_sz = &__kuser_helper_end - &__kuser_helper_start;
+	struct page *page;
+	unsigned long page_addr;
+
+	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!page)
+		return -ENOMEM;
+	page_addr = (unsigned long)page_address(page);
+
+	memcpy((void *)(page_addr + 0x1000 - kuser_sz), &__kuser_helper_start,
+	       kuser_sz);
+
+	flush_icache_range(page_addr, page_addr + PAGE_SIZE);
+
+	kuser_helpers_page = page;
+	return 0;
+}
+arch_initcall(aarch32_kuser_helpers_init);
+
+static int kuser_helpers_setup(struct mm_struct *mm)
+{
 	void *ret;
 
+	/* Map the kuser helpers at the ABI-defined high address */
+	ret = _install_special_mapping(mm, AARCH32_KUSER_HELPERS_BASE, PAGE_SIZE,
+				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
+				       &kuser_helpers_spec);
+	return PTR_ERR_OR_ZERO(ret);
+}
+
+int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	int ret;
+
 	if (down_write_killable(&mm->mmap_sem))
 		return -EINTR;
-	current->mm->context.vdso = (void *)addr;
 
-	/* Map vectors page at the high address. */
-	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
-				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
-				       &spec);
+	ret = sigreturn_setup(mm);
+	if (ret)
+		goto out;
 
-	up_write(&mm->mmap_sem);
+	ret = kuser_helpers_setup(mm);
 
-	return PTR_ERR_OR_ZERO(ret);
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
 }
+
 #endif /* CONFIG_COMPAT */
 
 static struct vm_special_mapping vdso_spec[2] __ro_after_init = {
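
As a quick way to observe the result (illustration only, not part of the
patch): special mappings installed with a named vm_special_mapping show up
under that name in /proc/<pid>/maps, so a compat task should list a
randomized [sigreturn] entry alongside [kuserhelpers] at the fixed
0xffff0000 base. A minimal sketch, assuming it is built as an AArch32
binary and run on a kernel with this series applied:

  /*
   * Illustration: print the compat special mappings introduced by this
   * patch, as reported by /proc/self/maps.
   */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char line[256];
          FILE *maps = fopen("/proc/self/maps", "r");

          if (!maps)
                  return 1;
          while (fgets(line, sizeof(line), maps)) {
                  /* Keep only the [sigreturn] and [kuserhelpers] entries */
                  if (strstr(line, "[sigreturn]") ||
                      strstr(line, "[kuserhelpers]"))
                          fputs(line, stdout);
          }
          fclose(maps);
          return 0;
  }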