From patchwork Thu Sep 8 14:00:56 2016
X-Patchwork-Submitter: Kevin Brodsky
X-Patchwork-Id: 9321461
From: Kevin Brodsky <kevin.brodsky@arm.com>
To: 
linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 1/8] arm64: Refactor vDSO setup
Date: Thu, 8 Sep 2016 15:00:56 +0100
Message-Id: <20160908140103.7468-2-kevin.brodsky@arm.com>
In-Reply-To: <20160908140103.7468-1-kevin.brodsky@arm.com>
References: <20160908140103.7468-1-kevin.brodsky@arm.com>
Cc: Kevin Brodsky, Will Deacon, Dave Martin

Move the logic for setting up the vDSO mappings and pages into static
functions with a clear interface. This will allow the setup code to be
reused for the future compat vDSO.

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/kernel/vdso.c | 176 ++++++++++++++++++++++++++---------------------
 1 file changed, 98 insertions(+), 78 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 076312b17d4f..d74f9ac7e336 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -36,9 +36,11 @@
 #include
 #include
 
-extern char vdso_start, vdso_end;
-static unsigned long vdso_pages;
-static struct page **vdso_pagelist;
+struct vdso_mappings {
+	unsigned long num_pages;
+	struct page **pages;
+	struct vm_special_mapping data_mapping, code_mapping;
+};
 
 /*
  * The vDSO data page.
@@ -49,6 +51,93 @@ static union {
 } vdso_data_store __page_aligned_data;
 struct vdso_data *vdso_data = &vdso_data_store.data;
 
+static int __init setup_vdso_mappings(const char *name,
+				      const char *code_start,
+				      const char *code_end,
+				      struct vdso_mappings *mappings)
+{
+	unsigned long i, num_pages;
+	struct page **pages;
+
+	if (memcmp(code_start, "\177ELF", 4)) {
+		pr_err("%s is not a valid ELF object!\n", name);
+		return -EINVAL;
+	}
+
+	num_pages = (code_end - code_start) >> PAGE_SHIFT;
+	pr_info("%s: %ld pages (%ld code @ %p, %ld data @ %p)\n",
+		name, num_pages + 1, num_pages, code_start, 1L, vdso_data);
+
+	/* Allocate the vDSO code pages, plus a page for the data. */
+	pages = kcalloc(num_pages + 1, sizeof(struct page *), GFP_KERNEL);
+	if (pages == NULL)
+		return -ENOMEM;
+
+	/* Grab the vDSO data page. */
+	pages[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
+
+	/* Grab the vDSO code pages. */
+	for (i = 0; i < num_pages; i++)
+		pages[i + 1] = pfn_to_page(PHYS_PFN(__pa(code_start)) + i);
+
+	/* Populate the special mapping structures */
+	mappings->data_mapping = (struct vm_special_mapping) {
+		.name = "[vvar]",
+		.pages = &pages[0],
+	};
+
+	mappings->code_mapping = (struct vm_special_mapping) {
+		.name = "[vdso]",
+		.pages = &pages[1],
+	};
+
+	mappings->num_pages = num_pages;
+	mappings->pages = pages;
+	return 0;
+}
+
+static int setup_vdso_pages(const struct vdso_mappings *mappings)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
+	void *ret;
+
+	vdso_text_len = mappings->num_pages << PAGE_SHIFT;
+	/* Be sure to map the data page */
+	vdso_mapping_len = vdso_text_len + PAGE_SIZE;
+
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+	vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
+	if (IS_ERR_VALUE(vdso_base)) {
+		ret = ERR_PTR(vdso_base);
+		goto up_fail;
+	}
+	ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
+				       VM_READ|VM_MAYREAD,
+				       &mappings->data_mapping);
+	if (IS_ERR(ret))
+		goto up_fail;
+
+	vdso_base += PAGE_SIZE;
+	mm->context.vdso = (void *)vdso_base;
+	ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
+				       VM_READ|VM_EXEC|
+				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
+				       &mappings->code_mapping);
+	if (IS_ERR(ret))
+		goto up_fail;
+
+
+	up_write(&mm->mmap_sem);
+	return 0;
+
+up_fail:
+	mm->context.vdso = NULL;
+	up_write(&mm->mmap_sem);
+	return PTR_ERR(ret);
+}
+
 #ifdef CONFIG_COMPAT
 /*
  * Create and map the vectors page for AArch32 tasks.
@@ -110,90 +199,21 @@ int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
 }
 #endif /* CONFIG_COMPAT */
 
-static struct vm_special_mapping vdso_spec[2];
+extern char vdso_start, vdso_end;
+
+static struct vdso_mappings vdso_mappings;
 
 static int __init vdso_init(void)
 {
-	int i;
-
-	if (memcmp(&vdso_start, "\177ELF", 4)) {
-		pr_err("vDSO is not a valid ELF object!\n");
-		return -EINVAL;
-	}
-
-	vdso_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT;
-	pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n",
-		vdso_pages + 1, vdso_pages, &vdso_start, 1L, vdso_data);
-
-	/* Allocate the vDSO pagelist, plus a page for the data. */
-	vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *),
-				GFP_KERNEL);
-	if (vdso_pagelist == NULL)
-		return -ENOMEM;
-
-	/* Grab the vDSO data page. */
-	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
-
-	/* Grab the vDSO code pages. */
-	for (i = 0; i < vdso_pages; i++)
-		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
-
-	/* Populate the special mapping structures */
-	vdso_spec[0] = (struct vm_special_mapping) {
-		.name = "[vvar]",
-		.pages = vdso_pagelist,
-	};
-
-	vdso_spec[1] = (struct vm_special_mapping) {
-		.name = "[vdso]",
-		.pages = &vdso_pagelist[1],
-	};
-
-	return 0;
+	return setup_vdso_mappings("vdso", &vdso_start, &vdso_end,
+				   &vdso_mappings);
 }
 arch_initcall(vdso_init);
 
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
-	struct mm_struct *mm = current->mm;
-	unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
-	void *ret;
-
-	vdso_text_len = vdso_pages << PAGE_SHIFT;
-	/* Be sure to map the data page */
-	vdso_mapping_len = vdso_text_len + PAGE_SIZE;
-
-	if (down_write_killable(&mm->mmap_sem))
-		return -EINTR;
-	vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
-	if (IS_ERR_VALUE(vdso_base)) {
-		ret = ERR_PTR(vdso_base);
-		goto up_fail;
-	}
-	ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
-				       VM_READ|VM_MAYREAD,
-				       &vdso_spec[0]);
-	if (IS_ERR(ret))
-		goto up_fail;
-
-	vdso_base += PAGE_SIZE;
-	mm->context.vdso = (void *)vdso_base;
-	ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
-				       VM_READ|VM_EXEC|
-				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
-				       &vdso_spec[1]);
-	if (IS_ERR(ret))
-		goto up_fail;
-
-
-	up_write(&mm->mmap_sem);
-	return 0;
-
-up_fail:
-	mm->context.vdso = NULL;
-	up_write(&mm->mmap_sem);
-	return PTR_ERR(ret);
+	return setup_vdso_pages(&vdso_mappings);
 }
 
 /*