From patchwork Tue Apr 28 16:49:18 2020
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 11515105
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org, will@kernel.org
Subject: [PATCHv2 1/4] arm64: vdso: remove aarch32_vdso_pages[]
Date: Tue, 28 Apr 2020 17:49:18 +0100
Message-Id: <20200428164921.41641-2-mark.rutland@arm.com>
In-Reply-To: <20200428164921.41641-1-mark.rutland@arm.com>
References: <20200428164921.41641-1-mark.rutland@arm.com>
Cc: mark.rutland@arm.com, catalin.marinas@arm.com, vincenzo.frascino@arm.com

The aarch32_vdso_pages[] array is unnecessarily confusing. We only ever
use the C_VECTORS and C_SIGPAGE slots, and the other slots are unused
despite having corresponding mappings (sharing pages with the AArch64
vDSO).

Let's make this clearer by using separate variables for the vectors
page and the sigreturn page. A subsequent patch will clean up the C_*
naming and the conflation of pages with mappings.

Note that since both the vectors page and the sig page are single
pages, and each mapping is a single page long, their pages arrays do
not need to be NULL-terminated (and this was not the case with the
existing code for the sig page, as it was the last entry in the
aarch32_vdso_pages array).

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Vincenzo Frascino
Cc: Will Deacon
---
 arch/arm64/kernel/vdso.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 033a48f30dbb..263bc6084c71 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -206,11 +206,16 @@ static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
 #define C_SIGPAGE 1
 #define C_PAGES (C_SIGPAGE + 1)
 #endif /* CONFIG_COMPAT_VDSO */
-static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
+
+static struct page *aarch32_vectors_page __ro_after_init;
+#ifndef CONFIG_COMPAT_VDSO
+static struct page *aarch32_sig_page __ro_after_init;
+#endif
+
 static struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
         {
                 .name = "[vectors]", /* ABI */
-                .pages = &aarch32_vdso_pages[C_VECTORS],
+                .pages = &aarch32_vectors_page,
         },
 #ifdef CONFIG_COMPAT_VDSO
         {
@@ -223,7 +228,7 @@ static struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
 #else
         {
                 .name = "[sigpage]", /* ABI */
-                .pages = &aarch32_vdso_pages[C_SIGPAGE],
+                .pages = &aarch32_sig_page,
         },
 #endif /* CONFIG_COMPAT_VDSO */
 };
@@ -243,8 +248,8 @@ static int aarch32_alloc_kuser_vdso_page(void)
         memcpy((void *)(vdso_page + 0x1000 - kuser_sz), __kuser_helper_start,
                kuser_sz);

-        aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_page);
-        flush_dcache_page(aarch32_vdso_pages[C_VECTORS]);
+        aarch32_vectors_page = virt_to_page(vdso_page);
+        flush_dcache_page(aarch32_vectors_page);

         return 0;
 }
@@ -275,8 +280,8 @@ static int __aarch32_alloc_vdso_pages(void)
                 return -ENOMEM;

         memcpy((void *)sigpage, __aarch32_sigret_code_start, sigret_sz);
-        aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(sigpage);
-        flush_dcache_page(aarch32_vdso_pages[C_SIGPAGE]);
+        aarch32_sig_page = virt_to_page(sigpage);
+        flush_dcache_page(aarch32_sig_page);

         ret = aarch32_alloc_kuser_vdso_page();
         if (ret)
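As an aside on the single-page point above: the .pages array of a
vm_special_mapping is only walked for as many pages as the mapping
covers, so for a one-page mapping a lone struct page * can stand in for
the whole array. The sketch below models that shape in ordinary
userspace C; the struct definitions and the pfn value are illustrative
stand-ins, not the kernel's own definitions.

#include <stdio.h>

/* Stand-ins for the kernel types; only the shape matters here. */
struct page {
        unsigned long pfn;
};

struct vm_special_mapping {
        const char *name;
        struct page **pages;    /* one entry per mapped page */
};

static struct page vectors_backing = { .pfn = 0x1234 };

/* A single page pointer doubles as a one-element 'pages' array. */
static struct page *aarch32_vectors_page = &vectors_backing;

static const struct vm_special_mapping vectors_map = {
        .name  = "[vectors]",
        .pages = &aarch32_vectors_page,
};

int main(void)
{
        /* A one-page mapping only ever dereferences pages[0]. */
        printf("%s -> pfn 0x%lx\n", vectors_map.name,
               vectors_map.pages[0]->pfn);
        return 0;
}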
From patchwork Tue Apr 28 16:49:19 2020
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 11515101
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org, will@kernel.org
Subject: [PATCHv2 2/4] arm64: vdso: simplify arch_vdso_type ifdeffery
Date: Tue, 28 Apr 2020 17:49:19 +0100
Message-Id: <20200428164921.41641-3-mark.rutland@arm.com>
In-Reply-To: <20200428164921.41641-1-mark.rutland@arm.com>
References: <20200428164921.41641-1-mark.rutland@arm.com>
Cc: mark.rutland@arm.com, catalin.marinas@arm.com, vincenzo.frascino@arm.com

Currently we have some ifdeffery to determine the number of elements in
enum arch_vdso_type as VDSO_TYPES, rather than the usual pattern of
having the enum define this:

| enum foo_type {
|         FOO_TYPE_A,
|         FOO_TYPE_B,
| #ifdef CONFIG_C
|         FOO_TYPE_C,
| #endif
|         NR_FOO_TYPES
| }

... however, given we only use this number to size the vdso_lookup[]
array, this is redundant anyway, as the compiler can automatically size
the array to fit all defined elements. So let's remove VDSO_TYPES to
simplify the code.

At the same time, let's use designated initializers for the array
elements so that these are guaranteed to be at the expected indices,
regardless of how we modify the structure. For clarity, the redundant
explicit initialization of the enum elements is dropped.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Vincenzo Frascino
Cc: Will Deacon
---
 arch/arm64/kernel/vdso.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 263bc6084c71..b4b01ac30112 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -35,16 +35,11 @@ extern char vdso32_start[], vdso32_end[];

 /* vdso_lookup arch_index */
 enum arch_vdso_type {
-        ARM64_VDSO = 0,
+        ARM64_VDSO,
 #ifdef CONFIG_COMPAT_VDSO
-        ARM64_VDSO32 = 1,
+        ARM64_VDSO32,
 #endif /* CONFIG_COMPAT_VDSO */
 };
-#ifdef CONFIG_COMPAT_VDSO
-#define VDSO_TYPES (ARM64_VDSO32 + 1)
-#else
-#define VDSO_TYPES (ARM64_VDSO + 1)
-#endif /* CONFIG_COMPAT_VDSO */

 struct __vdso_abi {
         const char *name;
@@ -57,14 +52,14 @@ struct __vdso_abi {
         struct vm_special_mapping *cm;
 };

-static struct __vdso_abi vdso_lookup[VDSO_TYPES] __ro_after_init = {
-        {
+static struct __vdso_abi vdso_lookup[] __ro_after_init = {
+        [ARM64_VDSO] = {
                 .name = "vdso",
                 .vdso_code_start = vdso_start,
                 .vdso_code_end = vdso_end,
         },
 #ifdef CONFIG_COMPAT_VDSO
-        {
+        [ARM64_VDSO32] = {
                 .name = "vdso32",
                 .vdso_code_start = vdso32_start,
                 .vdso_code_end = vdso32_end,
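A compile-and-run sketch of the sizing argument above, with generic
names rather than the kernel's: when every element of the array is
given a designated initializer keyed by the enum, the compiler sizes
the array from the highest index used, so no separate NR_*/VDSO_TYPES
count is needed just to declare it.

#include <stdio.h>

#define CONFIG_C        /* comment out to drop the optional element */

enum foo_type {
        FOO_TYPE_A,
        FOO_TYPE_B,
#ifdef CONFIG_C
        FOO_TYPE_C,
#endif
};

struct foo_info {
        const char *name;
};

/*
 * No explicit length: the array is sized to cover the largest
 * designated index, and each element stays at its enum slot in
 * either configuration.
 */
static const struct foo_info foo_info[] = {
        [FOO_TYPE_A] = { .name = "a" },
        [FOO_TYPE_B] = { .name = "b" },
#ifdef CONFIG_C
        [FOO_TYPE_C] = { .name = "c" },
#endif
};

int main(void)
{
        printf("%zu elements\n", sizeof(foo_info) / sizeof(foo_info[0]));
        printf("FOO_TYPE_B -> %s\n", foo_info[FOO_TYPE_B].name);
        return 0;
}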
From patchwork Tue Apr 28 16:49:20 2020
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 11515109
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org, will@kernel.org
Subject: [PATCHv2 3/4] arm64: vdso: use consistent 'abi' nomenclature
Date: Tue, 28 Apr 2020 17:49:20 +0100
Message-Id: <20200428164921.41641-4-mark.rutland@arm.com>
In-Reply-To: <20200428164921.41641-1-mark.rutland@arm.com>
References: <20200428164921.41641-1-mark.rutland@arm.com>
Cc: mark.rutland@arm.com, catalin.marinas@arm.com, vincenzo.frascino@arm.com

The current code doesn't use a consistent naming scheme for structures,
enums, or variables, making it harder than necessary to determine the
relationship between these.

Let's make this easier by consistently using 'vdso_abi' nomenclature.
The 'vdso_lookup' array is renamed to 'vdso_info' to describe what it
contains rather than how it is consumed.

There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Vincenzo Frascino
Cc: Will Deacon
---
 arch/arm64/kernel/vdso.c | 69 ++++++++++++++++++++++++------------------------
 1 file changed, 34 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index b4b01ac30112..89ef61686362 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -33,15 +33,14 @@ extern char vdso_start[], vdso_end[];
 extern char vdso32_start[], vdso32_end[];
 #endif /* CONFIG_COMPAT_VDSO */

-/* vdso_lookup arch_index */
-enum arch_vdso_type {
-        ARM64_VDSO,
+enum vdso_abi {
+        VDSO_ABI_AA64,
 #ifdef CONFIG_COMPAT_VDSO
-        ARM64_VDSO32,
+        VDSO_ABI_AA32,
 #endif /* CONFIG_COMPAT_VDSO */
 };

-struct __vdso_abi {
+struct vdso_abi_info {
         const char *name;
         const char *vdso_code_start;
         const char *vdso_code_end;
@@ -52,14 +51,14 @@ struct __vdso_abi {
         struct vm_special_mapping *cm;
 };

-static struct __vdso_abi vdso_lookup[] __ro_after_init = {
-        [ARM64_VDSO] = {
+static struct vdso_abi_info vdso_info[] __ro_after_init = {
+        [VDSO_ABI_AA64] = {
                 .name = "vdso",
                 .vdso_code_start = vdso_start,
                 .vdso_code_end = vdso_end,
         },
 #ifdef CONFIG_COMPAT_VDSO
-        [ARM64_VDSO32] = {
+        [VDSO_ABI_AA32] = {
                 .name = "vdso32",
                 .vdso_code_start = vdso32_start,
                 .vdso_code_end = vdso32_end,
@@ -76,13 +75,13 @@ static union {
 } vdso_data_store __page_aligned_data;
 struct vdso_data *vdso_data = vdso_data_store.data;

-static int __vdso_remap(enum arch_vdso_type arch_index,
+static int __vdso_remap(enum vdso_abi abi,
                         const struct vm_special_mapping *sm,
                         struct vm_area_struct *new_vma)
 {
         unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
-        unsigned long vdso_size = vdso_lookup[arch_index].vdso_code_end -
-                                  vdso_lookup[arch_index].vdso_code_start;
+        unsigned long vdso_size = vdso_info[abi].vdso_code_end -
+                                  vdso_info[abi].vdso_code_start;

         if (vdso_size != new_size)
                 return -EINVAL;
@@ -92,24 +91,24 @@ static int __vdso_remap(enum arch_vdso_type arch_index,
         return 0;
 }

-static int __vdso_init(enum arch_vdso_type arch_index)
+static int __vdso_init(enum vdso_abi abi)
 {
         int i;
         struct page **vdso_pagelist;
         unsigned long pfn;

-        if (memcmp(vdso_lookup[arch_index].vdso_code_start, "\177ELF", 4)) {
+        if (memcmp(vdso_info[abi].vdso_code_start, "\177ELF", 4)) {
                 pr_err("vDSO is not a valid ELF object!\n");
                 return -EINVAL;
         }

-        vdso_lookup[arch_index].vdso_pages = (
-                vdso_lookup[arch_index].vdso_code_end -
-                vdso_lookup[arch_index].vdso_code_start) >>
+        vdso_info[abi].vdso_pages = (
+                vdso_info[abi].vdso_code_end -
+                vdso_info[abi].vdso_code_start) >>
                 PAGE_SHIFT;

         /* Allocate the vDSO pagelist, plus a page for the data. */
-        vdso_pagelist = kcalloc(vdso_lookup[arch_index].vdso_pages + 1,
+        vdso_pagelist = kcalloc(vdso_info[abi].vdso_pages + 1,
                                 sizeof(struct page *),
                                 GFP_KERNEL);
         if (vdso_pagelist == NULL)
@@ -120,18 +119,18 @@ static int __vdso_init(enum arch_vdso_type arch_index)


         /* Grab the vDSO code pages. */
-        pfn = sym_to_pfn(vdso_lookup[arch_index].vdso_code_start);
+        pfn = sym_to_pfn(vdso_info[abi].vdso_code_start);

-        for (i = 0; i < vdso_lookup[arch_index].vdso_pages; i++)
+        for (i = 0; i < vdso_info[abi].vdso_pages; i++)
                 vdso_pagelist[i + 1] = pfn_to_page(pfn + i);

-        vdso_lookup[arch_index].dm->pages = &vdso_pagelist[0];
-        vdso_lookup[arch_index].cm->pages = &vdso_pagelist[1];
+        vdso_info[abi].dm->pages = &vdso_pagelist[0];
+        vdso_info[abi].cm->pages = &vdso_pagelist[1];

         return 0;
 }

-static int __setup_additional_pages(enum arch_vdso_type arch_index,
+static int __setup_additional_pages(enum vdso_abi abi,
                                     struct mm_struct *mm,
                                     struct linux_binprm *bprm,
                                     int uses_interp)
@@ -139,7 +138,7 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
         unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
         void *ret;

-        vdso_text_len = vdso_lookup[arch_index].vdso_pages << PAGE_SHIFT;
+        vdso_text_len = vdso_info[abi].vdso_pages << PAGE_SHIFT;
         /* Be sure to map the data page */
         vdso_mapping_len = vdso_text_len + PAGE_SIZE;

@@ -151,7 +150,7 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,

         ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
                                        VM_READ|VM_MAYREAD,
-                                       vdso_lookup[arch_index].dm);
+                                       vdso_info[abi].dm);
         if (IS_ERR(ret))
                 goto up_fail;

@@ -160,7 +159,7 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
         ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
                                        VM_READ|VM_EXEC|
                                        VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
-                                       vdso_lookup[arch_index].cm);
+                                       vdso_info[abi].cm);
         if (IS_ERR(ret))
                 goto up_fail;

@@ -179,7 +178,7 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
 static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
                 struct vm_area_struct *new_vma)
 {
-        return __vdso_remap(ARM64_VDSO32, sm, new_vma);
+        return __vdso_remap(VDSO_ABI_AA32, sm, new_vma);
 }
 #endif /* CONFIG_COMPAT_VDSO */

@@ -253,10 +252,10 @@ static int __aarch32_alloc_vdso_pages(void)
 {
         int ret;

-        vdso_lookup[ARM64_VDSO32].dm = &aarch32_vdso_spec[C_VVAR];
-        vdso_lookup[ARM64_VDSO32].cm = &aarch32_vdso_spec[C_VDSO];
+        vdso_info[VDSO_ABI_AA32].dm = &aarch32_vdso_spec[C_VVAR];
+        vdso_info[VDSO_ABI_AA32].cm = &aarch32_vdso_spec[C_VDSO];

-        ret = __vdso_init(ARM64_VDSO32);
+        ret = __vdso_init(VDSO_ABI_AA32);
         if (ret)
                 return ret;

@@ -354,7 +353,7 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
                 goto out;

 #ifdef CONFIG_COMPAT_VDSO
-        ret = __setup_additional_pages(ARM64_VDSO32,
+        ret = __setup_additional_pages(VDSO_ABI_AA32,
                                        mm,
                                        bprm,
                                        uses_interp);
@@ -371,7 +370,7 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 static int vdso_mremap(const struct vm_special_mapping *sm,
                 struct vm_area_struct *new_vma)
 {
-        return __vdso_remap(ARM64_VDSO, sm, new_vma);
+        return __vdso_remap(VDSO_ABI_AA64, sm, new_vma);
 }

 /*
@@ -394,10 +393,10 @@ static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {

 static int __init vdso_init(void)
 {
-        vdso_lookup[ARM64_VDSO].dm = &vdso_spec[A_VVAR];
-        vdso_lookup[ARM64_VDSO].cm = &vdso_spec[A_VDSO];
+        vdso_info[VDSO_ABI_AA64].dm = &vdso_spec[A_VVAR];
+        vdso_info[VDSO_ABI_AA64].cm = &vdso_spec[A_VDSO];

-        return __vdso_init(ARM64_VDSO);
+        return __vdso_init(VDSO_ABI_AA64);
 }
 arch_initcall(vdso_init);

@@ -410,7 +409,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
         if (down_write_killable(&mm->mmap_sem))
                 return -EINTR;

-        ret = __setup_additional_pages(ARM64_VDSO,
+        ret = __setup_additional_pages(VDSO_ABI_AA64,
                                        mm,
                                        bprm,
                                        uses_interp);
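The renamed vdso_info[] above still manages the same two special
mappings per ABI: __vdso_init() builds one pagelist with the data page
at slot 0 and the code pages after it, then points the data mapping at
&pagelist[0] and the code mapping at &pagelist[1]. A rough userspace
model of that layout is sketched below; the types, page-frame numbers
and page counts are invented stand-ins, not the kernel's.

#include <stdio.h>
#include <stdlib.h>

/* Userspace stand-ins; only the indexing scheme matters here. */
struct page { unsigned long pfn; };

struct vm_special_mapping {
        const char *name;
        struct page **pages;
};

enum vdso_abi { VDSO_ABI_AA64, VDSO_ABI_AA32 };

struct vdso_abi_info {
        const char *name;
        unsigned long vdso_pages;       /* code pages only */
        struct vm_special_mapping *dm;  /* data mapping */
        struct vm_special_mapping *cm;  /* code mapping */
};

static struct vm_special_mapping aa64_vvar = { .name = "[vvar]" };
static struct vm_special_mapping aa64_vdso = { .name = "[vdso]" };

static struct page data_page = { .pfn = 100 };  /* hypothetical pfns */
static struct page code_pages[] = { { .pfn = 200 }, { .pfn = 201 } };

static struct vdso_abi_info vdso_info[] = {
        [VDSO_ABI_AA64] = {
                .name = "vdso",
                .vdso_pages = 2,
                .dm = &aa64_vvar,
                .cm = &aa64_vdso,
        },
};

static int vdso_init_abi(enum vdso_abi abi)
{
        struct page **pagelist;
        unsigned long i;

        /* One extra slot up front for the data page. */
        pagelist = calloc(vdso_info[abi].vdso_pages + 1, sizeof(*pagelist));
        if (!pagelist)
                return -1;

        pagelist[0] = &data_page;
        for (i = 0; i < vdso_info[abi].vdso_pages; i++)
                pagelist[i + 1] = &code_pages[i];

        /* Both mappings share the pagelist, at different offsets. */
        vdso_info[abi].dm->pages = &pagelist[0];
        vdso_info[abi].cm->pages = &pagelist[1];
        return 0;
}

int main(void)
{
        if (vdso_init_abi(VDSO_ABI_AA64))
                return 1;
        printf("%s -> pfn %lu, %s -> pfn %lu\n",
               aa64_vvar.name, aa64_vvar.pages[0]->pfn,
               aa64_vdso.name, aa64_vdso.pages[0]->pfn);
        return 0;
}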
From patchwork Tue Apr 28 16:49:21 2020
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 11515107
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org, will@kernel.org
Subject: [PATCHv2 4/4] arm64: vdso: use consistent 'map' nomenclature
Date: Tue, 28 Apr 2020 17:49:21 +0100
Message-Id: <20200428164921.41641-5-mark.rutland@arm.com>
In-Reply-To: <20200428164921.41641-1-mark.rutland@arm.com>
References: <20200428164921.41641-1-mark.rutland@arm.com>
Cc: mark.rutland@arm.com, catalin.marinas@arm.com, vincenzo.frascino@arm.com

The current code doesn't use a consistent naming scheme for structures,
enums, or variables, making it harder than necessary to determine the
relationship between these.

Let's make this easier by consistently using 'map' nomenclature for
mappings created in userspace, minimizing redundant comments, and using
designated array initializers to tie indices to their respective
elements.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Vincenzo Frascino
Cc: Will Deacon
---
 arch/arm64/kernel/vdso.c | 64 ++++++++++++++++++++----------------------
 1 file changed, 26 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 89ef61686362..f3eea5e20a41 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -182,45 +182,36 @@ static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
 }
 #endif /* CONFIG_COMPAT_VDSO */

-/*
- * aarch32_vdso_pages:
- * 0 - kuser helpers
- * 1 - sigreturn code
- * or (CONFIG_COMPAT_VDSO):
- * 0 - kuser helpers
- * 1 - vdso data
- * 2 - vdso code
- */
-#define C_VECTORS 0
+enum aarch32_map {
+        AA32_MAP_VECTORS, /* kuser helpers */
 #ifdef CONFIG_COMPAT_VDSO
-#define C_VVAR 1
-#define C_VDSO 2
-#define C_PAGES (C_VDSO + 1)
+        AA32_MAP_VVAR,
+        AA32_MAP_VDSO,
 #else
-#define C_SIGPAGE 1
-#define C_PAGES (C_SIGPAGE + 1)
-#endif /* CONFIG_COMPAT_VDSO */
+        AA32_MAP_SIGPAGE
+#endif
+};

 static struct page *aarch32_vectors_page __ro_after_init;
 #ifndef CONFIG_COMPAT_VDSO
 static struct page *aarch32_sig_page __ro_after_init;
 #endif

-static struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
-        {
+static struct vm_special_mapping aarch32_vdso_maps[] = {
+        [AA32_MAP_VECTORS] = {
                 .name = "[vectors]", /* ABI */
                 .pages = &aarch32_vectors_page,
         },
 #ifdef CONFIG_COMPAT_VDSO
-        {
+        [AA32_MAP_VVAR] = {
                 .name = "[vvar]",
         },
-        {
+        [AA32_MAP_VDSO] = {
                 .name = "[vdso]",
                 .mremap = aarch32_vdso_mremap,
         },
 #else
-        {
+        [AA32_MAP_SIGPAGE] = {
                 .name = "[sigpage]", /* ABI */
                 .pages = &aarch32_sig_page,
         },
@@ -252,8 +243,8 @@ static int __aarch32_alloc_vdso_pages(void)
 {
         int ret;

-        vdso_info[VDSO_ABI_AA32].dm = &aarch32_vdso_spec[C_VVAR];
-        vdso_info[VDSO_ABI_AA32].cm = &aarch32_vdso_spec[C_VDSO];
+        vdso_info[VDSO_ABI_AA32].dm = &aarch32_vdso_maps[AA32_MAP_VVAR];
+        vdso_info[VDSO_ABI_AA32].cm = &aarch32_vdso_maps[AA32_MAP_VDSO];

         ret = __vdso_init(VDSO_ABI_AA32);
         if (ret)
@@ -305,7 +296,7 @@ static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
         ret = _install_special_mapping(mm, AARCH32_VECTORS_BASE, PAGE_SIZE,
                                        VM_READ | VM_EXEC |
                                        VM_MAYREAD | VM_MAYEXEC,
-                                       &aarch32_vdso_spec[C_VECTORS]);
+                                       &aarch32_vdso_maps[AA32_MAP_VECTORS]);

         return PTR_ERR_OR_ZERO(ret);
 }
@@ -329,7 +320,7 @@ static int aarch32_sigreturn_setup(struct mm_struct *mm)
         ret = _install_special_mapping(mm, addr, PAGE_SIZE,
                                        VM_READ | VM_EXEC | VM_MAYREAD |
                                        VM_MAYWRITE | VM_MAYEXEC,
-                                       &aarch32_vdso_spec[C_SIGPAGE]);
+                                       &aarch32_vdso_maps[AA32_MAP_SIGPAGE]);
         if (IS_ERR(ret))
                 goto out;

@@ -373,19 +364,16 @@ static int vdso_mremap(const struct vm_special_mapping *sm,
         return __vdso_remap(VDSO_ABI_AA64, sm, new_vma);
 }

-/*
- * aarch64_vdso_pages:
- * 0 - vvar
- * 1 - vdso
- */
-#define A_VVAR 0
-#define A_VDSO 1
-#define A_PAGES (A_VDSO + 1)
-static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {
-        {
+enum aarch64_map {
+        AA64_MAP_VVAR,
+        AA64_MAP_VDSO,
+};
+
+static struct vm_special_mapping aarch64_vdso_maps[] __ro_after_init = {
+        [AA64_MAP_VVAR] = {
                 .name = "[vvar]",
         },
-        {
+        [AA64_MAP_VDSO] = {
                 .name = "[vdso]",
                 .mremap = vdso_mremap,
         },
@@ -393,8 +381,8 @@ static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {

 static int __init vdso_init(void)
 {
-        vdso_info[VDSO_ABI_AA64].dm = &vdso_spec[A_VVAR];
-        vdso_info[VDSO_ABI_AA64].cm = &vdso_spec[A_VDSO];
+        vdso_info[VDSO_ABI_AA64].dm = &aarch64_vdso_maps[AA64_MAP_VVAR];
+        vdso_info[VDSO_ABI_AA64].cm = &aarch64_vdso_maps[AA64_MAP_VDSO];

         return __vdso_init(VDSO_ABI_AA64);
 }
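The before/after of the index scheme in this final patch condenses into
a small standalone example: the hand-numbered #defines plus the
explanatory comment are replaced by an enum whose optional members sit
under the same config guard, with designated initializers keeping each
array slot attached to its enum value. The enum names below follow the
patch, but the surrounding scaffolding is a simplified stand-in, not
kernel code.

#include <stdio.h>

#define CONFIG_COMPAT_VDSO      /* comment out to take the #else path */

/*
 * The old scheme: indices defined by hand and documented in a comment,
 * easy to let drift out of sync with the array below, e.g.:
 *
 *      #define C_VECTORS 0
 *      #ifdef CONFIG_COMPAT_VDSO
 *      #define C_VVAR    1
 *      #define C_VDSO    2
 *      #else
 *      #define C_SIGPAGE 1
 *      #endif
 */

/* The new scheme: the enum is the single source of truth. */
enum aarch32_map {
        AA32_MAP_VECTORS,       /* kuser helpers */
#ifdef CONFIG_COMPAT_VDSO
        AA32_MAP_VVAR,
        AA32_MAP_VDSO,
#else
        AA32_MAP_SIGPAGE
#endif
};

struct special_map {
        const char *name;
};

static const struct special_map aarch32_maps[] = {
        [AA32_MAP_VECTORS] = { .name = "[vectors]" },
#ifdef CONFIG_COMPAT_VDSO
        [AA32_MAP_VVAR]    = { .name = "[vvar]" },
        [AA32_MAP_VDSO]    = { .name = "[vdso]" },
#else
        [AA32_MAP_SIGPAGE] = { .name = "[sigpage]" },
#endif
};

int main(void)
{
#ifdef CONFIG_COMPAT_VDSO
        enum aarch32_map slot = AA32_MAP_VVAR;
#else
        enum aarch32_map slot = AA32_MAP_SIGPAGE;
#endif
        /* Each slot lines up with its enum value in either configuration. */
        printf("slot %d is %s\n", (int)slot, aarch32_maps[slot].name);
        return 0;
}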