From patchwork Mon Jul 22 15:37:30 2013
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland <mark.rutland@arm.com>, Lorenzo.Pieralisi@arm.com,
    graeme.gregory@linaro.org, nico@linaro.org, Marc.Zyngier@arm.com,
    Catalin.Marinas@arm.com, sboyd@codeaurora.org,
    santosh.shilimkar@ti.com, hanjun.guo@linaro.org
Subject: [PATCHv2 1/5] arm64: reorganise smp_enable_ops
Date: Mon, 22 Jul 2013 16:37:30 +0100
Message-Id: <1374507454-4573-2-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1374507454-4573-1-git-send-email-mark.rutland@arm.com>
References: <1374507454-4573-1-git-send-email-mark.rutland@arm.com>

For hotplug support, we're going to want a place to store operations
that do more than bring CPUs online, and it makes sense to group these
with our current smp_enable_ops. This patch renames struct
smp_enable_ops to struct smp_operations to make the intended use of
the structure clearer. While we're at it, fix up instances of the cpu
parameter to be an unsigned int, drop the __init markings, and rename
the *_cpu functions to cpu_* to reduce future churn when
smp_operations is extended.
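As a rough illustration of the renamed interface (a sketch only, not part
of the patch: the "foo" enable method and the smp_foo_* functions below are
made-up names), a backend now looks like this:

struct device_node;

/*
 * The structure formerly known as smp_enable_ops: one set of operations
 * per enable method, selected by name.
 */
struct smp_operations {
	const char	*name;
	int		(*cpu_init)(struct device_node *, unsigned int);
	int		(*cpu_prepare)(unsigned int);
};

/* Hypothetical "foo" backend, mirroring the real ones in the diff below. */
static int smp_foo_cpu_init(struct device_node *dn, unsigned int cpu)
{
	/* parse any per-cpu properties from the cpu's device tree node */
	return 0;
}

static int smp_foo_cpu_prepare(unsigned int cpu)
{
	/* do whatever is needed before this cpu can be brought online */
	return 0;
}

const struct smp_operations smp_foo_ops = {
	.name		= "foo",
	.cpu_init	= smp_foo_cpu_init,
	.cpu_prepare	= smp_foo_cpu_prepare,
};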
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/smp.h       | 10 +++++-----
 arch/arm64/kernel/smp.c            | 24 ++++++++++++------------
 arch/arm64/kernel/smp_psci.c       | 10 +++++-----
 arch/arm64/kernel/smp_spin_table.c | 10 +++++-----
 4 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 4b8023c..90626b6 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -68,13 +68,13 @@ extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
 
 struct device_node;
 
-struct smp_enable_ops {
+struct smp_operations {
 	const char	*name;
-	int		(*init_cpu)(struct device_node *, int);
-	int		(*prepare_cpu)(int);
+	int		(*cpu_init)(struct device_node *, unsigned int);
+	int		(*cpu_prepare)(unsigned int);
 };
 
-extern const struct smp_enable_ops smp_spin_table_ops;
-extern const struct smp_enable_ops smp_psci_ops;
+extern const struct smp_operations smp_spin_table_ops;
+extern const struct smp_operations smp_psci_ops;
 
 #endif /* ifndef __ASM_SMP_H */
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index fee5cce..533f405 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -236,17 +236,17 @@ void __init smp_prepare_boot_cpu(void)
 
 static void (*smp_cross_call)(const struct cpumask *, unsigned int);
 
-static const struct smp_enable_ops *enable_ops[] __initconst = {
+static const struct smp_operations *supported_smp_ops[] __initconst = {
 	&smp_spin_table_ops,
 	&smp_psci_ops,
 	NULL,
 };
 
-static const struct smp_enable_ops *smp_enable_ops[NR_CPUS];
+static const struct smp_operations *smp_ops[NR_CPUS];
 
-static const struct smp_enable_ops * __init smp_get_enable_ops(const char *name)
+static const struct smp_operations * __init smp_get_ops(const char *name)
 {
-	const struct smp_enable_ops **ops = enable_ops;
+	const struct smp_operations **ops = supported_smp_ops;
 
 	while (*ops) {
 		if (!strcmp(name, (*ops)->name))
@@ -267,7 +267,7 @@ void __init smp_init_cpus(void)
 {
 	const char *enable_method;
 	struct device_node *dn = NULL;
-	int i, cpu = 1;
+	unsigned int i, cpu = 1;
 	bool bootcpu_valid = false;
 
 	while ((dn = of_find_node_by_type(dn, "cpu"))) {
@@ -346,15 +346,15 @@ void __init smp_init_cpus(void)
 			goto next;
 		}
 
-		smp_enable_ops[cpu] = smp_get_enable_ops(enable_method);
+		smp_ops[cpu] = smp_get_ops(enable_method);
 
-		if (!smp_enable_ops[cpu]) {
+		if (!smp_ops[cpu]) {
 			pr_err("%s: invalid enable-method property: %s\n",
 			       dn->full_name, enable_method);
 			goto next;
 		}
 
-		if (smp_enable_ops[cpu]->init_cpu(dn, cpu))
+		if (smp_ops[cpu]->cpu_init(dn, cpu))
 			goto next;
 
 		pr_debug("cpu logical map 0x%llx\n", hwid);
@@ -384,8 +384,8 @@ next:
 
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
-	int cpu, err;
-	unsigned int ncores = num_possible_cpus();
+	int err;
+	unsigned int cpu, ncores = num_possible_cpus();
 
 	/*
 	 * are we trying to boot more cores than exist?
@@ -412,10 +412,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 		if (cpu == smp_processor_id())
 			continue;
 
-		if (!smp_enable_ops[cpu])
+		if (!smp_ops[cpu])
 			continue;
 
-		err = smp_enable_ops[cpu]->prepare_cpu(cpu);
+		err = smp_ops[cpu]->cpu_prepare(cpu);
 		if (err)
 			continue;
 
diff --git a/arch/arm64/kernel/smp_psci.c b/arch/arm64/kernel/smp_psci.c
index 0c53330..2f0d3dd 100644
--- a/arch/arm64/kernel/smp_psci.c
+++ b/arch/arm64/kernel/smp_psci.c
@@ -23,12 +23,12 @@
 #include
 #include
 
-static int __init smp_psci_init_cpu(struct device_node *dn, int cpu)
+static int smp_psci_cpu_init(struct device_node *dn, unsigned int cpu)
 {
 	return 0;
 }
 
-static int __init smp_psci_prepare_cpu(int cpu)
+static int smp_psci_cpu_prepare(unsigned int cpu)
 {
 	int err;
 
@@ -46,8 +46,8 @@ static int __init smp_psci_prepare_cpu(int cpu)
 	return 0;
 }
 
-const struct smp_enable_ops smp_psci_ops __initconst = {
+const struct smp_operations smp_psci_ops = {
 	.name		= "psci",
-	.init_cpu	= smp_psci_init_cpu,
-	.prepare_cpu	= smp_psci_prepare_cpu,
+	.cpu_init	= smp_psci_cpu_init,
+	.cpu_prepare	= smp_psci_cpu_prepare,
 };
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 7c35fa6..5fecffc 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -24,7 +24,7 @@
 
 static phys_addr_t cpu_release_addr[NR_CPUS];
 
-static int __init smp_spin_table_init_cpu(struct device_node *dn, int cpu)
+static int smp_spin_table_cpu_init(struct device_node *dn, unsigned int cpu)
 {
 	/*
 	 * Determine the address from which the CPU is polling.
@@ -40,7 +40,7 @@ static int __init smp_spin_table_init_cpu(struct device_node *dn, int cpu)
 	return 0;
 }
 
-static int __init smp_spin_table_prepare_cpu(int cpu)
+static int smp_spin_table_cpu_prepare(unsigned int cpu)
 {
 	void **release_addr;
 
@@ -59,8 +59,8 @@ static int __init smp_spin_table_prepare_cpu(int cpu)
 	return 0;
 }
 
-const struct smp_enable_ops smp_spin_table_ops __initconst = {
+const struct smp_operations smp_spin_table_ops = {
 	.name		= "spin-table",
-	.init_cpu	= smp_spin_table_init_cpu,
-	.prepare_cpu	= smp_spin_table_prepare_cpu,
+	.cpu_init	= smp_spin_table_cpu_init,
+	.cpu_prepare	= smp_spin_table_cpu_prepare,
 };
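
Usage note (again just a sketch, reusing the hypothetical smp_foo_ops from
the example above the diff): a backend only becomes selectable once it is
also listed in the table that smp_get_ops() walks in smp.c, where it is
matched against each cpu node's enable-method property by smp_init_cpus():

/* arch/arm64/kernel/smp.c: extend the lookup table shown in the diff */
static const struct smp_operations *supported_smp_ops[] __initconst = {
	&smp_spin_table_ops,
	&smp_psci_ops,
	&smp_foo_ops,		/* hypothetical backend from the earlier sketch */
	NULL,
};

An extern declaration in asm/smp.h, alongside smp_spin_table_ops and
smp_psci_ops, would complete the hookup.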