From patchwork Mon Aug 29 21:48:19 2016
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 9304567
From: Jeremy Linton
To:
linux-arm-kernel@lists.infradead.org
Subject: [PATCH v8 8/9] arm64: pmu: Detect and enable multiple PMUs in an ACPI system
Date: Mon, 29 Aug 2016 16:48:19 -0500
Message-Id: <1472507300-9844-9-git-send-email-jeremy.linton@arm.com>
In-Reply-To: <1472507300-9844-1-git-send-email-jeremy.linton@arm.com>
References: <1472507300-9844-1-git-send-email-jeremy.linton@arm.com>
Cc: mark.rutland@arm.com, steve.capper@arm.com, mlangsdorf@redhat.com, punit.agrawal@arm.com, will.deacon@arm.com, linux-acpi@vger.kernel.org

It's possible that an ACPI system contains multiple CPU types with differing PMU counters. Iterate over the CPUs and determine how many of each type exist in the system, then create a PMU platform device for each type and assign it the interrupts parsed from the MADT. Creating a platform device is necessary because the PMUs are not described as devices in the DSDT table.

This code is loosely based on earlier work by Mark Salter.
Signed-off-by: Jeremy Linton
---
 drivers/perf/arm_pmu.c      |   8 +-
 drivers/perf/arm_pmu_acpi.c | 178 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 185 insertions(+), 1 deletion(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index f3e9fcf..5ef78da 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -1073,7 +1073,13 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 		if (!ret)
 			ret = init_fn(pmu);
 	} else if (probe_table) {
-		ret = probe_plat_pmu(pmu, probe_table, read_cpuid_id());
+		if (acpi_disabled) {
+			/* use the current cpu. */
+			ret = probe_plat_pmu(pmu, probe_table,
+					     read_cpuid_id());
+		} else {
+			ret = probe_plat_pmu(pmu, probe_table, pdev->id);
+		}
 	}
 
 	if (ret) {
diff --git a/drivers/perf/arm_pmu_acpi.c b/drivers/perf/arm_pmu_acpi.c
index c1cf00c..bf2800a 100644
--- a/drivers/perf/arm_pmu_acpi.c
+++ b/drivers/perf/arm_pmu_acpi.c
@@ -2,13 +2,17 @@
  * ARM ACPI PMU support
  *
  * Copyright (C) 2015 Red Hat Inc.
+ * Copyright (C) 2016 ARM Ltd.
  * Author: Mark Salter
+ *         Jeremy Linton
  *
  * This work is licensed under the terms of the GNU GPL, version 2. See
  * the COPYING file in the top-level directory.
  *
  */
+
+#define pr_fmt(fmt) "ACPI-PMU: " fmt
+
 #include
 #include
 #include
@@ -23,6 +27,12 @@ struct pmu_irq {
 	bool registered;
 };
 
+struct pmu_types {
+	struct list_head list;
+	int cpu_type;
+	int cpu_count;
+};
+
 static struct pmu_irq pmu_irqs[NR_CPUS] __initdata;
 
 /*
@@ -38,3 +48,171 @@ void __init arm_pmu_parse_acpi(int cpu, struct acpi_madt_generic_interrupt *gic)
 	else
 		pmu_irqs[cpu].trigger = ACPI_LEVEL_SENSITIVE;
 }
+
+/* Count number and type of CPU cores in the system. */
+static void __init arm_pmu_acpi_determine_cpu_types(struct list_head *pmus)
+{
+	int i;
+	bool alloc_failure = false;
+
+	for_each_possible_cpu(i) {
+		struct cpuinfo_arm64 *cinfo = per_cpu_ptr(&cpu_data, i);
+		u32 partnum = MIDR_PARTNUM(cinfo->reg_midr);
+		struct pmu_types *pmu;
+
+		list_for_each_entry(pmu, pmus, list) {
+			if (pmu->cpu_type == partnum) {
+				pmu->cpu_count++;
+				break;
+			}
+		}
+
+		/* we didn't find the CPU type, add an entry to identify it */
+		if ((&pmu->list == pmus) && (!alloc_failure)) {
+			pmu = kzalloc(sizeof(struct pmu_types), GFP_KERNEL);
+			if (!pmu) {
+				pr_warn("Unable to allocate pmu_types\n");
+				/*
+				 * continue to count cpus for any pmu_types
+				 * already allocated, but don't allocate any
+				 * more pmu_types. This avoids undercounting.
+				 */
+				alloc_failure = true;
+			} else {
+				pmu->cpu_type = partnum;
+				pmu->cpu_count++;
+				list_add_tail(&pmu->list, pmus);
+			}
+		}
+	}
+}
+
+/*
+ * Registers the group of PMU interfaces which correspond to the 'last_cpu_id'.
+ * This group utilizes 'count' resources in the 'res'.
+ */
+static int __init arm_pmu_acpi_register_pmu(int count, struct resource *res,
+					    int last_cpu_id)
+{
+	int i;
+	int err = -ENOMEM;
+	bool free_gsi = false;
+	struct platform_device *pdev;
+
+	if (count) {
+		pdev = platform_device_alloc(ARMV8_PMU_PDEV_NAME, last_cpu_id);
+		if (pdev) {
+			err = platform_device_add_resources(pdev, res, count);
+			if (!err) {
+				err = platform_device_add(pdev);
+				if (err) {
+					pr_warn("Unable to register PMU device\n");
+					free_gsi = true;
+				}
+			} else {
+				pr_warn("Unable to add resources to device\n");
+				free_gsi = true;
+				platform_device_put(pdev);
+			}
+		} else {
+			pr_warn("Unable to allocate platform device\n");
+			free_gsi = true;
+		}
+	}
+
+	/* unmark (and possibly unregister) registered GSIs */
+	for_each_possible_cpu(i) {
+		if (pmu_irqs[i].registered) {
+			if (free_gsi)
+				acpi_unregister_gsi(pmu_irqs[i].gsi);
+			pmu_irqs[i].registered = false;
+		}
+	}
+
+	return err;
+}
+
+/*
+ * For the given cpu/pmu type, walk all known GSIs, register them, and add
+ * them to the resource structure. Return the number of GSIs contained
+ * in the res structure, and the id of the last CPU/PMU we added.
+ */
+static int __init arm_pmu_acpi_gsi_res(struct pmu_types *pmus,
+				       struct resource *res, int *last_cpu_id)
+{
+	int i, count;
+	int irq;
+
+	/* lets group all the PMUs from similar CPUs together */
+	count = 0;
+	for_each_possible_cpu(i) {
+		struct cpuinfo_arm64 *cinfo = per_cpu_ptr(&cpu_data, i);
+
+		if (pmus->cpu_type == MIDR_PARTNUM(cinfo->reg_midr)) {
+			if ((pmu_irqs[i].gsi == 0) && (cinfo->reg_midr != 0)) {
+				pr_info("CPU %d is assigned interrupt 0\n", i);
+				continue;
+			}
+
+			irq = acpi_register_gsi(NULL, pmu_irqs[i].gsi,
+						pmu_irqs[i].trigger,
+						ACPI_ACTIVE_HIGH);
+
+			res[count].start = res[count].end = irq;
+			res[count].flags = IORESOURCE_IRQ;
+
+			if (pmu_irqs[i].trigger == ACPI_EDGE_SENSITIVE)
+				res[count].flags |= IORESOURCE_IRQ_HIGHEDGE;
+			else
+				res[count].flags |= IORESOURCE_IRQ_HIGHLEVEL;
+
+			pmu_irqs[i].registered = true;
+			count++;
+			(*last_cpu_id) = cinfo->reg_midr;
+		}
+	}
+	return count;
+}
+
+static int __init pmu_acpi_init(void)
+{
+	struct resource *res;
+	int err = -ENOMEM;
+	int count, cpu_id;
+	struct pmu_types *pmu, *safe_temp;
+	LIST_HEAD(pmus);
+
+	if (acpi_disabled)
+		return 0;
+
+	arm_pmu_acpi_determine_cpu_types(&pmus);
+
+	list_for_each_entry_safe(pmu, safe_temp, &pmus, list) {
+		res = kcalloc(pmu->cpu_count,
+			      sizeof(struct resource), GFP_KERNEL);
+
+		/* for a given PMU type collect all the GSIs. */
+		if (res) {
+			count = arm_pmu_acpi_gsi_res(pmu, res,
+						     &cpu_id);
+			/*
+			 * register this set of interrupts
+			 * with a new PMU device
+			 */
+			err = arm_pmu_acpi_register_pmu(count, res, cpu_id);
+			if (!err)
+				pr_info("Registered %d devices for %X\n",
+					count, pmu->cpu_type);
+			kfree(res);
+		} else {
+			pr_warn("PMU unable to allocate interrupt resource space\n");
+		}
+
+		list_del(&pmu->list);
+		kfree(pmu);
+	}
+
+	return err;
+}
+
+arch_initcall(pmu_acpi_init);