From patchwork Thu Dec 3 14:16:09 2020
X-Patchwork-Submitter: Wei Li
X-Patchwork-Id: 11948955
From: Wei Li
To: Catalin Marinas, Will Deacon, Mark Rutland, Suzuki K Poulose,
	Anshuman Khandual, Vincenzo Frascino, Marc Zyngier, Ionela Voinescu,
	Ard Biesheuvel, Amit Daniel Kachhap, Vladimir Murzin
Subject: [PATCH v4] drivers/perf: Add support for ARMv8.3-SPE
Date: Thu, 3 Dec 2020 22:16:09 +0800
Message-ID: <20201203141609.14148-1-liwei391@huawei.com>
X-Mailer: git-send-email 2.17.1
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	guohanjun@huawei.com

Armv8.3 extends the SPE by adding:
- Alignment field in the Events packet, and filtering on this event
  using PMSEVFR_EL1.
- Support for the Scalable Vector Extension (SVE).

The main additions for SVE are:
- Recording the vector length for SVE operations in the Operation Type
  packet. It is not possible to filter on vector length.
- Incomplete predicate and empty predicate fields in the Events packet,
  and filtering on these events using PMSEVFR_EL1.

Update the pmsevfr check for the empty/partial predicated SVE and
alignment events in the SPE driver.

Signed-off-by: Wei Li
---
v3 -> v4:
 - Return the highest supported version by default in arm_spe_pmsevfr_res0().
 - Drop exposing 'pmsver'. (Suggested by Will.)
---
v2 -> v3:
 - Make the definition of 'pmsevfr_res0' progressive and easy to check.
   (Suggested by Will.)
---
v1 -> v2:
 - Rename 'pmuver' to 'pmsver' and change its type from 'int' to 'u16'.
   (Suggested by Will and Leo.)
 - Expose 'pmsver' as a cap attribute through sysfs, instead of printing it.
   (Suggested by Will.)
---
 arch/arm64/include/asm/sysreg.h | 9 ++++++++-
 drivers/perf/arm_spe_pmu.c | 17 +++++++++++++++--
 2 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index d52c1b3ce589..57e5aee6f7e6 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -287,7 +287,11 @@
 #define SYS_PMSFCR_EL1_ST_SHIFT		18
 
 #define SYS_PMSEVFR_EL1			sys_reg(3, 0, 9, 9, 5)
-#define SYS_PMSEVFR_EL1_RES0		0x0000ffff00ff0f55UL
+#define SYS_PMSEVFR_EL1_RES0_8_2	\
+	(GENMASK_ULL(47, 32) | GENMASK_ULL(23, 16) | GENMASK_ULL(11, 8) |\
+	 BIT_ULL(6) | BIT_ULL(4) | BIT_ULL(2) | BIT_ULL(0))
+#define SYS_PMSEVFR_EL1_RES0_8_3	\
+	(SYS_PMSEVFR_EL1_RES0_8_2 & ~(BIT_ULL(18) | BIT_ULL(17) | BIT_ULL(11)))
 
 #define SYS_PMSLATFR_EL1		sys_reg(3, 0, 9, 9, 6)
 #define SYS_PMSLATFR_EL1_MINLAT_SHIFT	0
@@ -829,6 +833,9 @@
 #define ID_AA64DFR0_PMUVER_8_5		0x6
 #define ID_AA64DFR0_PMUVER_IMP_DEF	0xf
 
+#define ID_AA64DFR0_PMSVER_8_2		0x1
+#define ID_AA64DFR0_PMSVER_8_3		0x2
+
 #define ID_DFR0_PERFMON_SHIFT		24
 
 #define ID_DFR0_PERFMON_8_1		0x4
diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
index cc00915ad6d1..bce9aff9f546 100644
--- a/drivers/perf/arm_spe_pmu.c
+++ b/drivers/perf/arm_spe_pmu.c
@@ -54,7 +54,7 @@ struct arm_spe_pmu {
 	struct hlist_node		hotplug_node;
 
 	int				irq; /* PPI */
-
+	u16				pmsver;
 	u16				min_period;
 	u16				counter_sz;
 
@@ -655,6 +655,18 @@ static irqreturn_t arm_spe_pmu_irq_handler(int irq, void *dev)
 	return IRQ_HANDLED;
 }
 
+static u64 arm_spe_pmsevfr_res0(u16 pmsver)
+{
+	switch (pmsver) {
+	case ID_AA64DFR0_PMSVER_8_2:
+		return SYS_PMSEVFR_EL1_RES0_8_2;
+	case ID_AA64DFR0_PMSVER_8_3:
+	/* Return the highest version we support by default */
+	default:
+		return SYS_PMSEVFR_EL1_RES0_8_3;
+	}
+}
+
 /* Perf callbacks */
 static int arm_spe_pmu_event_init(struct perf_event *event)
 {
@@ -670,7 +682,7 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
 	    !cpumask_test_cpu(event->cpu, &spe_pmu->supported_cpus))
 		return -ENOENT;
 
-	if (arm_spe_event_to_pmsevfr(event) & SYS_PMSEVFR_EL1_RES0)
+	if (arm_spe_event_to_pmsevfr(event) & arm_spe_pmsevfr_res0(spe_pmu->pmsver))
 		return -EOPNOTSUPP;
 
 	if (attr->exclude_idle)
@@ -937,6 +949,7 @@ static void __arm_spe_pmu_dev_probe(void *info)
 			fld, smp_processor_id());
 		return;
 	}
+	spe_pmu->pmsver = (u16)fld;
 
 	/* Read PMBIDR first to determine whether or not we have access */
 	reg = read_sysreg_s(SYS_PMBIDR_EL1);
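
A note for reviewers, not part of the patch: the standalone user-space sketch
below recomputes the two RES0 masks defined in the hunk above and prints which
PMSEVFR_EL1 bits stop being RES0 (i.e. become valid filter events) on Armv8.3.
The BIT_ULL()/GENMASK_ULL() helpers and the file name are simplified local
stand-ins so this builds outside the kernel tree.

/*
 * pmsevfr_check.c - illustration only, not part of the patch.
 * Recompute SYS_PMSEVFR_EL1_RES0_8_2/_8_3 and report the bits that
 * Armv8.3 removes from RES0.
 */
#include <stdio.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's BIT_ULL()/GENMASK_ULL() macros. */
#define BIT_ULL(n)		(1ULL << (n))
#define GENMASK_ULL(h, l)	(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

#define SYS_PMSEVFR_EL1_RES0_8_2 \
	(GENMASK_ULL(47, 32) | GENMASK_ULL(23, 16) | GENMASK_ULL(11, 8) |\
	 BIT_ULL(6) | BIT_ULL(4) | BIT_ULL(2) | BIT_ULL(0))
#define SYS_PMSEVFR_EL1_RES0_8_3 \
	(SYS_PMSEVFR_EL1_RES0_8_2 & ~(BIT_ULL(18) | BIT_ULL(17) | BIT_ULL(11)))

int main(void)
{
	uint64_t res0_8_2 = SYS_PMSEVFR_EL1_RES0_8_2;
	uint64_t res0_8_3 = SYS_PMSEVFR_EL1_RES0_8_3;

	/* The 8.2 mask should match the old literal 0x0000ffff00ff0f55UL. */
	printf("RES0 (8.2): 0x%016llx\n", (unsigned long long)res0_8_2);
	printf("RES0 (8.3): 0x%016llx\n", (unsigned long long)res0_8_3);

	/* Bits cleared from RES0 by 8.3: 11 (alignment), 17/18 (SVE predicates). */
	printf("newly valid event bits: 0x%016llx\n",
	       (unsigned long long)(res0_8_2 & ~res0_8_3));
	return 0;
}

Built with any C compiler (e.g. cc -o pmsevfr_check pmsevfr_check.c), it should
report 0x0000000000060800, i.e. bits 11, 17 and 18, as the only event filter
bits that the v8.3 mask newly permits.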