From patchwork Tue Jan 17 01:35:35 2023
From: Reiji Watanabe
Date: Mon, 16 Jan 2023 17:35:35 -0800
Subject: [PATCH v2 1/8] KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
Message-ID: <20230117013542.371944-2-reijiw@google.com>
In-Reply-To: <20230117013542.371944-1-reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
    Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
    Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
    Reiji Watanabe

On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg(). This function
clears the RAZ bits of those registers that correspond to unimplemented
event counters on the vCPU, and sets the bits that correspond to
implemented event counters to a predefined pseudo-UNKNOWN value (some
bits are set to 1).

The function identifies (un)implemented event counters on the vCPU based
on the PMCR_EL1.N value on the host. Using the host value for this
becomes problematic once KVM supports letting userspace set PMCR_EL1.N
to a value different from the host value, as some of the RAZ bits of
those registers could then end up being set to 1.

Fix reset_pmu_reg() to simply clear the registers, which guarantees that
all the RAZ bits are cleared even when the PMCR_EL1.N value for the vCPU
differs from the host value.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/sys_regs.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c6cbfe6b854b..ec4bdaf71a15 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -604,19 +604,11 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
 
 static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
-	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
-
 	/* No PMU available, any PMU reg may UNDEF... */
 	if (!kvm_arm_support_pmu_v3())
 		return;
 
-	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
-	n &= ARMV8_PMU_PMCR_N_MASK;
-	if (n)
-		mask |= GENMASK(n - 1, 0);
-
-	reset_unknown(vcpu, r);
-	__vcpu_sys_reg(vcpu, r->reg) &= mask;
+	__vcpu_sys_reg(vcpu, r->reg) = 0;
 }
 
 static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
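
To make the problem concrete: with a host PMCR_EL0.N of 6 and a vCPU
limited to 4 counters, the old reset path could leave bits 4 and 5 set,
even though those bits are RAZ from the guest's point of view. A minimal
standalone sketch of that failure mode (editorial illustration, not part
of the patch; the constants and the all-ones pseudo-UNKNOWN value are
assumptions mirroring the kernel's definitions):

	#include <stdint.h>
	#include <stdio.h>

	#define ARMV8_PMU_CYCLE_IDX	31
	#define BIT(n)			(1ULL << (n))
	#define GENMASK(h, l)		(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

	int main(void)
	{
		uint64_t host_n = 6, vcpu_n = 4;
		/* Old behavior: the mask is derived from the *host* counter count. */
		uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX) | GENMASK(host_n - 1, 0);
		/* Worst-case pseudo-UNKNOWN value written by reset_unknown(). */
		uint64_t unknown = ~0ULL;
		/* Bits that are RAZ once the vCPU is limited to vcpu_n counters. */
		uint64_t raz = GENMASK(ARMV8_PMU_CYCLE_IDX - 1, vcpu_n);

		/* Prints 0x30: bits 4 and 5 survive the old mask. */
		printf("RAZ bits left set: 0x%llx\n",
		       (unsigned long long)(unknown & mask & raz));
		return 0;
	}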
From patchwork Tue Jan 17 01:35:36 2023
From: Reiji Watanabe
Date: Mon, 16 Jan 2023 17:35:36 -0800
Subject: [PATCH v2 2/8] KVM: arm64: PMU: Use reset_pmu_reg() for PMUSERENR_EL0 and PMCCFILTR_EL0
Message-ID: <20230117013542.371944-3-reijiw@google.com>
In-Reply-To: <20230117013542.371944-1-reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
    Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
    Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
    Reiji Watanabe

The default reset function for PMU registers (reset_pmu_reg()) now
simply clears a specified register. Use that function for PMUSERENR_EL0
and PMCCFILTR_EL0, as KVM currently clears those registers on vCPU reset
(NOTE: all non-RES0 fields of those registers have UNKNOWN reset values,
and the same fields of their AArch32 counterparts have 0 reset values).

No functional change intended.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ec4bdaf71a15..4959658b502c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1747,7 +1747,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
 	{ PMU_SYS_REG(SYS_PMUSERENR_EL0), .access = access_pmuserenr,
-	  .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
+	  .reg = PMUSERENR_EL0 },
 	{ PMU_SYS_REG(SYS_PMOVSSET_EL0), .access = access_pmovs,
 	  .reg = PMOVSSET_EL0 },
@@ -1903,7 +1903,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
 	{ PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper,
-	  .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
+	  .reg = PMCCFILTR_EL0 },
 	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
 	{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
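
For reference, the old and new reset paths end up performing the same
store for these two registers. A condensed sketch of the two callbacks
(the reset_val() body is paraphrased from the generic helper in
arch/arm64/kvm/sys_regs.c, so treat it as an approximation rather than
an exact quote):

	/* Old path: generic constant reset, used here with .val = 0. */
	static void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
	{
		__vcpu_sys_reg(vcpu, r->reg) = r->val;
	}

	/* New path: the PMU default reset, which now always clears the register. */
	static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
	{
		/* No PMU available, any PMU reg may UNDEF... */
		if (!kvm_arm_support_pmu_v3())
			return;

		__vcpu_sys_reg(vcpu, r->reg) = 0;
	}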
From patchwork Tue Jan 17 01:35:37 2023
From: Reiji Watanabe
Date: Mon, 16 Jan 2023 17:35:37 -0800
Subject: [PATCH v2 3/8] KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
Message-ID: <20230117013542.371944-4-reijiw@google.com>
In-Reply-To: <20230117013542.371944-1-reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
    Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
    Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
    Reiji Watanabe

The number of PMU event counters is indicated by PMCR_EL0.N. For a vCPU
with PMUv3 configured, its value defaults to the host value. Userspace
can set PMCR_EL0.N for the vCPU to a lower value than the host value
using KVM_SET_ONE_REG, but this is practically unsupported, as
reset_pmcr() resets PMCR_EL0.N back to the host value on every vCPU
reset.

Change reset_pmcr() to preserve the vCPU's PMCR_EL0.N value across vCPU
reset so that userspace can limit the number of PMU event counters on
the vCPU.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 6 ++++++
 arch/arm64/kvm/sys_regs.c | 4 +++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 24908400e190..937a272b00a5 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -213,6 +213,12 @@ void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu)
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
 		pmu->pmc[i].idx = i;
+
+	/*
+	 * Initialize PMCR_EL0 for the vCPU with the host value so that
+	 * the value is available at the very first vCPU reset.
+	 */
+	__vcpu_sys_reg(vcpu, PMCR_EL0) = read_sysreg(pmcr_el0);
 }
 
 /**
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4959658b502c..67c1bd39b478 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -637,8 +637,10 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	if (!kvm_arm_support_pmu_v3())
 		return;
 
+	/* PMCR_EL0 for the vCPU is set to the host value at vCPU creation. */
+
 	/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
-	pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr = __vcpu_sys_reg(vcpu, r->reg) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
 	if (!kvm_supports_32bit_el0())
 		pmcr |= ARMV8_PMU_PMCR_LC;
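
With this change, limiting a vCPU to fewer counters is a read-modify-write
of the N field (bits [15:11]) from userspace. A standalone sketch of just
the field arithmetic (editorial illustration; the constants mirror
perf_event.h, and the host value of 6 is an assumption):

	#include <stdint.h>
	#include <stdio.h>

	#define ARMV8_PMU_PMCR_N_SHIFT	11
	#define ARMV8_PMU_PMCR_N_MASK	0x1f

	/* Replace PMCR_EL0.N with a new counter count, leaving other bits alone. */
	static uint64_t pmcr_set_n(uint64_t pmcr, uint64_t n)
	{
		pmcr &= ~((uint64_t)ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
		return pmcr | ((n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT);
	}

	int main(void)
	{
		uint64_t pmcr = 6ULL << ARMV8_PMU_PMCR_N_SHIFT;	/* as if read via KVM_GET_ONE_REG */

		pmcr = pmcr_set_n(pmcr, 4);	/* limit the vCPU to 4 event counters */
		printf("N = %llu\n", (unsigned long long)
		       ((pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK));
		return 0;
	}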
From patchwork Tue Jan 17 01:35:38 2023
From: Reiji Watanabe
Date: Mon, 16 Jan 2023 17:35:38 -0800
Subject: [PATCH v2 4/8] KVM: arm64: PMU: Disallow userspace to set PMCR.N greater than the host value
Message-ID: <20230117013542.371944-5-reijiw@google.com>
In-Reply-To: <20230117013542.371944-1-reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
    Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
    Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
    Reiji Watanabe

Currently, KVM allows userspace to set PMCR_EL0 to any value with
KVM_SET_ONE_REG for a vCPU with PMUv3 configured. Disallow setting
PMCR_EL0.N to a value greater than the host value (KVM_SET_ONE_REG will
fail), as KVM doesn't support more event counters than the host HW
implements.

Although this is an ABI change, it only affects userspace that sets
PMCR_EL0.N to a larger value than the host's. As accesses to
unadvertised event counter indices are CONSTRAINED UNPREDICTABLE
behavior, and PMCR_EL0.N was reset to the host value on every vCPU reset
before this series, I can't think of any use case where userspace would
do that.

Also, ignore writes to read-only bits that are cleared on vCPU reset,
and to RES{0,1} bits (including writable bits that KVM doesn't support
yet), as those bits shouldn't be modified (at least with the current
KVM).

Signed-off-by: Reiji Watanabe
Reviewed-by: Marc Zyngier
---
 arch/arm64/kvm/sys_regs.c | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 67c1bd39b478..e4bff9621473 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -958,6 +958,43 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
+		    u64 val)
+{
+	u64 host_pmcr, host_n, new_n, mutable_mask;
+
+	new_n = (val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+
+	host_pmcr = read_sysreg(pmcr_el0);
+	host_n = (host_pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+
+	/* The vCPU can't have more counters than the host have. */
+	if (new_n > host_n)
+		return -EINVAL;
+
+	/*
+	 * Ignore writes to RES0 bits, read only bits that are cleared on
+	 * vCPU reset, and writable bits that KVM doesn't support yet.
+	 * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
+	 * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
+	 * But, we leave the bit as it is here, as the vCPU's PMUver might
+	 * be changed later (NOTE: the bit will be cleared on first vCPU run
+	 * if necessary).
+	 */
+	mutable_mask = (ARMV8_PMU_PMCR_MASK |
+			(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT));
+	val &= mutable_mask;
+	val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
+
+	/* The LC bit is RES1 when AArch32 is not supported */
+	if (!kvm_supports_32bit_el0())
+		val |= ARMV8_PMU_PMCR_LC;
+
+	__vcpu_sys_reg(vcpu, r->reg) = val;
+
+	return 0;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	{ SYS_DESC(SYS_DBGBVRn_EL1(n)),					\
@@ -1718,7 +1755,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_SVCR), undef_access },
 
 	{ PMU_SYS_REG(SYS_PMCR_EL0), .access = access_pmcr,
-	  .reset = reset_pmcr, .reg = PMCR_EL0 },
+	  .reset = reset_pmcr, .reg = PMCR_EL0, .set_user = set_pmcr },
 	{ PMU_SYS_REG(SYS_PMCNTENSET_EL0),
 	  .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
 	{ PMU_SYS_REG(SYS_PMCNTENCLR_EL0),
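
A worked example of the sanitization above, as standalone C (editorial
sketch; the constants mirror perf_event.h, the host N of 6 and the
"immutable" bit 16 are assumptions chosen for illustration):

	#include <stdint.h>
	#include <stdio.h>

	#define ARMV8_PMU_PMCR_LC	(1 << 6)
	#define ARMV8_PMU_PMCR_N_SHIFT	11
	#define ARMV8_PMU_PMCR_N_MASK	0x1f
	#define ARMV8_PMU_PMCR_MASK	0xff	/* writable bits [7:0] */

	int main(void)
	{
		uint64_t host_n = 6;			/* assumed host PMCR_EL0.N */
		uint64_t cur = (host_n << ARMV8_PMU_PMCR_N_SHIFT) | ARMV8_PMU_PMCR_LC;
		/* Userspace asks for N = 4 and also tries to flip bit 16,
		 * which is outside the mutable mask. */
		uint64_t val = (4ULL << ARMV8_PMU_PMCR_N_SHIFT) | (1ULL << 16);
		uint64_t mutable_mask = ARMV8_PMU_PMCR_MASK |
				((uint64_t)ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);

		if (((val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK) > host_n) {
			puts("EINVAL");		/* N larger than the host: rejected */
			return 1;
		}

		val &= mutable_mask;		/* drop the immutable bit 16 */
		val |= cur & ~mutable_mask;	/* keep current immutable bits */
		val |= ARMV8_PMU_PMCR_LC;	/* LC is RES1 without AArch32 EL0 */
		printf("stored PMCR = 0x%llx\n", (unsigned long long)val);
		return 0;
	}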
From patchwork Tue Jan 17 01:35:39 2023
From: Reiji Watanabe
Date: Mon, 16 Jan 2023 17:35:39 -0800
Subject: [PATCH v2 5/8] tools: arm64: Import perf_event.h
Message-ID: <20230117013542.371944-6-reijiw@google.com>
In-Reply-To: <20230117013542.371944-1-reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
    Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
    Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
    Reiji Watanabe

Copy perf_event.h from the kernel's arch/arm64/include/asm/perf_event.h.
The following patches will use macros defined in this header.

Signed-off-by: Reiji Watanabe
---
 tools/arch/arm64/include/asm/perf_event.h | 258 ++++++++++++++++++++++
 1 file changed, 258 insertions(+)
 create mode 100644 tools/arch/arm64/include/asm/perf_event.h

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
new file mode 100644
index 000000000000..b2ae51f5f93d
--- /dev/null
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ASM_PERF_EVENT_H
+#define __ASM_PERF_EVENT_H
+
+#define ARMV8_PMU_MAX_COUNTERS	32
+#define ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Common architectural and microarchitectural event numbers.
+ */
+#define ARMV8_PMUV3_PERFCTR_SW_INCR	0x0000
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL	0x0001
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL	0x0002
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL	0x0003
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE	0x0004
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL	0x0005
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED	0x0006
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED	0x0007
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED	0x0008
+#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN	0x0009
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN	0x000A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED	0x000B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED	0x000C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED	0x000D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED	0x000E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED	0x000F
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED	0x0010
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES	0x0011
+#define ARMV8_PMUV3_PERFCTR_BR_PRED	0x0012
+#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS	0x0013
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE	0x0014
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB	0x0015
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE	0x0016
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL	0x0017
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB	0x0018
+#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS	0x0019
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR	0x001A
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC	0x001B
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED	0x001C
+#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES	0x001D
+#define ARMV8_PMUV3_PERFCTR_CHAIN	0x001E
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE	0x001F
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE	0x0020
+#define ARMV8_PMUV3_PERFCTR_BR_RETIRED	0x0021
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED	0x0022
+#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND	0x0023
+#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND	0x0024
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB	0x0025
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB	0x0026
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE	0x0027
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL	0x0028
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE	0x0029
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL	0x002A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE	0x002B
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB	0x002C
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL	0x002D
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL	0x002E
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB	0x002F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB	0x0030
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS	0x0031
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE	0x0032
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS	0x0033
+#define ARMV8_PMUV3_PERFCTR_DTLB_WALK	0x0034
+#define ARMV8_PMUV3_PERFCTR_ITLB_WALK	0x0035
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD	0x0036
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD	0x0037
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD	0x0038
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD	0x0039
+#define ARMV8_PMUV3_PERFCTR_OP_RETIRED	0x003A
+#define ARMV8_PMUV3_PERFCTR_OP_SPEC	0x003B
+#define ARMV8_PMUV3_PERFCTR_STALL	0x003C
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND	0x003D
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND	0x003E
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT	0x003F
+
+/* Statistical profiling extension microarchitectural events */
+#define ARMV8_SPE_PERFCTR_SAMPLE_POP	0x4000
+#define ARMV8_SPE_PERFCTR_SAMPLE_FEED	0x4001
+#define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE	0x4002
+#define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION	0x4003
+
+/* AMUv1 architecture events */
+#define ARMV8_AMU_PERFCTR_CNT_CYCLES	0x4004
+#define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM	0x4005
+
+/* long-latency read miss events */
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS	0x4006
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD	0x4009
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS	0x400A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD	0x400B
+
+/* Trace buffer events */
+#define ARMV8_PMUV3_PERFCTR_TRB_WRAP	0x400C
+#define ARMV8_PMUV3_PERFCTR_TRB_TRIG	0x400E
+
+/* Trace unit events */
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0	0x4010
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1	0x4011
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2	0x4012
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3	0x4013
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4	0x4018
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5	0x4019
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6	0x401A
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7	0x401B
+
+/* additional latency from alignment events */
+#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT	0x4020
+#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT	0x4021
+#define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT	0x4022
+
+/* Armv8.5 Memory Tagging Extension events */
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED	0x4024
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD	0x4025
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR	0x4026
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD	0x0040
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR	0x0041
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD	0x0042
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR	0x0043
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER	0x0044
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER	0x0045
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM	0x0046
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN	0x0047
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL	0x0048
+
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD	0x004C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR	0x004D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD	0x004E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR	0x004F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD	0x0050
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR	0x0051
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD	0x0052
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR	0x0053
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM	0x0056
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN	0x0057
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL	0x0058
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD	0x005C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR	0x005D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD	0x005E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR	0x005F
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD	0x0060
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR	0x0061
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED	0x0062
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED	0x0063
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL	0x0064
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH	0x0065
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD	0x0066
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR	0x0067
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC	0x0068
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC	0x0069
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC	0x006A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC	0x006C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC	0x006D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC	0x006E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC	0x006F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC	0x0070
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC	0x0071
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC	0x0072
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC	0x0073
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC	0x0074
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC	0x0075
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC	0x0076
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC	0x0077
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC	0x0078
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC	0x0079
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC	0x007A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC	0x007C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC	0x007D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC	0x007E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF	0x0081
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC	0x0082
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT	0x0083
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT	0x0084
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ	0x0086
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ	0x0087
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC	0x0088
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC	0x008A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT	0x008B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT	0x008C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER	0x008D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ	0x008E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ	0x008F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC	0x0090
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC	0x0091
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD	0x00A0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR	0x00A1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD	0x00A2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR	0x00A3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM	0x00A6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN	0x00A7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL	0x00A8
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E	(1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P	(1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C	(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D	(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X	(1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMU_PMCR_LC	(1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_LP	(1 << 7) /* Long event counter enable */
+#define ARMV8_PMU_PMCR_N_SHIFT	11	 /* Number of counters supported */
+#define ARMV8_PMU_PMCR_N_MASK	0x1f
+#define ARMV8_PMU_PMCR_MASK	0xff	 /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define ARMV8_PMU_OVSR_MASK	0xffffffff	/* Mask for writable bits */
+#define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define ARMV8_PMU_EVTYPE_MASK	0xc800ffff	/* Mask for writable bits */
+#define ARMV8_PMU_EVTYPE_EVENT	0xffff		/* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define ARMV8_PMU_EXCLUDE_EL1	(1U << 31)
+#define ARMV8_PMU_EXCLUDE_EL0	(1U << 30)
+#define ARMV8_PMU_INCLUDE_EL2	(1U << 27)
+
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK	0xf	 /* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
+
+/* PMMIR_EL1.SLOTS mask */
+#define ARMV8_PMU_SLOTS_MASK	0xff
+
+#define ARMV8_PMU_BUS_SLOTS_SHIFT	8
+#define ARMV8_PMU_BUS_SLOTS_MASK	0xff
+#define ARMV8_PMU_BUS_WIDTH_SHIFT	16
+#define ARMV8_PMU_BUS_WIDTH_MASK	0xf
+
+#endif /* __ASM_PERF_EVENT_H */
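
A quick consumer-side sketch of the imported constants (editorial
illustration; it assumes a build with -Itools/arch/arm64/include so that
<asm/perf_event.h> resolves to the file above, and the raw PMCR value is
a made-up example):

	#include <stdint.h>
	#include <stdio.h>
	#include <asm/perf_event.h>

	int main(void)
	{
		uint64_t pmcr = 0x3040;	/* example raw value: N = 6, LC set */

		printf("N  = %llu\n", (unsigned long long)
		       ((pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK));
		printf("LC = %d\n", !!(pmcr & ARMV8_PMU_PMCR_LC));
		return 0;
	}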
From patchwork Tue Jan 17 01:35:40 2023
From: Reiji Watanabe
Date: Mon, 16 Jan 2023 17:35:40 -0800
Subject: [PATCH v2 6/8] KVM: selftests: aarch64: Introduce vpmu_counter_access test
Message-ID: <20230117013542.371944-7-reijiw@google.com>
In-Reply-To: <20230117013542.371944-1-reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
    Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
    Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
    Reiji Watanabe

Introduce the vpmu_counter_access test for arm64 platforms. The test
configures PMUv3 for a vCPU, sets PMCR_EL1.N for the vCPU, and checks
that the guest can consistently see the same number of PMU event
counters (PMCR_EL1.N) that userspace set. The test is run with each of
the PMCR_EL1.N values from 0 to 31 (with PMCR_EL1.N values greater than
the host value, the test expects KVM_SET_ONE_REG for PMCR_EL1 to fail).

Signed-off-by: Reiji Watanabe
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 212 ++++++++++++++++++
 2 files changed, 213 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1750f91dd936..b27fea0ce591 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,6 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 000000000000..704a2500b7e1
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of the PMU event
+ * counters (PMCR_EL1.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+static uint64_t pmcr_extract_n(uint64_t pmcr_val)
+{
+	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+}
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+	uint64_t pmcr, pmcr_n;
+
+	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	pmcr = read_sysreg(pmcr_el0);
+	pmcr_n = pmcr_extract_n(pmcr);
+
+	/* Make sure that PMCR_EL0.N indicates the value userspace set */
+	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+
+	GUEST_DONE();
+}
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
+				     int *gic_fd)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu_init init;
+	uint8_t pmuver;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vm = vm_create(1);
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	*vcpup = vcpu;
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+	struct ucall uc;
+
+	vcpu_args_set(vcpu, 1, pmcr_n);
+	vcpu_run(vcpu);
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		break;
+	}
+}
+
+/*
+ * Create a guest with one vCPU, set the PMCR_EL1.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t sp, pmcr, pmcr_orig;
+	struct kvm_vcpu_init init;
+
+	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Save the initial sp to restore it later to run the guest again */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	/* Update the PMCR_EL1.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr |= (pmcr_n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT;
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	/*
+	 * Reset and re-initialize the vCPU, and run the guest code again to
+	 * check if PMCR_EL1.N is preserved.
+	 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Create a guest with one vCPU, and attempt to set the PMCR_EL1.N for
+ * the vCPU to @pmcr_n, which is larger than the host value.
+ * The attempt should fail as @pmcr_n is too big to set for the vCPU.
+ */
+static void run_error_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd, ret;
+	uint64_t pmcr, pmcr_orig;
+
+	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Update the PMCR_EL1.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr |= (pmcr_n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT;
+
+	/* This should fail as @pmcr_n is too big to set for the vCPU */
+	ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
+		    pmcr, pmcr_orig);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. PMCR_EL1.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t pmcr;
+
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	close(gic_fd);
+	kvm_vm_free(vm);
+	return pmcr_extract_n(pmcr);
+}
+
+int main(void)
+{
+	uint64_t i, pmcr_n;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	pmcr_n = get_pmcr_n_limit();
+	for (i = 0; i <= pmcr_n; i++)
+		run_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_PMCR_N_MASK; i++)
+		run_error_test(i);
+
+	return 0;
+}
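
One detail worth calling out in main() above: ARMV8_PMU_PMCR_N_MASK is
0x1f and the error loop's bound is exclusive, so the error path covers N
values from pmcr_n + 1 up to 30. A standalone trace of the partition
(editorial sketch; the host limit of 6 is an assumption):

	#include <stdint.h>
	#include <stdio.h>

	#define ARMV8_PMU_PMCR_N_MASK	0x1f

	int main(void)
	{
		uint64_t pmcr_n = 6;	/* as if returned by get_pmcr_n_limit() */
		uint64_t i;

		for (i = 0; i <= pmcr_n; i++)
			printf("run_test(%llu)\n", (unsigned long long)i);	/* 0..6 */
		for (i = pmcr_n + 1; i < ARMV8_PMU_PMCR_N_MASK; i++)
			printf("run_error_test(%llu)\n", (unsigned long long)i); /* 7..30 */
		return 0;
	}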
bh=z22qmgC5SC+sXKM98jN0p2rTRyOhH1vtDSh4QyA73tQ=; b=I+itkkHsdTyQM4RgfjqvqJRvrqiZSkASJvbKcNjW9zk0AUudlSB4ePat1eqwVQdLQf bnGC8N3nu0Ajt4dwtatRG9wp6/ZVN8IX+x40ZqCtUorQiTLkYkzEZAWRXpC3Enuk7mW0 n3wf78V6Hi4lEYE96qfTkD9FeoypCkNVSUVQlc62YYegnPUtUdjKFeAOezaIQySor8Bl /QpRnA+YBLomGhkQUmbuPBh85VOtouyiy0iCtNLkovkMwGfZ7+eTFhCk0iC8twbNC3/S WVE8Gv1u1Zbm3/E10+Z3uZbnpOhxFUh9t36BQ6AVAMH3b3BLFlI5QJdq0OBzPivK9mlo MYwg== X-Gm-Message-State: AFqh2kq6iR55/1MxZGUKcEzkEQq0IbWlUAuiNAe6zUNNb65E1eqh4gvo pzbdZphuFAdRVfvgA2tUR9H6R5I/dSM= X-Google-Smtp-Source: AMrXdXtoAVGT2Mqh49fYUGjJmWRnikQapFCzp4vONG0Dv1M3K/q6HKU2SUNr4GP7cu1LPnk5skNO32w68qk= X-Received: from reijiw-west4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:aa1]) (user=reijiw job=sendgmr) by 2002:a05:6a00:1585:b0:58b:c659:7f50 with SMTP id u5-20020a056a00158500b0058bc6597f50mr133389pfk.43.1673919548469; Mon, 16 Jan 2023 17:39:08 -0800 (PST) Date: Mon, 16 Jan 2023 17:35:41 -0800 In-Reply-To: <20230117013542.371944-1-reijiw@google.com> Mime-Version: 1.0 References: <20230117013542.371944-1-reijiw@google.com> X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog Message-ID: <20230117013542.371944-8-reijiw@google.com> Subject: [PATCH v2 7/8] KVM: selftests: aarch64: vPMU register test for implemented counters From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.linux.dev Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a new test case to the vpmu_counter_access test to check if PMU registers or their bits for implemented counters on the vCPU are readable/writable as expected, and can be programmed to count events. Signed-off-by: Reiji Watanabe --- .../kvm/aarch64/vpmu_counter_access.c | 347 +++++++++++++++++- 1 file changed, 344 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c index 704a2500b7e1..54b69c76c824 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c @@ -5,7 +5,8 @@ * Copyright (c) 2022 Google LLC. * * This test checks if the guest can see the same number of the PMU event - * counters (PMCR_EL1.N) that userspace sets. + * counters (PMCR_EL1.N) that userspace sets, and if the guest can access + * those counters. * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. */ #include @@ -18,19 +19,350 @@ /* The max number of the PMU event counters (excluding the cycle counter) */ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) +/* + * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}_EL0 + * were basically copied from arch/arm64/kernel/perf_event.c. 
+ */ +#define PMEVN_CASE(n, case_macro) \ + case n: case_macro(n); break + +#define PMEVN_SWITCH(x, case_macro) \ + do { \ + switch (x) { \ + PMEVN_CASE(0, case_macro); \ + PMEVN_CASE(1, case_macro); \ + PMEVN_CASE(2, case_macro); \ + PMEVN_CASE(3, case_macro); \ + PMEVN_CASE(4, case_macro); \ + PMEVN_CASE(5, case_macro); \ + PMEVN_CASE(6, case_macro); \ + PMEVN_CASE(7, case_macro); \ + PMEVN_CASE(8, case_macro); \ + PMEVN_CASE(9, case_macro); \ + PMEVN_CASE(10, case_macro); \ + PMEVN_CASE(11, case_macro); \ + PMEVN_CASE(12, case_macro); \ + PMEVN_CASE(13, case_macro); \ + PMEVN_CASE(14, case_macro); \ + PMEVN_CASE(15, case_macro); \ + PMEVN_CASE(16, case_macro); \ + PMEVN_CASE(17, case_macro); \ + PMEVN_CASE(18, case_macro); \ + PMEVN_CASE(19, case_macro); \ + PMEVN_CASE(20, case_macro); \ + PMEVN_CASE(21, case_macro); \ + PMEVN_CASE(22, case_macro); \ + PMEVN_CASE(23, case_macro); \ + PMEVN_CASE(24, case_macro); \ + PMEVN_CASE(25, case_macro); \ + PMEVN_CASE(26, case_macro); \ + PMEVN_CASE(27, case_macro); \ + PMEVN_CASE(28, case_macro); \ + PMEVN_CASE(29, case_macro); \ + PMEVN_CASE(30, case_macro); \ + default: \ + GUEST_ASSERT_1(0, x); \ + } \ + } while (0) + +#define RETURN_READ_PMEVCNTRN(n) \ + return read_sysreg(pmevcntr##n##_el0) +static unsigned long read_pmevcntrn(int n) +{ + PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN); + return 0; +} + +#define WRITE_PMEVCNTRN(n) \ + write_sysreg(val, pmevcntr##n##_el0) +static void write_pmevcntrn(int n, unsigned long val) +{ + PMEVN_SWITCH(n, WRITE_PMEVCNTRN); + isb(); +} + +#define READ_PMEVTYPERN(n) \ + return read_sysreg(pmevtyper##n##_el0) +static unsigned long read_pmevtypern(int n) +{ + PMEVN_SWITCH(n, READ_PMEVTYPERN); + return 0; +} + +#define WRITE_PMEVTYPERN(n) \ + write_sysreg(val, pmevtyper##n##_el0) +static void write_pmevtypern(int n, unsigned long val) +{ + PMEVN_SWITCH(n, WRITE_PMEVTYPERN); + isb(); +} + +/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */ +static inline unsigned long read_sel_evcntr(int sel) +{ + write_sysreg(sel, pmselr_el0); + isb(); + return read_sysreg(pmxevcntr_el0); +} + +/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */ +static inline void write_sel_evcntr(int sel, unsigned long val) +{ + write_sysreg(sel, pmselr_el0); + isb(); + write_sysreg(val, pmxevcntr_el0); + isb(); +} + +/* Read PMEVTTYPER_EL0 through PMXEVTYPER_EL0 */ +static inline unsigned long read_sel_evtyper(int sel) +{ + write_sysreg(sel, pmselr_el0); + isb(); + return read_sysreg(pmxevtyper_el0); +} + +/* Write PMEVTTYPER_EL0 through PMXEVTYPER_EL0 */ +static inline void write_sel_evtyper(int sel, unsigned long val) +{ + write_sysreg(sel, pmselr_el0); + isb(); + write_sysreg(val, pmxevtyper_el0); + isb(); +} + +static inline void enable_counter(int idx) +{ + uint64_t v = read_sysreg(pmcntenset_el0); + + write_sysreg(BIT(idx) | v, pmcntenset_el0); + isb(); +} + +static inline void disable_counter(int idx) +{ + uint64_t v = read_sysreg(pmcntenset_el0); + + write_sysreg(BIT(idx) | v, pmcntenclr_el0); + isb(); +} + +/* + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}_EL0 + * accessors that test cases will use. Each of the accessors will + * either directly reads/writes PMEVT{CNTR,TYPER}_EL0 + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()). + * + * This is used to test that combinations of those accessors provide + * the consistent behavior. 
+
+/* Read PMEVCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevcntr_el0);
+	isb();
+}
+
+/* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevtyper_el0);
+	isb();
+}
+
+static inline void enable_counter(int idx)
+{
+	/* Writes of 0 to PMCNTENSET_EL0 are ignored; set only BIT(idx) */
+	write_sysreg(BIT(idx), pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	/*
+	 * PMCNTENCLR_EL0 has write-1-to-clear semantics, and writes of 0
+	 * are ignored. Write only BIT(idx) so that the enable bits of the
+	 * other counters are left untouched.
+	 */
+	write_sysreg(BIT(idx), pmcntenclr_el0);
+	isb();
+}
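For reference, the SET and CLR views of the enable bitmap read back the same
value; only their write semantics differ. A hedged illustration of the
behavior the helpers above rely on:

	static void set_clr_semantics_demo(void)
	{
		/* Enable counters 0 and 2 */
		write_sysreg(BIT(0) | BIT(2), pmcntenset_el0);
		isb();
		/* Both PMCNTENSET_EL0 and PMCNTENCLR_EL0 now read BIT(0) | BIT(2) */

		/* Write-1-to-clear: disables only counter 0 */
		write_sysreg(BIT(0), pmcntenclr_el0);
		isb();
		/* read_sysreg(pmcntenset_el0) == BIT(2) here */
	}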
+
+/*
+ * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
+ * accessors that test cases will use. Each of the accessors will either
+ * directly read/write PMEV{CNTR,TYPER}<n>_EL0
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or read/write them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors behave
+ * consistently.
+ */
+struct pmc_accessor {
+	/* A function to be used to read PMEVCNTR<n>_EL0 */
+	unsigned long	(*read_cntr)(int idx);
+	/* A function to be used to write PMEVCNTR<n>_EL0 */
+	void		(*write_cntr)(int idx, unsigned long val);
+	/* A function to be used to read PMEVTYPER<n>_EL0 */
+	unsigned long	(*read_typer)(int idx);
+	/* A function to be used to write PMEVTYPER<n>_EL0 */
+	void		(*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+	/* test with all direct accesses */
+	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+	/* test with all indirect accesses */
+	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+	/* read with direct accesses, and write with indirect accesses */
+	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+	/* read with indirect accesses, and write with direct accesses */
+	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+static void pmu_disable_reset(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr &= ~ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static void pmu_enable(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, and enable the PMU */
+	pmcr |= ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static bool pmu_event_is_supported(uint64_t event)
+{
+	GUEST_ASSERT_1(event < 64, event);
+	return (read_sysreg(pmceid0_el0) & BIT(event));
+}
+
 static uint64_t pmcr_extract_n(uint64_t pmcr_val)
 {
 	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 }

+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)		   \
+{									   \
+	uint64_t _tval = read_sysreg(regname);				   \
+									   \
+	if (set_expected)						   \
+		GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
+	else								   \
+		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMOVS}{SET,CLR}_EL0 registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMOVS}{SET,CLR}_EL0 registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+	bool set_expected = false;
+
+	if (set_op) {
+		write_sysreg(test_bit, pmcntenset_el0);
+		write_sysreg(test_bit, pmovsset_el0);
+
+		/* The bit will be set only if the counter is implemented */
+		pmcr_n = pmcr_extract_n(read_sysreg(pmcr_el0));
+		set_expected = (pmc_idx < pmcr_n);
+	} else {
+		write_sysreg(test_bit, pmcntenclr_el0);
+		write_sysreg(test_bit, pmovsclr_el0);
+	}
+	check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	uint64_t write_data, read_data, read_data_prev;
+
+	/* Disable all PMCs and reset all PMCs to zero. */
+	pmu_disable_reset();
+
+	/*
+	 * Tests for reading/writing {PMCNTEN,PMOVS}{SET,CLR}_EL0.
+	 */
+
+	/* Make sure that the bits for the counter read as 0 */
+	test_bitmap_pmu_regs(pmc_idx, false);
+	/* Test if setting the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, true);
+	/* Test if clearing the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, false);
+
+	/*
+	 * Tests for reading/writing the event type register.
+	 */
+
+	/*
+	 * Set the event type register to an arbitrary value just for testing
+	 * of reading/writing the register.
+	 * The Arm ARM says that for events 0x0000 to 0x003F, the value
+	 * indicated in the PMEVTYPER<n>_EL0.evtCount field is the value
+	 * written to the field even when the specified event is not
+	 * supported.
+	 */
+	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+	acc->write_typer(pmc_idx, write_data);
+	read_data = acc->read_typer(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+	/*
+	 * Tests for reading/writing the event count register.
+	 */
+
+	read_data = acc->read_cntr(pmc_idx);
+
+	/* The counter value must be 0, as it has not counted since reset */
+	GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
+
+	write_data = read_data + pmc_idx + 0x12345;
+	acc->write_cntr(pmc_idx, write_data);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+	/* The following test requires the INST_RETIRED event support. */
+	if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
+		return;
+
+	pmu_enable();
+	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+	/*
+	 * Make sure that the counter doesn't count the INST_RETIRED
+	 * event when disabled, and that it does count the event when
+	 * enabled.
+	 */
+	disable_counter(pmc_idx);
+	read_data_prev = acc->read_cntr(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+
+	enable_counter(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+
+	/*
+	 * The counter should have increased by at least 1, as there is at
+	 * least one instruction between enabling the counter and reading
+	 * it (the test assumes that no event counter is claimed by the
+	 * host's higher-priority events).
+	 */
+	GUEST_ASSERT_4(read_data > read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+}
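A note on pmu_event_is_supported() above: PMCEID0_EL0 advertises the common
events 0x0000-0x003F as a bitmap, one bit per event number. For instance
(illustration only):

	/*
	 * INST_RETIRED is common event number 0x0008, so it is supported
	 * exactly when bit 8 of PMCEID0_EL0 is set.
	 */
	bool inst_retired = read_sysreg(pmceid0_el0) &
			    BIT(ARMV8_PMUV3_PERFCTR_INST_RETIRED);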
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters works
+ * as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
 	uint64_t pmcr, pmcr_n;
+	int i, pmc;

 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);

@@ -40,6 +372,15 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);

+	/*
+	 * Tests for reading/writing PMU registers for implemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = 0; pmc < pmcr_n; pmc++)
+			test_access_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }

@@ -96,7 +437,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
-		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
 		break;
 	case UCALL_DONE:
 		break;
From patchwork Tue Jan 17 01:35:42 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13104040
Date: Mon, 16 Jan 2023 17:35:42 -0800
In-Reply-To: <20230117013542.371944-1-reijiw@google.com>
Mime-Version: 1.0
References: <20230117013542.371944-1-reijiw@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20230117013542.371944-9-reijiw@google.com>
Subject: [PATCH v2 8/8] KVM: selftests: aarch64: vPMU register test for unimplemented counters
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Add a new test case to the vpmu_counter_access test to check that PMU
registers, and their bits for unimplemented counters, are either not
accessible or are RAZ, as expected.

Signed-off-by: Reiji Watanabe
---
 .../kvm/aarch64/vpmu_counter_access.c         | 103 +++++++++++++++++-
 .../selftests/kvm/include/aarch64/processor.h |   1 +
 2 files changed, 98 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 54b69c76c824..a7e34d63808b 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,8 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
- * those counters.
+ * counters (PMCR_EL1.N) that userspace sets, if the guest can access
+ * those counters, and if the guest cannot access any other counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include
@@ -179,6 +179,51 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };

+#define INVALID_EC	(-1ul)
+uint64_t expected_ec = INVALID_EC;
+uint64_t op_end_addr;
+
+static void guest_sync_handler(struct ex_regs *regs)
+{
+	uint64_t esr, ec;
+
+	esr = read_sysreg(esr_el1);
+	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
+	GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
+		       regs->pc, esr, ec, expected_ec);
+
+	/* Will go back to op_end_addr after the handler exits */
+	regs->pc = op_end_addr;
+
+	/*
+	 * Clear op_end_addr, and set expected_ec back to INVALID_EC
+	 * as a sign that the exception has been handled.
+	 */
+	op_end_addr = 0;
+	expected_ec = INVALID_EC;
+}
+
+/*
+ * Run the given operation that should trigger an exception with the
+ * given exception class. The exception handler (guest_sync_handler)
+ * will reset op_end_addr to 0 and expected_ec to INVALID_EC, and will
+ * return to the instruction at @done_label.
+ * The @done_label must be a unique label in this test program.
+ */
+#define TEST_EXCEPTION(ec, ops, done_label)		\
+{							\
+	extern int done_label;				\
+							\
+	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);	\
+	GUEST_ASSERT(ec != INVALID_EC);			\
+	WRITE_ONCE(expected_ec, ec);			\
+	dsb(ish);					\
+	ops;						\
+	asm volatile(#done_label":");			\
+	GUEST_ASSERT(!op_end_addr);			\
+	GUEST_ASSERT(expected_ec == INVALID_EC);	\
+}
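The extern declaration in TEST_EXCEPTION may look odd: done_label is not a
variable but a symbol emitted by the inline asm statement, declared extern
only so that C code can take its address. A stripped-down illustration of the
trick (hypothetical names, assuming GCC-style inline asm):

	extern int resume_point;	/* symbol only; never dereferenced */

	static uint64_t demo_resume_addr(void)
	{
		asm volatile("resume_point:");
		/* C code can take the address of the asm-emitted label */
		return (uint64_t)&resume_point;
	}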
+
 static void pmu_disable_reset(void)
 {
 	uint64_t pmcr = read_sysreg(pmcr_el0);
@@ -352,16 +397,38 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
 		       pmc_idx, acc, read_data, read_data_prev);
 }

+/*
+ * Tests for reading/writing registers for the unimplemented event counter
+ * specified by @pmc_idx (>= PMCR_EL1.N).
+ */
+static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	/*
+	 * Reading/writing the event count/type registers should cause
+	 * an UNDEFINED exception.
+	 */
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
+	/*
+	 * The bit corresponding to the (unimplemented) counter in
+	 * {PMCNTEN,PMOVS}{SET,CLR}_EL0 registers should be RAZ.
+	 */
+	test_bitmap_pmu_regs(pmc_idx, true);
+	test_bitmap_pmu_regs(pmc_idx, false);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
  * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
- * if reading/writing PMU registers for implemented counters works
- * as expected.
+ * if reading/writing PMU registers for implemented or unimplemented
+ * counters works as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n;
+	uint64_t pmcr, pmcr_n, unimp_mask;
 	int i, pmc;

 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -372,6 +439,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);

+	/*
+	 * Make sure that (RAZ) bits corresponding to unimplemented event
+	 * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL0 registers are reset to zero.
+	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
+	 */
+	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
+	check_bitmap_pmu_regs(unimp_mask, false);
+
 	/*
 	 * Tests for reading/writing PMU registers for implemented counters.
 	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
@@ -381,6 +456,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 		test_access_pmc_regs(&pmc_accessors[i], pmc);
 	}

+	/*
+	 * Tests for reading/writing PMU registers for unimplemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
+			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
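As a concrete check of the mask arithmetic in guest_code() (illustration
only, assuming ARMV8_PMU_MAX_COUNTERS is 32 as in the kernel's PMU headers,
so ARMV8_PMU_MAX_GENERAL_COUNTERS == 31): with a vCPU configured with
pmcr_n == 4, the unimplemented general counters are 4..30, and

	/* bits [30:4] set, bits [3:0] and [63:31] clear */
	unimp_mask = GENMASK_ULL(30, 4);	/* == 0x7ffffff0 */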

@@ -394,7 +477,7 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
-	uint8_t pmuver;
+	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -407,11 +490,18 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	};

 	vm = vm_create(1);
+	vm_init_descriptor_tables(vm);
+	/* Catch exceptions for easier debugging */
+	for (ec = 0; ec < ESR_EC_NUM; ec++) {
+		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
+					guest_sync_handler);
+	}

 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vcpu_init_descriptor_tables(vcpu);
 	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);

 	/* Make sure that PMUv3 support is indicated in the ID register */
@@ -480,6 +570,7 @@ static void run_test(uint64_t pmcr_n)
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 5f977528e09c..52d87809356c 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -104,6 +104,7 @@ enum {
 #define ESR_EC_SHIFT		26
 #define ESR_EC_MASK		(ESR_EC_NUM - 1)

+#define ESR_EC_UNKNOWN		0x0
 #define ESR_EC_SVC64		0x15
 #define ESR_EC_IABT		0x21
 #define ESR_EC_DABT		0x25
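A closing note on ESR_EC_UNKNOWN and the handler-installation loop in
create_vpmu_vm(): an UNDEFINED exception taken to EL1 has no dedicated
exception class, so ESR_EL1.EC reads as 0x0 ("Unknown reason"), which is
what test_access_invalid_pmc_regs() expects. The loop registers the handler
for every possible EC (the 6-bit field allows 64 classes) so that any
unexpected fault is also caught; a targeted alternative (illustration only,
using the same selftest API as the patch) would register just the one class
the test expects:

	/* Hypothetical narrower setup: only catch "Unknown reason" */
	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ESR_EC_UNKNOWN,
				guest_sync_handler);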