From patchwork Sat May 27 04:02:33 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13257541
Date: Fri, 26 May 2023 21:02:33 -0700
In-Reply-To: <20230527040236.1875860-1-reijiw@google.com>
References: <20230527040236.1875860-1-reijiw@google.com>
Message-ID: <20230527040236.1875860-2-reijiw@google.com>
Subject: [PATCH 1/4] KVM: arm64: PMU: Introduce a helper to set the guest's PMU
From: Reiji Watanabe
To: Marc Zyngier, Oliver Upton, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Zenghui Yu, Suzuki K Poulose,
    Paolo Bonzini, Ricardo Koller, Jing Zhang, Raghavendra Rao Anata,
    Will Deacon, Reiji Watanabe

Introduce a new helper function to set the guest's PMU
(kvm->arch.arm_pmu), and use it when the guest's PMU needs to be set.
This helper will make it easier for the following patches to modify
the relevant code.

No functional change intended.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 45727d50d18d..d50c8f7a2410 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -869,6 +869,21 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	lockdep_assert_held(&kvm->arch.config_lock);
+
+	if (!arm_pmu) {
+		arm_pmu = kvm_pmu_probe_armpmu();
+		if (!arm_pmu)
+			return -ENODEV;
+	}
+
+	kvm->arch.arm_pmu = arm_pmu;
+
+	return 0;
+}
+
 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -888,7 +903,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 			break;
 		}
 
-		kvm->arch.arm_pmu = arm_pmu;
+		WARN_ON_ONCE(kvm_arm_set_vm_pmu(kvm, arm_pmu));
 		cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
 		ret = 0;
 		break;
@@ -913,9 +928,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 
 	if (!kvm->arch.arm_pmu) {
 		/* No PMU set, get the default one */
-		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
-		if (!kvm->arch.arm_pmu)
-			return -ENODEV;
+		int ret = kvm_arm_set_vm_pmu(kvm, NULL);
+
+		if (ret)
+			return ret;
 	}
 
 	switch (attr->attr) {
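[Editor's note] For context, a minimal userspace sketch of the vCPU device-attribute path that ends up in kvm_arm_pmu_v3_set_attr() above. The helper name, the choice of the IRQ attribute, and the assumption that vcpu_fd was created with the KVM_ARM_VCPU_PMU_V3 feature are illustrative, not part of the patch; error handling is omitted.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Sketch (assumed userspace usage): set the PMU overflow interrupt via the
 * KVM_ARM_VCPU_PMU_V3_CTRL device-attribute group on a vCPU fd. Any access
 * to this group reaches kvm_arm_pmu_v3_set_attr() on the KVM side.
 */
static int set_pmu_irq(int vcpu_fd, int irq)
{
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_IRQ,
		.addr  = (__u64)(unsigned long)&irq,
	};

	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}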
From patchwork Sat May 27 04:02:34 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13257542
Date: Fri, 26 May 2023 21:02:34 -0700
In-Reply-To: <20230527040236.1875860-1-reijiw@google.com>
References: <20230527040236.1875860-1-reijiw@google.com>
Message-ID: <20230527040236.1875860-3-reijiw@google.com>
Subject: [PATCH 2/4] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
From: Reiji Watanabe
To: Marc Zyngier, Oliver Upton, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Zenghui Yu, Suzuki K Poulose,
    Paolo Bonzini, Ricardo Koller, Jing Zhang, Raghavendra Rao Anata,
    Will Deacon, Reiji Watanabe

Set the default PMU for the guest on the first vCPU reset, rather than
when userspace first uses KVM_ARM_VCPU_PMU_V3_CTRL. The following
patches will use the PMUVer of the guest's PMU as the default value of
ID_AA64DFR0_EL1.PMUVer for vCPUs with a PMU configured.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 10 +---------
 arch/arm64/kvm/reset.c    | 20 +++++++++++++-------
 include/kvm/arm_pmu.h     |  6 ++++++
 3 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index d50c8f7a2410..0194a94c4bae 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -869,7 +869,7 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
-static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 {
 	lockdep_assert_held(&kvm->arch.config_lock);
 
@@ -926,14 +926,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	if (vcpu->arch.pmu.created)
 		return -EBUSY;
 
-	if (!kvm->arch.arm_pmu) {
-		/* No PMU set, get the default one */
-		int ret = kvm_arm_set_vm_pmu(kvm, NULL);
-
-		if (ret)
-			return ret;
-	}
-
 	switch (attr->attr) {
 	case KVM_ARM_VCPU_PMU_V3_IRQ: {
 		int __user *uaddr = (int __user *)(long)attr->addr;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index b5dee8e57e77..f5e24492926c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -258,13 +258,24 @@ static int kvm_set_vm_width(struct kvm_vcpu *vcpu)
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_reset_state reset_state;
+	struct kvm *kvm = vcpu->kvm;
 	int ret;
 	bool loaded;
 	u32 pstate;
 
-	mutex_lock(&vcpu->kvm->arch.config_lock);
+	mutex_lock(&kvm->arch.config_lock);
 	ret = kvm_set_vm_width(vcpu);
-	mutex_unlock(&vcpu->kvm->arch.config_lock);
+	if (!ret && kvm_vcpu_has_pmu(vcpu)) {
+		if (!kvm_arm_support_pmu_v3())
+			ret = -EINVAL;
+		else if (unlikely(!kvm->arch.arm_pmu))
+			/*
+			 * As no PMU is set for the guest yet,
+			 * set the default one.
+			 */
+			ret = kvm_arm_set_vm_pmu(kvm, NULL);
+	}
+	mutex_unlock(&kvm->arch.config_lock);
 
 	if (ret)
 		return ret;
@@ -315,11 +326,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		} else {
 			pstate = VCPU_RESET_PSTATE_EL1;
 		}
-
-		if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
-			ret = -EINVAL;
-			goto out;
-		}
 		break;
 	}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 1a6a695ca67a..5ece2a3c1858 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -96,6 +96,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
 #else
 struct kvm_pmu {
@@ -168,6 +169,11 @@ static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
 	return 0;
 }
 
+static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	return 0;
+}
+
 #endif
 
 #endif
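[Editor's note] For context, a hedged sketch of the userspace flow that, with this patch, first reaches kvm_reset_vcpu() and therefore picks the default PMU: requesting the PMU feature at vCPU init, rather than a later KVM_ARM_VCPU_PMU_V3_CTRL access. The helper name and the assumption that vm_fd/vcpu_fd come from KVM_CREATE_VM/KVM_CREATE_VCPU are illustrative; error handling is omitted.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Sketch (assumed userspace usage): initialise a vCPU with the PMUv3
 * feature bit. The KVM_ARM_VCPU_INIT ioctl ends up in kvm_reset_vcpu()
 * on the KVM side, which now selects the default PMU if none is set.
 */
static int init_vcpu_with_pmu(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;

	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init))
		return -1;
	init.features[0] |= 1u << KVM_ARM_VCPU_PMU_V3;

	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}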
From patchwork Sat May 27 04:02:35 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13257543
Date: Fri, 26 May 2023 21:02:35 -0700
In-Reply-To: <20230527040236.1875860-1-reijiw@google.com>
References: <20230527040236.1875860-1-reijiw@google.com>
Message-ID: <20230527040236.1875860-4-reijiw@google.com>
Subject: [PATCH 3/4] KVM: arm64: PMU: Use PMUVer of the guest's PMU for ID_AA64DFR0.PMUVer
From: Reiji Watanabe
To: Marc Zyngier, Oliver Upton, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Zenghui Yu, Suzuki K Poulose,
    Paolo Bonzini, Ricardo Koller, Jing Zhang, Raghavendra Rao Anata,
    Will Deacon, Reiji Watanabe

Currently, KVM uses the sanitized value of ID_AA64DFR0_EL1.PMUVer as
both the default value and the limit value of this field for vCPUs
with a PMU configured. However, the sanitized value can be
inappropriate for vCPUs on some heterogeneous PMU systems, because
arm64_ftr_bits for PMUVer is defined as FTR_EXACT with safe_val == 0
(if ID_AA64DFR0_EL1.PMUVer is not uniform across all PEs on the host,
the sanitized value will be 0).

Use the PMUVer of the guest's PMU (kvm->arch.arm_pmu->pmuver) as the
default value and the limit value of ID_AA64DFR0_EL1.PMUVer for vCPUs
with a PMU configured. When the guest's PMU is switched to a different
PMU, reset the value of ID_AA64DFR0_EL1.PMUVer for the vCPUs based on
the new PMU, unless userspace has already modified the PMUVer and that
value is still valid with the new PMU.
Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/arm.c              |  6 ----
 arch/arm64/kvm/pmu-emul.c         | 28 +++++++++++++-----
 arch/arm64/kvm/sys_regs.c         | 48 ++++++++++++++++++++-----------
 include/kvm/arm_pmu.h             |  4 +--
 5 files changed, 57 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7e7e19ef6993..8ca0e7210a59 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -231,6 +231,8 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE		7
 	/* SMCCC filter initialized for the VM */
 #define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED		8
+	/* PMUVer set by userspace for the VM */
+#define KVM_ARCH_FLAG_PMUVER_DIRTY			9
 	unsigned long flags;
 
 	/*
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14391826241c..3c2fddfe90f7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -164,12 +164,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	set_default_spectre(kvm);
 	kvm_arm_init_hypercalls(kvm);
 
-	/*
-	 * Initialise the default PMUver before there is a chance to
-	 * create an actual PMU.
-	 */
-	kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();
-
 	return 0;
 
 err_free_cpumask:
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 0194a94c4bae..6cd08d5e5b72 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -871,6 +871,8 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 {
+	u8 new_limit;
+
 	lockdep_assert_held(&kvm->arch.config_lock);
 
 	if (!arm_pmu) {
@@ -880,6 +882,22 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 	}
 
 	kvm->arch.arm_pmu = arm_pmu;
+	new_limit = kvm_arm_pmu_get_pmuver_limit(kvm);
+
+	/*
+	 * Reset the value of ID_AA64DFR0_EL1.PMUVer to the new limit value,
+	 * unless the current value was set by userspace and is still a valid
+	 * value for the new PMU.
+	 */
+	if (!test_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &kvm->arch.flags)) {
+		kvm->arch.dfr0_pmuver.imp = new_limit;
+		return 0;
+	}
+
+	if (kvm->arch.dfr0_pmuver.imp > new_limit) {
+		kvm->arch.dfr0_pmuver.imp = new_limit;
+		clear_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &kvm->arch.flags);
+	}
 
 	return 0;
 }
@@ -1049,13 +1067,9 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	return -ENXIO;
 }
 
-u8 kvm_arm_pmu_get_pmuver_limit(void)
+u8 kvm_arm_pmu_get_pmuver_limit(struct kvm *kvm)
 {
-	u64 tmp;
+	u8 host_pmuver = kvm->arch.arm_pmu ? kvm->arch.arm_pmu->pmuver : 0;
 
-	tmp = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
-	tmp = cpuid_feature_cap_perfmon_field(tmp,
-					      ID_AA64DFR0_EL1_PMUVer_SHIFT,
-					      ID_AA64DFR0_EL1_PMUVer_V3P5);
-	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
+	return min_t(u8, host_pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5);
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 71b12094d613..a76155ad997c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1382,8 +1382,11 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 {
 	u8 pmuver, host_pmuver;
 	bool valid_pmu;
+	u64 current_val = read_id_reg(vcpu, rd);
+	int ret = -EINVAL;
 
-	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
+	mutex_lock(&vcpu->kvm->arch.config_lock);
+	host_pmuver = kvm_arm_pmu_get_pmuver_limit(vcpu->kvm);
 
 	/*
 	 * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
@@ -1393,26 +1396,31 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 	 */
 	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
 	if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver))
-		return -EINVAL;
+		goto out;
 
 	valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
 
 	/* Make sure view register and PMU support do match */
 	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
-		return -EINVAL;
+		goto out;
 
 	/* We can only differ with PMUver, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= current_val;
 	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
 	if (val)
-		return -EINVAL;
+		goto out;
 
-	if (valid_pmu)
+	if (valid_pmu) {
 		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
-	else
+		set_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &vcpu->kvm->arch.flags);
+	} else
 		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
 
-	return 0;
+	ret = 0;
+out:
+	mutex_unlock(&vcpu->kvm->arch.config_lock);
+
+	return ret;
 }
 
 static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1421,8 +1429,11 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 {
 	u8 perfmon, host_perfmon;
 	bool valid_pmu;
+	u64 current_val = read_id_reg(vcpu, rd);
+	int ret = -EINVAL;
 
-	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
+	mutex_lock(&vcpu->kvm->arch.config_lock);
+	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit(vcpu->kvm));
 
 	/*
 	 * Allow DFR0_EL1.PerfMon to be set from userspace as long as
@@ -1433,26 +1444,31 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 	perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val);
 	if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) ||
 	    (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3))
-		return -EINVAL;
+		goto out;
 
 	valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF);
 
 	/* Make sure view register and PMU support do match */
 	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
-		return -EINVAL;
+		goto out;
 
 	/* We can only differ with PerfMon, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= current_val;
 	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
 	if (val)
-		goto out;
+		goto out;
 
-	if (valid_pmu)
+	if (valid_pmu) {
 		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
-	else
+		set_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &vcpu->kvm->arch.flags);
+	} else
 		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
 
-	return 0;
+	ret = 0;
+out:
+	mutex_unlock(&vcpu->kvm->arch.config_lock);
+
+	return ret;
 }
 
 /*
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 5ece2a3c1858..00c05d17cf3a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -95,7 +95,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 #define kvm_pmu_is_3p5(vcpu)						\
 	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
-u8 kvm_arm_pmu_get_pmuver_limit(void);
+u8 kvm_arm_pmu_get_pmuver_limit(struct kvm *kvm);
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
 #else
@@ -164,7 +164,7 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
-static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
+static inline u8 kvm_arm_pmu_get_pmuver_limit(struct kvm *kvm)
 {
 	return 0;
 }
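[Editor's note] For context, a hedged sketch of how userspace exercises set_id_aa64dfr0_el1() above through the ONE_REG interface. The helper name and error handling are illustrative, and the sysreg encoding (op0=3, op1=0, CRn=0, CRm=5, op2=0) is assumed to be ID_AA64DFR0_EL1; this is not part of the patch.

#include <linux/kvm.h>
#include <asm/kvm.h>		/* ARM64_SYS_REG() */
#include <sys/ioctl.h>

/*
 * Sketch (assumed userspace usage): read ID_AA64DFR0_EL1, optionally
 * adjust its PMUVer field, and write it back. The write is what lands
 * in set_id_aa64dfr0_el1() on the KVM side.
 */
static int rewrite_id_aa64dfr0(int vcpu_fd)
{
	__u64 val;
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 0, 0, 5, 0),	/* ID_AA64DFR0_EL1 */
		.addr = (__u64)(unsigned long)&val,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;
	/* ...modify the PMUVer field of 'val' here if desired... */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}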
From patchwork Sat May 27 04:02:36 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13257544
Date: Fri, 26 May 2023 21:02:36 -0700
In-Reply-To: <20230527040236.1875860-1-reijiw@google.com>
References: <20230527040236.1875860-1-reijiw@google.com>
Message-ID: <20230527040236.1875860-5-reijiw@google.com>
Subject: [PATCH 4/4] KVM: arm64: PMU: Don't use the PMUVer of the PMU set for guest
From: Reiji Watanabe
To: Marc Zyngier, Oliver Upton, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Zenghui Yu, Suzuki K Poulose,
    Paolo Bonzini, Ricardo Koller, Jing Zhang, Raghavendra Rao Anata,
    Will Deacon, Reiji Watanabe

Avoid using the PMUVer of the PMU hardware associated with the guest,
except in a few cases, as that PMUVer may be different from the value
of ID_AA64DFR0_EL1.PMUVer for the guest.

The first case is when using the PMUVer as the limit value of
ID_AA64DFR0_EL1.PMUVer for the guest. The second case is when using
the PMUVer to determine the valid range of events for
KVM_ARM_VCPU_PMU_V3_FILTER, as KVM has historically allowed userspace
to specify events that are valid for the PMU hardware, regardless of
the value of the guest's ID_AA64DFR0_EL1.PMUVer. KVM will still
determine the range of events that the guest itself can use based on
the value of the guest's ID_AA64DFR0_EL1.PMUVer, though.
Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6cd08d5e5b72..67512b13ba2d 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -35,12 +35,8 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
+static u32 __kvm_pmu_event_mask(u8 pmuver)
 {
-	unsigned int pmuver;
-
-	pmuver = kvm->arch.arm_pmu->pmuver;
-
 	switch (pmuver) {
 	case ID_AA64DFR0_EL1_PMUVer_IMP:
 		return GENMASK(9, 0);
@@ -55,6 +51,11 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 	}
 }
 
+static u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	return __kvm_pmu_event_mask(kvm->arch.dfr0_pmuver.imp);
+}
+
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -757,7 +758,7 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 		 * Don't advertise STALL_SLOT, as PMMIR_EL0 is handled
 		 * as RAZ
 		 */
-		if (vcpu->kvm->arch.arm_pmu->pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4)
+		if (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P4)
 			val &= ~BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32);
 		base = 32;
 	}
@@ -970,11 +971,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return 0;
 	}
 	case KVM_ARM_VCPU_PMU_V3_FILTER: {
+		u8 pmuver = kvm_arm_pmu_get_pmuver_limit(kvm);
 		struct kvm_pmu_event_filter __user *uaddr;
 		struct kvm_pmu_event_filter filter;
 		int nr_events;
 
-		nr_events = kvm_pmu_event_mask(kvm) + 1;
+		/*
+		 * Allow userspace to specify an event filter for the entire
+		 * event range supported by PMUVer of the hardware, rather
+		 * than the guest's PMUVer for KVM backward compatibility.
+		 */
+		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
 
 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
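[Editor's note] The archived message is truncated above. For context, a hedged sketch of the userspace side of KVM_ARM_VCPU_PMU_V3_FILTER, the attribute whose event range the hunk above keeps tied to the hardware PMU's PMUVer. The helper name, the single denied event, and the error handling are illustrative assumptions, not part of the patch.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Sketch (assumed userspace usage): deny a single PMU event for the
 * guest via the PMUv3 event filter. The ioctl reaches the
 * KVM_ARM_VCPU_PMU_V3_FILTER case in kvm_arm_pmu_v3_set_attr().
 */
static int deny_event(int vcpu_fd, __u16 event)
{
	struct kvm_pmu_event_filter filter = {
		.base_event = event,
		.nevents    = 1,
		.action     = KVM_PMU_EVENT_DENY,
	};
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_FILTER,
		.addr  = (__u64)(unsigned long)&filter,
	};

	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}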