From patchwork Fri Feb 3 04:02:29 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126979
Date: Thu, 2 Feb 2023 20:02:29 -0800
In-Reply-To: <20230203040242.1792453-1-reijiw@google.com>
References: <20230203040242.1792453-1-reijiw@google.com>
Message-ID: <20230203040242.1792453-2-reijiw@google.com>
Subject: [PATCH v3 01/14] KVM: arm64: PMU: Introduce a helper to set the guest's PMU
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang <shahuang@redhat.com>, Reiji Watanabe

Introduce a new helper function to set the guest's PMU (kvm->arch.arm_pmu), and use it when the guest's PMU needs to be set. This helper will make it easier for the following patches to modify the relevant code.

No functional change intended.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 24908400e190..f2a89f414297 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -867,6 +867,21 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	lockdep_assert_held(&kvm->lock);
+
+	if (!arm_pmu) {
+		arm_pmu = kvm_pmu_probe_armpmu();
+		if (!arm_pmu)
+			return -ENODEV;
+	}
+
+	kvm->arch.arm_pmu = arm_pmu;
+
+	return 0;
+}
+
 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -886,7 +901,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 			break;
 		}
 
-		kvm->arch.arm_pmu = arm_pmu;
+		WARN_ON_ONCE(kvm_arm_set_vm_pmu(kvm, arm_pmu));
 		cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
 		ret = 0;
 		break;
@@ -911,10 +926,11 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	mutex_lock(&kvm->lock);
 	if (!kvm->arch.arm_pmu) {
 		/* No PMU set, get the default one */
-		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
-		if (!kvm->arch.arm_pmu) {
+		int ret = kvm_arm_set_vm_pmu(kvm, NULL);
+
+		if (ret) {
 			mutex_unlock(&kvm->lock);
-			return -ENODEV;
+			return ret;
 		}
 	}
 	mutex_unlock(&kvm->lock);
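For reference, kvm_arm_pmu_v3_set_pmu() above is the backend of the KVM_ARM_VCPU_PMU_V3_SET_PMU vCPU attribute. A minimal userspace sketch of that path (vcpu_fd and pmu_id are assumed to come from the VMM; pmu_id is the perf PMU type id, e.g. read from /sys/bus/event_source/devices/<pmu>/type; error handling omitted):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Bind the VM's PMU emulation to the physical PMU identified by pmu_id. */
static int select_guest_pmu(int vcpu_fd, int pmu_id)
{
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_SET_PMU,
		.addr  = (uint64_t)(unsigned long)&pmu_id,
	};

	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}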
From patchwork Fri Feb 3 04:20:44 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126990
Date: Thu, 2 Feb 2023 20:20:44 -0800
Message-ID: <20230203042056.1794649-1-reijiw@google.com>
Subject: [PATCH v3 02/14] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe

For vCPUs with PMU configured, KVM uses the sanitized value (returned from read_sanitised_ftr_reg()) of ID_AA64DFR0_EL1.PMUVer as the default value and the limit value of that field. The sanitized value could be inappropriate for these purposes on some heterogeneous PMU systems, as only one of the PMUs on the system can be associated with the guest. Also, since PMUVer is defined as FTR_EXACT with safe_val == 0 (in cpufeature.c), it will be zero when any PEs on the system have a PMUVer value different from the other PEs (i.e. the guest with PMU configured might see PMUVer == 0).

As a guest with PMU configured is associated with one of the PMUs on the system, the default and the limit PMUVer value for the guest should be set based on that PMU. However, the PMU won't be associated with the guest until some vCPU device attribute for the PMU is set, so KVM doesn't have that information until then.

Set the default PMU for the guest on the first vCPU reset. The following patches will use the PMUVer of that PMU as the default value of ID_AA64DFR0_EL1.PMUVer for vCPUs with PMU configured.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 14 +-------------
 arch/arm64/kvm/reset.c    | 21 ++++++++++++++-------
 include/kvm/arm_pmu.h     |  6 ++++++
 3 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f2a89f414297..c98020ca427e 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -867,7 +867,7 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
-static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 {
 	lockdep_assert_held(&kvm->lock);
 
@@ -923,18 +923,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	if (vcpu->arch.pmu.created)
 		return -EBUSY;
 
-	mutex_lock(&kvm->lock);
-	if (!kvm->arch.arm_pmu) {
-		/* No PMU set, get the default one */
-		int ret = kvm_arm_set_vm_pmu(kvm, NULL);
-
-		if (ret) {
-			mutex_unlock(&kvm->lock);
-			return ret;
-		}
-	}
-	mutex_unlock(&kvm->lock);
-
 	switch (attr->attr) {
 	case KVM_ARM_VCPU_PMU_V3_IRQ: {
 		int __user *uaddr = (int __user *)(long)attr->addr;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index e0267f672b8a..5d1e1acfe6ce 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -248,18 +248,30 @@ static int kvm_set_vm_width(struct kvm_vcpu *vcpu)
  */
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
+	struct kvm *kvm = vcpu->kvm;
 	struct vcpu_reset_state reset_state;
 	int ret;
 	bool loaded;
 	u32 pstate;
 
-	mutex_lock(&vcpu->kvm->lock);
+	mutex_lock(&kvm->lock);
 	ret = kvm_set_vm_width(vcpu);
 	if (!ret) {
 		reset_state = vcpu->arch.reset_state;
 		WRITE_ONCE(vcpu->arch.reset_state.reset, false);
+
+		/*
+		 * When the vCPU has a PMU, but no PMU is set for the guest
+		 * yet, set the default one.
+		 */
+		if (kvm_vcpu_has_pmu(vcpu) && unlikely(!kvm->arch.arm_pmu)) {
+			if (kvm_arm_support_pmu_v3())
+				ret = kvm_arm_set_vm_pmu(kvm, NULL);
+			else
+				ret = -EINVAL;
+		}
 	}
-	mutex_unlock(&vcpu->kvm->lock);
+	mutex_unlock(&kvm->lock);
 
 	if (ret)
 		return ret;
@@ -297,11 +309,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		} else {
 			pstate = VCPU_RESET_PSTATE_EL1;
 		}
-
-		if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
-			ret = -EINVAL;
-			goto out;
-		}
 		break;
 	}
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 628775334d5e..7b5c5c8c634b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -96,6 +96,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
 #else
 struct kvm_pmu {
@@ -168,6 +169,11 @@ static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
 	return 0;
 }
 
+static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	return 0;
+}
+
 #endif
 
 #endif
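The "first vCPU reset" above is the KVM_ARM_VCPU_INIT path. A rough sketch of how a VMM opts a vCPU into the PMU, after which kvm_reset_vcpu() picks a default PMU if userspace never calls KVM_ARM_VCPU_PMU_V3_SET_PMU (vm_fd and vcpu_fd are assumed to already exist; error handling trimmed):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Initialise a vCPU with the PMUv3 feature bit set. */
static int vcpu_init_with_pmu(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;

	memset(&init, 0, sizeof(init));

	/* Query the preferred target for this host, then add features. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init))
		return -1;

	init.features[0] |= 1u << KVM_ARM_VCPU_PMU_V3;

	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}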
From patchwork Fri Feb 3 04:20:45 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126991
Date: Thu, 2 Feb 2023 20:20:45 -0800
In-Reply-To: <20230203042056.1794649-1-reijiw@google.com>
References: <20230203042056.1794649-1-reijiw@google.com>
Message-ID: <20230203042056.1794649-2-reijiw@google.com>
Subject: [PATCH v3 03/14] KVM: arm64: PMU: Don't use the sanitized value for PMUVer
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe

Now, the guest has the default PMU on the first vCPU reset when a PMU is configured for any of the guest's vCPUs. For a vCPU with PMU configured, use the PMUVer of the guest's PMU as the default value of ID_AA64DFR0_EL1.PMUVer and as the limit value of the field, instead of its sanitized value (the sanitized value could be inappropriate for these purposes on some heterogeneous PMU systems, as only one of the PMUs on the system can be associated with the guest; see the previous patch for more details).

When the PMU for the guest is changed, the PMUVer for the guest will be reset based on the new PMU. On heterogeneous systems, this might end up changing a PMUVer that was set by userspace, if userspace changes the PMUVer before using KVM_ARM_VCPU_PMU_V3_SET_PMU. This behavior isn't ideal. Other options considered were not updating the PMUVer even when the PMU for the guest is changed, or setting PMUVer to the new limit value only when it is larger than the limit. The former might end up exposing a PMUVer that KVM can't support. The latter is inconvenient, as the default PMUVer for the PMU set by KVM_ARM_VCPU_PMU_V3_SET_PMU would be an unknown (but supported) value, and userspace would explicitly need to set the PMUVer for the guest in order to use the host PMUVer value.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/arm.c              |  6 -----
 arch/arm64/kvm/pmu-emul.c         |  2 ++
 arch/arm64/kvm/sys_regs.c         | 38 +++++++++++++++++++++----------
 include/kvm/arm_pmu.h             |  1 -
 5 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 35a159d131b5..33839077a95c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -230,6 +230,7 @@ struct kvm_arch {
 	struct {
 		u8 imp:4;
 		u8 unimp:4;
+		u8 imp_limit;
 	} dfr0_pmuver;
 
 	/* Hypercall features firmware registers' descriptor */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9c5573bc4614..41f478344a4d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -164,12 +164,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	set_default_spectre(kvm);
 	kvm_arm_init_hypercalls(kvm);
 
-	/*
-	 * Initialise the default PMUver before there is a chance to
-	 * create an actual PMU.
-	 */
-	kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();
-
 	return 0;
 
 err_free_cpumask:
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index c98020ca427e..49580787ee09 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -878,6 +878,8 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 	}
 
 	kvm->arch.arm_pmu = arm_pmu;
+	kvm->arch.dfr0_pmuver.imp_limit = min_t(u8, arm_pmu->pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5);
+	kvm->arch.dfr0_pmuver.imp = kvm->arch.dfr0_pmuver.imp_limit;
 
 	return 0;
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c6cbfe6b854b..c1ec4a68b914 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1259,8 +1259,11 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 {
 	u8 pmuver, host_pmuver;
 	bool valid_pmu;
+	u64 current_val = read_id_reg(vcpu, rd);
+	int ret = -EINVAL;
 
-	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
+	mutex_lock(&vcpu->kvm->lock);
+	host_pmuver = vcpu->kvm->arch.dfr0_pmuver.imp_limit;
 
 	/*
 	 * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
@@ -1270,26 +1273,30 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 	 */
 	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
 	if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver))
-		return -EINVAL;
+		goto out;
 
 	valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
 
 	/* Make sure view register and PMU support do match */
 	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
-		return -EINVAL;
+		goto out;
 
 	/* We can only differ with PMUver, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= current_val;
 	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
 	if (val)
-		return -EINVAL;
+		goto out;
 
 	if (valid_pmu)
 		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
 	else
 		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
 
-	return 0;
+	ret = 0;
+out:
+	mutex_unlock(&vcpu->kvm->lock);
+
+	return ret;
 }
 
 static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1298,8 +1305,11 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 {
 	u8 perfmon, host_perfmon;
 	bool valid_pmu;
+	u64 current_val = read_id_reg(vcpu, rd);
+	int ret = -EINVAL;
 
-	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
+	mutex_lock(&vcpu->kvm->lock);
+	host_perfmon = pmuver_to_perfmon(vcpu->kvm->arch.dfr0_pmuver.imp_limit);
 
 	/*
 	 * Allow DFR0_EL1.PerfMon to be set from userspace as long as
@@ -1310,26 +1320,30 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 	perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val);
 	if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) ||
 	    (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3))
-		return -EINVAL;
+		goto out;
 
 	valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF);
 
 	/* Make sure view register and PMU support do match */
 	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
-		return -EINVAL;
+		goto out;
 
 	/* We can only differ with PerfMon, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= current_val;
 	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
 	if (val)
-		return -EINVAL;
+		goto out;
 
 	if (valid_pmu)
 		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
 	else
 		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
 
-	return 0;
+	ret = 0;
+out:
+	mutex_unlock(&vcpu->kvm->lock);
+
+	return ret;
 }
 
 /*
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 7b5c5c8c634b..c7da46c7377e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -95,7 +95,6 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 #define kvm_pmu_is_3p5(vcpu) \
 	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
-u8 kvm_arm_pmu_get_pmuver_limit(void);
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
 #else
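For reference, the PMUVer field these setters validate is bits [11:8] of ID_AA64DFR0_EL1. A small standalone illustration of the FIELD_GET()-style extraction used above, written in plain C outside the kernel with the field offsets spelled out by hand:

#include <stdint.h>
#include <stdio.h>

/* ID_AA64DFR0_EL1.PMUVer lives in bits [11:8]. */
#define PMUVER_SHIFT	8
#define PMUVER_MASK	(0xfull << PMUVER_SHIFT)

static unsigned int pmuver_of(uint64_t id_aa64dfr0)
{
	return (unsigned int)((id_aa64dfr0 & PMUVER_MASK) >> PMUVER_SHIFT);
}

int main(void)
{
	/* Example value: PMUVer == 6 encodes FEAT_PMUv3p5. */
	uint64_t reg = 0x6ull << PMUVER_SHIFT;

	printf("PMUVer = %u\n", pmuver_of(reg));
	return 0;
}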
From patchwork Fri Feb 3 04:20:46 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126992
Date: Thu, 2 Feb 2023 20:20:46 -0800
In-Reply-To: <20230203042056.1794649-1-reijiw@google.com>
References: <20230203042056.1794649-1-reijiw@google.com>
Message-ID: <20230203042056.1794649-3-reijiw@google.com>
Subject: [PATCH v3 04/14] KVM: arm64: PMU: Don't use the PMUVer of the PMU set for the guest
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe

KVM uses two potentially different PMUVer values for a vCPU with PMU configured (kvm->arch.dfr0_pmuver.imp and kvm->arch.arm_pmu->pmuver). Stop using the host's PMUVer (arm_pmu->pmuver) in most cases, as the PMUVer for the guest (kvm->arch.dfr0_pmuver.imp) could be set by userspace (and could be lower than the host's PMUVer).

The only exception where KVM keeps using the host's PMUVer is when creating an event filter (KVM_ARM_VCPU_PMU_V3_FILTER). For this, KVM uses the value to determine the valid range of the event numbers and the size of the event filter bitmap. Using the host's PMUVer here keeps compatibility with the current behavior of PMU_V3_FILTER. It also lets KVM keep the entire filter when the PMUVer for the guest is changed; KVM then only needs to change the actual range in use.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 49580787ee09..701728ad78d6 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -35,12 +35,8 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
+static u32 __kvm_pmu_event_mask(u8 pmuver)
 {
-	unsigned int pmuver;
-
-	pmuver = kvm->arch.arm_pmu->pmuver;
-
 	switch (pmuver) {
 	case ID_AA64DFR0_EL1_PMUVer_IMP:
 		return GENMASK(9, 0);
@@ -55,6 +51,11 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 	}
 }
 
+static u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	return __kvm_pmu_event_mask(kvm->arch.dfr0_pmuver.imp);
+}
+
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
 */
@@ -755,7 +756,7 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 		 * Don't advertise STALL_SLOT, as PMMIR_EL0 is handled
 		 * as RAZ
 		 */
-		if (vcpu->kvm->arch.arm_pmu->pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4)
+		if (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P4)
 			val &= ~BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32);
 		base = 32;
 	}
@@ -955,7 +956,12 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		struct kvm_pmu_event_filter filter;
 		int nr_events;
 
-		nr_events = kvm_pmu_event_mask(kvm) + 1;
+		/*
+		 * Allocate an event filter for the entire range supported
+		 * by the PMU hardware so we can simply change the actual
+		 * range of use when the PMUVer for the guest is changed.
+		 */
+		nr_events = __kvm_pmu_event_mask(kvm->arch.dfr0_pmuver.imp_limit) + 1;
 
 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
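The KVM_ARM_VCPU_PMU_V3_FILTER attribute discussed above is driven from userspace roughly as below. This is only a sketch; the event number 0x11 is the architectural CPU_CYCLES event, used here purely as an example, and vcpu_fd is assumed to come from the VMM:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Install an "allow" filter covering a single event. Once any filter is
 * installed, events outside the allowed ranges are not counted for the guest.
 */
static int allow_cpu_cycles_only(int vcpu_fd)
{
	struct kvm_pmu_event_filter filter = {
		.base_event = 0x11,		/* PMUv3 CPU_CYCLES */
		.nevents    = 1,
		.action     = KVM_PMU_EVENT_ALLOW,
	};
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_FILTER,
		.addr  = (uint64_t)(unsigned long)&filter,
	};

	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}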
From patchwork Fri Feb 3 04:20:47 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126993
Date: Thu, 2 Feb 2023 20:20:47 -0800
In-Reply-To: <20230203042056.1794649-1-reijiw@google.com>
References: <20230203042056.1794649-1-reijiw@google.com>
Message-ID: <20230203042056.1794649-4-reijiw@google.com>
Subject: [PATCH v3 05/14] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe

On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and PMOVS{SET,CLR}_EL0 for a vCPU are reset by reset_pmu_reg(). This function clears RAZ bits of those registers corresponding to unimplemented event counters on the vCPU, and sets bits corresponding to implemented event counters to a predefined pseudo UNKNOWN value (some bits are set to 1). The function identifies (un)implemented event counters on the vCPU based on the PMCR_EL0.N value on the host.

Using the host value for this would be problematic once KVM supports letting userspace set PMCR_EL0.N to a value different from the host value, as some of the RAZ bits of those registers could end up being set to 1.

Fix this by simply clearing the registers, which ensures that all the RAZ bits are cleared even when the PMCR_EL0.N value for the vCPU differs from the host value. Use reset_val() to do this instead of fixing reset_pmu_reg(), and remove reset_pmu_reg(), as it is no longer used.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/sys_regs.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c1ec4a68b914..e6e419157856 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -602,23 +602,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
-static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
-{
-	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
-
-	/* No PMU available, any PMU reg may UNDEF... */
-	if (!kvm_arm_support_pmu_v3())
-		return;
-
-	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
-	n &= ARMV8_PMU_PMCR_N_MASK;
-	if (n)
-		mask |= GENMASK(n - 1, 0);
-
-	reset_unknown(vcpu, r);
-	__vcpu_sys_reg(vcpu, r->reg) &= mask;
-}
-
 static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	reset_unknown(vcpu, r);
@@ -976,7 +959,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	  trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
 
 #define PMU_SYS_REG(r)						\
-	SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
+	SYS_DESC(r), .reset = reset_val, .visibility = pmu_visibility
 
 /* Macro to expand the PMEVCNTRn_EL0 register */
 #define PMU_PMEVCNTR_EL0(n)					\
From patchwork Fri Feb 3 04:20:48 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126994
Date: Thu, 2 Feb 2023 20:20:48 -0800
In-Reply-To: <20230203042056.1794649-1-reijiw@google.com>
References: <20230203042056.1794649-1-reijiw@google.com>
Message-ID: <20230203042056.1794649-5-reijiw@google.com>
Subject: [PATCH v3 06/14] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe

The default reset function for PMU registers (defined by PMU_SYS_REG) now simply clears a specified register. Use the default one for PMUSERENR_EL0 and PMCCFILTR_EL0, as KVM currently clears those registers on vCPU reset (NOTE: All non-RES0 fields of those registers have UNKNOWN reset values, and the same fields of their AArch32 registers have 0 reset values).

No functional change intended.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e6e419157856..790f028a1686 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1752,7 +1752,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
 	{ PMU_SYS_REG(SYS_PMUSERENR_EL0), .access = access_pmuserenr,
-	  .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
+	  .reg = PMUSERENR_EL0 },
 	{ PMU_SYS_REG(SYS_PMOVSSET_EL0), .access = access_pmovs,
 	  .reg = PMOVSSET_EL0 },
 
@@ -1908,7 +1908,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
 	{ PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper,
-	  .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
+	  .reg = PMCCFILTR_EL0 },
 
 	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
 	{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
From patchwork Fri Feb 3 04:20:49 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126995
Date: Thu, 2 Feb 2023 20:20:49 -0800
In-Reply-To: <20230203042056.1794649-1-reijiw@google.com>
References: <20230203042056.1794649-1-reijiw@google.com>
Message-ID: <20230203042056.1794649-6-reijiw@google.com>
Subject: [PATCH v3 07/14] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe

Some code extracts PMCR_EL0.N using ARMV8_PMU_PMCR_N_SHIFT and ARMV8_PMU_PMCR_N_MASK. Define ARMV8_PMU_PMCR_N (0x1f << 11), and simplify that code using FIELD_GET() and/or ARMV8_PMU_PMCR_N. The following patches will also use these macros to extract PMCR_EL0.N.

No functional change intended.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/perf_event.h | 2 +-
 arch/arm64/kernel/perf_event.c      | 3 +--
 arch/arm64/kvm/pmu-emul.c           | 3 +--
 arch/arm64/kvm/sys_regs.c           | 7 +++----
 4 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 3eaf462f5752..eeef8d56d9c8 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -219,7 +219,7 @@
 #define ARMV8_PMU_PMCR_LC	(1 << 6) /* Overflow on 64 bit cycle counter */
 #define ARMV8_PMU_PMCR_LP	(1 << 7) /* Long event counter enable */
 #define ARMV8_PMU_PMCR_N_SHIFT	11  /* Number of counters supported */
-#define ARMV8_PMU_PMCR_N_MASK	0x1f
+#define ARMV8_PMU_PMCR_N	(0x1f << ARMV8_PMU_PMCR_N_SHIFT)
 #define ARMV8_PMU_PMCR_MASK	0xff    /* Mask for writable bits */
 
 /*
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index a5193f2146a6..1775d89a9144 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1158,8 +1158,7 @@ static void __armv8pmu_probe_pmu(void *info)
 	probe->present = true;
 
 	/* Read the nb of CNTx counters supported from PMNC */
-	cpu_pmu->num_events = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT)
-		& ARMV8_PMU_PMCR_N_MASK;
+	cpu_pmu->num_events = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
 
 	/* Add the CPU cycles counter */
 	cpu_pmu->num_events += 1;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 701728ad78d6..9dbf532e264e 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -246,9 +246,8 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-	u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
+	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
 
-	val &= ARMV8_PMU_PMCR_N_MASK;
 	if (val == 0)
 		return BIT(ARMV8_PMU_CYCLE_IDX);
 	else
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 790f028a1686..9b410a2ea20c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -629,7 +629,7 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 		return;
 
 	/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
-	pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N;
 	if (!kvm_supports_32bit_el0())
 		pmcr |= ARMV8_PMU_PMCR_LC;
 
@@ -736,10 +736,9 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
-	u64 pmcr, val;
+	u64 val;
 
-	pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+	val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
 		return false;
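The new mask turns the extraction into a one-liner with FIELD_GET(). Outside the kernel, where FIELD_GET() isn't available, the equivalent operation looks like this small, self-contained sketch (the sample PMCR value is made up for illustration):

#include <stdint.h>
#include <stdio.h>

#define ARMV8_PMU_PMCR_N_SHIFT	11
#define ARMV8_PMU_PMCR_N	(0x1fu << ARMV8_PMU_PMCR_N_SHIFT)

/* Equivalent of FIELD_GET(ARMV8_PMU_PMCR_N, pmcr): mask, then shift down. */
static unsigned int pmcr_n(uint64_t pmcr)
{
	return (unsigned int)((pmcr & ARMV8_PMU_PMCR_N) >> ARMV8_PMU_PMCR_N_SHIFT);
}

int main(void)
{
	uint64_t pmcr = 6u << ARMV8_PMU_PMCR_N_SHIFT;	/* e.g. 6 event counters */

	printf("PMCR_EL0.N = %u\n", pmcr_n(pmcr));
	return 0;
}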
From patchwork Fri Feb 3 04:20:50 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126996
Date: Thu, 2 Feb 2023 20:20:50 -0800
In-Reply-To: <20230203042056.1794649-1-reijiw@google.com>
References: <20230203042056.1794649-1-reijiw@google.com>
Message-ID: <20230203042056.1794649-7-reijiw@google.com>
Subject: [PATCH v3 08/14] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe

Add a helper to read a vCPU's PMCR_EL0, and use it wherever KVM reads a vCPU's PMCR_EL0. The PMCR_EL0 value is currently tracked in the per-vCPU sysreg file. The following patches will make (only) PMCR_EL0.N tracked per guest. The new helper will be useful for combining the PMCR_EL0.N field (tracked per guest) with the other fields (tracked per vCPU) to provide the value of PMCR_EL0.

No functional change intended.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/arm.c      |  3 +--
 arch/arm64/kvm/pmu-emul.c | 17 +++++++++++------
 arch/arm64/kvm/sys_regs.c |  6 +++---
 include/kvm/arm_pmu.h     |  6 ++++++
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 41f478344a4d..ee3df098a75e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -753,8 +753,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 		}
 
 		if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
-			kvm_pmu_handle_pmcr(vcpu,
-					    __vcpu_sys_reg(vcpu, PMCR_EL0));
+			kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
 
 		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
 			return kvm_vcpu_suspend(vcpu);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 9dbf532e264e..5ff44224148f 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -68,7 +68,7 @@ static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
 
 static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
 {
-	u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0);
+	u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc));
 
 	return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
 	       (pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
@@ -246,7 +246,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
+	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
 
 	if (val == 0)
 		return BIT(ARMV8_PMU_CYCLE_IDX);
@@ -267,7 +267,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@@ -319,7 +319,7 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = 0;
 
-	if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
+	if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
 		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
@@ -421,7 +421,7 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 {
 	int i;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E))
 		return;
 
 	/* Weed out disabled counters */
@@ -562,7 +562,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
 {
 	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
 
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) &&
 	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx));
 }
 
@@ -1071,3 +1071,8 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
 			ID_AA64DFR0_EL1_PMUVer_V3P5);
 	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
 }
+
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return __vcpu_sys_reg(vcpu, PMCR_EL0);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9b410a2ea20c..9f8c25e49a5a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -680,7 +680,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		 * Only update writeable bits of PMCR (continuing into
 		 * kvm_pmu_handle_pmcr() as well)
 		 */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+		val = kvm_vcpu_read_pmcr(vcpu);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!kvm_supports_32bit_el0())
@@ -689,7 +689,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+		val = kvm_vcpu_read_pmcr(vcpu)
 		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
 		p->regval = val;
 	}
@@ -738,7 +738,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
 	u64 val;
 
-	val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
+	val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
 		return false;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index c7da46c7377e..8f55c9055058 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -97,6 +97,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
 #else
 struct kvm_pmu {
 };
@@ -173,6 +174,11 @@ static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 	return 0;
 }
 
+static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
 #endif
 
 #endif

From patchwork Fri Feb 3 04:20:51 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13127001
KMxOcqssQqF0u8rPO4bX8O6YyyPyPCD9XVEaAxoeXeSCR6sWPZErK2q/R1RnT2tKPbByOu1kHHAGt JEay70VBLKlDpTxycxPiosB8ZcRmpH8s2TZb4TMTtH6p58pHgE0PER17wn+ZA3IIRBN9Z8CsoX4Wz HXSC0dHZlLLIM6d+mdPT2v6C6rBKjGmRG9mu8KofQqGQerdN2gvQRgfaajWsv6n6tnzMyni15j3+n YIGivfqg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNneX-000KwC-As; Fri, 03 Feb 2023 04:26:21 +0000 Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNnby-000Jbd-0X for linux-arm-kernel@lists.infradead.org; Fri, 03 Feb 2023 04:23:45 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id n69-20020a254048000000b0085db6b675e7so3070019yba.22 for ; Thu, 02 Feb 2023 20:23:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=Ninu9t9khJ537NvgTgh0SdQwYTtXrvr6XzPdAKmZIas=; b=WGULXC2QPAXoA4h/oZtvz4aNUv7DSZK/ONRwUzYMLoP592X5Q/E5BRHxp8BZknpYPN LzOXTwfE6GzZQGgHClpFJRrZhq045B6jl+4uEW47EBEwllRae/x20XHGSfIwsFgEIAGm d64MX2ffTMJhoe0WZ+aI7TgEAog8Wl5c9qRgrSi10MTuB/XB+8uE4i4LIoi5jUR5prr1 iy9I3E3C9k/b4bBdV/7HJ1dsPZRFGJ6TgI08LOJrnPjN0ThtqnfW7ZT1p1Rgfw8X3+dK 2x9BELzDb6a+DzDiAFthgvIm8O+pp082VhQ5HGdkc0PXFzr4/jnRRClSvgOIZq98t6GA esPQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=Ninu9t9khJ537NvgTgh0SdQwYTtXrvr6XzPdAKmZIas=; b=rEpG4Cux29Ht6SgfnpzZTHmDBdL4tt4AOAe8L+i62MShQU3IaO9nQICeOKUDuQevXk kwjHuQviYDQhoM8o7DYGR38Xg9ElVOY5tnc8zqunvLqPFBFkcWjAwO9uOrL3Y+b3lwwo fq2uQ1yk/yDmu4LnTivV7aPvv+SbNnrAAVo7N3SndMAMX1c2E4sz2g4IFRMjqI7rk4UV VkdDaUL10yO0g3yB0g3JGOYyDbQsYS01OS3FxN9R5I/H06DYKljWavm+QY0ImGK31qv2 IXQtKbIZsOmx5oG5S7x5BKODoB+3NcNpKkvU/K6tU6E2pR/0TEUunyPth9JQzr+XFQA4 Ej8w== X-Gm-Message-State: AO0yUKXbETZuq49O3eGkCyl/8CryrG5Z9O66pHVNcgHMZ3KvI1prlzuU dcM0sivSUDvPARdTBenamM+PBEX88Jw= X-Google-Smtp-Source: AK7set9BfDCBCkZHMuaDtYMLVaqKLlk8U79SXBDu/y2Co+a51NjKGyqtXjdF+Ce+/Hsy9OH5ScLNMezBLhY= X-Received: from reijiw-west4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:aa1]) (user=reijiw job=sendgmr) by 2002:a81:d46:0:b0:4ff:e4bc:b56f with SMTP id 67-20020a810d46000000b004ffe4bcb56fmr977377ywn.488.1675398211280; Thu, 02 Feb 2023 20:23:31 -0800 (PST) Date: Thu, 2 Feb 2023 20:20:51 -0800 In-Reply-To: <20230203042056.1794649-1-reijiw@google.com> Mime-Version: 1.0 References: <20230203042056.1794649-1-reijiw@google.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog Message-ID: <20230203042056.1794649-8-reijiw@google.com> Subject: [PATCH v3 09/14] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.linux.dev Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230202_202342_083880_E7F1B461 X-CRM114-Status: GOOD ( 21.00 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: 
"linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The number of PMU event counters is indicated in PMCR_EL0.N. For a vCPU with PMUv3 configured, the value is set to the same value as the current PE on every vCPU reset. Unless the vCPU is pinned to PEs that has the PMU associated to the guest from the initial vCPU reset, the value might be different from the PMU's PMCR_EL0.N on heterogeneous PMU systems. Fix this by setting the vCPU's PMCR_EL0.N to the PMU's PMCR_EL0.N value. Track the PMCR_EL0.N per guest, as only one PMU can be set for the guest (PMCR_EL0.N must be the same for all vCPUs of the guest), and it is convenient for updating the value. KVM does not yet support userspace modifying PMCR_EL0.N. The following patch will add support for that. Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/kvm_host.h | 3 +++ arch/arm64/kvm/pmu-emul.c | 14 +++++++++++++- arch/arm64/kvm/sys_regs.c | 17 ++++++++++------- 3 files changed, 26 insertions(+), 8 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 33839077a95c..734f1b6f7468 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -233,6 +233,9 @@ struct kvm_arch { u8 imp_limit; } dfr0_pmuver; + /* PMCR_EL0.N value for the guest */ + u8 pmcr_n; + /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index 5ff44224148f..3053c06db7a9 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -680,6 +680,9 @@ void kvm_host_pmu_init(struct arm_pmu *pmu) if (!entry) goto out_unlock; + WARN_ON((pmu->num_events <= 0) && + (pmu->num_events > ARMV8_PMU_MAX_COUNTERS)); + entry->arm_pmu = pmu; list_add_tail(&entry->entry, &arm_pmus); @@ -881,6 +884,13 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) kvm->arch.dfr0_pmuver.imp_limit = min_t(u8, arm_pmu->pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); kvm->arch.dfr0_pmuver.imp = kvm->arch.dfr0_pmuver.imp_limit; + /* + * Both the num_events and PMCR_EL0.N indicates the number of + * PMU event counters, but the former includes the cycle counter + * while the latter does not. + */ + kvm->arch.pmcr_n = arm_pmu->num_events - 1; + return 0; } @@ -1074,5 +1084,7 @@ u8 kvm_arm_pmu_get_pmuver_limit(void) u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu) { - return __vcpu_sys_reg(vcpu, PMCR_EL0); + u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) & ~ARMV8_PMU_PMCR_N; + + return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 9f8c25e49a5a..aba93db29697 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -624,12 +624,8 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 pmcr; - /* No PMU available, PMCR_EL0 may UNDEF... 
*/ - if (!kvm_arm_support_pmu_v3()) - return; - /* Only preserve PMCR_EL0.N, and reset the rest to 0 */ - pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N; + pmcr = kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_N; if (!kvm_supports_32bit_el0()) pmcr |= ARMV8_PMU_PMCR_LC; @@ -946,6 +942,13 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, return true; } +static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, + u64 *val) +{ + *val = kvm_vcpu_read_pmcr(vcpu); + return 0; +} + /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ { SYS_DESC(SYS_DBGBVRn_EL1(n)), \ @@ -1719,8 +1722,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_CTR_EL0), access_ctr }, { SYS_DESC(SYS_SVCR), undef_access }, - { PMU_SYS_REG(SYS_PMCR_EL0), .access = access_pmcr, - .reset = reset_pmcr, .reg = PMCR_EL0 }, + { PMU_SYS_REG(SYS_PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr, + .reg = PMCR_EL0, .get_user = get_pmcr }, { PMU_SYS_REG(SYS_PMCNTENSET_EL0), .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, { PMU_SYS_REG(SYS_PMCNTENCLR_EL0), From patchwork Fri Feb 3 04:20:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 13127000 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 510BEC636CC for ; Fri, 3 Feb 2023 04:26:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=Nc7nq1lzW4FGU6I8PqhamPrs8MDFlLJu3hJNWKClw9k=; b=Jea6a1ZpzapbrfOk1lTbFi0bRF SL2xcgduiGcO6gCabaa5DAIBnm3X1J3IpZxZ0VnL/iaCX2IgndilCBUuf8Lz+5l4ufAjsIAVDiQqC EHIWzBs1B/J6wVq3DqgU9dMNX3kfJT2SV/IQfvEcmXQUF98dxWYKphTEkdsl7KYEX1yoTS2ZkzX7U 3QTjCv1W3DfCHsTfZsSiT1MDUSba9dIzrrcnN/F6T6syVTHibJuyCul4pT2EXy7vZZsHFY+RSu3/g TySG9xxCXgfsrkKhe++Kj6wWg7302CjnZdTwyyxjK3mByjEnvoqyZ1aWwSUk9BsIWlUSPMOKQVaRD GoKiXzCw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNne3-000KiP-J9; Fri, 03 Feb 2023 04:25:52 +0000 Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNnby-000Jc4-0E for linux-arm-kernel@lists.infradead.org; Fri, 03 Feb 2023 04:23:44 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id b18-20020a253412000000b0085747dc8317so3749275yba.15 for ; Thu, 02 Feb 2023 20:23:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=rjHuOROaPh04hBIryzlS3UOwUzWVXjEKlopD+fhxM6Q=; b=ZwkdcLbW28CGT5q2oMQ7rph4wKqhu4THcTXKCGfhgxf1olfxceL475/g9ILy30bq7X Yt2shk2Z3/55BWhYiUqR8iJl6EysMSZEwq4WZzCZMZWy2orA2HMxpsbZIg/o3bh8ZqfW 
Date: Thu, 2 Feb 2023 20:20:52 -0800 In-Reply-To: <20230203042056.1794649-1-reijiw@google.com> Mime-Version: 1.0 References: <20230203042056.1794649-1-reijiw@google.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog Message-ID: <20230203042056.1794649-9-reijiw@google.com> Subject: [PATCH v3 10/14] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.linux.dev Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe

KVM does not yet support userspace modifying PMCR_EL0.N (with the previous patch, KVM ignores what is written by userspace). Add support for userspace to limit PMCR_EL0.N. Disallow userspace from setting PMCR_EL0.N to a value greater than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't support more event counters than the host hardware implements. Although this is an ABI change, it only affects userspace setting PMCR_EL0.N to a value larger than the host's. As accesses to unadvertised event counter indices are CONSTRAINED UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value on every vCPU reset before this series, I can't think of any use case where userspace would do that. Also, ignore writes to read-only bits that are cleared on vCPU reset, and to RES{0,1} bits (including writable bits that KVM doesn't support yet), as those bits shouldn't be modified (at least with the current KVM).
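As a rough illustration of the resulting ABI from the VMM side (a standalone sketch, not part of this patch), a VMM could clamp the number of general-purpose counters it requests before writing PMCR_EL0 back with KVM_SET_ONE_REG; the helper name and the clamping policy below are illustrative only, and the ioctl plumbing is omitted:

#include <stdint.h>

#define ARMV8_PMU_PMCR_N_SHIFT  11
#define ARMV8_PMU_PMCR_N        (0x1fULL << ARMV8_PMU_PMCR_N_SHIFT)

/*
 * Build a PMCR_EL0 value requesting "want_n" event counters, starting from
 * the value previously read from the vCPU (host_pmcr). KVM rejects
 * (KVM_SET_ONE_REG fails) any N larger than what the host PMU implements,
 * so clamp the request to the host's N first.
 */
static uint64_t pmcr_request_n(uint64_t host_pmcr, uint64_t want_n)
{
        uint64_t host_n = (host_pmcr & ARMV8_PMU_PMCR_N) >> ARMV8_PMU_PMCR_N_SHIFT;

        if (want_n > host_n)
                want_n = host_n;

        return (host_pmcr & ~ARMV8_PMU_PMCR_N) | (want_n << ARMV8_PMU_PMCR_N_SHIFT);
}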
Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/kvm_host.h | 3 ++ arch/arm64/kvm/pmu-emul.c | 1 + arch/arm64/kvm/sys_regs.c | 48 ++++++++++++++++++++++++++++++- 3 files changed, 51 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 734f1b6f7468..cd0014d1ec16 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -236,6 +236,9 @@ struct kvm_arch { /* PMCR_EL0.N value for the guest */ u8 pmcr_n; + /* Limit value of PMCR_EL0.N for the guest */ + u8 pmcr_n_limit; + /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index 3053c06db7a9..ff4ec678afbd 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -890,6 +890,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) * while the latter does not. */ kvm->arch.pmcr_n = arm_pmu->num_events - 1; + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1; return 0; } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index aba93db29697..959bd142b797 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -949,6 +949,52 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, return 0; } +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, + u64 val) +{ + struct kvm *kvm = vcpu->kvm; + u64 new_n, mutable_mask; + int ret = 0; + + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val); + + if (unlikely(new_n != kvm->arch.pmcr_n)) { + mutex_lock(&kvm->lock); + /* + * The vCPU can't have more counters than the PMU + * hardware implements. + */ + if (new_n <= kvm->arch.pmcr_n_limit) + kvm->arch.pmcr_n = new_n; + else + ret = -EINVAL; + + mutex_unlock(&kvm->lock); + if (ret) + return ret; + } + + /* + * Ignore writes to RES0 bits, read only bits that are cleared on + * vCPU reset, and writable bits that KVM doesn't support yet. + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace) + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU. + * But, we leave the bit as it is here, as the vCPU's PMUver might + * be changed later (NOTE: the bit will be cleared on first vCPU run + * if necessary). 
+ */ + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N); + val &= mutable_mask; + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask); + + /* The LC bit is RES1 when AArch32 is not supported */ + if (!kvm_supports_32bit_el0()) + val |= ARMV8_PMU_PMCR_LC; + + __vcpu_sys_reg(vcpu, r->reg) = val; + return 0; +} + /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ { SYS_DESC(SYS_DBGBVRn_EL1(n)), \ @@ -1723,7 +1769,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_SVCR), undef_access }, { PMU_SYS_REG(SYS_PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr, - .reg = PMCR_EL0, .get_user = get_pmcr }, + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr }, { PMU_SYS_REG(SYS_PMCNTENSET_EL0), .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, { PMU_SYS_REG(SYS_PMCNTENCLR_EL0), From patchwork Fri Feb 3 04:20:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 13126997 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1CD55C61DA4 for ; Fri, 3 Feb 2023 04:25:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=ujGtHexIvColnKQbhGhRe9HP+GeVmrPJ7zbgEp8VKPg=; b=rk7fLNo6L6JFofySOYNsXG3+AR LVuGnConxfR873It+Lr5YVe15p4DsI+BAlq+MHogZ8ju/rVvdZxTNK+XyAZHwT5se0qygpKOt6j7d NUf9RWxBXrT5MwtzDyPC8eE0c9eKdUo01xEf5TCRJpNoC8Z3g2mF9BK4IkfETzKM3ubvie3hJZPxx nMh6XBf3h1rJ2QDYtLZJaUB21QS7e2ay09OWBgbN8JBgUCD9O8mKoAHRjG8eAnpg0pyT9GzaCGtR2 mc4e2Zztstc/KBih/E2T9u+91Umhydvh9L9PoLfgHJpAPBY0RBck/IL6ZfCsqJ46nQNqnxf3A0Mnp 73CuOtZg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNncg-000K1A-O1; Fri, 03 Feb 2023 04:24:26 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNnbs-000Jce-DE for linux-arm-kernel@lists.infradead.org; Fri, 03 Feb 2023 04:23:38 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-517f8be4b00so40158227b3.3 for ; Thu, 02 Feb 2023 20:23:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=O+sPE33ivtI8WE/rDkZQ2EbbJL3GaZmuED/u7j1Xo3U=; b=KM0kw7T7zC00dPfva7kvByVE1CMZaLbvSt2+V7yUs+zgs9YwlVdulsInodNRh9ZcZ5 1OWD0rh8zOaL/IJ6Wzrn2nLOc4RhA8qZgej/oJ+XiHMvRJM6pXvsJgqtka1s7U1E9eZ5 XxF3RiyiAZfi0rR6fCcp2f8vaVWzGXFQpLbvoGAmVgRU1lGmCRsOQg4G715kuQSkaQ/+ 4bEVm0xiXxnTAr36f2D8jQSjvQ8yoSZC3j7NyoyGTPEsIluPxCAkStKP8jEbRSwdC8PL OIU1MJK5igSVM6pzUttPsTKr9ylfbvwCb4fYEMzSu2IX9G9cRXnT7PzaNuvPc6P23RF1 9vug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=O+sPE33ivtI8WE/rDkZQ2EbbJL3GaZmuED/u7j1Xo3U=; b=XkTEqTD5dGnQYC2P0gP8TBsB19BFZxhJcWtV+jlKF9JPAXIc4wzDUZ/hbjWfvvwu5E pwywZ9bj3GNvD1DFinD76Kd4mW6JGfnPtejqlT4GUan9PCa+CRoRD0mUaFp6TsLsIkwJ Sxe32SIzIa/mZk2iXnwcDOGvYrCdKUQERKFENQRZs3TIzdCIg/dykq+QU2PHbUDcQGHE CJpMxVUY/DRl1ZNI0ZseNqQT8LEdO/3wJi0VTQS6k7F83C0Fg3D7wr4XLF+i1ZBuFP5x nuFPsCGT0vA+KR03AItbcP3jw+S6gAU8ddN9ieK88YTbALJy1ojMpNnIgyCvPWin5KaW u4Ag== X-Gm-Message-State: AO0yUKXQ4BLegYGwwkAJ6R+/ZvEKzvIgJsDKPFk92fSthpmRSu8D9R3q gN+1ypo8fdklKO7j7ZGNvXGPpu+7FoE= X-Google-Smtp-Source: AK7set9fcHlSIUFCg091CQP8E8krjIZkef11HaDEGEnxlazYf07LgZU7h0XYKOhXTeXbcAwyFgK8mrUBNUU= X-Received: from reijiw-west4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:aa1]) (user=reijiw job=sendgmr) by 2002:a25:b3c7:0:b0:855:fd7e:b85d with SMTP id x7-20020a25b3c7000000b00855fd7eb85dmr7ybf.8.1675398214464; Thu, 02 Feb 2023 20:23:34 -0800 (PST) Date: Thu, 2 Feb 2023 20:20:53 -0800 In-Reply-To: <20230203042056.1794649-1-reijiw@google.com> Mime-Version: 1.0 References: <20230203042056.1794649-1-reijiw@google.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog Message-ID: <20230203042056.1794649-10-reijiw@google.com> Subject: [PATCH v3 11/14] tools: arm64: Import perf_event.h From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.linux.dev Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230202_202336_481176_BEBBEF12 X-CRM114-Status: GOOD ( 13.59 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Copy perf_event.h from the kernel's arch/arm64/include/asm/perf_event.h. The following patches will use macros defined in this header. Signed-off-by: Reiji Watanabe --- tools/arch/arm64/include/asm/perf_event.h | 258 ++++++++++++++++++++++ 1 file changed, 258 insertions(+) create mode 100644 tools/arch/arm64/include/asm/perf_event.h diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h new file mode 100644 index 000000000000..97e49a4d4969 --- /dev/null +++ b/tools/arch/arm64/include/asm/perf_event.h @@ -0,0 +1,258 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2012 ARM Ltd. + */ + +#ifndef __ASM_PERF_EVENT_H +#define __ASM_PERF_EVENT_H + +#define ARMV8_PMU_MAX_COUNTERS 32 +#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1) + +/* + * Common architectural and microarchitectural event numbers. 
+ */ +#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x0000 +#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x0001 +#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x0002 +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x0003 +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x0004 +#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x0005 +#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x0006 +#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x0007 +#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x0008 +#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x0009 +#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x000A +#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x000B +#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x000C +#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x000D +#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x000E +#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x000F +#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x0010 +#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x0011 +#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x0012 +#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x0013 +#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x0014 +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x0015 +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x0016 +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x0017 +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x0018 +#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x0019 +#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x001A +#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x001B +#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x001C +#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x001D +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x001E +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x001F +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x0020 +#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x0021 +#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x0022 +#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x0023 +#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x0024 +#define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x0025 +#define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x0026 +#define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x0027 +#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x0028 +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x0029 +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x002A +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x002B +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x002C +#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x002D +#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x002E +#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x002F +#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x0030 +#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x0031 +#define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x0032 +#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x0033 +#define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x0034 +#define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x0035 +#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x0036 +#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x0037 +#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x0038 +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x0039 +#define ARMV8_PMUV3_PERFCTR_OP_RETIRED 0x003A +#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x003B +#define ARMV8_PMUV3_PERFCTR_STALL 0x003C +#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x003D +#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x003E +#define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x003F + +/* Statistical profiling extension microarchitectural events */ +#define ARMV8_SPE_PERFCTR_SAMPLE_POP 0x4000 +#define ARMV8_SPE_PERFCTR_SAMPLE_FEED 0x4001 +#define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE 0x4002 +#define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION 0x4003 + +/* AMUv1 architecture events */ +#define ARMV8_AMU_PERFCTR_CNT_CYCLES 0x4004 +#define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM 0x4005 + +/* 
long-latency read miss events */ +#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS 0x4006 +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD 0x4009 +#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS 0x400A +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD 0x400B + +/* Trace buffer events */ +#define ARMV8_PMUV3_PERFCTR_TRB_WRAP 0x400C +#define ARMV8_PMUV3_PERFCTR_TRB_TRIG 0x400E + +/* Trace unit events */ +#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0 0x4010 +#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1 0x4011 +#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2 0x4012 +#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3 0x4013 +#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4 0x4018 +#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5 0x4019 +#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6 0x401A +#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7 0x401B + +/* additional latency from alignment events */ +#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT 0x4020 +#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT 0x4021 +#define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT 0x4022 + +/* Armv8.5 Memory Tagging Extension events */ +#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED 0x4024 +#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD 0x4025 +#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR 0x4026 + +/* ARMv8 recommended implementation defined event types */ +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x0040 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x0042 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x0044 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x0045 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x0046 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x0047 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x0048 + +#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C +#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D +#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E +#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x0050 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x0051 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x0052 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x0053 + +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x0056 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x0057 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x0058 + +#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x005C +#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x005D +#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x005E +#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x005F +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x0062 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x0063 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x0064 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x0065 +#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x0066 +#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x0067 +#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x0068 +#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x0069 +#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x006A + +#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x006C +#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x006D +#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x006E +#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x006F +#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x0070 +#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x0071 +#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x0072 +#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x0073 +#define 
ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x0074 +#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x0075 +#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x0076 +#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x0077 +#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x0078 +#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x0079 +#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x007A + +#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x007C +#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x007D +#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x007E + +#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x0081 +#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x0082 +#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x0083 +#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x0084 + +#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x0086 +#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x0087 +#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x0088 + +#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x008A +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x008B +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x008C +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x008D +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x008E +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x008F +#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x0090 +#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x0091 + +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0x00A0 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0x00A1 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0x00A2 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0x00A3 + +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0x00A6 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0x00A7 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0x00A8 + +/* + * Per-CPU PMCR: config reg + */ +#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */ +#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */ +#define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */ +#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */ +#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */ +#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ +#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */ +#define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */ +#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */ +#define ARMV8_PMU_PMCR_N (0x1f << ARMV8_PMU_PMCR_N_SHIFT) +#define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */ + +/* + * PMOVSR: counters overflow flag status reg + */ +#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */ +#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK + +/* + * PMXEVTYPER: Event selection reg + */ +#define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */ +#define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */ + +/* + * Event filters for PMUv3 + */ +#define ARMV8_PMU_EXCLUDE_EL1 (1U << 31) +#define ARMV8_PMU_EXCLUDE_EL0 (1U << 30) +#define ARMV8_PMU_INCLUDE_EL2 (1U << 27) + +/* + * PMUSERENR: user enable reg + */ +#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */ +#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */ +#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */ +#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */ +#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */ + +/* PMMIR_EL1.SLOTS mask */ +#define ARMV8_PMU_SLOTS_MASK 0xff + +#define ARMV8_PMU_BUS_SLOTS_SHIFT 8 +#define ARMV8_PMU_BUS_SLOTS_MASK 0xff +#define ARMV8_PMU_BUS_WIDTH_SHIFT 16 +#define ARMV8_PMU_BUS_WIDTH_MASK 0xf + +#endif /* __ASM_PERF_EVENT_H */ 
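As a small usage illustration of the macros imported above (a hypothetical, standalone snippet, not something the following patches add), a PMEVTYPER_EL0 value can be composed from a common event number and the EL filter bits:

#include <stdint.h>

#define ARMV8_PMUV3_PERFCTR_INST_RETIRED        0x0008
#define ARMV8_PMU_EVTYPE_EVENT                  0xffff  /* Mask for EVENT bits */
#define ARMV8_PMU_EXCLUDE_EL1                   (1U << 31)
#define ARMV8_PMU_EXCLUDE_EL0                   (1U << 30)

/* Count the given event at EL0 only: filter out EL1, keep EL0 counted. */
static uint32_t evtyper_el0_only(uint16_t event)
{
        return ARMV8_PMU_EXCLUDE_EL1 | (event & ARMV8_PMU_EVTYPE_EVENT);
}

/* Example: instructions retired in user space. */
static inline uint32_t evtyper_inst_retired_el0(void)
{
        return evtyper_el0_only(ARMV8_PMUV3_PERFCTR_INST_RETIRED);
}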
From patchwork Fri Feb 3 04:20:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 13126999 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1B0C1C61DA4 for ; Fri, 3 Feb 2023 04:26:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=AzfIEtNYCR3rIYwMpdrwn1+ypICfiNTl+9rNVY7mNHY=; b=roOxz2+kAPavlSe7ypWWBmv+Bl WO0HKjdgY8Q/R3W9olC+mUx4P/c5t/CP4Lt39AlyLXSY7t/2RX+tgRP2LNN+A3wm5zuZTItE3FBu4 SI3Kvy2CbyYOuw0znrA1I9NK8BhVorFlLY3i2iKQOHadMBiLAGuyCEAXpy6pDtfXAHmMWYLITpBWx xPQALArB2ZjHpQJhcKFIQDe12+RzJXrF05LV1gGROgDVOP+F561o6aSw2I3MlQhrSSRYQFkDm5Orb zYA3Jxgy4FFOviGvL7noR8jmy0j4QHF0zpGhxLgh5ciiyj96XLp/hqQP5QHINyPZFUFQrAy6lOLaW 2za/YKow==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNndc-000KUT-5d; Fri, 03 Feb 2023 04:25:25 +0000 Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNnbx-000JdT-Ra for linux-arm-kernel@lists.infradead.org; Fri, 03 Feb 2023 04:23:43 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id z9-20020a25ba49000000b007d4416e3667so3779747ybj.23 for ; Thu, 02 Feb 2023 20:23:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=507Q0A41sVHEvE3chpmWCkfVFc0NpWHbKwWhnRm7udU=; b=XQUxHqowVha0bXqmzcWPGy34bzVC3129Uf74CVXCK22hfFfr34BfxwR3qSRGxP5jrA o5q2BedXvd/CypTtI6dMH0NKwg5eRY/WBSKq8XCwBzVK4Vu1ZMts7uS9GwNk5Jmyw0N+ CuVuaXyU1adP9IohCNn+GnkrySiKJGwLx8F6cifJ2Zv/pbfl6DoOdmwYB1xlOZIJP5yB YC5oM9EzwHjzvVK+zpmdFWqT4texzgcStWTHFqW/6K81XD+mDMOEYHfOY2CAh5pmBBu0 n4KrvkwUSd8VgwTVAvFNATEQjnPSvEoKAmf/DodVlVrhf/bCYOKatGe5S6QOcuCFNsfX WUGQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=507Q0A41sVHEvE3chpmWCkfVFc0NpWHbKwWhnRm7udU=; b=yikG6AkH4eXMIP1ppzCqNFGnWDdGc83fk+GBbGFweTUuT9dfrPphp2uwD08VOCUF/d f30JuR+cLOK55GKmJbAjym544ZzAM7EiA8OMnveynazpETyvVE6HlknPSTrnXJJGk6rZ mCcwJl8Bd5EoY6GHMQf3Y0S79DB3HcZIrmRvdAqwPP76eI0MkKeel3nj17D0ER/NsANl 0tUNHuCnA2jm0OgvitBAafmUEEv3QQSSkL31yyh0ezBsC8hD5SUIGzPT6I3c8C/eqKc8 0vjMcWu0+QHH7w3WFg6S7ODWmM1TVZvyHXQ5cuhGaPXQqig4e3E25v12ANfruCeT+ZzW P+RQ== X-Gm-Message-State: AO0yUKUaehfS+mKLe0LKphnYW/1p+7im9Qe33ALEJDwJPpvf1iJy9eUM X/y7eHes5lwd5ffG4IyyJmbYPCWh5jM= X-Google-Smtp-Source: AK7set9mCm43mO8p74aATtv8IHkdtWnaAtoy3N2TDX+baZ3w+wL/LOIE7hDAjOiRNwpeMrVyZr+Jku1WTI4= X-Received: from reijiw-west4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:aa1]) (user=reijiw job=sendgmr) 
by 2002:a05:6902:28d:b0:867:66ee:d1ff with SMTP id v13-20020a056902028d00b0086766eed1ffmr170500ybh.388.1675398216367; Thu, 02 Feb 2023 20:23:36 -0800 (PST) Date: Thu, 2 Feb 2023 20:20:54 -0800 In-Reply-To: <20230203042056.1794649-1-reijiw@google.com> Mime-Version: 1.0 References: <20230203042056.1794649-1-reijiw@google.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog Message-ID: <20230203042056.1794649-11-reijiw@google.com> Subject: [PATCH v3 12/14] KVM: selftests: aarch64: Introduce vpmu_counter_access test From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.linux.dev Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230202_202341_923560_AE37B6F3 X-CRM114-Status: GOOD ( 26.41 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Introduce vpmu_counter_access test for arm64 platforms. The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU, and check if the guest can consistently see the same number of the PMU event counters (PMCR_EL0.N) that userspace sets. This test case is done with each of the PMCR_EL0.N values from 0 to 31 (With the PMCR_EL0.N values greater than the host value, the test expects KVM_SET_ONE_REG for the PMCR_EL0 to fail). Signed-off-by: Reiji Watanabe --- tools/testing/selftests/kvm/Makefile | 1 + .../kvm/aarch64/vpmu_counter_access.c | 207 ++++++++++++++++++ 2 files changed, 208 insertions(+) create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 1750f91dd936..b27fea0ce591 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -143,6 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config TEST_GEN_PROGS_aarch64 += aarch64/vgic_init TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq +TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access TEST_GEN_PROGS_aarch64 += access_tracking_perf_test TEST_GEN_PROGS_aarch64 += demand_paging_test TEST_GEN_PROGS_aarch64 += dirty_log_test diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c new file mode 100644 index 000000000000..7a4333f64dae --- /dev/null +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c @@ -0,0 +1,207 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * vpmu_counter_access - Test vPMU event counter access + * + * Copyright (c) 2022 Google LLC. + * + * This test checks if the guest can see the same number of the PMU event + * counters (PMCR_EL0.N) that userspace sets. + * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. + */ +#include +#include +#include +#include +#include +#include + +/* The max number of the PMU event counters (excluding the cycle counter) */ +#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) + +/* + * The guest is configured with PMUv3 with @expected_pmcr_n number of + * event counters. 
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N. + */ +static void guest_code(uint64_t expected_pmcr_n) +{ + uint64_t pmcr, pmcr_n; + + GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS); + + pmcr = read_sysreg(pmcr_el0); + pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); + + /* Make sure that PMCR_EL0.N indicates the value userspace set */ + GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n); + + GUEST_DONE(); +} + +#define GICD_BASE_GPA 0x8000000ULL +#define GICR_BASE_GPA 0x80A0000ULL + +/* Create a VM that has one vCPU with PMUv3 configured. */ +static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup, + int *gic_fd) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + struct kvm_vcpu_init init; + uint8_t pmuver; + uint64_t dfr0, irq = 23; + struct kvm_device_attr irq_attr = { + .group = KVM_ARM_VCPU_PMU_V3_CTRL, + .attr = KVM_ARM_VCPU_PMU_V3_IRQ, + .addr = (uint64_t)&irq, + }; + struct kvm_device_attr init_attr = { + .group = KVM_ARM_VCPU_PMU_V3_CTRL, + .attr = KVM_ARM_VCPU_PMU_V3_INIT, + }; + + vm = vm_create(1); + + /* Create vCPU with PMUv3 */ + vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init); + init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); + vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code); + *gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA); + + /* Make sure that PMUv3 support is indicated in the ID register */ + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0); + pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0); + TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF && + pmuver >= ID_AA64DFR0_PMUVER_8_0, + "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver); + + /* Initialize vPMU */ + vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr); + vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr); + + *vcpup = vcpu; + return vm; +} + +static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n) +{ + struct ucall uc; + + vcpu_args_set(vcpu, 1, pmcr_n); + vcpu_run(vcpu); + switch (get_ucall(vcpu, &uc)) { + case UCALL_ABORT: + REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx"); + break; + case UCALL_DONE: + break; + default: + TEST_FAIL("Unknown ucall %lu", uc.cmd); + break; + } +} + +/* + * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n, + * and run the test. + */ +static void run_test(uint64_t pmcr_n) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + int gic_fd; + uint64_t sp, pmcr, pmcr_orig; + struct kvm_vcpu_init init; + + pr_debug("Test with pmcr_n %lu\n", pmcr_n); + vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd); + + /* Save the initial sp to restore them later to run the guest again */ + vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp); + + /* Update the PMCR_EL0.N with @pmcr_n */ + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig); + pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N; + pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); + vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr); + + run_vcpu(vcpu, pmcr_n); + + /* + * Reset and re-initialize the vCPU, and run the guest code again to + * check if PMCR_EL0.N is preserved. 
+ */ + vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init); + init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); + aarch64_vcpu_setup(vcpu, &init); + vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp); + vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code); + + run_vcpu(vcpu, pmcr_n); + + close(gic_fd); + kvm_vm_free(vm); +} + +/* + * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for + * the vCPU to @pmcr_n, which is larger than the host value. + * The attempt should fail as @pmcr_n is too big to set for the vCPU. + */ +static void run_error_test(uint64_t pmcr_n) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + int gic_fd, ret; + uint64_t pmcr, pmcr_orig; + + pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n); + vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd); + + /* Update the PMCR_EL0.N with @pmcr_n */ + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig); + pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N; + pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); + + /* This should fail as @pmcr_n is too big to set for the vCPU */ + ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr); + TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail", + pmcr, pmcr_orig); + + close(gic_fd); + kvm_vm_free(vm); +} + +/* + * Return the default number of implemented PMU event counters excluding + * the cycle counter (i.e. PMCR_EL0.N value) for the guest. + */ +static uint64_t get_pmcr_n_limit(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + int gic_fd; + uint64_t pmcr; + + vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd); + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr); + close(gic_fd); + kvm_vm_free(vm); + return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); +} + +int main(void) +{ + uint64_t i, pmcr_n; + + TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); + + pmcr_n = get_pmcr_n_limit(); + for (i = 0; i <= pmcr_n; i++) + run_test(i); + + for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++) + run_error_test(i); + + return 0; +} From patchwork Fri Feb 3 04:20:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 13127002 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 80015C636CC for ; Fri, 3 Feb 2023 04:27:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=zF6+hvwjgY5Rx5xf72EmTTgP77T7CiZoNCLC4r/Oaj4=; b=uR4Pv7Lk53yQDjQcVIeoK/ca17 YqzW35F/RFx36Png36Av3N+B6FmpakaQf/fgq3T3nSfnGrDxPIRm4dZDsxRs+LuaaZlNihHGDY3pe /gVQSlNqvx1eZKPTayqoL9SBsoi+ybQrUzsMJO0ExKQbUYiKmQpxxBMGpvxgKS5sGkNa6TPq4peNI NK3o2q1+t5b04diuvLXwNuXjYcncJftFJUmKTfIFLF7kQpNTgW2C/CP91etT5gtFEUe6Ij+V+ioc+ ruUuLyghu300nc2gFNMTvj0ihTRQyrG3X8lL3XOuEwU2y5y5xAx02Z3J5T5D9FLVum6rkyGpTHO/v lAbdUZww==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp 
(Exim 4.94.2 #2 (Red Hat Linux)) id 1pNnex-000L8j-2R; Fri, 03 Feb 2023 04:26:47 +0000 Received: from mail-oi1-x24a.google.com ([2607:f8b0:4864:20::24a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNnby-000Jdo-2W for linux-arm-kernel@lists.infradead.org; Fri, 03 Feb 2023 04:23:45 +0000 Received: by mail-oi1-x24a.google.com with SMTP id bq17-20020a05680823d100b0037ad91d5262so413617oib.7 for ; Thu, 02 Feb 2023 20:23:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=1ASot95vLYR7mfqi2SNp1UaFdibSpHYnWcB6CeNVCDM=; b=ZWFPNSRXR9slqlt5fbdOPxtienWLl3hUbSnGRuslGnhtUhRSqymHD4fZG8LGVjO3xY sTCbIowNifJHOaNz345+AO6Iye7pqY5bTQdFHYLw0uE8VETt7iqJ/AYETodfqKMHdIjU BB4DNKMFR5ETO8HlFcYUjOLpY3QjB3Sz4620fRdZQJjktS6+XqBhJlZ34WWCduSJN6HE VA5QsWVq7aIybNXphVw1o98/SlNEzAEF6KWyfKTTcLttrcrHczhOaDWLG95Q89iQnHTf R+kUX8tfIYFwHVPIZabeIhIq+ZZUIril12cYtvbNVoDhtxu1du0j0gZUTp0menHV3EHD cULA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=1ASot95vLYR7mfqi2SNp1UaFdibSpHYnWcB6CeNVCDM=; b=VBpWi6+zZf9jdWLp5Qn11y5Qymv4S3R7mmlAUzxlNX0MKux0hLN+vV2wtbqE5+G8IU xcrEHCcU8t03ntV/UwGrKn+pBuHCKstadC3To5Dpt11Lh8uKhz3k0IbaW1N6GTXDTpZE 9Z0X4PqaWjLxTUebq/VQ3N5ybi7YZ2JRe2S0np+5ArfcCSms5ycp7anld9PyK/E35/zP 14ped2SUGOd7EUEkgJb6vHswPdl4Q6O2qurHVPTHfXPQ9x3Cw88Bc74BbRHihi4rJKwq bC8r/y0VAQSTA+OsaRI6msFDIXRKT6WW8mp7y7GTtq6F3Q2osBvcWxXqY8n+YBNFJ8ov 0Dkw== X-Gm-Message-State: AO0yUKUHcG3CJsuZRGiG4iq6DJdJHZ+tnz3E8KZ5iCt+fKAa7axPOQAV Gfuq+r3j/Pzj5U2oUwAZ7GKcqe5IWBo= X-Google-Smtp-Source: AK7set/0d6cbws5xO8Mb8eOGKgUTmnZvZZ7y59gkVz71Ze47fG9xfT4dUddgdgtT/3m1oJy37Pii8HFABzY= X-Received: from reijiw-west4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:aa1]) (user=reijiw job=sendgmr) by 2002:a05:6808:618a:b0:378:5c1:49e7 with SMTP id dn10-20020a056808618a00b0037805c149e7mr270407oib.58.1675398217785; Thu, 02 Feb 2023 20:23:37 -0800 (PST) Date: Thu, 2 Feb 2023 20:20:55 -0800 In-Reply-To: <20230203042056.1794649-1-reijiw@google.com> Mime-Version: 1.0 References: <20230203042056.1794649-1-reijiw@google.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog Message-ID: <20230203042056.1794649-12-reijiw@google.com> Subject: [PATCH v3 13/14] KVM: selftests: aarch64: vPMU register test for implemented counters From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.linux.dev Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230202_202342_194488_91BBE965 X-CRM114-Status: GOOD ( 31.58 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add a new test case to the vpmu_counter_access test to check if PMU registers or their bits for implemented counters on the vCPU are readable/writable as expected, and can be programmed to count events. 
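The essence of the new test case can be sketched as follows (a simplified, hypothetical guest-side routine rather than the actual test code); it assumes the accessor helpers introduced in the diff below and skips the per-accessor iteration and the register read-back checks the real test performs:

static void sketch_counter_counts(int idx)
{
        uint64_t cnt;

        pmu_disable_reset();

        /* SW_INCR (event 0x0) can be triggered purely from software */
        write_pmevtypern(idx, ARMV8_PMUV3_PERFCTR_SW_INCR);
        write_pmevcntrn(idx, 0);
        enable_counter(idx);
        pmu_enable();

        /* Increment counter "idx" once via PMSWINC_EL0 */
        write_sysreg(BIT(idx), pmswinc_el0);
        isb();

        cnt = read_pmevcntrn(idx);
        GUEST_ASSERT_1(cnt >= 1, cnt);
}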
Signed-off-by: Reiji Watanabe --- .../kvm/aarch64/vpmu_counter_access.c | 351 +++++++++++++++++- 1 file changed, 348 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c index 7a4333f64dae..0bc09528c01b 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c @@ -5,7 +5,8 @@ * Copyright (c) 2022 Google LLC. * * This test checks if the guest can see the same number of the PMU event - * counters (PMCR_EL0.N) that userspace sets. + * counters (PMCR_EL0.N) that userspace sets, and if the guest can access + * those counters. * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. */ #include @@ -18,14 +19,349 @@ /* The max number of the PMU event counters (excluding the cycle counter) */ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) +/* + * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}_EL0 + * were basically copied from arch/arm64/kernel/perf_event.c. + */ +#define PMEVN_CASE(n, case_macro) \ + case n: case_macro(n); break + +#define PMEVN_SWITCH(x, case_macro) \ + do { \ + switch (x) { \ + PMEVN_CASE(0, case_macro); \ + PMEVN_CASE(1, case_macro); \ + PMEVN_CASE(2, case_macro); \ + PMEVN_CASE(3, case_macro); \ + PMEVN_CASE(4, case_macro); \ + PMEVN_CASE(5, case_macro); \ + PMEVN_CASE(6, case_macro); \ + PMEVN_CASE(7, case_macro); \ + PMEVN_CASE(8, case_macro); \ + PMEVN_CASE(9, case_macro); \ + PMEVN_CASE(10, case_macro); \ + PMEVN_CASE(11, case_macro); \ + PMEVN_CASE(12, case_macro); \ + PMEVN_CASE(13, case_macro); \ + PMEVN_CASE(14, case_macro); \ + PMEVN_CASE(15, case_macro); \ + PMEVN_CASE(16, case_macro); \ + PMEVN_CASE(17, case_macro); \ + PMEVN_CASE(18, case_macro); \ + PMEVN_CASE(19, case_macro); \ + PMEVN_CASE(20, case_macro); \ + PMEVN_CASE(21, case_macro); \ + PMEVN_CASE(22, case_macro); \ + PMEVN_CASE(23, case_macro); \ + PMEVN_CASE(24, case_macro); \ + PMEVN_CASE(25, case_macro); \ + PMEVN_CASE(26, case_macro); \ + PMEVN_CASE(27, case_macro); \ + PMEVN_CASE(28, case_macro); \ + PMEVN_CASE(29, case_macro); \ + PMEVN_CASE(30, case_macro); \ + default: \ + GUEST_ASSERT_1(0, x); \ + } \ + } while (0) + +#define RETURN_READ_PMEVCNTRN(n) \ + return read_sysreg(pmevcntr##n##_el0) +static unsigned long read_pmevcntrn(int n) +{ + PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN); + return 0; +} + +#define WRITE_PMEVCNTRN(n) \ + write_sysreg(val, pmevcntr##n##_el0) +static void write_pmevcntrn(int n, unsigned long val) +{ + PMEVN_SWITCH(n, WRITE_PMEVCNTRN); + isb(); +} + +#define READ_PMEVTYPERN(n) \ + return read_sysreg(pmevtyper##n##_el0) +static unsigned long read_pmevtypern(int n) +{ + PMEVN_SWITCH(n, READ_PMEVTYPERN); + return 0; +} + +#define WRITE_PMEVTYPERN(n) \ + write_sysreg(val, pmevtyper##n##_el0) +static void write_pmevtypern(int n, unsigned long val) +{ + PMEVN_SWITCH(n, WRITE_PMEVTYPERN); + isb(); +} + +/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */ +static inline unsigned long read_sel_evcntr(int sel) +{ + write_sysreg(sel, pmselr_el0); + isb(); + return read_sysreg(pmxevcntr_el0); +} + +/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */ +static inline void write_sel_evcntr(int sel, unsigned long val) +{ + write_sysreg(sel, pmselr_el0); + isb(); + write_sysreg(val, pmxevcntr_el0); + isb(); +} + +/* Read PMEVTYPER_EL0 through PMXEVTYPER_EL0 */ +static inline unsigned long read_sel_evtyper(int sel) +{ + write_sysreg(sel, pmselr_el0); + isb(); + 
return read_sysreg(pmxevtyper_el0); +} + +/* Write PMEVTYPER_EL0 through PMXEVTYPER_EL0 */ +static inline void write_sel_evtyper(int sel, unsigned long val) +{ + write_sysreg(sel, pmselr_el0); + isb(); + write_sysreg(val, pmxevtyper_el0); + isb(); +} + +static inline void enable_counter(int idx) +{ + uint64_t v = read_sysreg(pmcntenset_el0); + + write_sysreg(BIT(idx) | v, pmcntenset_el0); + isb(); +} + +static inline void disable_counter(int idx) +{ + uint64_t v = read_sysreg(pmcntenset_el0); + + write_sysreg(BIT(idx) | v, pmcntenclr_el0); + isb(); +} + +/* + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}_EL0 + * accessors that test cases will use. Each of the accessors will + * either directly reads/writes PMEVT{CNTR,TYPER}_EL0 + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()). + * + * This is used to test that combinations of those accessors provide + * the consistent behavior. + */ +struct pmc_accessor { + /* A function to be used to read PMEVTCNTR_EL0 */ + unsigned long (*read_cntr)(int idx); + /* A function to be used to write PMEVTCNTR_EL0 */ + void (*write_cntr)(int idx, unsigned long val); + /* A function to be used to read PMEVTYPER_EL0 */ + unsigned long (*read_typer)(int idx); + /* A function to be used to write PMEVTYPER_EL0 */ + void (*write_typer)(int idx, unsigned long val); +}; + +struct pmc_accessor pmc_accessors[] = { + /* test with all direct accesses */ + { read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern }, + /* test with all indirect accesses */ + { read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper }, + /* read with direct accesses, and write with indirect accesses */ + { read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper }, + /* read with indirect accesses, and write with direct accesses */ + { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern }, +}; + +static void pmu_disable_reset(void) +{ + uint64_t pmcr = read_sysreg(pmcr_el0); + + /* Reset all counters, disabling them */ + pmcr &= ~ARMV8_PMU_PMCR_E; + write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0); + isb(); +} + +static void pmu_enable(void) +{ + uint64_t pmcr = read_sysreg(pmcr_el0); + + /* Reset all counters, disabling them */ + pmcr |= ARMV8_PMU_PMCR_E; + write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0); + isb(); +} + +static bool pmu_event_is_supported(uint64_t event) +{ + GUEST_ASSERT_1(event < 64, event); + return (read_sysreg(pmceid0_el0) & BIT(event)); +} + +#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected) \ +{ \ + uint64_t _tval = read_sysreg(regname); \ + \ + if (set_expected) \ + GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \ + else \ + GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\ +} + +/* + * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers + * are set or cleared as specified in @set_expected. 
+ */ +static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected) +{ + GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected); +} + +/* + * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding + * to the specified counter (@pmc_idx) can be read/written as expected. + * When @set_op is true, it tries to set the bit for the counter in + * those registers by writing the SET registers (the bit won't be set + * if the counter is not implemented though). + * Otherwise, it tries to clear the bits in the registers by writing + * the CLR registers. + * Then, it checks if the values indicated in the registers are as expected. + */ +static void test_bitmap_pmu_regs(int pmc_idx, bool set_op) +{ + uint64_t pmcr_n, test_bit = BIT(pmc_idx); + bool set_expected = false; + + if (set_op) { + write_sysreg(test_bit, pmcntenset_el0); + write_sysreg(test_bit, pmintenset_el1); + write_sysreg(test_bit, pmovsset_el0); + + /* The bit will be set only if the counter is implemented */ + pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0)); + set_expected = (pmc_idx < pmcr_n) ? true : false; + } else { + write_sysreg(test_bit, pmcntenclr_el0); + write_sysreg(test_bit, pmintenclr_el1); + write_sysreg(test_bit, pmovsclr_el0); + } + check_bitmap_pmu_regs(test_bit, set_expected); +} + +/* + * Tests for reading/writing registers for the (implemented) event counter + * specified by @pmc_idx. + */ +static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx) +{ + uint64_t write_data, read_data, read_data_prev, test_bit; + + /* Disable all PMCs and reset all PMCs to zero. */ + pmu_disable_reset(); + + + /* + * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1. + */ + + test_bit = 1ul << pmc_idx; + /* Make sure that the bit in those registers are set to 0 */ + test_bitmap_pmu_regs(test_bit, false); + /* Test if setting the bit in those registers works */ + test_bitmap_pmu_regs(test_bit, true); + /* Test if clearing the bit in those registers works */ + test_bitmap_pmu_regs(test_bit, false); + + + /* + * Tests for reading/writing the event type register. + */ + + read_data = acc->read_typer(pmc_idx); + /* + * Set the event type register to an arbitrary value just for testing + * of reading/writing the register. + * ArmARM says that for the event from 0x0000 to 0x003F, + * the value indicated in the PMEVTYPER_EL0.evtCount field is + * the value written to the field even when the specified event + * is not supported. + */ + write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED); + acc->write_typer(pmc_idx, write_data); + read_data = acc->read_typer(pmc_idx); + GUEST_ASSERT_4(read_data == write_data, + pmc_idx, acc, read_data, write_data); + + + /* + * Tests for reading/writing the event count register. 
+ */ + + read_data = acc->read_cntr(pmc_idx); + + /* The count value must be 0, as it is not used after the reset */ + GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data); + + write_data = read_data + pmc_idx + 0x12345; + acc->write_cntr(pmc_idx, write_data); + read_data = acc->read_cntr(pmc_idx); + GUEST_ASSERT_4(read_data == write_data, + pmc_idx, acc, read_data, write_data); + + + /* The following test requires the INST_RETIRED event support. */ + if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED)) + return; + + pmu_enable(); + acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED); + + /* + * Make sure that the counter doesn't count the INST_RETIRED + * event when disabled, and the counter counts the event when enabled. + */ + disable_counter(pmc_idx); + read_data_prev = acc->read_cntr(pmc_idx); + read_data = acc->read_cntr(pmc_idx); + GUEST_ASSERT_4(read_data == read_data_prev, + pmc_idx, acc, read_data, read_data_prev); + + enable_counter(pmc_idx); + read_data = acc->read_cntr(pmc_idx); + + /* + * The counter should be increased by at least 1, as there is at + * least one instruction between enabling the counter and reading + * the counter (the test assumes that all event counters are not + * being used by the host's higher priority events). + */ + GUEST_ASSERT_4(read_data > read_data_prev, + pmc_idx, acc, read_data, read_data_prev); +} + /* * The guest is configured with PMUv3 with @expected_pmcr_n number of * event counters. - * Check if @expected_pmcr_n is consistent with PMCR_EL0.N. + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and + * if reading/writing PMU registers for implemented counters can work + * as expected. */ static void guest_code(uint64_t expected_pmcr_n) { uint64_t pmcr, pmcr_n; + int i, pmc; GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS); @@ -35,6 +371,15 @@ static void guest_code(uint64_t expected_pmcr_n) /* Make sure that PMCR_EL0.N indicates the value userspace set */ GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n); + /* + * Tests for reading/writing PMU registers for implemented counters. + * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions. 
+ */ + for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) { + for (pmc = 0; pmc < pmcr_n; pmc++) + test_access_pmc_regs(&pmc_accessors[i], pmc); + } + GUEST_DONE(); } @@ -91,7 +436,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n) vcpu_run(vcpu); switch (get_ucall(vcpu, &uc)) { case UCALL_ABORT: - REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx"); + REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx"); break; case UCALL_DONE: break; From patchwork Fri Feb 3 04:20:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 13126998 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1B11CC636CC for ; Fri, 3 Feb 2023 04:25:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=UpHNs4kbz+cgpYTV1vLjKtYTkPo6ZXaiN0fFpioBi3Y=; b=g4MAiLrwtMZwc4kzdpg0PcMjSI CmYGwzsHBDodjtR24Q+waqE/yxXSLwtQTz8V0pdXaf05V/+AKMw0uKY6WAX3ZsKtWPRvXQ2/Q0+hH ahGEKUrj8qvJBUMlMPDVhOPy56VUU7IGbPuA+48jgW4eJwlhNomvpCf1LYhelUdSn5mcwaMlC36Uy zLcUtjGEAhikyFdOi57CrYknupDIvqZTbxblYlH+4tfSOf+XZnL1USC2wGKf/yV6Oes34OXOfS6RP GOOxOF1QT2pSCcMoihcaLw42UBlxy90OzRiCN38AeZ6fxJIjz1X1QatNrDmDXzl+4nCO0LYE+sO88 cncG36Ew==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNnd3-000KCz-VA; Fri, 03 Feb 2023 04:24:50 +0000 Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNnbx-000JeY-8Y for linux-arm-kernel@lists.infradead.org; Fri, 03 Feb 2023 04:23:43 +0000 Received: by mail-yb1-xb49.google.com with SMTP id g138-20020a25db90000000b0080c27bde887so3747424ybf.18 for ; Thu, 02 Feb 2023 20:23:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=wS6ovSw2l5ggiHf/+2aYL24ak7aY/snRJJm1/5+P/vY=; b=c4oXyj6/kdPbgHPAtc17W2PopIolg7S0oY2LHJJue5XDaVcxvqZWmlRIkMi4mhoY8m UMjEQCS9I0bX6FPF8rM7ZO5xjwQOM8iV/TxLioyKvDglbBo25FpWQ8T/kI97sabxMIjc 0M+cK5scde0bESDtT0dFSUssQO7mRK0RIVW4Yh5uNNGDXltcJiwGkETudrowcK9MaFOX wBRgbuhwcOoLMdNm1RpkpmoWVuGRQyyCYdM0cwLRqzPatieDqvQu3B1GujiOgwNclibj jDfusGStcHE6aOfGavZB0tFmXB9fNmtNqeSJP2dIBXichEiLGGcsM3hsTEYOWop+oY+7 EPMg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=wS6ovSw2l5ggiHf/+2aYL24ak7aY/snRJJm1/5+P/vY=; b=inazxcQFIkTneP7wJrZD6A7pUajb6UamsWECIbrThiFLrmEYHfSlSoEqk3VmaYaZDu 7/Qjp4PsD6vRwDS2oGOMVF48idC16SfycCqVwyO+Js//w/VAdvMsFPCRny4TryCTXHmS NgIz62PyY8HyAjQvuYTjbuSda+F0l4guSZKCyIJn5vLWutxGtOPBW92CBoyEEgoI35Yy 
6veT8F/qPa0Wfcob5MpC5b/d+S7Mgvbevr7uTnOqiJpCB+dN82W8uzBKntVXCkZ64el0 Jum2jprfS0TGIBbiJ8BLFyaBBEFFypEvNEfvFn6zFJtvpi4jzI8za/TnDcUNJrMct8Jm zwAA== X-Gm-Message-State: AO0yUKXTy5Cgk4jFKeRGXBI+xaG2I9J0FdJAq0JZ0L8ip/MCOwk0D8WG oGo4guvJy06CDlxKW8ad8di2elTW4Ms= X-Google-Smtp-Source: AK7set9O0rwi68CcBO6Yn8SBI8AuIF4DjwFlK27RrTbMaEdNsGromUvawAmzA2yTKfLBrP/BLCTxQzAhyUA= X-Received: from reijiw-west4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:aa1]) (user=reijiw job=sendgmr) by 2002:a81:4d44:0:b0:525:a2b1:8878 with SMTP id a65-20020a814d44000000b00525a2b18878mr2ywb.6.1675398219325; Thu, 02 Feb 2023 20:23:39 -0800 (PST) Date: Thu, 2 Feb 2023 20:20:56 -0800 In-Reply-To: <20230203042056.1794649-1-reijiw@google.com> Mime-Version: 1.0 References: <20230203042056.1794649-1-reijiw@google.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog Message-ID: <20230203042056.1794649-13-reijiw@google.com> Subject: [PATCH v3 14/14] KVM: selftests: aarch64: vPMU register test for unimplemented counters From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.linux.dev Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Zenghui Yu , Suzuki K Poulose , Paolo Bonzini , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Shaoqin Huang , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230202_202341_332046_6106B21F X-CRM114-Status: GOOD ( 24.47 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add a new test case to the vpmu_counter_access test to check if PMU registers or their bits for unimplemented counters are not accessible or are RAZ, as expected. Signed-off-by: Reiji Watanabe --- .../kvm/aarch64/vpmu_counter_access.c | 111 ++++++++++++++++-- .../selftests/kvm/include/aarch64/processor.h | 1 + 2 files changed, 102 insertions(+), 10 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c index 0bc09528c01b..79ed7c894592 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c @@ -5,8 +5,8 @@ * Copyright (c) 2022 Google LLC. * * This test checks if the guest can see the same number of the PMU event - * counters (PMCR_EL0.N) that userspace sets, and if the guest can access - * those counters. + * counters (PMCR_EL0.N) that userspace sets, if the guest can access + * those counters, and if the guest cannot access any other counters. * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. */ #include @@ -20,7 +20,7 @@ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) /* - * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}_EL0 + * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0 * were basically copied from arch/arm64/kernel/perf_event.c. */ #define PMEVN_CASE(n, case_macro) \ @@ -148,9 +148,9 @@ static inline void disable_counter(int idx) } /* - * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}_EL0 + * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0 * accessors that test cases will use. 
Each of the accessors will - * either directly reads/writes PMEVT{CNTR,TYPER}_EL0 + * either directly reads/writes PMEV{CNTR,TYPER}_EL0 * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()). * @@ -179,6 +179,51 @@ struct pmc_accessor pmc_accessors[] = { { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern }, }; +#define INVALID_EC (-1ul) +uint64_t expected_ec = INVALID_EC; +uint64_t op_end_addr; + +static void guest_sync_handler(struct ex_regs *regs) +{ + uint64_t esr, ec; + + esr = read_sysreg(esr_el1); + ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK; + GUEST_ASSERT_4(op_end_addr && (expected_ec == ec), + regs->pc, esr, ec, expected_ec); + + /* Will go back to op_end_addr after the handler exits */ + regs->pc = op_end_addr; + + /* + * Clear op_end_addr, and setting expected_ec to INVALID_EC + * as a sign that an exception has occurred. + */ + op_end_addr = 0; + expected_ec = INVALID_EC; +} + +/* + * Run the given operation that should trigger an exception with the + * given exception class. The exception handler (guest_sync_handler) + * will reset op_end_addr to 0, and expected_ec to INVALID_EC, and + * will come back to the instruction at the @done_label. + * The @done_label must be a unique label in this test program. + */ +#define TEST_EXCEPTION(ec, ops, done_label) \ +{ \ + extern int done_label; \ + \ + WRITE_ONCE(op_end_addr, (uint64_t)&done_label); \ + GUEST_ASSERT(ec != INVALID_EC); \ + WRITE_ONCE(expected_ec, ec); \ + dsb(ish); \ + ops; \ + asm volatile(#done_label":"); \ + GUEST_ASSERT(!op_end_addr); \ + GUEST_ASSERT(expected_ec == INVALID_EC); \ +} + static void pmu_disable_reset(void) { uint64_t pmcr = read_sysreg(pmcr_el0); @@ -351,16 +396,38 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx) pmc_idx, acc, read_data, read_data_prev); } +/* + * Tests for reading/writing registers for the unimplemented event counter + * specified by @pmc_idx (>= PMCR_EL0.N). + */ +static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx) +{ + /* + * Reading/writing the event count/type registers should cause + * an UNDEFINED exception. + */ + TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr); + TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr); + TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer); + TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer); + /* + * The bit corresponding to the (unimplemented) counter in + * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ. + */ + test_bitmap_pmu_regs(pmc_idx, 1); + test_bitmap_pmu_regs(pmc_idx, 0); +} + /* * The guest is configured with PMUv3 with @expected_pmcr_n number of * event counters. * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and - * if reading/writing PMU registers for implemented counters can work - * as expected. + * if reading/writing PMU registers for implemented or unimplemented + * counters can work as expected. 
*/ static void guest_code(uint64_t expected_pmcr_n) { - uint64_t pmcr, pmcr_n; + uint64_t pmcr, pmcr_n, unimp_mask; int i, pmc; GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS); @@ -371,15 +438,31 @@ static void guest_code(uint64_t expected_pmcr_n) /* Make sure that PMCR_EL0.N indicates the value userspace set */ GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n); + /* + * Make sure that (RAZ) bits corresponding to unimplemented event + * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero. + * (NOTE: bits for implemented event counters are reset to UNKNOWN) + */ + unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n); + check_bitmap_pmu_regs(unimp_mask, false); + /* * Tests for reading/writing PMU registers for implemented counters. - * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions. + * Use each combination of PMEV{CNTR,TYPER}_EL0 accessor functions. */ for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) { for (pmc = 0; pmc < pmcr_n; pmc++) test_access_pmc_regs(&pmc_accessors[i], pmc); } + /* + * Tests for reading/writing PMU registers for unimplemented counters. + * Use each combination of PMEV{CNTR,TYPER}_EL0 accessor functions. + */ + for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) { + for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++) + test_access_invalid_pmc_regs(&pmc_accessors[i], pmc); + } GUEST_DONE(); } @@ -393,7 +476,7 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup, struct kvm_vm *vm; struct kvm_vcpu *vcpu; struct kvm_vcpu_init init; - uint8_t pmuver; + uint8_t pmuver, ec; uint64_t dfr0, irq = 23; struct kvm_device_attr irq_attr = { .group = KVM_ARM_VCPU_PMU_V3_CTRL, @@ -406,11 +489,18 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup, }; vm = vm_create(1); + vm_init_descriptor_tables(vm); + /* Catch exceptions for easier debugging */ + for (ec = 0; ec < ESR_EC_NUM; ec++) { + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec, + guest_sync_handler); + } /* Create vCPU with PMUv3 */ vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init); init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code); + vcpu_init_descriptor_tables(vcpu); *gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA); /* Make sure that PMUv3 support is indicated in the ID register */ @@ -479,6 +569,7 @@ static void run_test(uint64_t pmcr_n) vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init); init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); aarch64_vcpu_setup(vcpu, &init); + vcpu_init_descriptor_tables(vcpu); vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp); vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code); diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h index 5f977528e09c..52d87809356c 100644 --- a/tools/testing/selftests/kvm/include/aarch64/processor.h +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h @@ -104,6 +104,7 @@ enum { #define ESR_EC_SHIFT 26 #define ESR_EC_MASK (ESR_EC_NUM - 1) +#define ESR_EC_UNKNOWN 0x0 #define ESR_EC_SVC64 0x15 #define ESR_EC_IABT 0x21 #define ESR_EC_DABT 0x25
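To close, here is the exception-classification step that the unimplemented-counter cases above depend on, shown as a stand-alone sketch (not part of the series). It assumes the ESR_EC_* definitions from the selftest's processor.h; the expectation that accesses to PMEV{CNTR,TYPER}<n>_EL0 with n >= PMCR_EL0.N UNDEF in the guest (EC == ESR_EC_UNKNOWN, 0x0) is the one test_access_invalid_pmc_regs() asserts via TEST_EXCEPTION().

/*
 * Minimal sketch, assuming the ESR_EC_* constants from the selftest's
 * processor.h: classify the current synchronous exception the same way
 * guest_sync_handler() does, returning true for an UNDEFINED exception.
 */
static bool exception_is_undef(void)
{
	uint64_t esr = read_sysreg(esr_el1);
	uint64_t ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;

	return ec == ESR_EC_UNKNOWN;
}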