From patchwork Mon Feb 4 16:53:34 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10796103
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier
Cc: Julien Thierry, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, suzuki.poulose@arm.com
Subject: [PATCH v2 1/5] KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions
Date: Mon, 4 Feb 2019 16:53:34 +0000
Message-Id: <1549299218-44714-2-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>
References: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>

The kvm_pmu_{enable/disable}_counter functions can enable/disable
multiple counters at once as they operate on a bitmask. Let's make this
clearer by renaming the functions.

Signed-off-by: Andrew Murray
Reviewed-by: Julien Thierry
Reviewed-by: Suzuki K Poulose
---
 arch/arm64/kvm/sys_regs.c |  4 ++--
 include/kvm/arm_pmu.h     |  8 ++++----
 virt/kvm/arm/pmu.c        | 12 ++++++------
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e3e3722..4a6fbac 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -857,11 +857,11 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		if (r->Op2 & 0x1) {
 			/* accessing PMCNTENSET_EL0 */
 			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
-			kvm_pmu_enable_counter(vcpu, val);
+			kvm_pmu_enable_counter_mask(vcpu, val);
 		} else {
 			/* accessing PMCNTENCLR_EL0 */
 			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
-			kvm_pmu_disable_counter(vcpu, val);
+			kvm_pmu_disable_counter_mask(vcpu, val);
 		}
 	} else {
 		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index f87fe20..b73f31b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -46,8 +46,8 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
-void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
-void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
@@ -83,8 +83,8 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
-static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
-static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
 static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 1c5b76c..c5a722a 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -135,13 +135,13 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 
 /**
- * kvm_pmu_enable_counter - enable selected PMU counter
+ * kvm_pmu_enable_counter_mask - enable selected PMU counters
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENSET register
  *
  * Call perf_event_enable to start counting the perf event
  */
-void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
+void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -164,13 +164,13 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 }
 
 /**
- * kvm_pmu_disable_counter - disable selected PMU counter
+ * kvm_pmu_disable_counter_mask - disable selected PMU counters
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENCLR register
  *
  * Call perf_event_disable to stop counting the perf event
  */
-void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
+void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -347,10 +347,10 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 	mask = kvm_pmu_valid_counter_mask(vcpu);
 	if (val & ARMV8_PMU_PMCR_E) {
-		kvm_pmu_enable_counter(vcpu,
+		kvm_pmu_enable_counter_mask(vcpu,
 			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
 	} else {
-		kvm_pmu_disable_counter(vcpu, mask);
+		kvm_pmu_disable_counter_mask(vcpu, mask);
 	}
 
 	if (val & ARMV8_PMU_PMCR_C)
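The rename makes the call sites read naturally: val is a bitmask in
which each set bit selects one counter, so a single guest write to
PMCNTENSET_EL0 can enable several counters at once. A minimal
user-space sketch of that decomposition (illustrative only; BIT()
mirrors the kernel macro, and 31 is the cycle counter's slot,
ARMV8_PMU_CYCLE_IDX, in the kernel sources):

#include <stdio.h>
#include <stdint.h>

#define BIT(n) (1ULL << (n))

int main(void)
{
	/* Guest enables counters 0, 2 and the cycle counter in one write */
	uint64_t val = BIT(0) | BIT(2) | BIT(31);
	int i;

	/* Walk the mask the same way the kernel loop does */
	for (i = 0; i < 32; i++)
		if (val & BIT(i))
			printf("counter %d selected\n", i);
	return 0;
}
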
From patchwork Mon Feb 4 16:53:35 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10796101
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier
Cc: Julien Thierry, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, suzuki.poulose@arm.com
Subject: [PATCH v2 2/5] KVM: arm/arm64: extract duplicated code to own function
Date: Mon, 4 Feb 2019 16:53:35 +0000
Message-Id: <1549299218-44714-3-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>
References: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>

Let's reduce code duplication by extracting common code to its own
function.

Signed-off-by: Andrew Murray
Reviewed-by: Suzuki K Poulose
---
 virt/kvm/arm/pmu.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index c5a722a..6e7c179 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -65,6 +65,19 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 }
 
 /**
+ * kvm_pmu_release_perf_event - remove the perf event
+ * @pmc: The PMU counter pointer
+ */
+static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
+{
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+		pmc->perf_event = NULL;
+	}
+}
+
+/**
  * kvm_pmu_stop_counter - stop PMU counter
  * @pmc: The PMU counter pointer
  *
@@ -79,9 +92,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
 		__vcpu_sys_reg(vcpu, reg) = counter;
-		perf_event_disable(pmc->perf_event);
-		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
+		kvm_pmu_release_perf_event(pmc);
 	}
 }
 
@@ -112,15 +123,8 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		struct kvm_pmc *pmc = &pmu->pmc[i];
-
-		if (pmc->perf_event) {
-			perf_event_disable(pmc->perf_event);
-			perf_event_release_kernel(pmc->perf_event);
-			pmc->perf_event = NULL;
-		}
-	}
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		kvm_pmu_release_perf_event(&pmu->pmc[i]);
 }
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
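The extracted helper centralises the disable/release/NULL sequence,
which also makes releasing idempotent: every caller gets the NULL
check and the pointer reset for free. A rough user-space analogue of
the pattern (the perf API is stubbed with malloc/free; names here are
illustrative, not the kernel's):

#include <stdlib.h>

struct pmc { void *perf_event; };

static void pmc_release(struct pmc *pmc)
{
	if (pmc->perf_event) {
		free(pmc->perf_event);	/* stands in for perf_event_release_kernel() */
		pmc->perf_event = NULL;	/* so a second release is a no-op */
	}
}

int main(void)
{
	struct pmc c = { .perf_event = malloc(16) };

	pmc_release(&c);
	pmc_release(&c);	/* idempotent: safe to call again */
	return 0;
}
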
From patchwork Mon Feb 4 16:53:36 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10796127
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier
Cc: Julien Thierry, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, suzuki.poulose@arm.com
Subject: [PATCH v2 3/5] KVM: arm/arm64: re-create event when setting counter value
Date: Mon, 4 Feb 2019 16:53:36 +0000
Message-Id: <1549299218-44714-4-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>
References: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>

The perf event sample_period is currently set based upon the current
counter value when PMXEVTYPER is written to and the perf event is
created. However, the user may choose to write the type before the
counter value, in which case sample_period will be set incorrectly.
Let's instead decouple event creation from PMXEVTYPER and (re)create
the event in either situation.

Signed-off-by: Andrew Murray
Reviewed-by: Julien Thierry
Reviewed-by: Suzuki K Poulose
---
 virt/kvm/arm/pmu.c | 40 +++++++++++++++++++++++++++++++---------
 1 file changed, 31 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 6e7c179..95d74ec 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,6 +24,7 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -62,6 +63,9 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
+
+	/* Recreate the perf event to reflect the updated sample_period */
+	kvm_pmu_create_perf_event(vcpu, select_idx);
 }
 
 /**
@@ -378,23 +382,22 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 }
 
 /**
- * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * kvm_pmu_create_perf_event - create a perf event for a counter
  * @vcpu: The vcpu pointer
- * @data: The data guest writes to PMXEVTYPER_EL0
+ * @data: Type of event as per PMXEVTYPER_EL0 format
  * @select_idx: The number of selected counter
- *
- * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
- * event with given hardware event number. Here we call perf_event API to
- * emulate this action and create a kernel perf event for it.
  */
-void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
-				    u64 select_idx)
+static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 	struct perf_event *event;
 	struct perf_event_attr attr;
-	u64 eventsel, counter;
+	u64 eventsel, counter, reg, data;
+
+	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
+	data = __vcpu_sys_reg(vcpu, reg);
 
 	kvm_pmu_stop_counter(vcpu, pmc);
 	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
@@ -431,6 +434,25 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	pmc->perf_event = event;
 }
 
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
+ * event with given hardware event number. Here we call perf_event API to
+ * emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+				    u64 select_idx)
+{
+	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+
+	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+	kvm_pmu_create_perf_event(vcpu, select_idx);
+}
+
 bool kvm_arm_support_pmu_v3(void)
 {
 	/*
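The dependency the patch fixes comes from the expression
attr.sample_period = (-counter) & pmc->bitmask used when the event is
created: the period is the distance from the current counter value to
overflow, so it must be recomputed whenever the counter value changes.
A worked example of that calculation (values are illustrative, not
from the patch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t bitmask = 0xffffffffUL;	/* 32-bit counter, as in kvm_pmu_vcpu_reset() */
	uint64_t counter = 0xfffffff0UL;	/* value the guest programmed */

	/* 0x10 events remain until the 32-bit counter overflows */
	uint64_t sample_period = (-counter) & bitmask;

	printf("sample_period = %#llx\n", (unsigned long long)sample_period);
	return 0;
}
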
From patchwork Mon Feb 4 16:53:37 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10796129
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier
Cc: Julien Thierry, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, suzuki.poulose@arm.com
Subject: [PATCH v2 4/5] KVM: arm/arm64: lazily create perf events on enable
Date: Mon, 4 Feb 2019 16:53:37 +0000
Message-Id: <1549299218-44714-5-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>
References: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>

To prevent re-creating perf events every time the counter registers
are changed, let's instead lazily create the event when the event is
first enabled and destroy it whenever the counter is reprogrammed.

Signed-off-by: Andrew Murray
---
 virt/kvm/arm/pmu.c | 96 ++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 68 insertions(+), 28 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 95d74ec..a64aeb2 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,7 +24,10 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu, u64 select_idx);
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -59,13 +62,15 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
 	u64 reg;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
-	/* Recreate the perf event to reflect the updated sample_period */
-	kvm_pmu_create_perf_event(vcpu, select_idx);
+	kvm_pmu_stop_counter(vcpu, pmc);
+	kvm_pmu_sync_counter_enable(vcpu, select_idx);
 }
 
 /**
@@ -83,6 +88,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
 
 /**
  * kvm_pmu_stop_counter - stop PMU counter
+ * @vcpu: The vcpu pointer
  * @pmc: The PMU counter pointer
  *
  * If this counter has been configured to monitor some event, release it here.
@@ -143,6 +149,24 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 
 /**
+ * kvm_pmu_enable_counter - create/enable a counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (!pmc->perf_event)
+		kvm_pmu_create_perf_event(vcpu, select_idx);
+
+	perf_event_enable(pmc->perf_event);
+	if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+		kvm_debug("failed to enable perf event\n");
+}
+
+/**
  * kvm_pmu_enable_counter_mask - enable selected PMU counters
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENSET register
@@ -152,8 +176,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
 
 	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
 		return;
@@ -162,16 +184,39 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 		if (!(val & BIT(i)))
 			continue;
 
-		pmc = &pmu->pmc[i];
-		if (pmc->perf_event) {
-			perf_event_enable(pmc->perf_event);
-			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
-				kvm_debug("fail to enable perf event\n");
-		}
+		kvm_pmu_enable_counter(vcpu, i);
 	}
 }
 
 /**
+ * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
+					u64 select_idx)
+{
+	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+
+	if (set & BIT(select_idx))
+		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter to disable
+ */
+static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event)
+		perf_event_disable(pmc->perf_event);
+}
+
+/**
  * kvm_pmu_disable_counter_mask - disable selected PMU counters
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENCLR register
@@ -181,8 +226,6 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
 
 	if (!val)
 		return;
@@ -191,9 +234,7 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 		if (!(val & BIT(i)))
 			continue;
 
-		pmc = &pmu->pmc[i];
-		if (pmc->perf_event)
-			perf_event_disable(pmc->perf_event);
+		kvm_pmu_disable_counter(vcpu, i);
 	}
 }
 
@@ -375,16 +416,9 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 	}
 }
 
-static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
-{
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
-	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
-}
-
 /**
  * kvm_pmu_create_perf_event - create a perf event for a counter
  * @vcpu: The vcpu pointer
- * @data: Type of event as per PMXEVTYPER_EL0 format
  * @select_idx: The number of selected counter
  */
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
@@ -399,7 +433,6 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
 	data = __vcpu_sys_reg(vcpu, reg);
 
-	kvm_pmu_stop_counter(vcpu, pmc);
 	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
 
 	/* Software increment event does't need to be backed by a perf event */
@@ -411,7 +444,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	attr.type = PERF_TYPE_RAW;
 	attr.size = sizeof(attr);
 	attr.pinned = 1;
-	attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx);
+	attr.disabled = 1;
 	attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
 	attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
 	attr.exclude_hv = 1; /* Don't count EL2 events */
@@ -447,10 +480,17 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx)
 {
-	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	u64 reg, event_type = data & ARMV8_PMU_EVTYPE_MASK;
+
+	kvm_pmu_stop_counter(vcpu, pmc);
+
+	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
 
-	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
-	kvm_pmu_create_perf_event(vcpu, select_idx);
+	__vcpu_sys_reg(vcpu, reg) = event_type;
+	kvm_pmu_sync_counter_enable(vcpu, select_idx);
 }
 
 bool kvm_arm_support_pmu_v3(void)
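The shape of the change is a classic create-on-first-enable pattern:
kvm_pmu_enable_counter() builds the perf event only if one does not
already back the counter, so subsequent enables are cheap. A rough
user-space sketch of the pattern (allocation is stubbed with malloc;
names are illustrative, not the kernel API):

#include <stdio.h>
#include <stdlib.h>

struct counter {
	void *event;	/* stands in for struct perf_event * */
	int enabled;
};

static void counter_enable(struct counter *c)
{
	if (!c->event) {
		c->event = malloc(16);	/* expensive creation happens once */
		printf("event created\n");
	}
	c->enabled = 1;	/* later enables skip the creation cost */
}

int main(void)
{
	struct counter c = { 0 };

	counter_enable(&c);	/* creates and enables */
	counter_enable(&c);	/* enable only */
	free(c.event);
	return 0;
}
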
From patchwork Mon Feb 4 16:53:38 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10796145
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier
Cc: Julien Thierry, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, suzuki.poulose@arm.com
Subject: [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters
Date: Mon, 4 Feb 2019 16:53:38 +0000
Message-Id: <1549299218-44714-6-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>
References: <1549299218-44714-1-git-send-email-andrew.murray@arm.com>

Emulate chained PMU counters by creating a single 64-bit event counter
for a pair of chained KVM counters.

Signed-off-by: Andrew Murray
---
 include/kvm/arm_pmu.h |   1 +
 virt/kvm/arm/pmu.c    | 321 +++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 269 insertions(+), 53 deletions(-)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index b73f31b..8e691ee 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -29,6 +29,7 @@ struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
 	struct perf_event *perf_event;
 	u64 bitmask;
+	u64 overflow_count;
 };
 
 struct kvm_pmu {
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index a64aeb2..9318130 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,9 +24,25 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
+static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
+						 u64 pair_low);
+static void kvm_pmu_stop_release_perf_event(struct kvm_vcpu *vcpu,
+					    u64 select_idx);
+static void kvm_pmu_sync_counter_enable_pair(struct kvm_vcpu *vcpu, u64 pair_low);
 static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu, u64 select_idx);
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
-static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+
+/**
+ * kvm_pmu_counter_is_high_word - is this counter the high word of a 64bit event
+ * @pmc: The PMU counter pointer
+ */
+static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
+{
+	return ((pmc->perf_event->attr.config1 & 0x1)
+		&& (pmc->idx % 2));
+}
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -35,22 +51,70 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
 */
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	u64 counter, reg, enabled, running;
+	u64 counter_idx, reg, enabled, running, incr;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
-	counter = __vcpu_sys_reg(vcpu, reg);
+	counter_idx = __vcpu_sys_reg(vcpu, reg);
 
 	/* The real counter value is equal to the value of counter register plus
 	 * the value perf event counts.
 	 */
-	if (pmc->perf_event)
-		counter += perf_event_read_value(pmc->perf_event, &enabled,
+	if (pmc->perf_event) {
+		incr = perf_event_read_value(pmc->perf_event, &enabled,
 					     &running);
 
-	return counter & pmc->bitmask;
+		if (kvm_pmu_counter_is_high_word(pmc)) {
+			u64 counter_low, counter;
+
+			reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+			      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx - 1;
+			counter_low = __vcpu_sys_reg(vcpu, reg);
+			counter = lower_32_bits(counter_low) | (counter_idx << 32);
+			counter_idx = upper_32_bits(counter + incr);
+		} else {
+			counter_idx += incr;
+		}
+	}
+
+	return counter_idx & pmc->bitmask;
+}
+
+/**
+ * kvm_pmu_counter_is_enabled - is a counter active
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_event_is_chained - is a pair of counters chained and enabled
+ * @vcpu: The vcpu pointer
+ * @pair_low: The low counter index
+ */
+static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
+{
+	u64 eventsel, reg;
+
+	reg = (pair_low + 1 == ARMV8_PMU_CYCLE_IDX)
+	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pair_low + 1;
+	eventsel = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_EVENT;
+	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
+		return false;
+
+	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
+	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
+		return false;
+
+	return true;
 }
 
 /**
@@ -61,29 +125,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-	u64 reg;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	u64 reg, pair_low;
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
-	kvm_pmu_stop_counter(vcpu, pmc);
-	kvm_pmu_sync_counter_enable(vcpu, select_idx);
+	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+	/* Recreate the perf event to reflect the updated sample_period */
+	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+		kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
+	} else {
+		kvm_pmu_stop_release_perf_event(vcpu, select_idx);
+		kvm_pmu_sync_counter_enable(vcpu, select_idx);
+	}
 }
 
 /**
  * kvm_pmu_release_perf_event - remove the perf event
  * @pmc: The PMU counter pointer
  */
-static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
+static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
+				       struct kvm_pmc *pmc)
 {
-	if (pmc->perf_event) {
-		perf_event_disable(pmc->perf_event);
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_alt;
+	u64 pair_alt;
+
+	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
+	pmc_alt = &pmu->pmc[pair_alt];
+
+	if (pmc->perf_event)
 		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
-	}
+
+	if (pmc->perf_event == pmc_alt->perf_event)
+		pmc_alt->perf_event = NULL;
+
+	pmc->perf_event = NULL;
 }
 
 /**
@@ -91,22 +171,65 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
  * @vcpu: The vcpu pointer
  * @pmc: The PMU counter pointer
  *
- * If this counter has been configured to monitor some event, release it here.
+ * If this counter has been configured to monitor some event, stop it here.
  */
 static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 {
 	u64 counter, reg;
 
 	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
 		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
 		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
 		__vcpu_sys_reg(vcpu, reg) = counter;
-		kvm_pmu_release_perf_event(pmc);
 	}
 }
 
 /**
+ * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
+ * @vcpu: The vcpu pointer
+ * @idx_low: The low counter index
+ *
+ * As chained counters share the underlying perf event, we stop them
+ * both first before discarding the underlying perf event
+ */
+static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
+						 u64 idx_low)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
+
+	/* Stopping a counter involves adding the perf event value to the
+	 * vcpu sys register value prior to releasing the perf event. As
+	 * kvm_pmu_get_counter_value may depend on the low counter value we
+	 * must always stop the high counter first
+	 */
+	kvm_pmu_stop_counter(vcpu, pmc_high);
+	kvm_pmu_stop_counter(vcpu, pmc_low);
+
+	kvm_pmu_release_perf_event(vcpu, pmc_high);
+	kvm_pmu_release_perf_event(vcpu, pmc_low);
+}
+
+/**
+ * kvm_pmu_stop_release_perf_event - stop and release a counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_stop_release_perf_event(struct kvm_vcpu *vcpu,
+					    u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	kvm_pmu_stop_counter(vcpu, pmc);
+	kvm_pmu_release_perf_event(vcpu, pmc);
+}
+
+/**
  * kvm_pmu_vcpu_reset - reset pmu state for cpu
  * @vcpu: The vcpu pointer
  *
@@ -117,7 +240,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
+		kvm_pmu_stop_release_perf_event(vcpu, i);
 		pmu->pmc[i].idx = i;
 		pmu->pmc[i].bitmask = 0xffffffffUL;
 	}
@@ -134,7 +257,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		kvm_pmu_release_perf_event(&pmu->pmc[i]);
+		kvm_pmu_release_perf_event(vcpu, &pmu->pmc[i]);
 }
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
@@ -167,53 +290,115 @@ static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
 }
 
 /**
+ * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
+					u64 select_idx)
+{
+	kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_sync_counter_enable_pair - reenable a pair if they should be enabled
+ * @vcpu: The vcpu pointer
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_sync_counter_enable_pair(struct kvm_vcpu *vcpu, u64 pair_low)
+{
+	kvm_pmu_enable_counter_mask(vcpu, BIT(pair_low) | BIT(pair_low + 1));
+}
+
+/**
+ * kvm_pmu_enable_counter_pair - enable a pair of counters at a time
+ * @vcpu: The vcpu pointer
+ * @val: counters to enable
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
+					u64 pair_low)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
+
+	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		if (pmc_low->perf_event != pmc_high->perf_event)
+			kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+	}
+
+	if (val & BIT(pair_low))
+		kvm_pmu_enable_counter(vcpu, pair_low);
+
+	if (val & BIT(pair_low + 1))
+		kvm_pmu_enable_counter(vcpu, pair_low + 1);
+}
+
+/**
  * kvm_pmu_enable_counter_mask - enable selected PMU counters
  * @vcpu: The vcpu pointer
- * @val: the value guest writes to PMCNTENSET register
+ * @val: the value guest writes to PMCNTENSET register or a subset
  *
  * Call perf_event_enable to start counting the perf event
  */
 void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
 
 	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		if (!(val & BIT(i)))
-			continue;
-
-		kvm_pmu_enable_counter(vcpu, i);
-	}
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
+		kvm_pmu_enable_counter_pair(vcpu, val & set, i);
 }
 
 /**
- * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
+ * kvm_pmu_disable_counter - disable selected PMU counter
  * @vcpu: The vcpu pointer
  * @select_idx: The counter index
  */
-static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
-					u64 select_idx)
+static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
-	if (set & BIT(select_idx))
-		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+			kvm_debug("fail to disable perf event\n");
+	}
 }
 
 /**
- * kvm_pmu_disable_counter - disable selected PMU counter
- * @vcpu: The vcpu pointer
- * @pmc: The counter to disable
+ * kvm_pmu_disable_counter_pair - disable a pair of counters at a time
+ * @val: counters to disable
+ * @pair_low: The low counter index
  */
-static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
+static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
+					 u64 pair_low)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
+
+	if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		if (pmc_low->perf_event == pmc_high->perf_event) {
+			if (pmc_low->perf_event) {
+				kvm_pmu_stop_release_perf_event_pair(vcpu,
+								     pair_low);
+				kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
+			}
+		}
+	}
 
-	if (pmc->perf_event)
-		perf_event_disable(pmc->perf_event);
+	if (val & BIT(pair_low))
+		kvm_pmu_disable_counter(vcpu, pair_low);
+
+	if (val & BIT(pair_low + 1))
+		kvm_pmu_disable_counter(vcpu, pair_low + 1);
 }
 
 /**
@@ -230,12 +415,8 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 	if (!val)
 		return;
 
-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		if (!(val & BIT(i)))
-			continue;
-
-		kvm_pmu_disable_counter(vcpu, i);
-	}
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
+		kvm_pmu_disable_counter_pair(vcpu, val, i);
 }
 
 static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
@@ -346,6 +527,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
 	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
 
+	if (kvm_pmu_event_is_chained(vcpu, idx)) {
+		struct kvm_pmu *pmu = &vcpu->arch.pmu;
+		struct kvm_pmc *pmc_high = &pmu->pmc[idx + 1];
+
+		if (!(--pmc_high->overflow_count)) {
+			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx + 1);
+			pmc_high->overflow_count = U32_MAX + 1UL;
+		}
+
+	}
+
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 		kvm_vcpu_kick(vcpu);
@@ -440,6 +632,10 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	    select_idx != ARMV8_PMU_CYCLE_IDX)
 		return;
 
+	/* Handled by even event */
+	if (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
+		return;
+
 	memset(&attr, 0, sizeof(struct perf_event_attr));
 	attr.type = PERF_TYPE_RAW;
 	attr.size = sizeof(attr);
@@ -452,6 +648,9 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
 		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
 
+	if (kvm_pmu_event_is_chained(vcpu, select_idx))
+		attr.config1 |= 0x1;
+
 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 	/* The initial sample period (overflow count) of an event. */
 	attr.sample_period = (-counter) & pmc->bitmask;
@@ -464,6 +663,14 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 		return;
 	}
 
+	if (kvm_pmu_event_is_chained(vcpu, select_idx)) {
+		struct kvm_pmc *pmc_next = &pmu->pmc[select_idx + 1];
+
+		pmc_next->perf_event = event;
+		counter = kvm_pmu_get_counter_value(vcpu, select_idx + 1);
+		pmc_next->overflow_count = (-counter) & pmc->bitmask;
+	}
+
 	pmc->perf_event = event;
 }
 
@@ -480,17 +687,25 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
-	u64 reg, event_type = data & ARMV8_PMU_EVTYPE_MASK;
-
-	kvm_pmu_stop_counter(vcpu, pmc);
+	u64 eventsel, event_type, pair_low, reg;
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
 
-	__vcpu_sys_reg(vcpu, reg) = event_type;
-	kvm_pmu_sync_counter_enable(vcpu, select_idx);
+	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
+	event_type = data & ARMV8_PMU_EVTYPE_MASK;
+	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+	if (kvm_pmu_event_is_chained(vcpu, pair_low) ||
+	    eventsel == ARMV8_PMUV3_PERFCTR_CHAIN) {
+		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+		__vcpu_sys_reg(vcpu, reg) = event_type;
+		kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
+	} else {
+		kvm_pmu_stop_release_perf_event(vcpu, pair_low);
+		__vcpu_sys_reg(vcpu, reg) = event_type;
+		kvm_pmu_sync_counter_enable(vcpu, pair_low);
+	}
 }
 
 bool kvm_arm_support_pmu_v3(void)
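The chaining arithmetic in kvm_pmu_get_counter_value() folds the even
(low) and odd (high) 32-bit counter registers into one 64-bit value
before applying the delta read from the shared perf event, then takes
the upper word back out. A small user-space sketch of that fold, with
lower_32_bits()/upper_32_bits() written out to match their kernel
definitions (example values are illustrative):

#include <stdio.h>
#include <stdint.h>

static uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

int main(void)
{
	uint64_t low  = 0xfffffffe;	/* even counter: 2 events from wrapping */
	uint64_t high = 0x00000001;	/* odd counter, counting CHAIN events */
	uint64_t incr = 5;		/* delta reported by the shared perf event */

	/* Fold the pair into one 64-bit counter: 0x1fffffffe here */
	uint64_t counter = lower_32_bits(low) | (high << 32);

	/* The low word wraps once, so the high word advances to 0x2 */
	printf("high word after incr = %#x\n", upper_32_bits(counter + incr));
	return 0;
}
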