From patchwork Tue Jan 22 10:49:54 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10775347
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    suzuki.poulose@arm.com
Subject: [PATCH 1/4] KVM: arm/arm64: extract duplicated code to own function
Date: Tue, 22 Jan 2019 10:49:54 +0000
Message-Id: <1548154197-5470-2-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1548154197-5470-1-git-send-email-andrew.murray@arm.com>
References: <1548154197-5470-1-git-send-email-andrew.murray@arm.com>

Let's reduce code duplication by extracting common code to its own
function.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 virt/kvm/arm/pmu.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 1c5b76c..531d27f 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -65,6 +65,19 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 }
 
 /**
+ * kvm_pmu_release_perf_event - remove the perf event
+ * @pmc: The PMU counter pointer
+ */
+static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
+{
+        if (pmc->perf_event) {
+                perf_event_disable(pmc->perf_event);
+                perf_event_release_kernel(pmc->perf_event);
+                pmc->perf_event = NULL;
+        }
+}
+
+/**
  * kvm_pmu_stop_counter - stop PMU counter
  * @pmc: The PMU counter pointer
  *
@@ -79,9 +92,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
                 reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
                       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
                 __vcpu_sys_reg(vcpu, reg) = counter;
-                perf_event_disable(pmc->perf_event);
-                perf_event_release_kernel(pmc->perf_event);
-                pmc->perf_event = NULL;
+                kvm_pmu_release_perf_event(pmc);
         }
 }
 
@@ -114,12 +125,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
         for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
                 struct kvm_pmc *pmc = &pmu->pmc[i];
-
-                if (pmc->perf_event) {
-                        perf_event_disable(pmc->perf_event);
-                        perf_event_release_kernel(pmc->perf_event);
-                        pmc->perf_event = NULL;
-                }
+                kvm_pmu_release_perf_event(pmc);
         }
 }
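The helper factored out above is deliberately idempotent: it checks for a
NULL perf_event and clears the pointer after releasing, so a second call on
the same counter is a harmless no-op. Below is a minimal user-space sketch
of that disable/release/clear pattern; the struct definitions and perf_*
stubs are stand-ins invented for illustration, not the kernel's types or
API.

/* Sketch of the release pattern the patch centralises. */
#include <stdio.h>
#include <stdlib.h>

struct perf_event { int dummy; };
struct kvm_pmc { struct perf_event *perf_event; };

static void perf_event_disable(struct perf_event *e) { (void)e; }
static void perf_event_release_kernel(struct perf_event *e) { free(e); }

/* Safe to call at any time: the NULL check plus pointer clearing
 * leave the counter in a state where repeat calls do nothing. */
static void kvm_pmc_release(struct kvm_pmc *pmc)
{
        if (pmc->perf_event) {
                perf_event_disable(pmc->perf_event);
                perf_event_release_kernel(pmc->perf_event);
                pmc->perf_event = NULL;
        }
}

int main(void)
{
        struct kvm_pmc pmc = { .perf_event = malloc(sizeof(struct perf_event)) };

        kvm_pmc_release(&pmc);  /* disables, releases and clears */
        kvm_pmc_release(&pmc);  /* second call is a no-op */
        printf("perf_event = %p\n", (void *)pmc.perf_event);
        return 0;
}

Centralising the pattern means every caller gets the NULL check and pointer
clearing for free, which is what makes the refactoring later in this series
safe.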
From patchwork Tue Jan 22 10:49:55 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10775351
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    suzuki.poulose@arm.com
Subject: [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value
Date: Tue, 22 Jan 2019 10:49:55 +0000
Message-Id: <1548154197-5470-3-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1548154197-5470-1-git-send-email-andrew.murray@arm.com>
References: <1548154197-5470-1-git-send-email-andrew.murray@arm.com>

The perf event sample_period is currently set based upon the current
counter value when PMXEVTYPER is written to and the perf event is created.
However the user may choose to write the type before the counter value, in
which case sample_period will be set incorrectly. Let's instead decouple
event creation from PMXEVTYPER and (re)create the event in either
situation.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 virt/kvm/arm/pmu.c | 39 ++++++++++++++++++++++++++++++---------
 1 file changed, 30 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 531d27f..4464899 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,6 +24,8 @@
 #include
 #include
 
+static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
+                                      u64 select_idx);
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -57,11 +59,18 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-        u64 reg;
+        u64 reg, data;
 
         reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
               ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
         __vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
+
+        reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+              ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
+        data = __vcpu_sys_reg(vcpu, reg + select_idx);
+
+        /* Recreate the perf event to reflect the updated sample_period */
+        kvm_pmu_create_perf_event(vcpu, data, select_idx);
 }
 
 /**
@@ -380,17 +389,13 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 }
 
 /**
- * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * kvm_pmu_create_perf_event - create a perf event for a counter
  * @vcpu: The vcpu pointer
- * @data: The data guest writes to PMXEVTYPER_EL0
+ * @data: Type of event as per PMXEVTYPER_EL0 format
  * @select_idx: The number of selected counter
- *
- * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
- * event with given hardware event number. Here we call perf_event API to
- * emulate this action and create a kernel perf event for it.
  */
-void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
-                                    u64 select_idx)
+static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
+                                      u64 select_idx)
 {
         struct kvm_pmu *pmu = &vcpu->arch.pmu;
         struct kvm_pmc *pmc = &pmu->pmc[select_idx];
@@ -433,6 +438,22 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
         pmc->perf_event = event;
 }
 
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
+ * event with given hardware event number. Here we call perf_event API to
+ * emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+                                    u64 select_idx)
+{
+        kvm_pmu_create_perf_event(vcpu, data, select_idx);
+}
+
 bool kvm_arm_support_pmu_v3(void)
 {
         /*
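A note on why the write ordering matters here: when the perf event is
created, its sample_period is set to the distance from the current guest
counter value to the 32-bit overflow. If the guest programs the counter
value after the type, that period is stale unless the event is re-created.
A self-contained sketch of the calculation, using illustrative values (the
expression mirrors the `attr.sample_period = (-counter) & pmc->bitmask;`
line in the diff):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t bitmask = 0xffffffffULL;  /* 32-bit counter width */
        uint64_t counter = 0xfffffff0ULL;  /* guest-programmed value */

        /* Counts remaining until the emulated counter overflows. */
        uint64_t sample_period = (-counter) & bitmask;

        printf("sample_period = %llu\n",
               (unsigned long long)sample_period);  /* prints 16 */
        return 0;
}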
From patchwork Tue Jan 22 10:49:56 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10775349
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    suzuki.poulose@arm.com
Subject: [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
Date: Tue, 22 Jan 2019 10:49:56 +0000
Message-Id: <1548154197-5470-4-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1548154197-5470-1-git-send-email-andrew.murray@arm.com>
References: <1548154197-5470-1-git-send-email-andrew.murray@arm.com>

To prevent re-creating perf events every time the counter registers are
changed, let's instead create the event lazily, when it is first enabled,
and destroy it when its configuration changes.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 78 insertions(+), 36 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 4464899..1921ca9 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,8 +24,11 @@
 #include
 #include
 
-static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
-                                      u64 select_idx);
+static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
+static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
+                                                      u64 select_idx);
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -59,18 +62,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-        u64 reg, data;
+        u64 reg;
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
         reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
               ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
         __vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
-        reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
-              ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
-        data = __vcpu_sys_reg(vcpu, reg + select_idx);
-
-        /* Recreate the perf event to reflect the updated sample_period */
-        kvm_pmu_create_perf_event(vcpu, data, select_idx);
+        kvm_pmu_stop_counter(vcpu, pmc);
+        kvm_pmu_reenable_enabled_single(vcpu, select_idx);
 }
 
 /**
@@ -88,6 +89,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
 
 /**
  * kvm_pmu_stop_counter - stop PMU counter
+ * @vcpu: The vcpu pointer
  * @pmc: The PMU counter pointer
  *
  * If this counter has been configured to monitor some event, release it here.
@@ -150,6 +152,25 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 
 /**
+ * kvm_pmu_enable_counter_single - create/enable a unpaired counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+        if (!pmc->perf_event) {
+                kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);
+        } else if (pmc->perf_event) {
+                perf_event_enable(pmc->perf_event);
+                if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+                        kvm_debug("fail to enable perf event\n");
+        }
+}
+
+/**
  * kvm_pmu_enable_counter - enable selected PMU counter
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENSET register
@@ -159,8 +180,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 {
         int i;
-        struct kvm_pmu *pmu = &vcpu->arch.pmu;
-        struct kvm_pmc *pmc;
 
         if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
                 return;
@@ -169,16 +188,44 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
                 if (!(val & BIT(i)))
                         continue;
 
-                pmc = &pmu->pmc[i];
-                if (pmc->perf_event) {
-                        perf_event_enable(pmc->perf_event);
-                        if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
-                                kvm_debug("fail to enable perf event\n");
-                }
+                kvm_pmu_enable_counter_single(vcpu, i);
         }
 }
 
 /**
+ * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
+                                            u64 select_idx)
+{
+        u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+        u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+
+        if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+                return;
+
+        if (set & BIT(select_idx))
+                kvm_pmu_enable_counter_single(vcpu, select_idx);
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @pmc: The counter to dissable
+ */
+static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
+                                           u64 select_idx)
+{
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+        if (pmc->perf_event)
+                perf_event_disable(pmc->perf_event);
+}
+
+/**
  * kvm_pmu_disable_counter - disable selected PMU counter
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENCLR register
@@ -188,8 +235,6 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
 {
         int i;
-        struct kvm_pmu *pmu = &vcpu->arch.pmu;
-        struct kvm_pmc *pmc;
 
         if (!val)
                 return;
@@ -198,9 +243,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
                 if (!(val & BIT(i)))
                         continue;
 
-                pmc = &pmu->pmc[i];
-                if (pmc->perf_event)
-                        perf_event_disable(pmc->perf_event);
+                kvm_pmu_disable_counter_single(vcpu, i);
         }
 }
 
@@ -382,28 +425,22 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
         }
 }
 
-static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
-{
-        return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
-               (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
-}
-
 /**
- * kvm_pmu_create_perf_event - create a perf event for a counter
+ * kvm_pmu_counter_create_enabled_perf_event - create a perf event for a counter
  * @vcpu: The vcpu pointer
- * @data: Type of event as per PMXEVTYPER_EL0 format
  * @select_idx: The number of selected counter
  */
-static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
-                                      u64 select_idx)
+static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
+                                                      u64 select_idx)
 {
         struct kvm_pmu *pmu = &vcpu->arch.pmu;
         struct kvm_pmc *pmc = &pmu->pmc[select_idx];
         struct perf_event *event;
         struct perf_event_attr attr;
-        u64 eventsel, counter;
+        u64 eventsel, counter, data;
+
+        data = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx);
 
-        kvm_pmu_stop_counter(vcpu, pmc);
         eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
 
         /* Software increment event does't need to be backed by a perf event */
@@ -415,7 +452,6 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
         attr.type = PERF_TYPE_RAW;
         attr.size = sizeof(attr);
         attr.pinned = 1;
-        attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx);
         attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
         attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
         attr.exclude_hv = 1; /* Don't count EL2 events */
@@ -451,7 +487,13 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
                                     u64 select_idx)
 {
-        kvm_pmu_create_perf_event(vcpu, data, select_idx);
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+        u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+
+        kvm_pmu_stop_counter(vcpu, pmc);
+        __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+        kvm_pmu_reenable_enabled_single(vcpu, select_idx);
 }
 
 bool kvm_arm_support_pmu_v3(void)
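The net effect of this patch on the enable path: the perf event does not
exist until the guest actually enables the counter, and subsequent enables
merely re-enable the existing event. A user-space sketch of that flow
follows; the types and perf_* helpers are stand-ins for illustration, not
the kernel's perf API.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct perf_event { bool enabled; };
struct kvm_pmc { struct perf_event *perf_event; };

/* Stand-in for kvm_pmu_counter_create_enabled_perf_event(): the
 * event is created already counting, so no attr.disabled dance. */
static struct perf_event *create_enabled_event(void)
{
        struct perf_event *e = calloc(1, sizeof(*e));
        e->enabled = true;
        return e;
}

static void perf_event_enable(struct perf_event *e) { e->enabled = true; }

/* Mirrors kvm_pmu_enable_counter_single(): create on first enable,
 * re-enable thereafter. */
static void enable_counter(struct kvm_pmc *pmc)
{
        if (!pmc->perf_event)
                pmc->perf_event = create_enabled_event();
        else
                perf_event_enable(pmc->perf_event);
}

int main(void)
{
        struct kvm_pmc pmc = { 0 };

        enable_counter(&pmc);   /* first enable: event created */
        enable_counter(&pmc);   /* later enables: just re-enabled */
        printf("enabled=%d\n", pmc.perf_event->enabled);
        free(pmc.perf_event);
        return 0;
}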
From patchwork Tue Jan 22 10:49:57 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10775353
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    suzuki.poulose@arm.com
Subject: [PATCH 4/4] KVM: arm/arm64: support chained PMU counters
Date: Tue, 22 Jan 2019 10:49:57 +0000
Message-Id: <1548154197-5470-5-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1548154197-5470-1-git-send-email-andrew.murray@arm.com>
References: <1548154197-5470-1-git-send-email-andrew.murray@arm.com>

Emulate chained PMU counters by creating a single 64-bit event counter
for a pair of chained KVM counters.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 include/kvm/arm_pmu.h |   2 +
 virt/kvm/arm/pmu.c    | 308 +++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 258 insertions(+), 52 deletions(-)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index f87fe20..d4f3b28 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -29,6 +29,8 @@ struct kvm_pmc {
         u8 idx; /* index into the pmu->pmc array */
         struct perf_event *perf_event;
         u64 bitmask;
+        u64 sample_period;
+        u64 left;
 };
 
 struct kvm_pmu {
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 1921ca9..d111d5b 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,10 +24,26 @@
 #include
 #include
 
+#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
+static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
+                                                 u64 pair_low);
+static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
+                                                   u64 select_idx);
+static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low);
 static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
 static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
                                                       u64 select_idx);
-static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+
+/**
+ * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
+ * @pmc: The PMU counter pointer
+ * @select_idx: The counter index
+ */
+static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
+{
+        return ((pmc->perf_event->attr.config1 & 0x1)
+                && (pmc->idx % 2));
+}
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -36,7 +52,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
  */
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-        u64 counter, reg, enabled, running;
+        u64 counter, reg, enabled, running, incr;
         struct kvm_pmu *pmu = &vcpu->arch.pmu;
         struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
@@ -47,14 +63,53 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
         /* The real counter value is equal to the value of counter register plus
          * the value perf event counts.
          */
-        if (pmc->perf_event)
-                counter += perf_event_read_value(pmc->perf_event, &enabled,
+        if (pmc->perf_event) {
+                incr = perf_event_read_value(pmc->perf_event, &enabled,
                                                  &running);
 
+                if (kvm_pmu_counter_is_high_word(pmc))
+                        incr = upper_32_bits(incr);
+                counter += incr;
+        }
+
         return counter & pmc->bitmask;
 }
 
 /**
+ * kvm_pmu_counter_is_enabled - is a counter active
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+        u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+        return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+               (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
+}
+
+/**
+ * kvnm_pmu_event_is_chained - is a pair of counters chained and enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The low counter index
+ */
+static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
+{
+        u64 eventsel;
+
+        eventsel = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + pair_low + 1) &
+                        ARMV8_PMU_EVTYPE_EVENT;
+        if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
+                return false;
+
+        if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
+            kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
+                return false;
+
+        return true;
+}
+
+/**
  * kvm_pmu_set_counter_value - set PMU counter value
  * @vcpu: The vcpu pointer
  * @select_idx: The counter index
@@ -62,29 +117,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-        u64 reg;
-        struct kvm_pmu *pmu = &vcpu->arch.pmu;
-        struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+        u64 reg, pair_low;
 
         reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
               ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
         __vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
-        kvm_pmu_stop_counter(vcpu, pmc);
-        kvm_pmu_reenable_enabled_single(vcpu, select_idx);
+        pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+        /* Recreate the perf event to reflect the updated sample_period */
+        if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
+                kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+                kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
+        } else {
+                kvm_pmu_stop_release_perf_event_single(vcpu, select_idx);
+                kvm_pmu_reenable_enabled_single(vcpu, select_idx);
+        }
 }
 
 /**
  * kvm_pmu_release_perf_event - remove the perf event
  * @pmc: The PMU counter pointer
  */
-static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
+static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
+                                       struct kvm_pmc *pmc)
 {
-        if (pmc->perf_event) {
-                perf_event_disable(pmc->perf_event);
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc_alt;
+        u64 pair_alt;
+
+        pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
+        pmc_alt = &pmu->pmc[pair_alt];
+
+        if (pmc->perf_event)
                 perf_event_release_kernel(pmc->perf_event);
-                pmc->perf_event = NULL;
-        }
+
+        if (pmc->perf_event == pmc_alt->perf_event)
+                pmc_alt->perf_event = NULL;
+
+        pmc->perf_event = NULL;
 }
 
 /**
@@ -92,22 +163,60 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
  * @vcpu: The vcpu pointer
  * @pmc: The PMU counter pointer
  *
- * If this counter has been configured to monitor some event, release it here.
+ * If this counter has been configured to monitor some event, stop it here.
  */
 static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 {
         u64 counter, reg;
 
         if (pmc->perf_event) {
+                perf_event_disable(pmc->perf_event);
                 counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
                 reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
                       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
                 __vcpu_sys_reg(vcpu, reg) = counter;
-                kvm_pmu_release_perf_event(pmc);
         }
 }
 
 /**
+ * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
+ * @vcpu: The vcpu pointer
+ * @pmc_low: The PMU counter pointer for lower word
+ * @pmc_high: The PMU counter pointer for higher word
+ *
+ * As chained counters share the underlying perf event, we stop them
+ * both first before discarding the underlying perf event
+ */
+static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
+                                                 u64 idx_low)
+{
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
+        struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
+
+        kvm_pmu_stop_counter(vcpu, pmc_low);
+        kvm_pmu_stop_counter(vcpu, pmc_high);
+
+        kvm_pmu_release_perf_event(vcpu, pmc_low);
+        kvm_pmu_release_perf_event(vcpu, pmc_high);
+}
+
+/**
+ * kvm_pmu_stop_release_perf_event_single - stop and release a counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
+                                                   u64 select_idx)
+{
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+        kvm_pmu_stop_counter(vcpu, pmc);
+        kvm_pmu_release_perf_event(vcpu, pmc);
+}
+
+/**
  * kvm_pmu_vcpu_reset - reset pmu state for cpu
  * @vcpu: The vcpu pointer
  *
@@ -118,7 +227,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
         struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
         for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-                kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
+                kvm_pmu_stop_release_perf_event_single(vcpu, i);
                 pmu->pmc[i].idx = i;
                 pmu->pmc[i].bitmask = 0xffffffffUL;
         }
@@ -136,7 +245,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
         for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
                 struct kvm_pmc *pmc = &pmu->pmc[i];
-                kvm_pmu_release_perf_event(pmc);
+                kvm_pmu_release_perf_event(vcpu, pmc);
         }
 }
 
@@ -171,49 +280,81 @@ static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
 }
 
 /**
- * kvm_pmu_enable_counter - enable selected PMU counter
+ * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
  * @vcpu: The vcpu pointer
- * @val: the value guest writes to PMCNTENSET register
- *
- * Call perf_event_enable to start counting the perf event
+ * @select_idx: The counter index
  */
-void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
+static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
+                                            u64 select_idx)
 {
-        int i;
+        u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+        u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
 
-        if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+        if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
                 return;
 
-        for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-                if (!(val & BIT(i)))
-                        continue;
+        if (set & BIT(select_idx))
+                kvm_pmu_enable_counter_single(vcpu, select_idx);
+}
 
-                kvm_pmu_enable_counter_single(vcpu, i);
+/**
+ * kvm_pmu_reenable_enabled_pair - reenable a pair if they should be enabled
+ * @vcpu: The vcpu pointer
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low)
+{
+        kvm_pmu_reenable_enabled_single(vcpu, pair_low);
+        kvm_pmu_reenable_enabled_single(vcpu, pair_low+1);
+}
+
+/**
+ * kvm_pmu_enable_counter_pair - enable counters pair at a time
+ * @vcpu: The vcpu pointer
+ * @val: counters to enable
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
+                                        u64 pair_low)
+{
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
+        struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
+
+        if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
+                if (pmc_low->perf_event != pmc_high->perf_event)
+                        kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
         }
+
+        if (val & BIT(pair_low))
+                kvm_pmu_enable_counter_single(vcpu, pair_low);
+
+        if (val & BIT(pair_low+1))
+                kvm_pmu_enable_counter_single(vcpu, pair_low + 1);
 }
 
 /**
- * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
+ * kvm_pmu_enable_counter - enable selected PMU counter
  * @vcpu: The vcpu pointer
- * @select_idx: The counter index
+ * @val: the value guest writes to PMCNTENSET register
+ *
+ * Call perf_event_enable to start counting the perf event
  */
-static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
-                                            u64 select_idx)
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 {
-        u64 mask = kvm_pmu_valid_counter_mask(vcpu);
-        u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+        int i;
 
-        if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+        if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
                 return;
 
-        if (set & BIT(select_idx))
-                kvm_pmu_enable_counter_single(vcpu, select_idx);
+        for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
+                kvm_pmu_enable_counter_pair(vcpu, val, i);
 }
 
 /**
  * kvm_pmu_disable_counter - disable selected PMU counter
  * @vcpu: The vcpu pointer
- * @pmc: The counter to dissable
+ * @select_idx: The counter index
  */
 static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
                                            u64 select_idx)
@@ -221,8 +362,40 @@ static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
         struct kvm_pmu *pmu = &vcpu->arch.pmu;
         struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
-        if (pmc->perf_event)
+        if (pmc->perf_event) {
                 perf_event_disable(pmc->perf_event);
+                if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+                        kvm_debug("fail to enable perf event\n");
+        }
+}
+
+/**
+ * kvm_pmu_disable_counter_pair - disable counters pair at a time
+ * @val: counters to disable
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
+                                         u64 pair_low)
+{
+        struct kvm_pmu *pmu = &vcpu->arch.pmu;
+        struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
+        struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
+
+        if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
+                if (pmc_low->perf_event == pmc_high->perf_event) {
+                        if (pmc_low->perf_event) {
+                                kvm_pmu_stop_release_perf_event_pair(vcpu,
+                                                                pair_low);
+                                kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
+                        }
+                }
+        }
+
+        if (val & BIT(pair_low))
+                kvm_pmu_disable_counter_single(vcpu, pair_low);
+
+        if (val & BIT(pair_low + 1))
+                kvm_pmu_disable_counter_single(vcpu, pair_low + 1);
 }
 
 /**
@@ -239,12 +412,8 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
         if (!val)
                 return;
 
-        for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-                if (!(val & BIT(i)))
-                        continue;
-
-                kvm_pmu_disable_counter_single(vcpu, i);
-        }
+        for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
+                kvm_pmu_disable_counter_pair(vcpu, val, i);
 }
 
 static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
@@ -355,6 +524,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
         __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
 
+        if (kvm_pmu_event_is_chained(vcpu, idx + 1)) {
+                struct kvm_pmu *pmu = &vcpu->arch.pmu;
+                struct kvm_pmc *pmc_high = &pmu->pmc[idx + 1];
+
+                if (!(--pmc_high->left)) {
+                        __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx + 1);
+                        pmc_high->left = pmc_high->sample_period;
+                }
+
+        }
+
         if (kvm_pmu_overflow_status(vcpu)) {
                 kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
                 kvm_vcpu_kick(vcpu);
@@ -448,6 +628,10 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
             select_idx != ARMV8_PMU_CYCLE_IDX)
                 return;
 
+        /* Handled by even event */
+        if (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
+                return;
+
         memset(&attr, 0, sizeof(struct perf_event_attr));
         attr.type = PERF_TYPE_RAW;
         attr.size = sizeof(attr);
@@ -459,6 +643,9 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
         attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
                 ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
 
+        if (kvm_pmu_event_is_chained(vcpu, select_idx))
+                attr.config1 |= 0x1;
+
         counter = kvm_pmu_get_counter_value(vcpu, select_idx);
         /* The initial sample period (overflow count) of an event. */
         attr.sample_period = (-counter) & pmc->bitmask;
@@ -471,6 +658,14 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
                 return;
         }
 
+        if (kvm_pmu_event_is_chained(vcpu, select_idx)) {
+                struct kvm_pmc *pmc_next = &pmu->pmc[select_idx + 1];
+
+                pmc_next->perf_event = event;
+                counter = kvm_pmu_get_counter_value(vcpu, select_idx + 1);
+                pmc_next->left = (-counter) & pmc->bitmask;
+        }
+
         pmc->perf_event = event;
 }
 
@@ -487,13 +682,22 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
                                     u64 select_idx)
 {
-        struct kvm_pmu *pmu = &vcpu->arch.pmu;
-        struct kvm_pmc *pmc = &pmu->pmc[select_idx];
-        u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+        u64 eventsel, event_type, pair_low;
 
-        kvm_pmu_stop_counter(vcpu, pmc);
-        __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
-        kvm_pmu_reenable_enabled_single(vcpu, select_idx);
+        eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
+        event_type = data & ARMV8_PMU_EVTYPE_MASK;
+        pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+        if (kvm_pmu_event_is_chained(vcpu, pair_low) ||
+            eventsel == ARMV8_PMUV3_PERFCTR_CHAIN) {
+                kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+                __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+                kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
+        } else {
+                kvm_pmu_stop_release_perf_event_single(vcpu, pair_low);
+                __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+                kvm_pmu_reenable_enabled_single(vcpu, pair_low);
+        }
 }
 
 bool kvm_arm_support_pmu_v3(void)
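To summarise the chaining scheme this patch implements: the even (low)
counter owns a single 64-bit perf event; the odd (high) counter is
programmed with the CHAIN event (0x1E), presents the upper 32 bits of the
shared count, and its overflow is emulated by decrementing a 'left' budget
each time the low half overflows. The sketch below walks through both
halves in plain user-space C with illustrative values; the helper and the
loop are stand-ins for the kernel logic, not kernel API.

#include <stdint.h>
#include <stdio.h>

#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E

static uint64_t upper_32_bits(uint64_t v) { return v >> 32; }

int main(void)
{
        /* One 64-bit perf count backs both halves of a chained pair. */
        uint64_t perf_count = 0x00000002deadbeefULL;

        uint32_t low  = (uint32_t)perf_count;                /* even counter */
        uint32_t high = (uint32_t)upper_32_bits(perf_count); /* odd, CHAIN */

        printf("low=0x%08x high=0x%08x\n", low, high);

        /* High-word overflow emulation, as in kvm_pmu_perf_overflow():
         * each low-half overflow ticks 'left' down; at zero the high
         * counter's overflow bit would be set and the budget reloaded. */
        uint64_t sample_period = 4, left = sample_period;
        for (int lo_overflow = 0; lo_overflow < 5; lo_overflow++) {
                if (!--left) {
                        printf("high counter overflow\n");
                        left = sample_period;
                }
        }
        return 0;
}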