From patchwork Mon Feb 22 09:37:54 2016
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 8374101
From: Shannon Zhao
Subject: [PATCH v12 18/21] KVM: ARM64: Free perf event of PMU when destroying vcpu
Date: Mon, 22 Feb 2016 17:37:54 +0800
Message-ID: <1456133877-9584-19-git-send-email-zhaoshenglong@huawei.com>
In-Reply-To: <1456133877-9584-1-git-send-email-zhaoshenglong@huawei.com>
References: <1456133877-9584-1-git-send-email-zhaoshenglong@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

When KVM frees a VCPU, it also needs to free the perf events that were
created for the PMU counters.
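
For context, kvm_pmu_vcpu_destroy() releases perf events that the vPMU
code creates with the in-kernel perf API when the guest programs a
counter. Below is a simplified sketch of that creation side; the helper
name and the attr settings are illustrative only, not the exact code
used elsewhere in this series.

/* Illustrative only: roughly how a counter's perf_event gets set up. */
static void example_create_counter_event(struct kvm_pmc *pmc, u64 eventsel)
{
	struct perf_event_attr attr = { };
	struct perf_event *event;

	attr.type = PERF_TYPE_RAW;
	attr.size = sizeof(attr);
	attr.config = eventsel;		/* hardware event to count */
	attr.disabled = 1;		/* enabled later when the guest sets PMCNTENSET */
	attr.exclude_host = 1;		/* count guest execution only */

	/* Counting (non-sampling) event; no overflow handler in this sketch. */
	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
	if (IS_ERR(event))
		return;

	/* This is the pointer kvm_pmu_vcpu_destroy() disables and releases. */
	pmc->perf_event = event;
}
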
Signed-off-by: Shannon Zhao
Reviewed-by: Marc Zyngier
Reviewed-by: Andrew Jones
---
 arch/arm/kvm/arm.c    |  1 +
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c    | 21 +++++++++++++++++++++
 3 files changed, 24 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f54264c..d2c2cc3 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -266,6 +266,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_memory_caches(vcpu);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
+	kvm_pmu_vcpu_destroy(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 46b773e..9cebf03 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -42,6 +42,7 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
@@ -67,6 +68,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 	return 0;
 }
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 92fff9a..0427af8 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -101,6 +101,27 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 	}
 }
 
+/**
+ * kvm_pmu_vcpu_destroy - free perf event of PMU for cpu
+ * @vcpu: The vcpu pointer
+ *
+ */
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
+		struct kvm_pmc *pmc = &pmu->pmc[i];
+
+		if (pmc->perf_event) {
+			perf_event_disable(pmc->perf_event);
+			perf_event_release_kernel(pmc->perf_event);
+			pmc->perf_event = NULL;
+		}
+	}
+}
+
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
 	u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;