From patchwork Mon Mar 25 18:30:05 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10869835
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier
Cc: Julien Thierry, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Suzuki K Poulose
Subject: [PATCH v4 5/6] KVM: arm/arm64: represent paired counters with kvm_pmc_pair
Date: Mon, 25 Mar 2019 18:30:05 +0000
Message-Id: <20190325183006.33115-6-andrew.murray@arm.com>
In-Reply-To: <20190325183006.33115-1-andrew.murray@arm.com>
References: <20190325183006.33115-1-andrew.murray@arm.com>

The CHAIN PMU event implicitly creates a relationship between a pair of
adjacent counters: the high counter counts overflows that occur in the
low counter. To facilitate emulation of chained counters, let's
represent this relationship via a struct kvm_pmc_pair that holds a pair
of counters.

Signed-off-by: Andrew Murray
---
 include/kvm/arm_pmu.h | 13 +++++++-
 virt/kvm/arm/pmu.c    | 78 ++++++++++++++++++++++++++++++++-----------
 2 files changed, 71 insertions(+), 20 deletions(-)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index b73f31baca52..ee80dc8db990 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -22,6 +22,7 @@
 #include

 #define ARMV8_PMU_CYCLE_IDX		(ARMV8_PMU_MAX_COUNTERS - 1)
+#define ARMV8_PMU_MAX_COUNTER_PAIRS	((ARMV8_PMU_MAX_COUNTERS + 1) >> 1)

 #ifdef CONFIG_KVM_ARM_PMU

@@ -31,9 +32,19 @@ struct kvm_pmc {
 	u64 bitmask;
 };

+enum kvm_pmc_type {
+	KVM_PMC_TYPE_PAIR,
+};
+
+struct kvm_pmc_pair {
+	struct kvm_pmc low;
+	struct kvm_pmc high;
+	enum kvm_pmc_type type;
+};
+
 struct kvm_pmu {
 	int irq_num;
-	struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
+	struct kvm_pmc_pair pmc_pair[ARMV8_PMU_MAX_COUNTER_PAIRS];
 	bool ready;
 	bool created;
 	bool irq_level;
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index ae1e886d4a1a..08acd60c538a 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -25,6 +25,43 @@
 #include

 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
+
+/**
+ * kvm_pmu_pair_is_high_counter - determine if select_idx is a high/low counter
+ * @select_idx: The counter index
+ */
+static bool kvm_pmu_pair_is_high_counter(u64 select_idx)
+{
+	return select_idx & 0x1;
+}
+
+/**
+ * kvm_pmu_get_kvm_pmc_pair - obtain a pmc_pair from a pmc
+ * @pmc: The PMU counter pointer
+ */
+static struct kvm_pmc_pair *kvm_pmu_get_kvm_pmc_pair(struct kvm_pmc *pmc)
+{
+	if (kvm_pmu_pair_is_high_counter(pmc->idx))
+		return container_of(pmc, struct kvm_pmc_pair, high);
+	else
+		return container_of(pmc, struct kvm_pmc_pair, low);
+}
+
+/**
+ * kvm_pmu_get_kvm_pmc - obtain a pmc based on select_idx
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static struct kvm_pmc *kvm_pmu_get_kvm_pmc(struct kvm_vcpu *vcpu,
+					   u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc_pair *pmc_pair = &pmu->pmc_pair[select_idx >> 1];
+
+	return kvm_pmu_pair_is_high_counter(select_idx) ? &pmc_pair->high
+							: &pmc_pair->low;
+}
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -33,8 +70,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	u64 counter, reg, enabled, running;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct kvm_pmc *pmc = kvm_pmu_get_kvm_pmc(vcpu, select_idx);

 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
@@ -108,12 +144,17 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+	struct kvm_pmc_pair *pmc_pair;

 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
-		pmu->pmc[i].idx = i;
-		pmu->pmc[i].bitmask = 0xffffffffUL;
+		pmc = kvm_pmu_get_kvm_pmc(vcpu, i);
+		kvm_pmu_stop_counter(vcpu, pmc);
+		pmc->idx = i;
+		pmc->bitmask = 0xffffffffUL;
+
+		pmc_pair = kvm_pmu_get_kvm_pmc_pair(pmc);
+		pmc_pair->type = KVM_PMC_TYPE_PAIR;
 	}
 }

@@ -125,10 +166,12 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;

-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		kvm_pmu_release_perf_event(&pmu->pmc[i]);
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
+		pmc = kvm_pmu_get_kvm_pmc(vcpu, i);
+		kvm_pmu_release_perf_event(pmc);
+	}
 }

 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
@@ -152,7 +195,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc;

 	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
@@ -162,7 +204,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 		if (!(val & BIT(i)))
 			continue;

-		pmc = &pmu->pmc[i];
+		pmc = kvm_pmu_get_kvm_pmc(vcpu, i);
 		if (pmc->perf_event) {
 			perf_event_enable(pmc->perf_event);
 			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
@@ -181,7 +223,6 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc;

 	if (!val)
@@ -191,7 +232,7 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 		if (!(val & BIT(i)))
 			continue;

-		pmc = &pmu->pmc[i];
+		pmc = kvm_pmu_get_kvm_pmc(vcpu, i);
 		if (pmc->perf_event)
 			perf_event_disable(pmc->perf_event);
 	}
@@ -285,9 +326,10 @@ static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu;
 	struct kvm_vcpu_arch *vcpu_arch;
+	struct kvm_pmc_pair *pair = kvm_pmu_get_kvm_pmc_pair(pmc);

-	pmc -= pmc->idx;
-	pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
+	pair -= (pmc->idx >> 1);
+	pmu = container_of(pair, struct kvm_pmu, pmc_pair[0]);
 	vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
 	return container_of(vcpu_arch, struct kvm_vcpu, arch);
 }
@@ -348,7 +390,6 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
  */
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc;
 	u64 mask;
 	int i;
@@ -370,7 +411,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 	}

 	if (val & ARMV8_PMU_PMCR_LC) {
-		pmc = &pmu->pmc[ARMV8_PMU_CYCLE_IDX];
+		pmc = kvm_pmu_get_kvm_pmc(vcpu, ARMV8_PMU_CYCLE_IDX);
 		pmc->bitmask = 0xffffffffffffffffUL;
 	}
 }
@@ -388,8 +429,7 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct kvm_pmc *pmc = kvm_pmu_get_kvm_pmc(vcpu, select_idx);
 	struct perf_event *event;
 	struct perf_event_attr attr;
 	u64 eventsel, counter, reg, data;