From patchwork Mon Jul  6 02:17:37 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 6719391
From: shannon.zhao@linaro.org
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, marc.zyngier@arm.com, will.deacon@arm.com,
	linux-arm-kernel@lists.infradead.org, zhaoshenglong@huawei.com,
	alex.bennee@linaro.org, christoffer.dall@linaro.org,
	shannon.zhao@linaro.org
Subject: [PATCH 07/18] KVM: ARM64: PMU: Add perf event map and introduce perf
 event creating function
Date: Mon, 6 Jul 2015 10:17:37 +0800
Message-Id: <1436149068-3784-8-git-send-email-shannon.zhao@linaro.org>
In-Reply-To: <1436149068-3784-1-git-send-email-shannon.zhao@linaro.org>
References: <1436149068-3784-1-git-send-email-shannon.zhao@linaro.org>
From: Shannon Zhao <shannon.zhao@linaro.org>

When we use tools like perf on the host, perf passes the event type and
the id within that type category to the kernel, and the kernel maps them
to a hardware event number which it writes to the PMU PMEVTYPER_EL0
register. Since we trap and emulate the guest's accesses to the PMU
registers, we have to do the reverse: take the event number the guest
writes and map it back to the event type and id.

First check whether the event to be set is the same as the one currently
monitored. If not, stop the counter monitoring the current event and look
up the perf mapping for the new event number. Configure the perf_event
attr according to the bits of the written data and set exclude_host to 1
so the event counts only for the guest. Then call the perf_event API to
create the corresponding event and save the event pointer.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 include/kvm/arm_pmu.h |   4 ++
 virt/kvm/arm/pmu.c    | 173 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 177 insertions(+)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 27d14ca..1050b24 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -45,9 +45,13 @@ struct kvm_pmu {
 
 #ifdef CONFIG_KVM_ARM_PMU
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
+				    unsigned long select_idx);
 void kvm_pmu_init(struct kvm_vcpu *vcpu);
 #else
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+			unsigned long data, unsigned long select_idx) {}
 static inline void kvm_pmu_init(struct kvm_vcpu *vcpu) {}
 #endif
 
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index dc252d0..50a3c82 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -18,8 +18,68 @@
 #include <linux/cpu.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
+#include <linux/perf_event.h>
 #include <kvm/arm_pmu.h>
 
+/* PMU HW events mapping. */
+static struct kvm_pmu_hw_event_map {
+	unsigned eventsel;
+	unsigned event_type;
+} kvm_pmu_hw_events[] = {
+	[0] = { 0x11, PERF_COUNT_HW_CPU_CYCLES },
+	[1] = { 0x08, PERF_COUNT_HW_INSTRUCTIONS },
+	[2] = { 0x04, PERF_COUNT_HW_CACHE_REFERENCES },
+	[3] = { 0x03, PERF_COUNT_HW_CACHE_MISSES },
+	[4] = { 0x10, PERF_COUNT_HW_BRANCH_MISSES },
+};
+
+/* PMU HW cache events mapping. */
+static struct kvm_pmu_hw_cache_event_map {
+	unsigned eventsel;
+	unsigned cache_type;
+	unsigned cache_op;
+	unsigned cache_result;
+} kvm_pmu_hw_cache_events[] = {
+	[0] = { 0x04, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[1] = { 0x03, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[2] = { 0x04, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[3] = { 0x03, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[4] = { 0x12, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[5] = { 0x10, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[6] = { 0x12, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[7] = { 0x10, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+};
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter for the selected counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ *
+ * If this counter has been configured to monitor some event, disable and
+ * release it.
+ */
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu,
+				 unsigned long select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+	}
+	pmc->perf_event = NULL;
+	pmc->eventsel = 0xff;
+}
+
 /**
  * kvm_pmu_vcpu_reset - reset pmu state for cpu
  * @vcpu: The vcpu pointer
@@ -27,12 +87,125 @@
  */
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 {
+	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++)
+		kvm_pmu_stop_counter(vcpu, i);
+
 	pmu->overflow_status = 0;
 	pmu->irq_pending = false;
 }
 
 /**
+ * kvm_pmu_find_hw_event - find hardware event
+ * @pmu: The pmu pointer
+ * @event_select: The selected event number
+ *
+ * Based on the selected event number, find out whether it maps to a
+ * PERF_TYPE_HARDWARE event. If so, return the corresponding perf event id.
+ */
+static unsigned kvm_pmu_find_hw_event(struct kvm_pmu *pmu,
+				      unsigned long event_select)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_pmu_hw_events); i++)
+		if (kvm_pmu_hw_events[i].eventsel == event_select)
+			break;
+
+	if (i == ARRAY_SIZE(kvm_pmu_hw_events))
+		return PERF_COUNT_HW_MAX;
+
+	return kvm_pmu_hw_events[i].event_type;
+}
+
+/**
+ * kvm_pmu_find_hw_cache_event - find hardware cache event
+ * @pmu: The pmu pointer
+ * @event_select: The selected event number
+ *
+ * Based on the selected event number, find out whether it maps to a
+ * PERF_TYPE_HW_CACHE event. If so, return the corresponding event id.
+ */
+static unsigned kvm_pmu_find_hw_cache_event(struct kvm_pmu *pmu,
+					    unsigned long event_select)
+{
+	int i;
+	unsigned config;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_pmu_hw_cache_events); i++)
+		if (kvm_pmu_hw_cache_events[i].eventsel == event_select)
+			break;
+
+	if (i == ARRAY_SIZE(kvm_pmu_hw_cache_events))
+		return PERF_COUNT_HW_CACHE_MAX;
+
+	config = (kvm_pmu_hw_cache_events[i].cache_type & 0xff)
+		 | ((kvm_pmu_hw_cache_events[i].cache_op & 0xff) << 8)
+		 | ((kvm_pmu_hw_cache_events[i].cache_result & 0xff) << 16);
+
+	return config;
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data the guest writes to PMXEVTYPER_EL0
+ * @select_idx: The selected counter index
+ *
+ * First check whether the event to be set is the same as the one currently
+ * monitored. If not, stop the counter and look up the perf mapping for the
+ * new event number. Configure the perf_event attr from the bits of @data,
+ * with exclude_host set to 1 so the event counts only for the guest, then
+ * call the perf_event API to create the event and save the event pointer.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
+				    unsigned long select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct perf_event *event;
+	struct perf_event_attr attr;
+	unsigned config, type = PERF_TYPE_RAW;
+
+	if ((data & ARMV8_EVTYPE_EVENT) == pmc->eventsel)
+		return;
+
+	kvm_pmu_stop_counter(vcpu, select_idx);
+	pmc->eventsel = data & ARMV8_EVTYPE_EVENT;
+
+	config = kvm_pmu_find_hw_event(pmu, pmc->eventsel);
+	if (config != PERF_COUNT_HW_MAX) {
+		type = PERF_TYPE_HARDWARE;
+	} else {
+		config = kvm_pmu_find_hw_cache_event(pmu, pmc->eventsel);
+		if (config != PERF_COUNT_HW_CACHE_MAX)
+			type = PERF_TYPE_HW_CACHE;
+	}
+
+	if (type == PERF_TYPE_RAW)
+		config = pmc->eventsel;
+
+	attr.type = type;
+	attr.size = sizeof(attr);
+	attr.pinned = true;
+	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
+	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
+	attr.exclude_hv = data & ARMV8_INCLUDE_EL2 ? 0 : 1;
+	attr.exclude_host = 1;
+	attr.config = config;
+	attr.sample_period = (-pmc->counter) & (((u64)1 << 32) - 1);
+
+	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	if (IS_ERR(event)) {
+		kvm_err("kvm: pmu event creation failed %ld\n",
+			PTR_ERR(event));
+		return;
+	}
+	pmc->perf_event = event;
+}
+
+/**
  * kvm_pmu_init - Initialize global PMU state for per vcpu
  * @vcpu: The vcpu pointer
  *
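
[Editor's note, not part of the patch] The config value built in
kvm_pmu_find_hw_cache_event() uses the standard perf ABI packing for
PERF_TYPE_HW_CACHE events: cache id in bits 0-7, operation in bits 8-15,
result in bits 16-23. A minimal user-space sketch of the same encoding,
assuming only the constants exported by <linux/perf_event.h>; the helper
name hw_cache_config() is illustrative:

#include <stdio.h>
#include <linux/perf_event.h>

/*
 * Pack a PERF_TYPE_HW_CACHE config the same way the patch does:
 * cache id in bits 0-7, operation in bits 8-15, result in bits 16-23.
 */
static unsigned long hw_cache_config(unsigned int id, unsigned int op,
				     unsigned int result)
{
	return (id & 0xff) | ((op & 0xff) << 8) | ((result & 0xff) << 16);
}

int main(void)
{
	/* ARMv8 event 0x03 (L1D cache refill) maps to an L1D read miss. */
	unsigned long config = hw_cache_config(PERF_COUNT_HW_CACHE_L1D,
					       PERF_COUNT_HW_CACHE_OP_READ,
					       PERF_COUNT_HW_CACHE_RESULT_MISS);

	printf("attr.type = PERF_TYPE_HW_CACHE, attr.config = %#lx\n", config);
	return 0;
}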