From patchwork Tue May 21 15:52:23 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10954177
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier
Cc: Suzuki K Pouloze, James Morse, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, Julien Thierry
Subject: [PATCH v7 0/5] KVM: arm/arm64: add support for chained counters
Date: Tue, 21 May 2019 16:52:23 +0100
Message-Id: <20190521155228.903-1-andrew.murray@arm.com>

ARMv8 provides support for chained PMU counters: when the event type
0x001E (CHAIN) is set for an odd-numbered counter, that counter
increments by one for each overflow of the preceding even-numbered
counter, so the pair behaves as a single 64-bit counter.
Let's emulate this in KVM by creating a 64-bit perf counter when a user
chains two emulated counters together.

Testing has been performed by hard-coding hwc->sample_period in
__hw_perf_event_init (arm_pmu.c) to a small value; this results in
regular overflows (for non-sampling events). The following command was
then used to measure chained and non-chained instruction cycles:

 perf stat -e armv8_pmuv3/long=1,inst_retired/u \
     -e armv8_pmuv3/long=0,inst_retired/u dd if=/dev/zero bs=1M \
     count=10 | gzip > /dev/null

The reported values were identical (and the non-chained value was in the
same ballpark when running on a kernel without this patchset). Debug
output was added to verify that the guest received overflow interrupts
for the chained counter.

For chained events we only support generating an overflow interrupt on
the high counter. We use the attributes of the low counter to determine
the attributes of the perf event.

Changes since v6:
 - Drop kvm_pmu_{get,set}_perf_event
 - Avoid duplicate work by using kvm_pmu_get_pair_counter_value inside
   kvm_pmu_stop_counter
 - Use GENMASK for 64-bit mask

Changes since v5:
 - Use kvm_pmu_pmc_is_high_counter instead of open coding
 - Rename kvm_pmu_event_is_chained to kvm_pmu_idx_has_chain_evtype
 - Use kvm_pmu_get_canonical_pmc only where needed and reintroduce the
   kvm_pmu_{set,get}_perf_event functions
 - Drop masking of counter in kvm_pmu_get_pair_counter_value
 - Only initialise pmc once in kvm_pmu_create_perf_event and other
   minor changes

Changes since v4:
 - Track pairs of chained counters with a bitmap instead of using a
   struct kvm_pmc_pair
 - Rebase onto kvmarm/queue

Changes since v3:
 - Simplify approach by not creating events lazily and by introducing a
   struct kvm_pmc_pair to represent the relationship between adjacent
   counters
 - Rebase onto v5.1-rc2

Changes since v2:
 - Rebased onto v5.0-rc7
 - Add check for cycle counter in correct patch
 - Minor style, naming and comment changes
 - Extract armv8pmu_evtype_is_chain from arch/arm64/kernel/perf_event.c
   into a common header that KVM can use

Changes since v1:
 - Rename kvm_pmu_{enable,disable}_counter to reflect that they can
   operate on multiple counters at once, and use these functions where
   possible
 - Fix bugs with overflow handling: kvm_pmu_get_counter_value did not
   take into consideration the perf counter value overflowing the low
   counter
 - Ensure PMCCFILTR_EL0 is used when operating on the cycle counter
 - Rename kvm_pmu_reenable_enabled_{pair,single} and similar
 - Always create perf events disabled to simplify logic elsewhere
 - Move PMCNTENSET_EL0 test to kvm_pmu_enable_counter_mask

Andrew Murray (5):
  KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions
  KVM: arm/arm64: extract duplicated code to own function
  KVM: arm/arm64: re-create event when setting counter value
  arm64: perf: extract chain helper into header
  KVM: arm/arm64: support chained PMU counters

 arch/arm64/include/asm/perf_event.h |   5 +
 arch/arm64/kernel/perf_event.c      |   2 +-
 arch/arm64/kvm/sys_regs.c           |   4 +-
 include/kvm/arm_pmu.h               |  10 +-
 virt/kvm/arm/pmu.c                  | 322 +++++++++++++++++++++++-----
 5 files changed, 279 insertions(+), 64 deletions(-)