From patchwork Wed May 22 15:30:13 2019
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier
Cc: Suzuki K Poulose, James Morse, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, Julien Thierry
Subject: [PATCH v8 0/6] KVM: arm/arm64: add support for chained counters
Date: Wed, 22 May 2019 16:30:13 +0100
Message-Id: <20190522153019.18645-1-andrew.murray@arm.com>

ARMv8 provides support for chained PMU counters: when the event type
0x001E (CHAIN) is programmed into an odd-numbered counter, that counter
increments by one for each overflow of the preceding even-numbered
counter. Let's emulate this in KVM by creating a single 64-bit perf
event when a user chains two emulated counters together.

Testing has been performed by hard-coding hwc->sample_period in
__hw_perf_event_init (arm_pmu.c) to a small value, which results in
regular overflows (for non-sampling events). The following command was
then used to measure retired instructions with chained and non-chained
counters:

  perf stat -e armv8_pmuv3/long=1,inst_retired/u \
            -e armv8_pmuv3/long=0,inst_retired/u dd if=/dev/zero bs=1M \
            count=10 | gzip > /dev/null

The reported values were identical (and the non-chained value was in the
same ballpark when running on a kernel without this patchset). Debug was
added to verify that the guest received overflow interrupts for the
chained counter. The test was also repeated using the cycle counter
(cycles:u).

For chained events we only support generating an overflow interrupt on
the high counter. We use the attributes of the low counter to determine
the attributes of the perf event.
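To make the chaining behaviour concrete, here is a minimal, compilable
sketch (plain C with illustrative names, not the code added by this
series) of how the CHAIN event type marks the high counter of a pair and
how the two 32-bit counters combine into one 64-bit value:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Architectural CHAIN event number from the ARMv8 PMU. */
  #define PMU_EVT_CHAIN 0x1E

  /* An odd counter programmed with the CHAIN event counts overflows of
   * the preceding even counter, so the pair acts as one 64-bit counter. */
  static bool evtype_is_chain(uint64_t evtype)
  {
          return evtype == PMU_EVT_CHAIN;
  }

  /* Combine the even (low) and odd (high) halves of a chained pair. */
  static uint64_t chained_counter_value(uint32_t low, uint32_t high)
  {
          return ((uint64_t)high << 32) | low;
  }

  int main(void)
  {
          /* The odd counter of the pair is programmed with CHAIN and has
           * counted three overflows of the low counter. */
          if (evtype_is_chain(PMU_EVT_CHAIN))
                  printf("chained value: 0x%llx\n",
                         (unsigned long long)chained_counter_value(0x1234, 3));
          return 0;
  }
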
Changes since v7:
 - Remove pmc->bitmask
 - Remove a couple of instances of using kvm_pmu_get_canonical_pmc
   when not needed
 - Remove unused perf_event variable

Changes since v6:
 - Drop kvm_pmu_{get,set}_perf_event
 - Avoid duplicate work by using kvm_pmu_get_pair_counter_value inside
   kvm_pmu_stop_counter
 - Use GENMASK for 64-bit mask

Changes since v5:
 - Use kvm_pmu_pmc_is_high_counter instead of open coding
 - Rename kvm_pmu_event_is_chained to kvm_pmu_idx_has_chain_evtype
 - Use kvm_pmu_get_canonical_pmc only where needed and reintroduce the
   kvm_pmu_{set,get}_perf_event functions
 - Drop masking of counter in kvm_pmu_get_pair_counter_value
 - Only initialise pmc once in kvm_pmu_create_perf_event and other
   minor changes

Changes since v4:
 - Track pairs of chained counters with a bitmap instead of using
   a struct kvm_pmc_pair
 - Rebase onto kvmarm/queue

Changes since v3:
 - Simplify approach by not creating events lazily and by introducing
   a struct kvm_pmc_pair to represent the relationship between
   adjacent counters
 - Rebase onto v5.1-rc2

Changes since v2:
 - Rebased onto v5.0-rc7
 - Add check for cycle counter in correct patch
 - Minor style, naming and comment changes
 - Extract armv8pmu_evtype_is_chain from arch/arm64/kernel/perf_event.c
   into a common header that KVM can use

Changes since v1:
 - Rename kvm_pmu_{enable,disable}_counter to reflect that they can
   operate on multiple counters at once, and use these functions where
   possible
 - Fix bugs with overflow handling: kvm_pmu_get_counter_value did not
   take into consideration the perf counter value overflowing the low
   counter
 - Ensure PMCCFILTR_EL0 is used when operating on the cycle counter
 - Rename kvm_pmu_reenable_enabled_{pair,single} and similar
 - Always create perf events disabled to simplify logic elsewhere
 - Move PMCNTENSET_EL0 test to kvm_pmu_enable_counter_mask

Andrew Murray (6):
  KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions
  KVM: arm/arm64: extract duplicated code to own function
  KVM: arm/arm64: re-create event when setting counter value
  arm64: perf: extract chain helper into header
  KVM: arm/arm64: remove pmc->bitmask
  KVM: arm/arm64: support chained PMU counters

 arch/arm64/include/asm/perf_event.h |   5 +
 arch/arm64/kernel/perf_event.c      |   2 +-
 arch/arm64/kvm/sys_regs.c           |   4 +-
 include/kvm/arm_pmu.h               |  11 +-
 virt/kvm/arm/pmu.c                  | 335 ++++++++++++++++++++++------
 5 files changed, 282 insertions(+), 75 deletions(-)
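For illustration, the "track pairs of chained counters with a bitmap"
approach mentioned in the v4 changes could look roughly like the
following userspace sketch (hypothetical names, one bit per even/odd
pair; not the actual kernel implementation):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Bit n set => counters 2n and 2n+1 are chained together. */
  static uint64_t chained_bitmap;

  static void set_pair_chained(unsigned int idx, bool chained)
  {
          unsigned int pair = idx >> 1;   /* both counters share one bit */

          if (chained)
                  chained_bitmap |= (1ULL << pair);
          else
                  chained_bitmap &= ~(1ULL << pair);
  }

  static bool idx_is_chained(unsigned int idx)
  {
          return chained_bitmap & (1ULL << (idx >> 1));
  }

  static bool idx_is_high_counter(unsigned int idx)
  {
          /* In a chained pair the odd counter holds the high 32 bits. */
          return idx_is_chained(idx) && (idx & 1);
  }

  int main(void)
  {
          set_pair_chained(3, true);      /* chain counters 2 and 3 */
          printf("counter 3 chained: %d, high: %d\n",
                 idx_is_chained(3), idx_is_high_counter(3));
          printf("counter 2 chained: %d, high: %d\n",
                 idx_is_chained(2), idx_is_high_counter(2));
          return 0;
  }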