From patchwork Mon Jan 14 16:11:44 2019
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
 Mark Rutland
Cc: Julien Thierry, Suzuki K Poulose, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v10 1/5] arm64: arm_pmu: remove unnecessary isb instruction
Date: Mon, 14 Jan 2019 16:11:44 +0000
Message-Id: <1547482308-29839-2-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>
References: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>
The armv8pmu_enable_event_counter function issues an isb instruction
after enabling a pair of counters - this provides no value and is
inconsistent with armv8pmu_disable_event_counter.

In any case armv8pmu_enable_event_counter is always called with the
PMU stopped. Starting the PMU with armv8pmu_start results in an isb
instruction being issued prior to writing to PMCR_EL0.

Let's remove the unnecessary isb instruction.

Signed-off-by: Andrew Murray
Reviewed-by: Suzuki K Poulose
Acked-by: Mark Rutland
---
 arch/arm64/kernel/perf_event.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 1620a37..1c71796 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -533,7 +533,6 @@ static inline void armv8pmu_enable_event_counter(struct perf_event *event)
 	armv8pmu_enable_counter(idx);
 	if (armv8pmu_event_is_chained(event))
 		armv8pmu_enable_counter(idx - 1);
-	isb();
 }
 
 static inline int armv8pmu_disable_counter(int idx)
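For context on the reasoning above: the isb that makes this removal safe
lives on the PMCR_EL0 write path. A minimal sketch of that path, roughly
as it appears in arch/arm64/kernel/perf_event.c of this era (shown for
illustration only, not part of the patch):

	/*
	 * Sketch of the existing code the commit message relies on:
	 * the PMCR_EL0 write is preceded by an isb, so counter
	 * reprogramming done while the PMU was stopped is guaranteed
	 * to be visible before the PMU is re-enabled.
	 */
	static inline void armv8pmu_pmcr_write(u32 val)
	{
		val &= ARMV8_PMU_PMCR_MASK;
		isb();
		write_sysreg(val, pmcr_el0);
	}

	static void armv8pmu_start(struct arm_pmu *cpu_pmu)
	{
		unsigned long flags;
		struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);

		raw_spin_lock_irqsave(&events->pmu_lock, flags);
		/* Enable all counters */
		armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
		raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
	}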
From patchwork Mon Jan 14 16:11:45 2019
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
 Mark Rutland
Cc: Julien Thierry, Suzuki K Poulose, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v10 2/5] arm64: KVM: encapsulate kvm_cpu_context in kvm_host_data
Date: Mon, 14 Jan 2019 16:11:45 +0000
Message-Id: <1547482308-29839-3-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>
References: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>

The virt/arm core allocates a per-CPU kvm_cpu_context_t; at present
this is a typedef of kvm_cpu_context and is used to store the host CPU
context. The kvm_cpu_context structure is also used elsewhere to hold
vCPU context. In order to use the per-CPU variable to hold additional
future host information, we encapsulate kvm_cpu_context in a new
kvm_host_data structure and rename the typedef and per-CPU variable to
match.
Signed-off-by: Andrew Murray
Reviewed-by: Suzuki K Poulose
---
 arch/arm/include/asm/kvm_host.h   |  8 ++++++--
 arch/arm64/include/asm/kvm_asm.h  |  3 ++-
 arch/arm64/include/asm/kvm_host.h | 14 +++++++++-----
 arch/arm64/kernel/asm-offsets.c   |  1 +
 virt/kvm/arm/arm.c                | 12 +++++++-----
 5 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ca56537..6420b40 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -145,7 +145,11 @@ struct kvm_cpu_context {
 	u32 cp15[NR_CP15_REGS];
 };
 
-typedef struct kvm_cpu_context kvm_cpu_context_t;
+struct kvm_host_data {
+	struct kvm_cpu_context host_ctxt;
+};
+
+typedef struct kvm_host_data kvm_host_data_t;
 
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
@@ -163,7 +167,7 @@ struct kvm_vcpu_arch {
 	struct kvm_vcpu_fault_info fault;
 
 	/* Host FP context */
-	kvm_cpu_context_t *host_cpu_context;
+	struct kvm_cpu_context *host_cpu_context;
 
 	/* VGIC state */
 	struct vgic_cpu vgic_cpu;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e9..ff73f54 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -108,7 +108,8 @@ extern u32 __kvm_get_mdcr_el2(void);
 .endm
 
 .macro get_host_ctxt reg, tmp
-	hyp_adr_this_cpu \reg, kvm_host_cpu_state, \tmp
+	hyp_adr_this_cpu \reg, kvm_host_data, \tmp
+	add	\reg, \reg, #HOST_DATA_CONTEXT
 .endm
 
 .macro get_vcpu_ptr vcpu, ctxt
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7732d0b..26fd935 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -206,7 +206,11 @@ struct kvm_cpu_context {
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
-typedef struct kvm_cpu_context kvm_cpu_context_t;
+struct kvm_host_data {
+	struct kvm_cpu_context host_ctxt;
+};
+
+typedef struct kvm_host_data kvm_host_data_t;
 
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
@@ -242,7 +246,7 @@ struct kvm_vcpu_arch {
 	struct kvm_guest_debug_arch external_debug_state;
 
 	/* Pointer to host CPU context */
-	kvm_cpu_context_t *host_cpu_context;
+	struct kvm_cpu_context *host_cpu_context;
 
 	struct thread_info *host_thread_info;	/* hyp VA */
 	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
@@ -387,7 +391,7 @@ void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
 
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
 
-DECLARE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);
+DECLARE_PER_CPU(kvm_host_data_t, kvm_host_data);
 
 void __kvm_enable_ssbs(void);
 
@@ -400,8 +404,8 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 	 * kernel's mapping to the linear mapping, and store it in tpidr_el2
 	 * so that we can use adr_l to access per-cpu variables in EL2.
	 */
-	u64 tpidr_el2 = ((u64)this_cpu_ptr(&kvm_host_cpu_state) -
-			 (u64)kvm_ksym_ref(kvm_host_cpu_state));
+	u64 tpidr_el2 = ((u64)this_cpu_ptr(&kvm_host_data) -
+			 (u64)kvm_ksym_ref(kvm_host_data));
 
 	/*
 	 * Call initialization code, and switch to the full blown HYP code.
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 65b8afc..9b03554 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -146,6 +146,7 @@ int main(void)
   DEFINE(VCPU_FPEXC32_EL2,	offsetof(struct kvm_vcpu, arch.ctxt.sys_regs[FPEXC32_EL2]));
   DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
   DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
+  DEFINE(HOST_DATA_CONTEXT,	offsetof(struct kvm_host_data, host_ctxt));
 #endif
 #ifdef CONFIG_CPU_PM
   DEFINE(CPU_SUSPEND_SZ,	sizeof(struct cpu_suspend_ctx));
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9e350fd3..89acb7f 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -56,7 +56,7 @@
 __asm__(".arch_extension	virt");
 #endif
 
-DEFINE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);
+DEFINE_PER_CPU(kvm_host_data_t, kvm_host_data);
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 
 /* Per-CPU variable containing the currently running vcpu. */
@@ -363,8 +363,10 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	int *last_ran;
+	kvm_host_data_t *cpu_data;
 
 	last_ran = this_cpu_ptr(vcpu->kvm->arch.last_vcpu_ran);
+	cpu_data = this_cpu_ptr(&kvm_host_data);
 
 	/*
 	 * We might get preempted before the vCPU actually runs, but
@@ -376,7 +378,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	}
 
 	vcpu->cpu = cpu;
-	vcpu->arch.host_cpu_context = this_cpu_ptr(&kvm_host_cpu_state);
+	vcpu->arch.host_cpu_context = &cpu_data->host_ctxt;
 
 	kvm_arm_set_running_vcpu(vcpu);
 	kvm_vgic_load(vcpu);
@@ -1558,10 +1560,10 @@ static int init_hyp_mode(void)
 	}
 
 	for_each_possible_cpu(cpu) {
-		kvm_cpu_context_t *cpu_ctxt;
+		kvm_host_data_t *cpu_data;
 
-		cpu_ctxt = per_cpu_ptr(&kvm_host_cpu_state, cpu);
-		err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1, PAGE_HYP);
+		cpu_data = per_cpu_ptr(&kvm_host_data, cpu);
+		err = create_hyp_mappings(cpu_data, cpu_data + 1, PAGE_HYP);
 
 		if (err) {
 			kvm_err("Cannot map host CPU state: %d\n", err);
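The payoff of this wrapper shows up later in the series: hyp code that
is handed only a struct kvm_cpu_context pointer can recover the
enclosing per-CPU kvm_host_data, and any extra fields added to it, via
container_of. A minimal, self-contained sketch of that pattern with
made-up names (illustrative only, not code from the series):

	#include <stddef.h>

	struct cpu_context {
		unsigned long regs[31];
	};

	struct host_data {
		struct cpu_context host_ctxt;
		unsigned int extra_state;	/* future host-only fields ride along */
	};

	/* Kernel-style container_of: recover the wrapper from a member pointer. */
	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	static unsigned int get_extra(struct cpu_context *ctxt)
	{
		/* Walk back from the embedded member to the enclosing struct. */
		struct host_data *host = container_of(ctxt, struct host_data, host_ctxt);

		return host->extra_state;
	}

This is the same trick the hyp switch code in patch 5 uses to find the
PMU event masks from the host context pointer it is given.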
From patchwork Mon Jan 14 16:11:46 2019
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
 Mark Rutland
Cc: Julien Thierry, Suzuki K Poulose, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v10 3/5] arm64: KVM: add accessors to track guest/host only counters
Date: Mon, 14 Jan 2019 16:11:46 +0000
Message-Id: <1547482308-29839-4-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>
References: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>

In order to efficiently enable/disable guest/host only perf counters
at guest entry/exit, we add bitfields to kvm_host_data for guest and
host events, as well as accessors for updating them.
Signed-off-by: Andrew Murray
Reviewed-by: Suzuki K Poulose
---
 arch/arm64/include/asm/kvm_host.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 26fd935..cbfe3d1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -206,8 +206,14 @@ struct kvm_cpu_context {
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
+struct kvm_pmu_events {
+	u32 events_host;
+	u32 events_guest;
+};
+
 struct kvm_host_data {
 	struct kvm_cpu_context host_ctxt;
+	struct kvm_pmu_events pmu_events;
 };
 
 typedef struct kvm_host_data kvm_host_data_t;
@@ -468,11 +474,33 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
+#define KVM_PMU_EVENTS_HOST	1
+#define KVM_PMU_EVENTS_GUEST	2
+
 #ifdef CONFIG_KVM /* Avoid conflicts with core headers if CONFIG_KVM=n */
 static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 {
 	return kvm_arch_vcpu_run_map_fp(vcpu);
 }
+static inline void kvm_set_pmu_events(u32 set, int flags)
+{
+	struct kvm_host_data *ctx = this_cpu_ptr(&kvm_host_data);
+
+	if (flags & KVM_PMU_EVENTS_HOST)
+		ctx->pmu_events.events_host |= set;
+	if (flags & KVM_PMU_EVENTS_GUEST)
+		ctx->pmu_events.events_guest |= set;
+}
+static inline void kvm_clr_pmu_events(u32 clr)
+{
+	struct kvm_host_data *ctx = this_cpu_ptr(&kvm_host_data);
+
+	ctx->pmu_events.events_host &= ~clr;
+	ctx->pmu_events.events_guest &= ~clr;
+}
+#else
+static inline void kvm_set_pmu_events(u32 set, int flags) {}
+static inline void kvm_clr_pmu_events(u32 clr) {}
 #endif
 
 static inline void kvm_arm_vhe_guest_enter(void)
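To make the intended use of these accessors concrete, here is a
hypothetical caller (illustrative only; the real callers arrive in the
next patch, in armv8pmu_enable_event_counter and
armv8pmu_disable_event_counter):

	/* Hypothetical caller; BIT(n) is the kernel's (1UL << (n)) helper. */
	static void example_track_counters(void)
	{
		/* Mark counters 0 and 1 host-only, counter 2 guest-only. */
		kvm_set_pmu_events(BIT(0) | BIT(1), KVM_PMU_EVENTS_HOST);
		kvm_set_pmu_events(BIT(2), KVM_PMU_EVENTS_GUEST);

		/*
		 * The per-CPU kvm_host_data now holds
		 * pmu_events.events_host == 0x3 and events_guest == 0x4;
		 * the hyp switch code consumes these masks at guest
		 * entry/exit.
		 */

		/* Stop tracking all three counters again. */
		kvm_clr_pmu_events(BIT(0) | BIT(1) | BIT(2));
	}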
From patchwork Mon Jan 14 16:11:47 2019
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
 Mark Rutland
Cc: Julien Thierry, Suzuki K Poulose, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v10 4/5] arm64: arm_pmu: Add support for exclude_host/exclude_guest attributes
Date: Mon, 14 Jan 2019 16:11:47 +0000
Message-Id: <1547482308-29839-5-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>
References: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>

Add support for the :G and :H attributes in perf by handling the
exclude_host/exclude_guest event attributes.

We notify KVM of the counters that we wish to be enabled or disabled
on guest entry/exit, and thus defer starting or stopping :G events
according to the event's exclude_host attribute.

With both VHE and non-VHE we switch the counters between host/guest at
EL2. When using :G we are able to eliminate counting host events on
the boundaries of guest entry/exit by filtering out EL2 for
exclude_host. However, when using :H on non-VHE, unless exclude_hv is
also set there is a small blackout window at guest entry/exit where
host events are not captured.
Signed-off-by: Andrew Murray
Reviewed-by: Suzuki K Poulose
---
 arch/arm64/kernel/perf_event.c | 53 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 46 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 1c71796..21c6831 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -26,6 +26,7 @@
 #include <linux/acpi.h>
 #include <linux/clocksource.h>
+#include <linux/kvm_host.h>
 #include <linux/of.h>
 #include <linux/perf/arm_pmu.h>
 #include <linux/platform_device.h>
 
@@ -528,11 +529,27 @@ static inline int armv8pmu_enable_counter(int idx)
 
 static inline void armv8pmu_enable_event_counter(struct perf_event *event)
 {
+	struct perf_event_attr *attr = &event->attr;
 	int idx = event->hw.idx;
+	int flags = 0;
+	u32 counter_bits = BIT(ARMV8_IDX_TO_COUNTER(idx));
 
-	armv8pmu_enable_counter(idx);
 	if (armv8pmu_event_is_chained(event))
-		armv8pmu_enable_counter(idx - 1);
+		counter_bits |= BIT(ARMV8_IDX_TO_COUNTER(idx - 1));
+
+	if (!attr->exclude_host)
+		flags |= KVM_PMU_EVENTS_HOST;
+	if (!attr->exclude_guest)
+		flags |= KVM_PMU_EVENTS_GUEST;
+
+	kvm_set_pmu_events(counter_bits, flags);
+
+	/* We rely on the hypervisor switch code to enable guest counters */
+	if (!attr->exclude_host) {
+		armv8pmu_enable_counter(idx);
+		if (armv8pmu_event_is_chained(event))
+			armv8pmu_enable_counter(idx - 1);
+	}
 }
 
 static inline int armv8pmu_disable_counter(int idx)
@@ -545,11 +562,21 @@ static inline int armv8pmu_disable_counter(int idx)
 static inline void armv8pmu_disable_event_counter(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
+	struct perf_event_attr *attr = &event->attr;
 	int idx = hwc->idx;
+	u32 counter_bits = BIT(ARMV8_IDX_TO_COUNTER(idx));
 
 	if (armv8pmu_event_is_chained(event))
-		armv8pmu_disable_counter(idx - 1);
-	armv8pmu_disable_counter(idx);
+		counter_bits |= BIT(ARMV8_IDX_TO_COUNTER(idx - 1));
+
+	kvm_clr_pmu_events(counter_bits);
+
+	/* We rely on the hypervisor switch code to disable guest counters */
+	if (!attr->exclude_host) {
+		if (armv8pmu_event_is_chained(event))
+			armv8pmu_disable_counter(idx - 1);
+		armv8pmu_disable_counter(idx);
+	}
 }
 
 static inline int armv8pmu_enable_intens(int idx)
@@ -824,16 +851,25 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	 * Therefore we ignore exclude_hv in this configuration, since
 	 * there's no hypervisor to sample anyway. This is consistent
 	 * with other architectures (x86 and Power).
+	 *
+	 * To eliminate counting host events on the boundaries of
+	 * guest entry/exit we ensure EL2 is not included in hyp mode
+	 * with !exclude_host.
 	 */
 	if (is_kernel_in_hyp_mode()) {
-		if (!attr->exclude_kernel)
+		if (!attr->exclude_kernel && !attr->exclude_host)
 			config_base |= ARMV8_PMU_INCLUDE_EL2;
 	} else {
-		if (attr->exclude_kernel)
-			config_base |= ARMV8_PMU_EXCLUDE_EL1;
 		if (!attr->exclude_hv)
 			config_base |= ARMV8_PMU_INCLUDE_EL2;
 	}
+
+	/*
+	 * Filter out !VHE kernels and guest kernels
+	 */
+	if (attr->exclude_kernel)
+		config_base |= ARMV8_PMU_EXCLUDE_EL1;
+
 	if (attr->exclude_user)
 		config_base |= ARMV8_PMU_EXCLUDE_EL0;
@@ -863,6 +899,9 @@ static void armv8pmu_reset(void *info)
 		armv8pmu_disable_intens(idx);
 	}
 
+	/* Clear the counters we flip at guest entry/exit */
+	kvm_clr_pmu_events(U32_MAX);
+
 	/*
 	 * Initialize & Reset PMNC. Request overflow interrupt for
 	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
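For readers mapping this back to the perf tool: the :G/:H event
modifiers are what set these attributes. As I understand the perf
semantics (my summary, not from the patch, and subject to the perf
tool's own defaults): "cycles:H" sets exclude_guest, "cycles:G" sets
exclude_host, and an unmodified event may count on both sides. A small
sketch of the resulting flag computation, mirroring the logic added
above:

	/*
	 * Illustrative helper (not in the patch): derive the KVM
	 * tracking flags from the two exclude bits, as
	 * armv8pmu_enable_event_counter now does inline.
	 */
	static int example_event_flags(const struct perf_event_attr *attr)
	{
		int flags = 0;

		if (!attr->exclude_host)	/* e.g. "cycles" or "cycles:H" */
			flags |= KVM_PMU_EVENTS_HOST;
		if (!attr->exclude_guest)	/* e.g. "cycles" or "cycles:G" */
			flags |= KVM_PMU_EVENTS_GUEST;

		return flags;
	}

Note that a :G event ends up with flags == KVM_PMU_EVENTS_GUEST only,
so its counter bit is recorded in events_guest but the counter itself
is not enabled here; the hypervisor switch code in the next patch
enables it at guest entry.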
From patchwork Mon Jan 14 16:11:48 2019
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
 Mark Rutland
Cc: Julien Thierry, Suzuki K Poulose, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v10 5/5] arm64: KVM: Enable support for :G/:H perf event modifiers
Date: Mon, 14 Jan 2019 16:11:48 +0000
Message-Id: <1547482308-29839-6-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>
References: <1547482308-29839-1-git-send-email-andrew.murray@arm.com>
Enable/disable event counters as appropriate when entering and exiting
the guest to enable support for guest or host only event counting.

For both VHE and non-VHE we switch the counters between host/guest at
EL2. EL2 is filtered out by the PMU when we are using the :G modifier.

The PMU may be on when we change which counters are enabled; however,
we avoid adding an isb here because we rely on existing context
synchronisation events: the isb in kvm_arm_vhe_guest_exit for VHE, and
the eret from the hvc in kvm_call_hyp for non-VHE.

Signed-off-by: Andrew Murray
Reviewed-by: Suzuki K Poulose
---
 arch/arm64/kvm/hyp/switch.c | 60 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index b0b1478..9018fb3 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -357,6 +357,54 @@ static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
+{
+	struct kvm_host_data *host;
+	struct kvm_pmu_events *pmu;
+	u32 clr, set;
+
+	host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
+	pmu = &host->pmu_events;
+
+	/* We can potentially avoid a sysreg write by only changing bits that
+	 * differ between the guest/host. E.g. where events are enabled in
+	 * both guest and host
+	 */
+	clr = pmu->events_host & ~pmu->events_guest;
+	set = pmu->events_guest & ~pmu->events_host;
+
+	if (clr)
+		write_sysreg(clr, pmcntenclr_el0);
+
+	if (set)
+		write_sysreg(set, pmcntenset_el0);
+
+	return (clr || set);
+}
+
+static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
+{
+	struct kvm_host_data *host;
+	struct kvm_pmu_events *pmu;
+	u32 clr, set;
+
+	host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
+	pmu = &host->pmu_events;
+
+	/* We can potentially avoid a sysreg write by only changing bits that
+	 * differ between the guest/host. E.g. where events are enabled in
+	 * both guest and host
+	 */
+	clr = pmu->events_guest & ~pmu->events_host;
+	set = pmu->events_host & ~pmu->events_guest;
+
+	if (clr)
+		write_sysreg(clr, pmcntenclr_el0);
+
+	if (set)
+		write_sysreg(set, pmcntenset_el0);
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -464,12 +512,15 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
+	bool pmu_switch_needed;
 	u64 exit_code;
 
 	host_ctxt = vcpu->arch.host_cpu_context;
 	host_ctxt->__hyp_running_vcpu = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
+	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+
 	sysreg_save_host_state_vhe(host_ctxt);
 
 	/*
@@ -511,6 +562,9 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	__debug_switch_to_host(vcpu);
 
+	if (pmu_switch_needed)
+		__pmu_switch_to_host(host_ctxt);
+
 	return exit_code;
 }
 
@@ -519,6 +573,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
+	bool pmu_switch_needed;
 	u64 exit_code;
 
 	vcpu = kern_hyp_va(vcpu);
@@ -527,6 +582,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	host_ctxt->__hyp_running_vcpu = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
+	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+
 	__sysreg_save_state_nvhe(host_ctxt);
 
 	__activate_vm(kern_hyp_va(vcpu->kvm));
@@ -573,6 +630,9 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	 */
 	__debug_switch_to_host(vcpu);
 
+	if (pmu_switch_needed)
+		__pmu_switch_to_host(host_ctxt);
+
 	return exit_code;
 }
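A worked example of the clr/set arithmetic above, with made-up masks:
suppose counters 0 and 1 are enabled for the host and counters 1 and 2
for the guest.

	/* Illustrative values only; mirrors __pmu_switch_to_guest. */
	static void example_switch_math(void)
	{
		u32 events_host  = 0x3;	/* counters 0,1 count in the host  */
		u32 events_guest = 0x6;	/* counters 1,2 count in the guest */

		/* Guest entry: */
		u32 clr = events_host & ~events_guest;	/* 0x1: stop counter 0  */
		u32 set = events_guest & ~events_host;	/* 0x4: start counter 2 */

		/*
		 * Counter 1 is enabled on both sides, so neither
		 * pmcntenclr_el0 nor pmcntenset_el0 needs to touch it.
		 * If the two masks were identical, clr == set == 0,
		 * both sysreg writes would be skipped, and
		 * __pmu_switch_to_guest would return false, letting the
		 * run loop skip __pmu_switch_to_host on the way back.
		 */
		(void)clr;
		(void)set;
	}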