From patchwork Fri Apr 8 08:40:50 2022
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12806295
Date: Fri, 8 Apr 2022 09:40:50 +0100
In-Reply-To: <20220408084052.3310931-1-tabba@google.com>
Message-Id: <20220408084052.3310931-2-tabba@google.com>
References: <20220408084052.3310931-1-tabba@google.com>
Subject: [PATCH v1 1/3] KVM: arm64: Wrapper for getting pmu_events
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, qperret@google.com, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 drjones@redhat.com, linux-arm-kernel@lists.infradead.org, tabba@google.com,
 kernel-team@android.com

Add a wrapper for getting the pmu_events of the current cpu rather than
open-coding the access to the hyp per-cpu kvm_host_data. This eases
migrating away from using hyp data and simplifies the code.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/pmu.c | 42 ++++++++++++++++++++++++++----------------
 1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 03a6c1f4a09a..310d47c9990f 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -25,21 +25,31 @@ static bool kvm_pmu_switch_needed(struct perf_event_attr *attr)
 	return (attr->exclude_host != attr->exclude_guest);
 }
 
+static struct kvm_pmu_events *get_kvm_pmu_events(void)
+{
+	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+
+	if (!ctx)
+		return NULL;
+
+	return &ctx->pmu_events;
+}
+
 /*
  * Add events to track that we may want to switch at guest entry/exit
  * time.
  */
 void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+	struct kvm_pmu_events *pmu = get_kvm_pmu_events();
 
-	if (!kvm_arm_support_pmu_v3() || !ctx || !kvm_pmu_switch_needed(attr))
+	if (!kvm_arm_support_pmu_v3() || !pmu || !kvm_pmu_switch_needed(attr))
 		return;
 
 	if (!attr->exclude_host)
-		ctx->pmu_events.events_host |= set;
+		pmu->events_host |= set;
 	if (!attr->exclude_guest)
-		ctx->pmu_events.events_guest |= set;
+		pmu->events_guest |= set;
 }
 
 /*
@@ -47,13 +57,13 @@ void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
  */
 void kvm_clr_pmu_events(u32 clr)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+	struct kvm_pmu_events *pmu = get_kvm_pmu_events();
 
-	if (!kvm_arm_support_pmu_v3() || !ctx)
+	if (!kvm_arm_support_pmu_v3() || !pmu)
 		return;
 
-	ctx->pmu_events.events_host &= ~clr;
-	ctx->pmu_events.events_guest &= ~clr;
+	pmu->events_host &= ~clr;
+	pmu->events_guest &= ~clr;
 }
 
 #define PMEVTYPER_READ_CASE(idx)				\
@@ -169,16 +179,16 @@ static void kvm_vcpu_pmu_disable_el0(unsigned long events)
  */
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
 {
-	struct kvm_host_data *host;
+	struct kvm_pmu_events *pmu;
 	u32 events_guest, events_host;
 
 	if (!kvm_arm_support_pmu_v3() || !has_vhe())
 		return;
 
 	preempt_disable();
-	host = this_cpu_ptr_hyp_sym(kvm_host_data);
-	events_guest = host->pmu_events.events_guest;
-	events_host = host->pmu_events.events_host;
+	pmu = get_kvm_pmu_events();
+	events_guest = pmu->events_guest;
+	events_host = pmu->events_host;
 
 	kvm_vcpu_pmu_enable_el0(events_guest);
 	kvm_vcpu_pmu_disable_el0(events_host);
@@ -190,15 +200,15 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
  */
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
 {
-	struct kvm_host_data *host;
+	struct kvm_pmu_events *pmu;
 	u32 events_guest, events_host;
 
 	if (!kvm_arm_support_pmu_v3() || !has_vhe())
 		return;
 
-	host = this_cpu_ptr_hyp_sym(kvm_host_data);
-	events_guest = host->pmu_events.events_guest;
-	events_host = host->pmu_events.events_host;
+	pmu = get_kvm_pmu_events();
+	events_guest = pmu->events_guest;
+	events_host = pmu->events_host;
 
 	kvm_vcpu_pmu_enable_el0(events_host);
 	kvm_vcpu_pmu_disable_el0(events_guest);
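
The hunks above are a plain accessor refactor: callers stop reaching into
kvm_host_data->pmu_events directly and go through get_kvm_pmu_events()
instead, which also centralises the NULL check. For readers following along
outside the kernel tree, a minimal self-contained C sketch of the same
pattern is below; the names (host_data, pmu_events_of, set_events) are
illustrative stand-ins, not the kernel's API.

#include <stdint.h>
#include <stdio.h>

struct pmu_events { uint32_t events_host, events_guest; };

/* Container that callers used to reach into directly. */
struct host_data {
	struct pmu_events pmu_events;
};

/* May be NULL, as the hyp per-cpu pointer can be before init. */
static struct host_data *this_cpu_host_data;

/* The wrapper: callers ask for the events struct, not the container. */
static struct pmu_events *pmu_events_of(void)
{
	struct host_data *ctx = this_cpu_host_data;

	return ctx ? &ctx->pmu_events : NULL;
}

static void set_events(uint32_t set, int exclude_host, int exclude_guest)
{
	struct pmu_events *pmu = pmu_events_of();

	if (!pmu)
		return;
	if (!exclude_host)
		pmu->events_host |= set;
	if (!exclude_guest)
		pmu->events_guest |= set;
}

int main(void)
{
	struct host_data data = { { 0, 0 } };

	this_cpu_host_data = &data;
	set_events(0x1, /*exclude_host=*/1, /*exclude_guest=*/0);
	printf("host=%#x guest=%#x\n",
	       (unsigned)data.pmu_events.events_host,
	       (unsigned)data.pmu_events.events_guest);
	return 0;
}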
From patchwork Fri Apr 8 08:40:51 2022
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12806296
Date: Fri, 8 Apr 2022 09:40:51 +0100
In-Reply-To: <20220408084052.3310931-1-tabba@google.com>
Message-Id: <20220408084052.3310931-3-tabba@google.com>
References: <20220408084052.3310931-1-tabba@google.com>
Subject: [PATCH v1 2/3] KVM: arm64: Pass pmu events to hyp via vcpu
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, qperret@google.com, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 drjones@redhat.com, linux-arm-kernel@lists.infradead.org, tabba@google.com,
 kernel-team@android.com

Instead of accessing hyp data, pass the pmu events of the current cpu to
hyp via the loaded vcpu.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_host.h |  8 ++------
 arch/arm64/kvm/arm.c              |  2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c  | 20 ++++++--------------
 arch/arm64/kvm/pmu.c              | 22 +++++++++++++---------
 include/kvm/arm_pmu.h             |  6 ++++++
 5 files changed, 28 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0e96087885fe..b5cdfb6cb9c7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -244,14 +244,8 @@ struct kvm_cpu_context {
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
-struct kvm_pmu_events {
-	u32 events_host;
-	u32 events_guest;
-};
-
 struct kvm_host_data {
 	struct kvm_cpu_context host_ctxt;
-	struct kvm_pmu_events pmu_events;
 };
 
 struct kvm_host_psci_config {
@@ -728,6 +722,7 @@ void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
 
 DECLARE_KVM_HYP_PER_CPU(struct kvm_host_data, kvm_host_data);
+DECLARE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
 static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt)
 {
@@ -781,6 +776,7 @@ void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu);
 void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr);
 void kvm_clr_pmu_events(u32 clr);
 
+void kvm_vcpu_pmu_load(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 #else
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ba9165e84396..e6f76d843558 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -400,7 +400,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (has_vhe())
 		kvm_vcpu_load_sysregs_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
-	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_vcpu_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6410d21d8695..ff7b29fb9787 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -123,13 +123,9 @@ static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 /**
  * Disable host events, enable guest events
  */
-static bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
+static bool __pmu_switch_to_guest(struct kvm_vcpu *vcpu)
 {
-	struct kvm_host_data *host;
-	struct kvm_pmu_events *pmu;
-
-	host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
-	pmu = &host->pmu_events;
+	struct kvm_pmu_events *pmu = &vcpu->arch.pmu.events;
 
 	if (pmu->events_host)
 		write_sysreg(pmu->events_host, pmcntenclr_el0);
@@ -143,13 +139,9 @@ static bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
 /**
  * Disable guest events, enable host events
  */
-static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
+static void __pmu_switch_to_host(struct kvm_vcpu *vcpu)
 {
-	struct kvm_host_data *host;
-	struct kvm_pmu_events *pmu;
-
-	host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
-	pmu = &host->pmu_events;
+	struct kvm_pmu_events *pmu = &vcpu->arch.pmu.events;
 
 	if (pmu->events_guest)
 		write_sysreg(pmu->events_guest, pmcntenclr_el0);
@@ -274,7 +266,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	host_ctxt->__hyp_running_vcpu = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
-	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+	pmu_switch_needed = __pmu_switch_to_guest(vcpu);
 
 	__sysreg_save_state_nvhe(host_ctxt);
 	/*
@@ -336,7 +328,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__debug_restore_host_buffers_nvhe(vcpu);
 
 	if (pmu_switch_needed)
-		__pmu_switch_to_host(host_ctxt);
+		__pmu_switch_to_host(vcpu);
 
 	/* Returning to host will clear PSR.I, remask PMR if needed */
 	if (system_uses_irq_prio_masking())
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 310d47c9990f..8f722692fb58 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -5,7 +5,8 @@
  */
 #include
 #include
-#include
+
+DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
 /*
  * Given the perf event attributes and system type, determine
@@ -27,12 +28,7 @@ static bool kvm_pmu_switch_needed(struct perf_event_attr *attr)
 
 static struct kvm_pmu_events *get_kvm_pmu_events(void)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
-
-	if (!ctx)
-		return NULL;
-
-	return &ctx->pmu_events;
+	return this_cpu_ptr(&kvm_pmu_events);
 }
 
 /*
@@ -43,7 +39,7 @@ void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
 {
 	struct kvm_pmu_events *pmu = get_kvm_pmu_events();
 
-	if (!kvm_arm_support_pmu_v3() || !pmu || !kvm_pmu_switch_needed(attr))
+	if (!kvm_arm_support_pmu_v3() || !kvm_pmu_switch_needed(attr))
 		return;
 
 	if (!attr->exclude_host)
@@ -59,7 +55,7 @@ void kvm_clr_pmu_events(u32 clr)
 {
 	struct kvm_pmu_events *pmu = get_kvm_pmu_events();
 
-	if (!kvm_arm_support_pmu_v3() || !pmu)
+	if (!kvm_arm_support_pmu_v3())
 		return;
 
 	pmu->events_host &= ~clr;
@@ -213,3 +209,11 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
 	kvm_vcpu_pmu_enable_el0(events_host);
 	kvm_vcpu_pmu_disable_el0(events_guest);
 }
+
+void kvm_vcpu_pmu_load(struct kvm_vcpu *vcpu)
+{
+	kvm_vcpu_pmu_restore_guest(vcpu);
+
+	if (kvm_arm_support_pmu_v3() && !has_vhe())
+		vcpu->arch.pmu.events = *get_kvm_pmu_events();
+}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 20193416d214..0b3898e0313f 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -20,6 +20,11 @@ struct kvm_pmc {
 	struct perf_event *perf_event;
 };
 
+struct kvm_pmu_events {
+	u32 events_host;
+	u32 events_guest;
+};
+
 struct kvm_pmu {
 	int irq_num;
 	struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
@@ -27,6 +32,7 @@ struct kvm_pmu {
 	bool created;
 	bool irq_level;
 	struct irq_work overflow_work;
+	struct kvm_pmu_events events;
 };
 
 struct arm_pmu_entry {
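
Taken together, the hunks above move the source of truth from the hyp-mapped
kvm_host_data to a host per-cpu kvm_pmu_events, and hand hyp a snapshot via
the loaded vcpu: kvm_vcpu_pmu_load() copies the current cpu's events into
vcpu->arch.pmu.events at vcpu_load time, and the nVHE switch code only ever
reads that copy. A rough standalone C model of that flow follows; the types
and function names are illustrative, not the kernel's.

#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 2

struct pmu_events { uint32_t events_host, events_guest; };

/* Host-side per-cpu state, standing in for DEFINE_PER_CPU(kvm_pmu_events). */
static struct pmu_events percpu_events[NR_CPUS];

/* The vcpu carries its own copy, which is all "hyp" gets to see. */
struct vcpu {
	int cpu;
	struct pmu_events events;
};

/* Model of kvm_vcpu_pmu_load(): snapshot the current cpu's events. */
static void vcpu_pmu_load(struct vcpu *vcpu, int cpu)
{
	vcpu->cpu = cpu;
	vcpu->events = percpu_events[cpu];
}

/* Model of __pmu_switch_to_guest(): operate only on the vcpu's copy. */
static int pmu_switch_to_guest(struct vcpu *vcpu)
{
	struct pmu_events *pmu = &vcpu->events;

	if (pmu->events_host)
		printf("disable host counters %#x\n", (unsigned)pmu->events_host);
	if (pmu->events_guest)
		printf("enable guest counters %#x\n", (unsigned)pmu->events_guest);

	return pmu->events_host || pmu->events_guest;
}

int main(void)
{
	struct vcpu v = { 0, { 0, 0 } };

	percpu_events[0].events_host = 0x3;	/* host-only counters on cpu 0 */
	percpu_events[0].events_guest = 0x4;	/* guest-only counters on cpu 0 */

	vcpu_pmu_load(&v, 0);
	if (pmu_switch_to_guest(&v))
		printf("switch needed\n");
	return 0;
}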
From patchwork Fri Apr 8 08:40:52 2022
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12806297
Date: Fri, 8 Apr 2022 09:40:52 +0100
In-Reply-To: <20220408084052.3310931-1-tabba@google.com>
Message-Id: <20220408084052.3310931-4-tabba@google.com>
References: <20220408084052.3310931-1-tabba@google.com>
Subject: [PATCH v1 3/3] KVM: arm64: Reenable pmu in Protected Mode
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, qperret@google.com, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 drjones@redhat.com, linux-arm-kernel@lists.infradead.org, tabba@google.com,
 kernel-team@android.com

Now that the pmu code does not access hyp data, reenable it in
protected mode.

No functional change intended outside of protected mode.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/pmu-emul.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 78fdc443adc7..dc1779d4c7dd 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -756,8 +756,7 @@ void kvm_host_pmu_init(struct arm_pmu *pmu)
 {
 	struct arm_pmu_entry *entry;
 
-	if (pmu->pmuver == 0 || pmu->pmuver == ID_AA64DFR0_PMUVER_IMP_DEF ||
-	    is_protected_kvm_enabled())
+	if (pmu->pmuver == 0 || pmu->pmuver == ID_AA64DFR0_PMUVER_IMP_DEF)
 		return;
 
 	mutex_lock(&arm_pmus_lock);
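
The effect of dropping is_protected_kvm_enabled() from the check is that
kvm_host_pmu_init() no longer bails out in protected mode, so an architected
PMU gets registered there as well; only a missing or IMPLEMENTATION DEFINED
PMU version still causes an early return. A tiny before/after sketch, using
illustrative constants and names rather than the kernel's symbols:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins; not the kernel's definitions. */
#define PMUVER_NONE    0
#define PMUVER_IMP_DEF 0xf

static bool protected_mode = true;

/* Before the patch: registration bailed out in protected mode. */
static bool pmu_usable_old(int pmuver)
{
	return pmuver != PMUVER_NONE && pmuver != PMUVER_IMP_DEF &&
	       !protected_mode;
}

/* After the patch: only the PMU version still gates registration. */
static bool pmu_usable_new(int pmuver)
{
	return pmuver != PMUVER_NONE && pmuver != PMUVER_IMP_DEF;
}

int main(void)
{
	int pmuver = 6; /* e.g. an architected PMUv3 implementation */

	printf("old: %d, new: %d\n", pmu_usable_old(pmuver),
	       pmu_usable_new(pmuver));
	return 0;
}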