From patchwork Fri Apr 8 08:40:50 2022
X-Patchwork-Id: 12806295
From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, qperret@google.com, james.morse@arm.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
    drjones@redhat.com, linux-arm-kernel@lists.infradead.org, tabba@google.com,
    kernel-team@android.com
Subject: [PATCH v1 1/3] KVM: arm64: Wrapper for getting pmu_events
Date: Fri, 8 Apr 2022 09:40:50 +0100
Message-Id: <20220408084052.3310931-2-tabba@google.com>
In-Reply-To: <20220408084052.3310931-1-tabba@google.com>
References: <20220408084052.3310931-1-tabba@google.com>

Eases migrating away from using hyp data and simplifies the code.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/pmu.c | 42 ++++++++++++++++++++++++++----------------
 1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 03a6c1f4a09a..310d47c9990f 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -25,21 +25,31 @@ static bool kvm_pmu_switch_needed(struct perf_event_attr *attr)
 	return (attr->exclude_host != attr->exclude_guest);
 }
 
+static struct kvm_pmu_events *get_kvm_pmu_events(void)
+{
+	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+
+	if (!ctx)
+		return NULL;
+
+	return &ctx->pmu_events;
+}
+
 /*
  * Add events to track that we may want to switch at guest entry/exit
  * time.
  */
 void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+	struct kvm_pmu_events *pmu = get_kvm_pmu_events();
 
-	if (!kvm_arm_support_pmu_v3() || !ctx || !kvm_pmu_switch_needed(attr))
+	if (!kvm_arm_support_pmu_v3() || !pmu || !kvm_pmu_switch_needed(attr))
 		return;
 
 	if (!attr->exclude_host)
-		ctx->pmu_events.events_host |= set;
+		pmu->events_host |= set;
 	if (!attr->exclude_guest)
-		ctx->pmu_events.events_guest |= set;
+		pmu->events_guest |= set;
 }
 
 /*
@@ -47,13 +57,13 @@ void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
  */
 void kvm_clr_pmu_events(u32 clr)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+	struct kvm_pmu_events *pmu = get_kvm_pmu_events();
 
-	if (!kvm_arm_support_pmu_v3() || !ctx)
+	if (!kvm_arm_support_pmu_v3() || !pmu)
 		return;
 
-	ctx->pmu_events.events_host &= ~clr;
-	ctx->pmu_events.events_guest &= ~clr;
+	pmu->events_host &= ~clr;
+	pmu->events_guest &= ~clr;
 }
 
 #define PMEVTYPER_READ_CASE(idx)				\
@@ -169,16 +179,16 @@ static void kvm_vcpu_pmu_disable_el0(unsigned long events)
  */
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
 {
-	struct kvm_host_data *host;
+	struct kvm_pmu_events *pmu;
 	u32 events_guest, events_host;
 
 	if (!kvm_arm_support_pmu_v3() || !has_vhe())
 		return;
 
 	preempt_disable();
-	host = this_cpu_ptr_hyp_sym(kvm_host_data);
-	events_guest = host->pmu_events.events_guest;
-	events_host = host->pmu_events.events_host;
+	pmu = get_kvm_pmu_events();
+	events_guest = pmu->events_guest;
+	events_host = pmu->events_host;
 
 	kvm_vcpu_pmu_enable_el0(events_guest);
 	kvm_vcpu_pmu_disable_el0(events_host);
@@ -190,15 +200,15 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
  */
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
 {
-	struct kvm_host_data *host;
+	struct kvm_pmu_events *pmu;
 	u32 events_guest, events_host;
 
 	if (!kvm_arm_support_pmu_v3() || !has_vhe())
 		return;
 
-	host = this_cpu_ptr_hyp_sym(kvm_host_data);
-	events_guest = host->pmu_events.events_guest;
-	events_host = host->pmu_events.events_host;
+	pmu = get_kvm_pmu_events();
+	events_guest = pmu->events_guest;
+	events_host = pmu->events_host;
 
 	kvm_vcpu_pmu_enable_el0(events_host);
 	kvm_vcpu_pmu_disable_el0(events_guest);
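[Editorial illustration, not part of the patch.] For readers who want to see the
resulting calling convention in isolation, below is a minimal, self-contained C
sketch of the wrapper pattern this patch introduces. It is not kernel code: the
per-CPU hyp context normally obtained via this_cpu_ptr_hyp_sym() is faked with a
static variable, and set_pmu_events()/clr_pmu_events() are simplified stand-ins
for the kvm_* functions in the diff. The point it shows is that callers only ever
handle a struct kvm_pmu_events pointer, so a later patch can relocate that
structure away from the hyp data by changing get_kvm_pmu_events() alone.

/*
 * Standalone sketch of the get_kvm_pmu_events() wrapper pattern.
 * The per-CPU hyp context is replaced by a plain static variable so
 * the example compiles and runs outside the kernel.
 */
#include <stdint.h>
#include <stdio.h>

struct kvm_pmu_events {
	uint32_t events_host;
	uint32_t events_guest;
};

struct kvm_host_data {
	struct kvm_pmu_events pmu_events;
};

/* Stand-in for the per-CPU kvm_host_data the kernel would use. */
static struct kvm_host_data host_data;

static struct kvm_host_data *fake_host_data_ptr(void)
{
	return &host_data;
}

/*
 * The wrapper: the only place that knows the events live inside
 * kvm_host_data.  Moving the structure later only requires changing
 * this function, not its callers.
 */
static struct kvm_pmu_events *get_kvm_pmu_events(void)
{
	struct kvm_host_data *ctx = fake_host_data_ptr();

	if (!ctx)
		return NULL;

	return &ctx->pmu_events;
}

/* Simplified stand-in for kvm_set_pmu_events(). */
static void set_pmu_events(uint32_t set, int exclude_host, int exclude_guest)
{
	struct kvm_pmu_events *pmu = get_kvm_pmu_events();

	if (!pmu)
		return;

	if (!exclude_host)
		pmu->events_host |= set;
	if (!exclude_guest)
		pmu->events_guest |= set;
}

/* Simplified stand-in for kvm_clr_pmu_events(). */
static void clr_pmu_events(uint32_t clr)
{
	struct kvm_pmu_events *pmu = get_kvm_pmu_events();

	if (!pmu)
		return;

	pmu->events_host &= ~clr;
	pmu->events_guest &= ~clr;
}

int main(void)
{
	set_pmu_events(0x3, /*exclude_host=*/0, /*exclude_guest=*/1);
	clr_pmu_events(0x1);
	printf("host=0x%x guest=0x%x\n",
	       (unsigned int)get_kvm_pmu_events()->events_host,
	       (unsigned int)get_kvm_pmu_events()->events_guest);
	return 0;
}

Running the sketch prints "host=0x2 guest=0x0", matching the set-then-clear
sequence in main().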