From patchwork Fri Feb 3 04:20:46 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13126992
Date: Thu, 2 Feb 2023 20:20:46 -0800
In-Reply-To: <20230203042056.1794649-1-reijiw@google.com>
References: <20230203042056.1794649-1-reijiw@google.com>
Message-ID: <20230203042056.1794649-3-reijiw@google.com>
Subject: [PATCH
v3 04/14] KVM: arm64: PMU: Don't use the PMUVer of the PMU set for the guest
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse, Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Shaoqin Huang, Reiji Watanabe

KVM uses two potentially different PMUVer values for a vCPU with a PMU
configured (kvm->arch.dfr0_pmuver.imp and kvm->arch.arm_pmu->pmuver).
Stop using the host's PMUVer (arm_pmu->pmuver) in most cases, as the
PMUVer for the guest (kvm->arch.dfr0_pmuver.imp) could be set by
userspace (and could be lower than the host's PMUVer).

The only exception where KVM keeps using the host's PMUVer is the
creation of an event filter (KVM_ARM_VCPU_PMU_V3_FILTER). There, KVM
uses the value both to determine the valid range of event numbers and
to size the event filter bitmap. Using the host's PMUVer here lets KVM
stay compatible with the current behavior of PMU_V3_FILTER, and lets it
preserve the entire filter when the PMUVer for the guest changes: KVM
then only needs to change the actual range in use.
Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 49580787ee09..701728ad78d6 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -35,12 +35,8 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
+static u32 __kvm_pmu_event_mask(u8 pmuver)
 {
-	unsigned int pmuver;
-
-	pmuver = kvm->arch.arm_pmu->pmuver;
-
 	switch (pmuver) {
 	case ID_AA64DFR0_EL1_PMUVer_IMP:
 		return GENMASK(9, 0);
@@ -55,6 +51,11 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 	}
 }
 
+static u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	return __kvm_pmu_event_mask(kvm->arch.dfr0_pmuver.imp);
+}
+
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -755,7 +756,7 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 		 * Don't advertise STALL_SLOT, as PMMIR_EL0 is handled
 		 * as RAZ
 		 */
-		if (vcpu->kvm->arch.arm_pmu->pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4)
+		if (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P4)
 			val &= ~BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32);
 		base = 32;
 	}
@@ -955,7 +956,12 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		struct kvm_pmu_event_filter filter;
 		int nr_events;
 
-		nr_events = kvm_pmu_event_mask(kvm) + 1;
+		/*
+		 * Allocate an event filter for the entire range supported
+		 * by the PMU hardware so we can simply change the actual
+		 * range of use when the PMUVer for the guest is changed.
+		 */
+		nr_events = __kvm_pmu_event_mask(kvm->arch.dfr0_pmuver.imp_limit) + 1;
 
 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;