From patchwork Sat Aug 19 04:39:45 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13358507
Date: Fri, 18 Aug 2023 21:39:45 -0700
In-Reply-To: <20230819043947.4100985-1-reijiw@google.com>
Mime-Version: 1.0
References: <20230819043947.4100985-1-reijiw@google.com>
X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog
Message-ID: <20230819043947.4100985-3-reijiw@google.com>
Subject: [PATCH v3 2/4] KVM: arm64: PMU: Avoid inappropriate use of host's PMUVer
From: Reiji Watanabe
To: Marc Zyngier, Oliver Upton, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
 Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Jing Zhang,
 Raghavendra Rao Anata, Reiji Watanabe

Avoid using the PMUVer of the host's PMU hardware to determine the PMU
event mask, except in one case, as the host's PMUVer may differ from the
guest's ID_AA64DFR0_EL1.PMUVer.

The exception is when using the PMUVer to determine the valid range of
events for KVM_ARM_VCPU_PMU_V3_FILTER, as KVM has historically allowed
userspace to specify any event that is valid for the PMU hardware,
regardless of the guest's ID_AA64DFR0_EL1.PMUVer. KVM will still use the
range of events derived from the guest's ID_AA64DFR0_EL1.PMUVer to
effectively filter events that the guest attempts to program, though.
Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 689bbd88fd69..eaeb8fea7971 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -36,12 +36,8 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
+static u32 __kvm_pmu_event_mask(unsigned int pmuver)
 {
-	unsigned int pmuver;
-
-	pmuver = kvm->arch.arm_pmu->pmuver;
-
 	switch (pmuver) {
 	case ID_AA64DFR0_EL1_PMUVer_IMP:
 		return GENMASK(9, 0);
@@ -56,6 +52,14 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 	}
 }
 
+static u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	u64 dfr0 = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
+	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
+
+	return __kvm_pmu_event_mask(pmuver);
+}
+
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -954,11 +958,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return 0;
 	}
 	case KVM_ARM_VCPU_PMU_V3_FILTER: {
+		u8 pmuver = kvm_arm_pmu_get_pmuver_limit();
 		struct kvm_pmu_event_filter __user *uaddr;
 		struct kvm_pmu_event_filter filter;
 		int nr_events;
 
-		nr_events = kvm_pmu_event_mask(kvm) + 1;
+		/*
+		 * Allow userspace to specify an event filter for the entire
+		 * event range supported by PMUVer of the hardware, rather
+		 * than the guest's PMUVer for KVM backward compatibility.
+		 */
+		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;