From patchwork Mon Jan 18 17:30:54 2021
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 12027943
From: Andre Przywara <andre.przywara@arm.com>
To: Marc Zyngier, Will Deacon, Catalin Marinas
Subject: [RFC PATCH] KVM: arm64: Avoid unconditional PMU register access
Date: Mon, 18 Jan 2021 17:30:54 +0000
Message-Id: <20210118173054.188160-1-andre.przywara@arm.com>
Cc: Suzuki K Poulose, James Morse, Julien Thierry, Haibo Xu,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org

The ARM PMU is an optional CPU feature, so we should consult the CPUID
registers before accessing any PMU-related registers. However, the KVM
code currently accesses some PMU registers (PMUSERENR_EL0 and
PMSELR_EL0) unconditionally when activating the traps.

This hasn't been a problem so far, because every(?) silicon
implementation includes the PMU, with KVM guests being the lone
exception, and those never ran the KVM host code. As this is about to
change with nested virt, we need to guard PMU register accesses with a
proper CPU feature check.

Add a new CPU capability which marks whether we have at least the basic
PMUv3 registers available. Use that in the KVM VHE code to check before
accessing the PMU registers.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
Hi,

I wonder whether a new CPU capability is a bit over the top here, and
whether we should use a simple static key instead? (A rough sketch of
that alternative is appended after the diff, for reference.)

Cheers,
Andre

 arch/arm64/include/asm/cpucaps.h        |  3 ++-
 arch/arm64/kernel/cpufeature.c          | 10 ++++++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h |  9 ++++++---
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index b77d997b173b..e3a002583c43 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -66,7 +66,8 @@
 #define ARM64_WORKAROUND_1508412		58
 #define ARM64_HAS_LDAPR				59
 #define ARM64_KVM_PROTECTED_MODE		60
+#define ARM64_HAS_PMUV3				61
 
-#define ARM64_NCAPS				61
+#define ARM64_NCAPS				62
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e99eddec0a46..54d23d38322d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2154,6 +2154,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.min_field_value = 1,
 	},
+	{
+		.desc = "ARM PMUv3 support",
+		.capability = ARM64_HAS_PMUV3,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64DFR0_EL1,
+		.sign = FTR_SIGNED,
+		.field_pos = ID_AA64DFR0_PMUVER_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+	},
 	{},
 };
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 84473574c2e7..622baf7b7371 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -90,15 +90,18 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
	 * counter, which could make a PMXEVCNTR_EL0 access UNDEF at
	 * EL1 instead of being trapped to EL2.
	 */
-	write_sysreg(0, pmselr_el0);
-	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+	if (cpus_have_final_cap(ARM64_HAS_PMUV3)) {
+		write_sysreg(0, pmselr_el0);
+		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+	}
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
 static inline void __deactivate_traps_common(void)
 {
 	write_sysreg(0, hstr_el2);
-	write_sysreg(0, pmuserenr_el0);
+	if (cpus_have_final_cap(ARM64_HAS_PMUV3))
+		write_sysreg(0, pmuserenr_el0);
 }
 
 static inline void ___activate_traps(struct kvm_vcpu *vcpu)
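
For reference, here is a rough sketch of what the static key alternative
mentioned above could look like. This is illustration only, not part of
the patch: the key name (arm64_pmuv3_key) and the enable hook are made
up. The PMUVer field is extracted as a signed value so that the
IMPLEMENTATION DEFINED encoding (0xf, which then reads as -1) does not
count as PMUv3, mirroring the FTR_SIGNED/.min_field_value = 1 matching
in the capability above:

	#include <linux/jump_label.h>

	/* Hypothetical key, defined once, e.g. in cpufeature.c. */
	DEFINE_STATIC_KEY_FALSE(arm64_pmuv3_key);

	/* Flip the key once the sanitised ID registers are final. */
	static void __init arm64_enable_pmuv3_key(void)
	{
		u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);

		/* PMUVer >= 1 means at least PMUv3; IMPDEF reads as -1. */
		if (cpuid_feature_extract_signed_field(dfr0,
						       ID_AA64DFR0_PMUVER_SHIFT) > 0)
			static_branch_enable(&arm64_pmuv3_key);
	}

The trap (de)activation code would then test the key instead of the
capability:

	if (static_branch_likely(&arm64_pmuv3_key)) {
		write_sysreg(0, pmselr_el0);
		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
	}

Runtime-wise the two should be equivalent, since cpus_have_final_cap()
also ends up as a boot-patched static branch; the capability mainly buys
integration with the sanitised feature register framework.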