From patchwork Wed Aug 25 16:17:55 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458297
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 19/39] KVM: arm64: Do not emulate SPE on CPUs which
	don't have SPE
Date: Wed, 25 Aug 2021 17:17:55 +0100
Message-Id: <20210825161815.266051-20-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>
MIME-Version: 1.0

The kernel allows heterogeneous
systems where FEAT_SPE is not present on all CPUs. This presents a
challenge for KVM, as it will have to touch the SPE registers when
emulating SPE for a guest, and those accesses will cause an undefined
instruction exception if SPE is not present on the CPU.

Avoid this situation by comparing the cpumask of the physical CPUs that
support SPE with the cpu list provided by userspace via the
KVM_ARM_VCPU_SUPPORTED_CPUS ioctl and refusing to run the VCPU if there
is a mismatch.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_spe.h |  2 ++
 arch/arm64/kvm/arm.c             |  3 +++
 arch/arm64/kvm/spe.c             | 12 ++++++++++++
 3 files changed, 17 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
index 328115ce0b48..ed67ddbf8132 100644
--- a/arch/arm64/include/asm/kvm_spe.h
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -16,11 +16,13 @@ static __always_inline bool kvm_supports_spe(void)
 
 void kvm_spe_init_supported_cpus(void);
 void kvm_spe_vm_init(struct kvm *kvm);
+int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu);
 
 #else
 
 #define kvm_supports_spe()	(false)
 
 static inline void kvm_spe_init_supported_cpus(void) {}
 static inline void kvm_spe_vm_init(struct kvm *kvm) {}
+static inline int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu) { return -ENOEXEC; }
 
 #endif /* CONFIG_KVM_ARM_SPE */
 
 #endif /* __ARM64_KVM_SPE_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 82cb7b5b3b45..8f7025f2e4a0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -633,6 +633,9 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	if (!kvm_arm_vcpu_is_finalized(vcpu))
 		return -EPERM;
 
+	if (kvm_vcpu_has_spe(vcpu) && kvm_spe_check_supported_cpus(vcpu))
+		return -EPERM;
+
 	vcpu->arch.has_run_once = true;
 
 	kvm_arm_vcpu_init_debug(vcpu);
diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c
index 83f92245f881..8d2afc137151 100644
--- a/arch/arm64/kvm/spe.c
+++ b/arch/arm64/kvm/spe.c
@@ -30,3 +30,15 @@ void kvm_spe_vm_init(struct kvm *kvm)
 	/* Set supported_cpus if it isn't already initialized. */
 	kvm_spe_init_supported_cpus();
 }
+
+int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu)
+{
+	/* SPE is supported on all CPUs, we don't care about the VCPU mask */
+	if (cpumask_equal(supported_cpus, cpu_possible_mask))
+		return 0;
+
+	if (!cpumask_subset(&vcpu->arch.supported_cpus, supported_cpus))
+		return -ENOEXEC;
+
+	return 0;
+}