From patchwork Fri Jun 21 09:37:48 2019
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 11009183
From: Marc Zyngier
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: Andre Przywara, Christoffer Dall, Dave Martin, Jintack Lim,
	Julien Thierry, James Morse, Suzuki K Poulose
Subject: [PATCH 04/59] KVM: arm64: nv: Introduce nested virtualization VCPU feature
Date: Fri, 21 Jun 2019 10:37:48 +0100
Message-Id: <20190621093843.220980-5-marc.zyngier@arm.com>
In-Reply-To: <20190621093843.220980-1-marc.zyngier@arm.com>
References: <20190621093843.220980-1-marc.zyngier@arm.com>

From: Christoffer Dall

Introduce the feature bit and a primitive that checks whether the
feature is set, behind a static key check based on cpus_have_const_cap().
Checking nested_virt_in_use() on systems without nested virt enabled
should therefore have negligible overhead.

We don't yet allow userspace to actually set this feature.
Signed-off-by: Christoffer Dall
Signed-off-by: Marc Zyngier
Reviewed-by: Julien Thierry
---
 arch/arm/include/asm/kvm_nested.h   |  9 +++++++++
 arch/arm64/include/asm/kvm_nested.h | 13 +++++++++++++
 arch/arm64/include/uapi/asm/kvm.h   |  1 +
 3 files changed, 23 insertions(+)
 create mode 100644 arch/arm/include/asm/kvm_nested.h
 create mode 100644 arch/arm64/include/asm/kvm_nested.h

diff --git a/arch/arm/include/asm/kvm_nested.h b/arch/arm/include/asm/kvm_nested.h
new file mode 100644
index 000000000000..124ff6445f8f
--- /dev/null
+++ b/arch/arm/include/asm/kvm_nested.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM_KVM_NESTED_H
+#define __ARM_KVM_NESTED_H
+
+#include <linux/kvm_host.h>
+
+static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu) { return false; }
+
+#endif /* __ARM_KVM_NESTED_H */
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
new file mode 100644
index 000000000000..8a3d121a0b42
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM64_KVM_NESTED_H
+#define __ARM64_KVM_NESTED_H
+
+#include <linux/kvm_host.h>
+
+static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
+{
+	return cpus_have_const_cap(ARM64_HAS_NESTED_VIRT) &&
+		test_bit(KVM_ARM_VCPU_NESTED_VIRT, vcpu->arch.features);
+}
+
+#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index d819a3e8b552..563e2a8bae93 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -106,6 +106,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
 #define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
 #define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
+#define KVM_ARM_VCPU_NESTED_VIRT	7 /* Support nested virtualization */
 
 struct kvm_vcpu_init {
 	__u32 target;
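
For illustration only, a minimal sketch (not part of this patch) of how
later code might gate nested-virt-specific paths behind the new
predicate. handle_nested_feature() is a hypothetical caller made up for
the example; nested_virt_in_use() and struct kvm_vcpu are as defined
above.

#include <linux/kvm_host.h>
#include <asm/kvm_nested.h>

/*
 * Hypothetical caller: refuse a nested-virt-only operation when the
 * feature is not enabled for this VCPU or not supported by the CPU.
 * Thanks to the static key behind cpus_have_const_cap(), the check is
 * nearly free on systems without nested virt.
 */
static int handle_nested_feature(struct kvm_vcpu *vcpu)
{
	if (!nested_virt_in_use(vcpu))
		return -EINVAL;

	/* nested-virt specific handling would go here */
	return 0;
}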