From patchwork Thu Mar 3 03:54:06 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12767002
Date: Wed, 2 Mar 2022 19:54:06 -0800
Message-Id: <20220303035408.3708241-1-reijiw@google.com>
Subject: [PATCH v3 1/3] KVM: arm64: Generalise VM features into a set of flags
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
    Will Deacon, Andrew Jones, Peng Liang, Peter Shier, Ricardo Koller,
    Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

From: Marc Zyngier

We currently deal with a set of booleans for VM features, while they
could be better represented as a set of flags contained in an unsigned
long, similarly to what we are doing on the CPU side.

Signed-off-by: Marc Zyngier
Reviewed-by: Andrew Jones
Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/kvm_host.h | 12 +++++++-----
 arch/arm64/kvm/arm.c              |  5 +++--
 arch/arm64/kvm/mmio.c             |  3 ++-
 3 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5bc01e62c08a..11a7ae747ded 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -122,7 +122,10 @@ struct kvm_arch {
 	 * should) opt in to this feature if KVM_CAP_ARM_NISV_TO_USER is
 	 * supported.
 	 */
-	bool return_nisv_io_abort_to_user;
+#define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER	0
+	/* Memory Tagging Extension enabled for the guest */
+#define KVM_ARCH_FLAG_MTE_ENABLED			1
+	unsigned long flags;
 
 	/*
 	 * VM-wide PMU filter, implemented as a bitmap and big enough for
@@ -133,9 +136,6 @@ struct kvm_arch {
 
 	u8 pfr0_csv2;
 	u8 pfr0_csv3;
-
-	/* Memory Tagging Extension enabled for the guest */
-	bool mte_enabled;
 };
 
 struct kvm_vcpu_fault_info {
@@ -786,7 +786,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 #define kvm_arm_vcpu_sve_finalized(vcpu) \
 	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
 
-#define kvm_has_mte(kvm) (system_supports_mte() && (kvm)->arch.mte_enabled)
+#define kvm_has_mte(kvm)					\
+	(system_supports_mte() &&				\
+	 test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &(kvm)->arch.flags))
 
 #define kvm_vcpu_has_pmu(vcpu) \
 	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ecc5958e27fe..9a2d240ef6a3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -89,7 +89,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	switch (cap->cap) {
 	case KVM_CAP_ARM_NISV_TO_USER:
 		r = 0;
-		kvm->arch.return_nisv_io_abort_to_user = true;
+		set_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
+			&kvm->arch.flags);
 		break;
 	case KVM_CAP_ARM_MTE:
 		mutex_lock(&kvm->lock);
@@ -97,7 +98,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			r = -EINVAL;
 		} else {
 			r = 0;
-			kvm->arch.mte_enabled = true;
+			set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &kvm->arch.flags);
 		}
 		mutex_unlock(&kvm->lock);
 		break;

diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
index 3e2d8ba11a02..3dd38a151d2a 100644
--- a/arch/arm64/kvm/mmio.c
+++ b/arch/arm64/kvm/mmio.c
@@ -135,7 +135,8 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 	 * volunteered to do so, and bail out otherwise.
 	 */
 	if (!kvm_vcpu_dabt_isvalid(vcpu)) {
-		if (vcpu->kvm->arch.return_nisv_io_abort_to_user) {
+		if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
+			     &vcpu->kvm->arch.flags)) {
 			run->exit_reason = KVM_EXIT_ARM_NISV;
 			run->arm_nisv.esr_iss = kvm_vcpu_dabt_iss_nisv_sanitized(vcpu);
 			run->arm_nisv.fault_ipa = fault_ipa;
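The conversion above boils down to a simple pattern: each VM-wide feature
becomes a bit index into a single unsigned long, queried and updated with
the kernel's test_bit()/set_bit() atomic bitops instead of per-feature
booleans. The standalone sketch below is plain C rather than kernel code;
struct example_arch, example_set_bit() and example_test_bit() are
simplified, non-atomic stand-ins invented here purely for illustration.

#include <stdbool.h>
#include <stdio.h>

#define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER	0
#define KVM_ARCH_FLAG_MTE_ENABLED			1

struct example_arch {
	unsigned long flags;	/* one word holds all VM-wide feature bits */
};

/* Simplified, non-atomic stand-ins for the kernel's set_bit()/test_bit() */
static void example_set_bit(int nr, unsigned long *addr)
{
	*addr |= 1UL << nr;
}

static bool example_test_bit(int nr, const unsigned long *addr)
{
	return *addr & (1UL << nr);
}

int main(void)
{
	struct example_arch arch = { .flags = 0 };

	/* What enabling KVM_CAP_ARM_MTE now amounts to: set one bit */
	example_set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &arch.flags);

	printf("MTE enabled:  %d\n",
	       example_test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &arch.flags));
	printf("NISV to user: %d\n",
	       example_test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
				&arch.flags));
	return 0;
}

Packing the features into one word also leaves room for additional flags
(the next patch in the series adds two more bit indices) without growing
struct kvm_arch.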
From patchwork Thu Mar 3 03:54:07 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12767003

Date: Wed, 2 Mar 2022 19:54:07 -0800
In-Reply-To: <20220303035408.3708241-1-reijiw@google.com>
References: <20220303035408.3708241-1-reijiw@google.com>
Message-Id: <20220303035408.3708241-2-reijiw@google.com>
Subject: [PATCH v3 2/3] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
    Will Deacon, Andrew Jones, Peng Liang, Peter Shier, Ricardo Koller,
    Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

KVM allows userspace to configure either all EL1 32bit or all 64bit vCPUs
for a guest. At vCPU reset, vcpu_allowed_register_width() checks that the
vCPU's register width is consistent with that of all other vCPUs. Since
the check is run even against vCPUs that are not initialized yet
(KVM_ARM_VCPU_INIT has not been done for them), the uninitialized vCPUs
are erroneously treated as 64bit vCPUs, which causes the function to
incorrectly detect a mixed-width VM.

Introduce KVM_ARCH_FLAG_EL1_32BIT and KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED
bits for kvm->arch.flags. The EL1_32BIT bit indicates whether the guest
must be configured with all 32bit or all 64bit vCPUs, and the
REG_WIDTH_CONFIGURED bit indicates whether the EL1_32BIT bit is valid
(already set up). Both bits are set at the first KVM_ARM_VCPU_INIT for
the guest, based on the vCPU's KVM_ARM_VCPU_EL1_32BIT configuration.
Check each vCPU's register width against those new bits at its
KVM_ARM_VCPU_INIT (instead of against other vCPUs' register width).

Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/kvm_emulate.h | 25 +++++++++++------
 arch/arm64/include/asm/kvm_host.h    |  8 ++++++
 arch/arm64/kvm/arm.c                 | 41 ++++++++++++++++++++++++++++
 arch/arm64/kvm/reset.c               |  8 ------
 4 files changed, 65 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d62405ce3e6d..f4f960819888 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 
 #define CURRENT_EL_SP_EL0_VECTOR	0x0
 #define CURRENT_EL_SP_ELx_VECTOR	0x200
@@ -45,7 +46,14 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
 
 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
-	return !(vcpu->arch.hcr_el2 & HCR_RW);
+	struct kvm *kvm;
+
+	kvm = is_kernel_in_hyp_mode() ? kern_hyp_va(vcpu->kvm) : vcpu->kvm;
+
+	WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED,
+			       &kvm->arch.flags));
+
+	return test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
 }
 
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
@@ -72,15 +80,14 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 |= HCR_TVM;
 	}
 
-	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
+	if (vcpu_el1_is_32bit(vcpu))
 		vcpu->arch.hcr_el2 &= ~HCR_RW;
-
-	/*
-	 * TID3: trap feature register accesses that we virtualise.
-	 * For now this is conditional, since no AArch32 feature regs
-	 * are currently virtualised.
-	 */
-	if (!vcpu_el1_is_32bit(vcpu))
+	else
+		/*
+		 * TID3: trap feature register accesses that we virtualise.
+		 * For now this is conditional, since no AArch32 feature regs
+		 * are currently virtualised.
+		 */
 		vcpu->arch.hcr_el2 |= HCR_TID3;
 
 	if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 11a7ae747ded..5cde7f7b5042 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -125,6 +125,14 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER	0
 	/* Memory Tagging Extension enabled for the guest */
 #define KVM_ARCH_FLAG_MTE_ENABLED			1
+	/*
+	 * The guest's EL1 register width. The KVM_ARCH_FLAG_EL1_32BIT bit is
+	 * valid only when KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED is set.
+	 * Otherwise, the guest's EL1 register width has not yet been
+	 * determined.
+	 */
+#define KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED	2
+#define KVM_ARCH_FLAG_EL1_32BIT			3
 	unsigned long flags;
 
 	/*

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9a2d240ef6a3..9ac75aa46e2f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1101,6 +1101,43 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 	return -EINVAL;
 }
 
+/*
+ * A guest can have either all EL1 32bit or all 64bit vcpus only. This is
+ * indicated by the KVM_ARCH_FLAG_EL1_32BIT bit in kvm->arch.flags, which
+ * is valid only when KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED in kvm->arch.flags
+ * is set.
+ * This function checks if the vCPU's register width configuration is
+ * consistent with the EL1_32BIT bit in kvm->arch.flags when the
+ * REG_WIDTH_CONFIGURED bit is set.
+ * Otherwise, the function sets the EL1_32BIT bit based on the vcpu's
+ * KVM_ARM_VCPU_EL1_32BIT configuration (and sets the REG_WIDTH_CONFIGURED
+ * bit of kvm->arch.flags).
+ */
+static int kvm_register_width_check_or_init(struct kvm_vcpu *vcpu)
+{
+	bool is32bit;
+	bool allowed = true;
+	struct kvm *kvm = vcpu->kvm;
+
+	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
+
+	mutex_lock(&kvm->lock);
+
+	if (test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags)) {
+		allowed = (is32bit ==
+			   test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags));
+	} else {
+		if (is32bit)
+			set_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
+
+		set_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags);
+	}
+
+	mutex_unlock(&kvm->lock);
+
+	return allowed ? 0 : -EINVAL;
+}
+
 static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 			       const struct kvm_vcpu_init *init)
 {
@@ -1140,6 +1177,10 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 
 	/* Now we know what it is, we can reset it. */
 	ret = kvm_reset_vcpu(vcpu);
+
+	if (!ret)
+		ret = kvm_register_width_check_or_init(vcpu);
+
 	if (ret) {
 		vcpu->arch.target = -1;
 		bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index ecc40c8cd6f6..6c5f7677057d 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -183,9 +183,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
 
 static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vcpu *tmp;
 	bool is32bit;
-	unsigned long i;
 
 	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
 	if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
@@ -195,12 +193,6 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
 	if (kvm_has_mte(vcpu->kvm) && is32bit)
 		return false;
 
-	/* Check that the vcpus are either all 32bit or all 64bit */
-	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
-		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
-			return false;
-	}
-
 	return true;
 }
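The core of the fix is a latch-then-compare scheme on kvm->arch.flags: the
first vCPU to go through KVM_ARM_VCPU_INIT records the VM-wide register
width, and every later vCPU is checked against that record instead of
against possibly-uninitialized sibling vCPUs. The sketch below is a
standalone, plain-C rendering of that logic only; the kvm->lock mutex and
the atomic bitops used by the real kvm_register_width_check_or_init() are
deliberately elided, and the example_* names are invented for illustration.

#include <stdbool.h>
#include <stdio.h>

#define EXAMPLE_REG_WIDTH_CONFIGURED	(1UL << 0)
#define EXAMPLE_EL1_32BIT		(1UL << 1)

struct example_vm {
	unsigned long flags;
};

/* Returns 0 if the requested width is allowed, -1 (think -EINVAL) otherwise */
static int example_width_check_or_init(struct example_vm *vm, bool is32bit)
{
	if (vm->flags & EXAMPLE_REG_WIDTH_CONFIGURED)
		return is32bit == !!(vm->flags & EXAMPLE_EL1_32BIT) ? 0 : -1;

	/* First initialized vCPU: latch the VM-wide register width */
	if (is32bit)
		vm->flags |= EXAMPLE_EL1_32BIT;
	vm->flags |= EXAMPLE_REG_WIDTH_CONFIGURED;
	return 0;
}

int main(void)
{
	struct example_vm vm = { .flags = 0 };

	printf("first vCPU, 64bit:  %d\n", example_width_check_or_init(&vm, false));
	printf("second vCPU, 64bit: %d\n", example_width_check_or_init(&vm, false));
	printf("third vCPU, 32bit:  %d\n", example_width_check_or_init(&vm, true));
	return 0;
}

Because the comparison is made against the latched VM-wide state rather
than against each sibling vCPU's feature bitmap, vCPUs that have not been
through KVM_ARM_VCPU_INIT yet can no longer skew the result.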
From patchwork Thu Mar 3 03:54:08 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12767004

Date: Wed, 2 Mar 2022 19:54:08 -0800
In-Reply-To: <20220303035408.3708241-1-reijiw@google.com>
References: <20220303035408.3708241-1-reijiw@google.com>
Message-Id: <20220303035408.3708241-3-reijiw@google.com>
Subject: [PATCH v3 3/3] KVM: arm64: selftests: Introduce vcpu_width_config
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
    Will Deacon, Andrew Jones, Peng Liang, Peter Shier, Ricardo Koller,
    Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce a test for aarch64 that ensures that non-mixed-width vCPUs
(all 64bit vCPUs or all 32bit vCPUs) can be configured, and that
mixed-width vCPUs cannot be configured.
Reviewed-by: Andrew Jones
Signed-off-by: Reiji Watanabe
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/aarch64/vcpu_width_config.c | 125 ++++++++++++++++++
 3 files changed, 127 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vcpu_width_config.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index dce7de7755e6..4e884e29b2a8 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -3,6 +3,7 @@
 /aarch64/debug-exceptions
 /aarch64/get-reg-list
 /aarch64/psci_cpu_on_test
+/aarch64/vcpu_width_config
 /aarch64/vgic_init
 /aarch64/vgic_irq
 /s390x/memop
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 17c3f0749f05..3482586c6e33 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -103,6 +103,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
 TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
+TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
 TEST_GEN_PROGS_aarch64 += demand_paging_test
diff --git a/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c b/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
new file mode 100644
index 000000000000..6e6e6a9f69e3
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
@@ -0,0 +1,125 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vcpu_width_config - Test KVM_ARM_VCPU_INIT() with KVM_ARM_VCPU_EL1_32BIT.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This is a test that ensures that non-mixed-width vCPUs (all 64bit vCPUs
+ * or all 32bit vCPUs) can be configured and mixed-width vCPUs cannot be
+ * configured.
+ */
+
+#define _GNU_SOURCE
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+
+/*
+ * Add a vCPU, run KVM_ARM_VCPU_INIT with @init1, and then
+ * add another vCPU, and run KVM_ARM_VCPU_INIT with @init2.
+ */
+static int add_init_2vcpus(struct kvm_vcpu_init *init1,
+			   struct kvm_vcpu_init *init2)
+{
+	struct kvm_vm *vm;
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_vcpu_add(vm, 0);
+	ret = _vcpu_ioctl(vm, 0, KVM_ARM_VCPU_INIT, init1);
+	if (ret)
+		goto free_exit;
+
+	vm_vcpu_add(vm, 1);
+	ret = _vcpu_ioctl(vm, 1, KVM_ARM_VCPU_INIT, init2);
+
+free_exit:
+	kvm_vm_free(vm);
+	return ret;
+}
+
+/*
+ * Add two vCPUs, then run KVM_ARM_VCPU_INIT for one vCPU with @init1,
+ * and run KVM_ARM_VCPU_INIT for another vCPU with @init2.
+ */
+static int add_2vcpus_init_2vcpus(struct kvm_vcpu_init *init1,
+				  struct kvm_vcpu_init *init2)
+{
+	struct kvm_vm *vm;
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_vcpu_add(vm, 0);
+	vm_vcpu_add(vm, 1);
+
+	ret = _vcpu_ioctl(vm, 0, KVM_ARM_VCPU_INIT, init1);
+	if (ret)
+		goto free_exit;
+
+	ret = _vcpu_ioctl(vm, 1, KVM_ARM_VCPU_INIT, init2);
+
+free_exit:
+	kvm_vm_free(vm);
+	return ret;
+}
+
+/*
+ * Tests that two 64bit vCPUs can be configured, two 32bit vCPUs can be
+ * configured, and two mixed-width vCPUs cannot be configured.
+ * For each of those three cases, configure vCPUs in two different orders.
+ * The first is running KVM_CREATE_VCPU for 2 vCPUs, and then running
+ * KVM_ARM_VCPU_INIT for them.
+ * The other is running KVM_CREATE_VCPU and KVM_ARM_VCPU_INIT for one
+ * vCPU, and then running those commands for another vCPU.
+ */
+int main(void)
+{
+	struct kvm_vcpu_init init1, init2;
+	struct kvm_vm *vm;
+	int ret;
+
+	if (kvm_check_cap(KVM_CAP_ARM_EL1_32BIT) <= 0) {
+		print_skip("KVM_CAP_ARM_EL1_32BIT is not supported");
+		exit(KSFT_SKIP);
+	}
+
+	/* Get the preferred target type and copy that to init2 */
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init1);
+	kvm_vm_free(vm);
+	memcpy(&init2, &init1, sizeof(init2));
+
+	/* Test with 64bit vCPUs */
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 64bit EL1 vCPUs failed unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 64bit EL1 vCPUs failed unexpectedly");
+
+	/* Test with 32bit vCPUs */
+	init1.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	init2.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 32bit EL1 vCPUs failed unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 32bit EL1 vCPUs failed unexpectedly");
+
+	/* Test with mixed-width vCPUs */
+	init1.features[0] = 0;
+	init2.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret != 0,
+		    "Configuring mixed-width vCPUs worked unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret != 0,
+		    "Configuring mixed-width vCPUs worked unexpectedly");
+
+	return 0;
+}
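For context on what the selftest library calls wrap, the following is a
rough raw-ioctl sketch (not part of the patch; error handling is elided and
it assumes an arm64 host with 32bit EL1 support and <linux/kvm.h> headers
installed) of the mixed-width case that add_init_2vcpus() exercises. With
patch 2/3 applied, the second KVM_ARM_VCPU_INIT is expected to fail with
EINVAL:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	struct kvm_vcpu_init init1, init2;
	int kvm, vm, vcpu0, vcpu1;

	kvm = open("/dev/kvm", O_RDWR);
	vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* Ask KVM for the preferred target, as the selftest does */
	ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init1);
	memcpy(&init2, &init1, sizeof(init2));
	init2.features[0] |= 1 << KVM_ARM_VCPU_EL1_32BIT;	/* mixed width */

	vcpu0 = ioctl(vm, KVM_CREATE_VCPU, 0);
	vcpu1 = ioctl(vm, KVM_CREATE_VCPU, 1);

	printf("init vcpu0 (64bit): %d\n", ioctl(vcpu0, KVM_ARM_VCPU_INIT, &init1));
	printf("init vcpu1 (32bit): %d\n", ioctl(vcpu1, KVM_ARM_VCPU_INIT, &init2));
	return 0;
}

The selftest itself is built and run through the usual kselftests flow for
the kvm target.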