From patchwork Mon Mar 21 05:08:03 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12786792
Date: Sun, 20 Mar 2022 22:08:03 -0700
In-Reply-To: <20220321050804.2701035-1-reijiw@google.com>
Message-Id: <20220321050804.2701035-2-reijiw@google.com>
References: <20220321050804.2701035-1-reijiw@google.com>
Subject: [PATCH v5 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 James Morse, Alexandru Elisei, Suzuki K Poulose, Andrew Jones,
 Paolo Bonzini, Will Deacon, Peter Shier, Ricardo Koller,
 Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

KVM allows userspace to configure either all EL1 32bit or all EL1 64bit
vCPUs for a guest. At vCPU reset, vcpu_allowed_register_width() checks
whether the vCPU's register width is consistent with that of all other
vCPUs. Since the check is run even against vCPUs that have not been
initialized yet (KVM_ARM_VCPU_INIT has not been done), those
uninitialized vCPUs are erroneously treated as 64bit vCPUs, which causes
the function to incorrectly detect a mixed-width VM.

Introduce KVM_ARCH_FLAG_EL1_32BIT and KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED
bits for kvm->arch.flags. The EL1_32BIT bit indicates whether the guest
needs to be configured with all 32bit or all 64bit vCPUs, and the
REG_WIDTH_CONFIGURED bit indicates whether the EL1_32BIT bit is valid
(already set up). Both bits are set at the guest's first
KVM_ARM_VCPU_INIT, based on the KVM_ARM_VCPU_EL1_32BIT configuration of
that vCPU. Check the vCPU's register width against these new bits at
KVM_ARM_VCPU_INIT time, instead of against the other vCPUs' register
widths.
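
The failure mode can be seen with a minimal userspace sequence along
these lines (a sketch only: it assumes an arm64 host with 32bit EL1
support, and capability checks and error handling are omitted):

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int main(void)
  {
          struct kvm_vcpu_init init;
          int kvm, vm, vcpu0, ret;

          kvm = open("/dev/kvm", O_RDWR);
          vm = ioctl(kvm, KVM_CREATE_VM, 0);
          vcpu0 = ioctl(vm, KVM_CREATE_VCPU, 0);
          ioctl(vm, KVM_CREATE_VCPU, 1);  /* vCPU 1 is never initialized */

          memset(&init, 0, sizeof(init));
          ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
          init.features[0] |= 1 << KVM_ARM_VCPU_EL1_32BIT;

          /*
           * Without the fix, the width check also runs against the
           * uninitialized vCPU 1 (whose features are still zeroed, so it
           * is treated as 64bit), and this KVM_ARM_VCPU_INIT fails with
           * EINVAL even though no mixed widths were ever requested.
           */
          ret = ioctl(vcpu0, KVM_ARM_VCPU_INIT, &init);
          printf("KVM_ARM_VCPU_INIT: %d (%s)\n", ret,
                 ret ? strerror(errno) : "ok");
          return 0;
  }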
Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
Signed-off-by: Reiji Watanabe
Reviewed-by: Oliver Upton
---
 arch/arm64/include/asm/kvm_emulate.h | 27 ++++++++----
 arch/arm64/include/asm/kvm_host.h    | 10 +++++
 arch/arm64/kvm/reset.c               | 65 ++++++++++++++++++----------
 3 files changed, 72 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d62405ce3e6d..7496deab025a 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -43,10 +43,22 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 
 void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
 
+#if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__)
 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
 	return !(vcpu->arch.hcr_el2 & HCR_RW);
 }
+#else
+static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+
+	WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED,
+			       &kvm->arch.flags));
+
+	return test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
+}
+#endif
 
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
@@ -72,15 +84,14 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 |= HCR_TVM;
 	}
 
-	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
+	if (vcpu_el1_is_32bit(vcpu))
 		vcpu->arch.hcr_el2 &= ~HCR_RW;
-
-	/*
-	 * TID3: trap feature register accesses that we virtualise.
-	 * For now this is conditional, since no AArch32 feature regs
-	 * are currently virtualised.
-	 */
-	if (!vcpu_el1_is_32bit(vcpu))
+	else
+		/*
+		 * TID3: trap feature register accesses that we virtualise.
+		 * For now this is conditional, since no AArch32 feature regs
+		 * are currently virtualised.
+		 */
 		vcpu->arch.hcr_el2 |= HCR_TID3;
 
 	if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0e96087885fe..f7781c5e0c6a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -127,6 +127,16 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_MTE_ENABLED			1
 	/* At least one vCPU has ran in the VM */
 #define KVM_ARCH_FLAG_HAS_RAN_ONCE			2
+	/*
+	 * The following two bits are used to indicate the guest's EL1
+	 * register width configuration. The KVM_ARCH_FLAG_EL1_32BIT bit
+	 * is valid only when KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED is set.
+	 * Otherwise, the guest's EL1 register width has not been
+	 * determined yet.
+	 */
+#define KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED		3
+#define KVM_ARCH_FLAG_EL1_32BIT				4
+
 	unsigned long flags;
 
 	/*
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index ecc40c8cd6f6..bc8b3909640f 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -181,27 +181,46 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
+/**
+ * kvm_set_vm_width() - set the register width for the guest
+ * @kvm: Pointer to the KVM struct
+ * @is32bit: Whether the register width of the guest is 32-bit or not (64-bit)
+ *
+ * Set the KVM_ARCH_FLAG_EL1_32BIT bit in kvm->arch.flags based on @is32bit
+ * and also set the KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED bit in the flags.
+ * When the REG_WIDTH_CONFIGURED bit is already set in the flags, @is32bit
+ * must be consistent with the value of the EL1_32BIT bit in the flags.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+static int kvm_set_vm_width(struct kvm *kvm, bool is32bit)
 {
-	struct kvm_vcpu *tmp;
-	bool is32bit;
-	unsigned long i;
+	lockdep_assert_held(&kvm->lock);
+
+	if (test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags)) {
+		/*
+		 * The guest's register width is already configured.
+		 * Make sure that @is32bit is consistent with it.
+		 */
+		if (is32bit == test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags))
+			return 0;
+		else
+			return -EINVAL;
+	}
 
-	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
 	if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
-		return false;
+		return -EINVAL;
 
 	/* MTE is incompatible with AArch32 */
-	if (kvm_has_mte(vcpu->kvm) && is32bit)
-		return false;
+	if (kvm_has_mte(kvm) && is32bit)
+		return -EINVAL;
 
-	/* Check that the vcpus are either all 32bit or all 64bit */
-	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
-		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
-			return false;
-	}
+	if (is32bit)
+		set_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
 
-	return true;
+	set_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags);
+
+	return 0;
 }
 
 /**
@@ -230,10 +249,17 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	u32 pstate;
 
 	mutex_lock(&vcpu->kvm->lock);
-	reset_state = vcpu->arch.reset_state;
-	WRITE_ONCE(vcpu->arch.reset_state.reset, false);
+	ret = kvm_set_vm_width(vcpu->kvm,
+			       vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT));
+	if (!ret) {
+		reset_state = vcpu->arch.reset_state;
+		WRITE_ONCE(vcpu->arch.reset_state.reset, false);
+	}
 	mutex_unlock(&vcpu->kvm->lock);
 
+	if (ret)
+		return ret;
+
 	/* Reset PMU outside of the non-preemptible section */
 	kvm_pmu_vcpu_reset(vcpu);
 
@@ -260,14 +286,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		}
 	}
 
-	if (!vcpu_allowed_register_width(vcpu)) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	switch (vcpu->arch.target) {
 	default:
-		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
+		if (vcpu_el1_is_32bit(vcpu)) {
 			pstate = VCPU_RESET_PSTATE_SVC;
 		} else {
 			pstate = VCPU_RESET_PSTATE_EL1;

From patchwork Mon Mar 21 05:08:04 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12786793
Date: Sun, 20 Mar 2022 22:08:04 -0700
In-Reply-To: <20220321050804.2701035-1-reijiw@google.com>
Message-Id: <20220321050804.2701035-3-reijiw@google.com>
References: <20220321050804.2701035-1-reijiw@google.com>
Subject: [PATCH v5 2/2] KVM: arm64: selftests: Introduce vcpu_width_config
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 James Morse, Alexandru Elisei, Suzuki K Poulose, Andrew Jones,
 Paolo Bonzini, Will Deacon, Peter Shier, Ricardo Koller,
 Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce a test for aarch64 that ensures non-mixed-width vCPUs
(all 64bit vCPUs or all 32bit vCPUs) can be configured, and that
mixed-width vCPUs cannot be configured.
Reviewed-by: Andrew Jones
Signed-off-by: Reiji Watanabe
Reviewed-by: Oliver Upton
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/aarch64/vcpu_width_config.c | 125 ++++++++++++++++++
 3 files changed, 127 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vcpu_width_config.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index dce7de7755e6..4e884e29b2a8 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -3,6 +3,7 @@
 /aarch64/debug-exceptions
 /aarch64/get-reg-list
 /aarch64/psci_cpu_on_test
+/aarch64/vcpu_width_config
 /aarch64/vgic_init
 /aarch64/vgic_irq
 /s390x/memop
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 0e4926bc9a58..06a5a982123e 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -104,6 +104,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
 TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
+TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
 TEST_GEN_PROGS_aarch64 += demand_paging_test
diff --git a/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c b/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
new file mode 100644
index 000000000000..6e6e6a9f69e3
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
@@ -0,0 +1,125 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vcpu_width_config - Test KVM_ARM_VCPU_INIT() with KVM_ARM_VCPU_EL1_32BIT.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This is a test that ensures that non-mixed-width vCPUs (all 64bit vCPUs
+ * or all 32bit vCPUs) can be configured and mixed-width vCPUs cannot be
+ * configured.
+ */
+
+#define _GNU_SOURCE
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+
+/*
+ * Add a vCPU, run KVM_ARM_VCPU_INIT with @init1, and then
+ * add another vCPU, and run KVM_ARM_VCPU_INIT with @init2.
+ */
+static int add_init_2vcpus(struct kvm_vcpu_init *init1,
+			   struct kvm_vcpu_init *init2)
+{
+	struct kvm_vm *vm;
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_vcpu_add(vm, 0);
+	ret = _vcpu_ioctl(vm, 0, KVM_ARM_VCPU_INIT, init1);
+	if (ret)
+		goto free_exit;
+
+	vm_vcpu_add(vm, 1);
+	ret = _vcpu_ioctl(vm, 1, KVM_ARM_VCPU_INIT, init2);
+
+free_exit:
+	kvm_vm_free(vm);
+	return ret;
+}
+
+/*
+ * Add two vCPUs, then run KVM_ARM_VCPU_INIT for one vCPU with @init1,
+ * and run KVM_ARM_VCPU_INIT for another vCPU with @init2.
+ */
+static int add_2vcpus_init_2vcpus(struct kvm_vcpu_init *init1,
+				  struct kvm_vcpu_init *init2)
+{
+	struct kvm_vm *vm;
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_vcpu_add(vm, 0);
+	vm_vcpu_add(vm, 1);
+
+	ret = _vcpu_ioctl(vm, 0, KVM_ARM_VCPU_INIT, init1);
+	if (ret)
+		goto free_exit;
+
+	ret = _vcpu_ioctl(vm, 1, KVM_ARM_VCPU_INIT, init2);
+
+free_exit:
+	kvm_vm_free(vm);
+	return ret;
+}
+
+/*
+ * Tests that two 64bit vCPUs can be configured, two 32bit vCPUs can be
+ * configured, and two mixed-width vCPUs cannot be configured.
+ * For each of those three cases, the vCPUs are configured in two different
+ * orders. One is running KVM_CREATE_VCPU for 2 vCPUs, and then running
+ * KVM_ARM_VCPU_INIT for them.
+ * The other is running KVM_CREATE_VCPU and KVM_ARM_VCPU_INIT for a vCPU,
+ * and then running those commands for another vCPU.
+ */
+int main(void)
+{
+	struct kvm_vcpu_init init1, init2;
+	struct kvm_vm *vm;
+	int ret;
+
+	if (kvm_check_cap(KVM_CAP_ARM_EL1_32BIT) <= 0) {
+		print_skip("KVM_CAP_ARM_EL1_32BIT is not supported");
+		exit(KSFT_SKIP);
+	}
+
+	/* Get the preferred target type and copy that to init2 */
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init1);
+	kvm_vm_free(vm);
+	memcpy(&init2, &init1, sizeof(init2));
+
+	/* Test with 64bit vCPUs */
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 64bit EL1 vCPUs failed unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 64bit EL1 vCPUs failed unexpectedly");
+
+	/* Test with 32bit vCPUs */
+	init1.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	init2.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 32bit EL1 vCPUs failed unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 32bit EL1 vCPUs failed unexpectedly");
+
+	/* Test with mixed-width vCPUs */
+	init1.features[0] = 0;
+	init2.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret != 0,
+		    "Configuring mixed-width vCPUs worked unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret != 0,
+		    "Configuring mixed-width vCPUs worked unexpectedly");
+
+	return 0;
+}