From patchwork Fri Aug 27 10:16:09 2021
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 12461873
Date: Fri, 27 Aug 2021 11:16:09 +0100
In-Reply-To: <20210827101609.2808181-1-tabba@google.com>
Message-Id: <20210827101609.2808181-9-tabba@google.com>
References: <20210827101609.2808181-1-tabba@google.com>
X-Mailer: git-send-email 2.33.0.259.gc128427fd7-goog
Subject: [PATCH v5 8/8] KVM: arm64: Handle protected guests at 32 bits
From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
 oupton@google.com, qperret@google.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
 tabba@google.com

Protected KVM does not support protected AArch32 guests. However, it is
possible for the guest to force itself to run in AArch32, potentially
causing problems. Add an extra check so that if the hypervisor catches
the guest doing that, it can prevent the guest from running again by
resetting vcpu->arch.target and returning ARM_EXCEPTION_IL.

If this were to happen, the VMM can try to fix it by re-initializing
the vcpu with KVM_ARM_VCPU_INIT; however, this is likely not possible
for protected VMs.

Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
AArch32 systems")

Signed-off-by: Fuad Tabba <tabba@google.com>
---
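Note (illustrative, not part of the commit): from the VMM side, the
KVM_ARM_VCPU_INIT recovery described above would look roughly like the
sketch below. The helper name try_reinit_vcpu and the vm_fd/vcpu_fd
parameters are made up for illustration; the ioctls themselves are the
standard KVM ones. Per the caveat above, the re-init is likely to fail
for a protected VM.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Attempt to revive a vcpu that the hypervisor has invalidated.
 * Assumes vm_fd and vcpu_fd came from KVM_CREATE_VM/KVM_CREATE_VCPU.
 */
static int try_reinit_vcpu(int vm_fd, int vcpu_fd)
{
        struct kvm_vcpu_init init;

        memset(&init, 0, sizeof(init));

        /* Ask KVM which vcpu target suits this host. */
        if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
                return -1;

        /* Re-initialize the vcpu; expected to fail for protected VMs. */
        return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}

KVM_ARM_PREFERRED_TARGET fills struct kvm_vcpu_init with the target
best suited to the host, which KVM_ARM_VCPU_INIT then consumes.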
 arch/arm64/kvm/hyp/nvhe/switch.c | 41 ++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index fe0c3833ec66..8fbb94fb8588 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -20,6 +20,7 @@
 #include <asm/kprobes.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_emulate.h>
+#include <asm/kvm_fixed_config.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 #include <asm/fpsimd.h>
@@ -191,6 +192,43 @@ const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
 	return hyp_exit_handlers;
 }
 
+/*
+ * Some guests (e.g., protected VMs) might not be allowed to run in AArch32.
+ * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
+ * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
+ * hypervisor spots a guest in such a state, ensure it is handled, and don't
+ * trust the host to spot or fix it. The check below is based on the one in
+ * kvm_arch_vcpu_ioctl_run().
+ *
+ * Returns false if the guest ran in AArch32 when it shouldn't have, and
+ * thus should exit to the host, or true if the guest run loop can continue.
+ */
+static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	const struct kvm *kvm = (const struct kvm *) kern_hyp_va(vcpu->kvm);
+	bool is_aarch32_allowed =
+		FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
+			  get_pvm_id_aa64pfr0(vcpu)) >=
+				ID_AA64PFR0_ELx_32BIT_64BIT;
+
+	if (kvm_vm_is_protected(kvm) &&
+	    vcpu_mode_is_32bit(vcpu) &&
+	    !is_aarch32_allowed) {
+		/*
+		 * As we have caught the guest red-handed, decide that it isn't
+		 * fit for purpose anymore by making the vcpu invalid. The VMM
+		 * can try to fix it by re-initializing the vcpu with
+		 * KVM_ARM_VCPU_INIT; however, this is likely not possible for
+		 * protected VMs.
+		 */
+		vcpu->arch.target = -1;
+		*exit_code = ARM_EXCEPTION_IL;
+		return false;
+	}
+
+	return true;
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
@@ -253,6 +291,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu);
 
+		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
+			break;
+
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
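As an aside on the check in handle_aarch32_guest(): the EL0 field of
ID_AA64PFR0_EL1 occupies bits [3:0] and reads 0x1 when EL0 is
AArch64-only and 0x2 (ID_AA64PFR0_ELx_32BIT_64BIT) when EL0 also
supports AArch32, hence the ">=" comparison. Below is a self-contained
sketch of the same field extraction, with the constants mirroring the
kernel's sysreg definitions (the helper name el0_supports_aarch32 is
hypothetical):

#include <stdbool.h>
#include <stdint.h>

/* EL0 field of ID_AA64PFR0_EL1 lives in bits [3:0]. */
#define ID_AA64PFR0_EL0_SHIFT		0
/* 0x2: EL0 supports both AArch64 and AArch32. */
#define ID_AA64PFR0_ELx_32BIT_64BIT	0x2

static bool el0_supports_aarch32(uint64_t id_aa64pfr0)
{
        uint64_t el0 = (id_aa64pfr0 >> ID_AA64PFR0_EL0_SHIFT) & 0xf;

        return el0 >= ID_AA64PFR0_ELx_32BIT_64BIT;
}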