From patchwork Fri Mar 5 01:10:45 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12117313
Date: Thu, 4 Mar 2021 17:10:45 -0800
Message-Id: <20210305011101.3597423-2-seanjc@google.com>
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 01/17] KVM: nSVM: Set the shadow root level to the TDP level for nested NPT
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Brijesh Singh, Tom Lendacky
X-Mailing-List: kvm@vger.kernel.org

Override the shadow root level in the MMU context when configuring
NPT for shadowing nested NPT.  The level is always tied to the TDP
level of the host, not whatever level the guest happens to be using.

Fixes: 096586fda522 ("KVM: nSVM: Correctly set the shadow NPT root level in its MMU role")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c462062d36aa..0987cc1d53eb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4618,12 +4618,17 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
         struct kvm_mmu *context = &vcpu->arch.guest_mmu;
         union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu);
 
-        context->shadow_root_level = new_role.base.level;
-
         __kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base, false, false);
 
-        if (new_role.as_u64 != context->mmu_role.as_u64)
+        if (new_role.as_u64 != context->mmu_role.as_u64) {
                 shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role);
+
+                /*
+                 * Override the level set by the common init helper, nested TDP
+                 * always uses the host's TDP configuration.
+                 */
+                context->shadow_root_level = new_role.base.level;
+        }
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
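[Aside: the essence of the fix above is ordering: let the common init helper run first, then apply the host-specific override so the helper cannot clobber it. A stand-alone toy sketch of that pattern, with made-up names and not KVM's actual structures:]

#include <stdio.h>

struct mmu_ctx {
        int shadow_root_level;  /* level used for the shadow page tables */
        int role_level;         /* level implied by the (guest) role */
};

/* Common init: derives everything from the role, including the level. */
static void common_init(struct mmu_ctx *ctx, int role_level)
{
        ctx->role_level = role_level;
        ctx->shadow_root_level = role_level;    /* default: follow the role */
}

/* Nested-NPT-style init: the shadow level must follow the host, not the guest. */
static void npt_init(struct mmu_ctx *ctx, int guest_level, int host_tdp_level)
{
        common_init(ctx, guest_level);
        /* Override *after* the common helper so it can't be clobbered. */
        ctx->shadow_root_level = host_tdp_level;
}

int main(void)
{
        struct mmu_ctx ctx;

        npt_init(&ctx, 2 /* 32-bit guest NPT */, 4 /* host uses 4-level TDP */);
        printf("shadow_root_level = %d (expect 4)\n", ctx.shadow_root_level);
        return 0;
}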
From patchwork Fri Mar 5 01:10:46 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12117315
Date: Thu, 4 Mar 2021 17:10:46 -0800
Message-Id: <20210305011101.3597423-3-seanjc@google.com>
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 02/17] KVM: x86/mmu: Alloc page for PDPTEs when shadowing 32-bit NPT with 64-bit
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Brijesh Singh, Tom Lendacky
X-Mailing-List: kvm@vger.kernel.org

Allocate the so called pae_root page on-demand, along with the lm_root
page, when shadowing 32-bit NPT with 64-bit NPT, i.e. when running a
32-bit L1.  KVM currently only allocates the page when NPT is disabled,
or when L0 is 32-bit (using PAE paging).

Note, there is an existing memory leak involving the MMU roots, as KVM
fails to free the PAE roots on failure.  This will be addressed in a
future commit.
Fixes: ee6268ba3a68 ("KVM: x86: Skip pae_root shadow allocation if tdp enabled")
Fixes: b6b80c78af83 ("KVM: x86/mmu: Allocate PAE root array when using SVM's 32-bit NPT")
Cc: stable@vger.kernel.org
Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 44 ++++++++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0987cc1d53eb..2ed3fac1244e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3187,14 +3187,14 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
         if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
             (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
                 mmu_free_root_page(kvm, &mmu->root_hpa, &invalid_list);
-        } else {
+        } else if (mmu->pae_root) {
                 for (i = 0; i < 4; ++i)
                         if (mmu->pae_root[i] != 0)
                                 mmu_free_root_page(kvm,
                                                    &mmu->pae_root[i],
                                                    &invalid_list);
-                mmu->root_hpa = INVALID_PAGE;
         }
+        mmu->root_hpa = INVALID_PAGE;
         mmu->root_pgd = 0;
 }
 
@@ -3306,9 +3306,23 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
          * the shadow page table may be a PAE or a long mode page table.
          */
         pm_mask = PT_PRESENT_MASK;
-        if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL)
+        if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
                 pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
 
+                /*
+                 * Allocate the page for the PDPTEs when shadowing 32-bit NPT
+                 * with 64-bit only when needed.  Unlike 32-bit NPT, it doesn't
+                 * need to be in low mem.  See also lm_root below.
+                 */
+                if (!vcpu->arch.mmu->pae_root) {
+                        WARN_ON_ONCE(!tdp_enabled);
+
+                        vcpu->arch.mmu->pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+                        if (!vcpu->arch.mmu->pae_root)
+                                return -ENOMEM;
+                }
+        }
+
         for (i = 0; i < 4; ++i) {
                 MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->pae_root[i]));
                 if (vcpu->arch.mmu->root_level == PT32E_ROOT_LEVEL) {
@@ -3331,21 +3345,19 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
         vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
 
         /*
-         * If we shadow a 32 bit page table with a long mode page
-         * table we enter this path.
+         * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
+         * tables are allocated and initialized at MMU creation as there is no
+         * equivalent level in the guest's NPT to shadow.  Allocate the tables
+         * on demand, as running a 32-bit L1 VMM is very rare.  The PDP is
+         * handled above (to share logic with PAE), deal with the PML4 here.
          */
         if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
                 if (vcpu->arch.mmu->lm_root == NULL) {
-                        /*
-                         * The additional page necessary for this is only
-                         * allocated on demand.
-                         */
-
                         u64 *lm_root;
 
                         lm_root = (void*)get_zeroed_page(GFP_KERNEL_ACCOUNT);
-                        if (lm_root == NULL)
-                                return 1;
+                        if (!lm_root)
+                                return -ENOMEM;
 
                         lm_root[0] = __pa(vcpu->arch.mmu->pae_root) | pm_mask;
 
@@ -5248,9 +5260,11 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
          * while the PDP table is a per-vCPU construct that's allocated at MMU
          * creation.  When emulating 32-bit mode, cr3 is only 32 bits even on
          * x86_64.  Therefore we need to allocate the PDP table in the first
-         * 4GB of memory, which happens to fit the DMA32 zone.  Except for
-         * SVM's 32-bit NPT support, TDP paging doesn't use PAE paging and can
-         * skip allocating the PDP table.
+         * 4GB of memory, which happens to fit the DMA32 zone.  TDP paging
+         * generally doesn't use PAE paging and can skip allocating the PDP
+         * table.  The main exception, handled here, is SVM's 32-bit NPT.  The
+         * other exception is for shadowing L1's 32-bit or PAE NPT on 64-bit
+         * KVM; that horror is handled on-demand by mmu_alloc_shadow_roots().
          */
         if (tdp_enabled && kvm_mmu_get_tdp_level(vcpu) > PT32E_ROOT_LEVEL)
                 return 0;
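[Aside: the core of the change is a classic lazy-allocation pattern: only when the MMU configuration actually needs the extra PDPTE page does KVM allocate a zeroed page, and a failed allocation is reported as -ENOMEM rather than ignored. A rough user-space analogue, illustrative only; get_zeroed_page() and GFP_KERNEL_ACCOUNT are kernel APIs, approximated here with calloc():]

#include <errno.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct mmu {
        unsigned long long *pae_root;   /* 4 PDPTE-like entries, allocated lazily */
};

/* Allocate the PDPTE page only if this configuration needs it. */
static int ensure_pae_root(struct mmu *mmu, int need_pae_root)
{
        if (!need_pae_root || mmu->pae_root)
                return 0;       /* nothing to do, or already allocated */

        /* calloc stands in for get_zeroed_page(GFP_KERNEL_ACCOUNT). */
        mmu->pae_root = calloc(1, PAGE_SIZE);
        if (!mmu->pae_root)
                return -ENOMEM;

        return 0;
}

int main(void)
{
        struct mmu mmu = { 0 };

        return ensure_pae_root(&mmu, 1) ? 1 : 0;
}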
From patchwork Fri Mar 5 01:10:47 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12117317
Date: Thu, 4 Mar 2021 17:10:47 -0800
Message-Id: <20210305011101.3597423-4-seanjc@google.com>
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 03/17] KVM: x86/mmu: Capture 'mmu' in a local variable when allocating roots
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Brijesh Singh, Tom Lendacky
X-Mailing-List: kvm@vger.kernel.org

Grab 'mmu' and do s/vcpu->arch.mmu/mmu to shorten line lengths and yield
smaller diffs when moving code around in future cleanup without forcing
the new code to use the same ugly pattern.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 58 ++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2ed3fac1244e..c4f8e59f596c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3235,7 +3235,8 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
 
 static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 {
-        u8 shadow_root_level = vcpu->arch.mmu->shadow_root_level;
+        struct kvm_mmu *mmu = vcpu->arch.mmu;
+        u8 shadow_root_level = mmu->shadow_root_level;
         hpa_t root;
         unsigned i;
 
@@ -3244,42 +3245,43 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
                 if (!VALID_PAGE(root))
                         return -ENOSPC;
-                vcpu->arch.mmu->root_hpa = root;
+                mmu->root_hpa = root;
         } else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
                 root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
                 if (!VALID_PAGE(root))
                         return -ENOSPC;
-                vcpu->arch.mmu->root_hpa = root;
+                mmu->root_hpa = root;
         } else if (shadow_root_level == PT32E_ROOT_LEVEL) {
                 for (i = 0; i < 4; ++i) {
-                        MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->pae_root[i]));
+                        MMU_WARN_ON(VALID_PAGE(mmu->pae_root[i]));
 
                         root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT),
                                               i << 30, PT32_ROOT_LEVEL, true);
                         if (!VALID_PAGE(root))
                                 return -ENOSPC;
-                        vcpu->arch.mmu->pae_root[i] = root | PT_PRESENT_MASK;
+                        mmu->pae_root[i] = root | PT_PRESENT_MASK;
                 }
-                vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
+                mmu->root_hpa = __pa(mmu->pae_root);
         } else
                 BUG();
 
         /* root_pgd is ignored for direct MMUs. */
-        vcpu->arch.mmu->root_pgd = 0;
+        mmu->root_pgd = 0;
 
         return 0;
 }
 
 static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 {
+        struct kvm_mmu *mmu = vcpu->arch.mmu;
         u64 pdptr, pm_mask;
         gfn_t root_gfn, root_pgd;
         hpa_t root;
         int i;
 
-        root_pgd = vcpu->arch.mmu->get_guest_pgd(vcpu);
+        root_pgd = mmu->get_guest_pgd(vcpu);
         root_gfn = root_pgd >> PAGE_SHIFT;
 
         if (mmu_check_root(vcpu, root_gfn))
@@ -3289,14 +3291,14 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
          * Do we shadow a long mode page table? If so we need to
          * write-protect the guests page table root.
          */
-        if (vcpu->arch.mmu->root_level >= PT64_ROOT_4LEVEL) {
-                MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->root_hpa));
+        if (mmu->root_level >= PT64_ROOT_4LEVEL) {
+                MMU_WARN_ON(VALID_PAGE(mmu->root_hpa));
 
                 root = mmu_alloc_root(vcpu, root_gfn, 0,
-                                      vcpu->arch.mmu->shadow_root_level, false);
+                                      mmu->shadow_root_level, false);
                 if (!VALID_PAGE(root))
                         return -ENOSPC;
-                vcpu->arch.mmu->root_hpa = root;
+                mmu->root_hpa = root;
                 goto set_root_pgd;
         }
 
@@ -3306,7 +3308,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
          * the shadow page table may be a PAE or a long mode page table.
          */
         pm_mask = PT_PRESENT_MASK;
-        if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
+        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
                 pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
 
                 /*
@@ -3314,21 +3316,21 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
                  * with 64-bit only when needed.  Unlike 32-bit NPT, it doesn't
                  * need to be in low mem.  See also lm_root below.
                  */
-                if (!vcpu->arch.mmu->pae_root) {
+                if (!mmu->pae_root) {
                         WARN_ON_ONCE(!tdp_enabled);
 
-                        vcpu->arch.mmu->pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
-                        if (!vcpu->arch.mmu->pae_root)
+                        mmu->pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+                        if (!mmu->pae_root)
                                 return -ENOMEM;
                 }
         }
 
         for (i = 0; i < 4; ++i) {
-                MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->pae_root[i]));
-                if (vcpu->arch.mmu->root_level == PT32E_ROOT_LEVEL) {
-                        pdptr = vcpu->arch.mmu->get_pdptr(vcpu, i);
+                MMU_WARN_ON(VALID_PAGE(mmu->pae_root[i]));
+                if (mmu->root_level == PT32E_ROOT_LEVEL) {
+                        pdptr = mmu->get_pdptr(vcpu, i);
                         if (!(pdptr & PT_PRESENT_MASK)) {
-                                vcpu->arch.mmu->pae_root[i] = 0;
+                                mmu->pae_root[i] = 0;
                                 continue;
                         }
                         root_gfn = pdptr >> PAGE_SHIFT;
@@ -3340,9 +3342,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
                                       PT32_ROOT_LEVEL, false);
                 if (!VALID_PAGE(root))
                         return -ENOSPC;
-                vcpu->arch.mmu->pae_root[i] = root | pm_mask;
+                mmu->pae_root[i] = root | pm_mask;
         }
-        vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
+        mmu->root_hpa = __pa(mmu->pae_root);
 
         /*
          * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
@@ -3351,24 +3353,24 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
          * on demand, as running a 32-bit L1 VMM is very rare.  The PDP is
          * handled above (to share logic with PAE), deal with the PML4 here.
          */
-        if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
-                if (vcpu->arch.mmu->lm_root == NULL) {
+        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
+                if (mmu->lm_root == NULL) {
                         u64 *lm_root;
 
                         lm_root = (void*)get_zeroed_page(GFP_KERNEL_ACCOUNT);
                         if (!lm_root)
                                 return -ENOMEM;
 
-                        lm_root[0] = __pa(vcpu->arch.mmu->pae_root) | pm_mask;
+                        lm_root[0] = __pa(mmu->pae_root) | pm_mask;
 
-                        vcpu->arch.mmu->lm_root = lm_root;
+                        mmu->lm_root = lm_root;
                 }
 
-                vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->lm_root);
+                mmu->root_hpa = __pa(mmu->lm_root);
         }
 
 set_root_pgd:
-        vcpu->arch.mmu->root_pgd = root_pgd;
+        mmu->root_pgd = root_pgd;
 
         return 0;
 }
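[Aside: the patch is purely mechanical: hoist the repeated vcpu->arch.mmu dereference into a local pointer. A contrived before/after sketch of the pattern, using toy types rather than KVM's:]

struct mmu { int root_level; int shadow_root_level; unsigned long root_hpa; };
struct arch { struct mmu *mmu; };
struct vcpu { struct arch arch; };

/* Before: every access spells out the full chain, bloating line lengths. */
static void init_roots_verbose(struct vcpu *vcpu)
{
        vcpu->arch.mmu->root_hpa = 0;
        vcpu->arch.mmu->root_level = vcpu->arch.mmu->shadow_root_level;
}

/* After: capture the pointer once; behavior is identical. */
static void init_roots_terse(struct vcpu *vcpu)
{
        struct mmu *mmu = vcpu->arch.mmu;

        mmu->root_hpa = 0;
        mmu->root_level = mmu->shadow_root_level;
}

int main(void)
{
        struct mmu m = { 0 };
        struct vcpu v = { .arch = { .mmu = &m } };

        init_roots_verbose(&v);
        init_roots_terse(&v);
        return 0;
}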
From patchwork Fri Mar 5 01:10:48 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12117319
Date: Thu, 4 Mar 2021 17:10:48 -0800
Message-Id: <20210305011101.3597423-5-seanjc@google.com>
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 04/17] KVM: x86/mmu: Allocate the lm_root before allocating PAE roots
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Brijesh Singh, Tom Lendacky
X-Mailing-List: kvm@vger.kernel.org

Allocate lm_root before the PAE roots so that the PAE roots aren't
leaked if the memory allocation for the lm_root happens to fail.

Note, KVM can still leak PAE roots if mmu_check_root() fails on a
guest's PDPTR, or if mmu_alloc_root() fails due to MMU pages not being
available.  Those issues will be fixed in future commits.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 64 ++++++++++++++++++++----------------------
 1 file changed, 31 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c4f8e59f596c..7cb5fb5d2d4d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3308,21 +3308,38 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
          * the shadow page table may be a PAE or a long mode page table.
          */
         pm_mask = PT_PRESENT_MASK;
-        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
+        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL)
                 pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
 
-                /*
-                 * Allocate the page for the PDPTEs when shadowing 32-bit NPT
-                 * with 64-bit only when needed.  Unlike 32-bit NPT, it doesn't
-                 * need to be in low mem.  See also lm_root below.
-                 */
-                if (!mmu->pae_root) {
-                        WARN_ON_ONCE(!tdp_enabled);
+        /*
+         * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
+         * tables are allocated and initialized at root creation as there is no
+         * equivalent level in the guest's NPT to shadow.  Allocate the tables
+         * on demand, as running a 32-bit L1 VMM is very rare.  Unlike 32-bit
+         * NPT, the PDP table doesn't need to be in low mem.  Preallocate the
+         * pages so that the PAE roots aren't leaked on failure.
+         */
+        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL &&
+            (!mmu->pae_root || !mmu->lm_root)) {
+                u64 *lm_root, *pae_root;
 
-                        mmu->pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
-                        if (!mmu->pae_root)
-                                return -ENOMEM;
+                if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->lm_root))
+                        return -EIO;
+
+                pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+                if (!pae_root)
+                        return -ENOMEM;
+
+                lm_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+                if (!lm_root) {
+                        free_page((unsigned long)pae_root);
+                        return -ENOMEM;
                 }
+
+                mmu->pae_root = pae_root;
+                mmu->lm_root = lm_root;
+
+                lm_root[0] = __pa(mmu->pae_root) | pm_mask;
         }
 
         for (i = 0; i < 4; ++i) {
@@ -3344,30 +3361,11 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
                         return -ENOSPC;
                 mmu->pae_root[i] = root | pm_mask;
         }
-        mmu->root_hpa = __pa(mmu->pae_root);
-
-        /*
-         * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
-         * tables are allocated and initialized at MMU creation as there is no
-         * equivalent level in the guest's NPT to shadow.  Allocate the tables
-         * on demand, as running a 32-bit L1 VMM is very rare.  The PDP is
-         * handled above (to share logic with PAE), deal with the PML4 here.
-         */
-        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
-                if (mmu->lm_root == NULL) {
-                        u64 *lm_root;
-
-                        lm_root = (void*)get_zeroed_page(GFP_KERNEL_ACCOUNT);
-                        if (!lm_root)
-                                return -ENOMEM;
-
-                        lm_root[0] = __pa(mmu->pae_root) | pm_mask;
-
-                        mmu->lm_root = lm_root;
-                }
+        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL)
                 mmu->root_hpa = __pa(mmu->lm_root);
-        }
+        else
+                mmu->root_hpa = __pa(mmu->pae_root);
 
 set_root_pgd:
         mmu->root_pgd = root_pgd;
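[Aside: the ordering change follows the standard "allocate everything, then commit" idiom: grab both pages up front, free the first if the second allocation fails, and only assign them to the long-lived structure once both succeeded. A self-contained sketch of that idiom, as a user-space approximation rather than the kernel code:]

#include <errno.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct mmu {
        void *pae_root;
        void *lm_root;
};

static int alloc_special_roots(struct mmu *mmu)
{
        void *pae_root, *lm_root;

        if (mmu->pae_root && mmu->lm_root)
                return 0;       /* already set up */

        pae_root = calloc(1, PAGE_SIZE);
        if (!pae_root)
                return -ENOMEM;

        lm_root = calloc(1, PAGE_SIZE);
        if (!lm_root) {
                free(pae_root); /* don't leak the first page */
                return -ENOMEM;
        }

        /* Commit only after every allocation has succeeded. */
        mmu->pae_root = pae_root;
        mmu->lm_root = lm_root;
        return 0;
}

int main(void)
{
        struct mmu mmu = { 0 };

        return alloc_special_roots(&mmu) ? 1 : 0;
}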
From patchwork Fri Mar 5 01:10:49 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12117321
Date: Thu, 4 Mar 2021 17:10:49 -0800
Message-Id: <20210305011101.3597423-6-seanjc@google.com>
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 05/17] KVM: x86/mmu: Allocate pae_root and lm_root pages in dedicated helper
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Brijesh Singh, Tom Lendacky
X-Mailing-List: kvm@vger.kernel.org

Move the on-demand allocation of the pae_root and lm_root pages, used by
nested NPT for 32-bit L1s, into a separate helper.  This will allow a
future patch to hold mmu_lock while allocating the non-special roots so
that make_mmu_pages_available() can be checked once at the start of root
allocation, and thus avoid having to deal with failure in the middle of
root allocation.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 84 +++++++++++++++++++++++++++---------------
 1 file changed, 54 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7cb5fb5d2d4d..dd9d5cc13a46 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3308,38 +3308,10 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
          * the shadow page table may be a PAE or a long mode page table.
          */
         pm_mask = PT_PRESENT_MASK;
-        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL)
+        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
                 pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
 
-        /*
-         * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
-         * tables are allocated and initialized at root creation as there is no
-         * equivalent level in the guest's NPT to shadow.  Allocate the tables
-         * on demand, as running a 32-bit L1 VMM is very rare.  Unlike 32-bit
-         * NPT, the PDP table doesn't need to be in low mem.  Preallocate the
-         * pages so that the PAE roots aren't leaked on failure.
-         */
-        if (mmu->shadow_root_level == PT64_ROOT_4LEVEL &&
-            (!mmu->pae_root || !mmu->lm_root)) {
-                u64 *lm_root, *pae_root;
-
-                if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->lm_root))
-                        return -EIO;
-
-                pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
-                if (!pae_root)
-                        return -ENOMEM;
-
-                lm_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
-                if (!lm_root) {
-                        free_page((unsigned long)pae_root);
-                        return -ENOMEM;
-                }
-
-                mmu->pae_root = pae_root;
-                mmu->lm_root = lm_root;
-
-                lm_root[0] = __pa(mmu->pae_root) | pm_mask;
+                mmu->lm_root[0] = __pa(mmu->pae_root) | pm_mask;
         }
 
         for (i = 0; i < 4; ++i) {
@@ -3373,6 +3345,55 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
         return 0;
 }
 
+static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
+{
+        struct kvm_mmu *mmu = vcpu->arch.mmu;
+        u64 *lm_root, *pae_root;
+
+        /*
+         * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
+         * tables are allocated and initialized at root creation as there is no
+         * equivalent level in the guest's NPT to shadow.  Allocate the tables
+         * on demand, as running a 32-bit L1 VMM on 64-bit KVM is very rare.
+         */
+        if (mmu->direct_map || mmu->root_level >= PT64_ROOT_4LEVEL ||
+            mmu->shadow_root_level < PT64_ROOT_4LEVEL)
+                return 0;
+
+        /*
+         * This mess only works with 4-level paging and needs to be updated to
+         * work with 5-level paging.
+         */
+        if (WARN_ON_ONCE(mmu->shadow_root_level != PT64_ROOT_4LEVEL))
+                return -EIO;
+
+        if (mmu->pae_root && mmu->lm_root)
+                return 0;
+
+        /*
+         * The special roots should always be allocated in concert.  Yell and
+         * bail if KVM ends up in a state where only one of the roots is valid.
+         */
+        if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->lm_root))
+                return -EIO;
+
+        /* Unlike 32-bit NPT, the PDP table doesn't need to be in low mem. */
+        pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+        if (!pae_root)
+                return -ENOMEM;
+
+        lm_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+        if (!lm_root) {
+                free_page((unsigned long)pae_root);
+                return -ENOMEM;
+        }
+
+        mmu->pae_root = pae_root;
+        mmu->lm_root = lm_root;
+
+        return 0;
+}
+
 static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
 {
         if (vcpu->arch.mmu->direct_map)
@@ -4820,6 +4841,9 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
         int r;
 
         r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->direct_map);
+        if (r)
+                goto out;
+        r = mmu_alloc_special_roots(vcpu);
         if (r)
                 goto out;
         r = mmu_alloc_roots(vcpu);
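[Aside: structurally, the new helper is a chain of guard clauses: return 0 when there is nothing to do, warn and return -EIO when internal state is inconsistent, and only then attempt the allocations. A compact, self-contained illustration of that shape, with toy fields and WARN_ON_ONCE mimicked by a macro:]

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define WARN_ON_ONCE(cond) \
        ((cond) ? (fprintf(stderr, "WARN: %s\n", #cond), 1) : 0)

struct mmu {
        int direct_map;
        int needs_special_roots;
        void *pae_root;
        void *lm_root;
};

static int mmu_alloc_special_roots_sketch(struct mmu *mmu)
{
        /* Fast path: this MMU configuration doesn't need the extra tables. */
        if (mmu->direct_map || !mmu->needs_special_roots)
                return 0;

        /* Already allocated on a previous load, nothing to do. */
        if (mmu->pae_root && mmu->lm_root)
                return 0;

        /* The two roots are allocated in concert; half-set state is a bug. */
        if (WARN_ON_ONCE(mmu->pae_root || mmu->lm_root))
                return -EIO;

        mmu->pae_root = calloc(1, 4096);
        if (!mmu->pae_root)
                return -ENOMEM;

        mmu->lm_root = calloc(1, 4096);
        if (!mmu->lm_root) {
                free(mmu->pae_root);
                mmu->pae_root = NULL;
                return -ENOMEM;
        }

        return 0;
}

int main(void)
{
        struct mmu mmu = { .needs_special_roots = 1 };

        return mmu_alloc_special_roots_sketch(&mmu) ? 1 : 0;
}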
From patchwork Fri Mar 5 01:10:50 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12117323
Date: Thu, 4 Mar 2021 17:10:50 -0800
Message-Id: <20210305011101.3597423-7-seanjc@google.com>
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 06/17] KVM: x86/mmu: Ensure MMU pages are available when allocating roots
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Brijesh Singh, Tom Lendacky
X-Mailing-List: kvm@vger.kernel.org

Hold the mmu_lock for write for the entire duration of allocating and
initializing an MMU's roots.  This ensures there are MMU pages available
and thus prevents root allocations from failing.  That in turn fixes a
bug where KVM would fail to free valid PAE roots if one of the later
roots failed to allocate.

Add a comment to make_mmu_pages_available() to call out that the limit
is a soft limit, e.g. KVM will temporarily exceed the threshold if a
page fault allocates multiple shadow pages and there was only one page
"available".

Note, KVM _still_ leaks the PAE roots if the guest PDPTR checks fail.
This will be addressed in a future commit.

Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 50 +++++++++++++++-----------------
 arch/x86/kvm/mmu/tdp_mmu.c | 23 ++++--------------
 2 files changed, 25 insertions(+), 48 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dd9d5cc13a46..7ebfbc77b050 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2403,6 +2403,15 @@ static int make_mmu_pages_available(struct kvm_vcpu *vcpu)
                 kvm_mmu_zap_oldest_mmu_pages(vcpu->kvm, KVM_REFILL_PAGES - avail);
 
+        /*
+         * Note, this check is intentionally soft, it only guarantees that one
+         * page is available, while the caller may end up allocating as many as
+         * four pages, e.g. for PAE roots or for 5-level paging.  Temporarily
+         * exceeding the (arbitrary by default) limit will not harm the host,
+         * being too agressive may unnecessarily kill the guest, and getting an
+         * exact count is far more trouble than it's worth, especially in the
+         * page fault paths.
+         */
         if (!kvm_mmu_available_pages(vcpu->kvm))
                 return -ENOSPC;
         return 0;
@@ -3220,16 +3229,9 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
 {
         struct kvm_mmu_page *sp;
 
-        write_lock(&vcpu->kvm->mmu_lock);
-
-        if (make_mmu_pages_available(vcpu)) {
-                write_unlock(&vcpu->kvm->mmu_lock);
-                return INVALID_PAGE;
-        }
         sp = kvm_mmu_get_page(vcpu, gfn, gva, level, direct, ACC_ALL);
         ++sp->root_count;
 
-        write_unlock(&vcpu->kvm->mmu_lock);
         return __pa(sp->spt);
 }
 
@@ -3242,16 +3244,9 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
         if (is_tdp_mmu_enabled(vcpu->kvm)) {
                 root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
-
-                if (!VALID_PAGE(root))
-                        return -ENOSPC;
                 mmu->root_hpa = root;
         } else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
-                root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level,
-                                      true);
-
-                if (!VALID_PAGE(root))
-                        return -ENOSPC;
+                root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
                 mmu->root_hpa = root;
         } else if (shadow_root_level == PT32E_ROOT_LEVEL) {
                 for (i = 0; i < 4; ++i) {
@@ -3259,8 +3254,6 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
                         root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT),
                                               i << 30, PT32_ROOT_LEVEL, true);
-                        if (!VALID_PAGE(root))
-                                return -ENOSPC;
                         mmu->pae_root[i] = root | PT_PRESENT_MASK;
                 }
                 mmu->root_hpa = __pa(mmu->pae_root);
@@ -3296,8 +3289,6 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
                 root = mmu_alloc_root(vcpu, root_gfn, 0,
                                       mmu->shadow_root_level, false);
-                if (!VALID_PAGE(root))
-                        return -ENOSPC;
                 mmu->root_hpa = root;
                 goto set_root_pgd;
         }
@@ -3316,6 +3307,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
         for (i = 0; i < 4; ++i) {
                 MMU_WARN_ON(VALID_PAGE(mmu->pae_root[i]));
+
                 if (mmu->root_level == PT32E_ROOT_LEVEL) {
                         pdptr = mmu->get_pdptr(vcpu, i);
                         if (!(pdptr & PT_PRESENT_MASK)) {
@@ -3329,8 +3321,6 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
                 root = mmu_alloc_root(vcpu, root_gfn, i << 30,
                                       PT32_ROOT_LEVEL, false);
-                if (!VALID_PAGE(root))
-                        return -ENOSPC;
                 mmu->pae_root[i] = root | pm_mask;
         }
 
@@ -3394,14 +3384,6 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
         return 0;
 }
 
-static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
-{
-        if (vcpu->arch.mmu->direct_map)
-                return mmu_alloc_direct_roots(vcpu);
-        else
-                return mmu_alloc_shadow_roots(vcpu);
-}
-
 void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 {
         int i;
@@ -4846,7 +4828,15 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
         r = mmu_alloc_special_roots(vcpu);
         if (r)
                 goto out;
-        r = mmu_alloc_roots(vcpu);
+        write_lock(&vcpu->kvm->mmu_lock);
+        if (make_mmu_pages_available(vcpu))
+                r = -ENOSPC;
+        else if (vcpu->arch.mmu->direct_map)
+                r = mmu_alloc_direct_roots(vcpu);
+        else
+                r = mmu_alloc_shadow_roots(vcpu);
+        write_unlock(&vcpu->kvm->mmu_lock);
+
         kvm_mmu_sync_roots(vcpu);
         if (r)
                 goto out;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 70226e0875fe..50ef757c5586 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -137,22 +137,21 @@ static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn,
         return sp;
 }
 
-static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 {
         union kvm_mmu_page_role role;
         struct kvm *kvm = vcpu->kvm;
         struct kvm_mmu_page *root;
 
+        lockdep_assert_held_write(&kvm->mmu_lock);
+
         role = page_role_for_level(vcpu, vcpu->arch.mmu->shadow_root_level);
 
-        write_lock(&kvm->mmu_lock);
-
         /* Check for an existing root before allocating a new one. */
         for_each_tdp_mmu_root(kvm, root) {
                 if (root->role.word == role.word) {
                         kvm_mmu_get_root(kvm, root);
-                        write_unlock(&kvm->mmu_lock);
-                        return root;
+                        goto out;
                 }
         }
 
@@ -161,19 +160,7 @@ static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
 
         list_add(&root->link, &kvm->arch.tdp_mmu_roots);
 
-        write_unlock(&kvm->mmu_lock);
-
-        return root;
-}
-
-hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
-{
-        struct kvm_mmu_page *root;
-
-        root = get_tdp_mmu_vcpu_root(vcpu);
-        if (!root)
-                return INVALID_PAGE;
-
+out:
         return __pa(root->spt);
 }
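[Aside: the locking change follows a common refactoring: instead of each helper taking and dropping the lock around its own allocation, the caller takes the lock once around the whole root-allocation sequence and the helpers merely assert that it is held (lockdep_assert_held_write() in the kernel). A rough pthread-based analogue, toy names throughout:]

#include <assert.h>
#include <pthread.h>

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
static int locked;      /* stand-in for lockdep's held-lock tracking */

/* Callee no longer locks; it just insists the caller already did. */
static unsigned long alloc_root_locked(void)
{
        assert(locked);         /* ~ lockdep_assert_held_write(&mmu_lock) */
        return 0x1000;          /* pretend this is a freshly allocated root */
}

static int load_roots(void)
{
        unsigned long root;

        pthread_mutex_lock(&mmu_lock);
        locked = 1;

        /* Check capacity once up front, then the allocations below can't fail. */
        root = alloc_root_locked();

        locked = 0;
        pthread_mutex_unlock(&mmu_lock);
        return root ? 0 : -1;
}

int main(void)
{
        return load_roots();
}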
From patchwork Fri Mar 5 01:10:51 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12117325
Date: Thu, 4 Mar 2021 17:10:51 -0800
Message-Id: <20210305011101.3597423-8-seanjc@google.com>
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 07/17] KVM: x86/mmu: Check PDPTRs before allocating PAE roots
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Brijesh Singh, Tom Lendacky
X-Mailing-List: kvm@vger.kernel.org

Check the validity of the PDPTRs before allocating any of the PAE roots,
otherwise a bad PDPTR will cause KVM to leak any previously allocated
roots.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7ebfbc77b050..9fc2b46f8541 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3269,7 +3269,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 {
         struct kvm_mmu *mmu = vcpu->arch.mmu;
-        u64 pdptr, pm_mask;
+        u64 pdptrs[4], pm_mask;
         gfn_t root_gfn, root_pgd;
         hpa_t root;
         int i;
@@ -3280,6 +3280,17 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
         if (mmu_check_root(vcpu, root_gfn))
                 return 1;
 
+        if (mmu->root_level == PT32E_ROOT_LEVEL) {
+                for (i = 0; i < 4; ++i) {
+                        pdptrs[i] = mmu->get_pdptr(vcpu, i);
+                        if (!(pdptrs[i] & PT_PRESENT_MASK))
+                                continue;
+
+                        if (mmu_check_root(vcpu, pdptrs[i] >> PAGE_SHIFT))
+                                return 1;
+                }
+        }
+
         /*
          * Do we shadow a long mode page table? If so we need to
          * write-protect the guests page table root.
@@ -3309,14 +3320,11 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
                 MMU_WARN_ON(VALID_PAGE(mmu->pae_root[i]));
 
                 if (mmu->root_level == PT32E_ROOT_LEVEL) {
-                        pdptr = mmu->get_pdptr(vcpu, i);
-                        if (!(pdptr & PT_PRESENT_MASK)) {
+                        if (!(pdptrs[i] & PT_PRESENT_MASK)) {
                                 mmu->pae_root[i] = 0;
                                 continue;
                         }
-                        root_gfn = pdptr >> PAGE_SHIFT;
-                        if (mmu_check_root(vcpu, root_gfn))
-                                return 1;
+                        root_gfn = pdptrs[i] >> PAGE_SHIFT;
                 }
 
                 root = mmu_alloc_root(vcpu, root_gfn, i << 30,
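[Aside: the fix is the usual "validate inputs first, allocate second" ordering, so a bad entry can no longer strand resources that were already allocated for earlier, good entries. A small stand-alone illustration with simplified checks, not KVM's:]

#include <errno.h>
#include <stdlib.h>

#define PRESENT 0x1ULL

static int entry_is_sane(unsigned long long e)
{
        /* Placeholder validity check standing in for mmu_check_root(). */
        return (e >> 12) < 0x100000;
}

static int build_roots(const unsigned long long pdptrs[4], void *roots[4])
{
        int i;

        /* Pass 1: reject bad input before touching any resources. */
        for (i = 0; i < 4; i++) {
                if ((pdptrs[i] & PRESENT) && !entry_is_sane(pdptrs[i]))
                        return -EINVAL;
        }

        /* Pass 2: allocate, unwinding if an allocation itself fails. */
        for (i = 0; i < 4; i++) {
                roots[i] = (pdptrs[i] & PRESENT) ? calloc(1, 4096) : NULL;
                if ((pdptrs[i] & PRESENT) && !roots[i]) {
                        while (i--)
                                free(roots[i]);
                        return -ENOMEM;
                }
        }
        return 0;
}

int main(void)
{
        unsigned long long pdptrs[4] = { PRESENT, 0, PRESENT, 0 };
        void *roots[4];

        return build_roots(pdptrs, roots) ? 1 : 0;
}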
From patchwork Fri Mar 5 01:10:52 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12117333
Date: Thu, 4 Mar 2021 17:10:52 -0800
Message-Id: <20210305011101.3597423-9-seanjc@google.com>
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 08/17] KVM: x86/mmu: Fix and unconditionally enable WARNs to detect PAE leaks
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Brijesh Singh, Tom Lendacky
X-Mailing-List: kvm@vger.kernel.org

Exempt NULL PAE roots from the check to detect leaks, since
kvm_mmu_free_roots() doesn't set them back to INVALID_PAGE.  Stop hiding
the WARNs to detect PAE root leaks behind MMU_WARN_ON, the hidden WARNs
obviously didn't do their job given the hilarious number of bugs that
could lead to PAE roots being leaked, not to mention the above false
positive.

Opportunistically delete a warning on root_hpa being valid, there's
nothing special about 4/5-level shadow pages that warrants a WARN.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9fc2b46f8541..b82c1b0d6d6e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3250,7 +3250,8 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
                 mmu->root_hpa = root;
         } else if (shadow_root_level == PT32E_ROOT_LEVEL) {
                 for (i = 0; i < 4; ++i) {
-                        MMU_WARN_ON(VALID_PAGE(mmu->pae_root[i]));
+                        WARN_ON_ONCE(mmu->pae_root[i] &&
+                                     VALID_PAGE(mmu->pae_root[i]));
 
                         root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT),
                                               i << 30, PT32_ROOT_LEVEL, true);
@@ -3296,8 +3297,6 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
          * write-protect the guests page table root.
*/ if (mmu->root_level >= PT64_ROOT_4LEVEL) { - MMU_WARN_ON(VALID_PAGE(mmu->root_hpa)); - root = mmu_alloc_root(vcpu, root_gfn, 0, mmu->shadow_root_level, false); mmu->root_hpa = root; @@ -3317,7 +3316,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) } for (i = 0; i < 4; ++i) { - MMU_WARN_ON(VALID_PAGE(mmu->pae_root[i])); + WARN_ON_ONCE(mmu->pae_root[i] && VALID_PAGE(mmu->pae_root[i])); if (mmu->root_level == PT32E_ROOT_LEVEL) { if (!(pdptrs[i] & PT_PRESENT_MASK)) { From patchwork Fri Mar 5 01:10:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12117331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DF9C4C43381 for ; Fri, 5 Mar 2021 01:11:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B2D3A64FEE for ; Fri, 5 Mar 2021 01:11:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230406AbhCEBLi (ORCPT ); Thu, 4 Mar 2021 20:11:38 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60948 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230107AbhCEBL2 (ORCPT ); Thu, 4 Mar 2021 20:11:28 -0500 Received: from mail-qt1-x849.google.com (mail-qt1-x849.google.com [IPv6:2607:f8b0:4864:20::849]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DDA4AC0613D7 for ; Thu, 4 Mar 2021 17:11:27 -0800 (PST) Received: by mail-qt1-x849.google.com with SMTP id h13so250357qti.21 for ; Thu, 04 Mar 2021 17:11:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=70+pbfsr9FN7xUzI6QuNzJ56yN8wNBmmQK3zMq55CLc=; b=pmvh31yoJyb4qrUv2ew3hjgo7ixYIuYizEYdNw1S8Gq+ecXPDzJKFHb3LM+e/2mqlG fSdsnhAOdSBYIPi4Yn9DL7cP7LY+onbUmEF6zZ2YIQba5+ujrgJc2v0BCLWm6SwvtX9m pkB/+anEXQ8lTyWLQ6E3g+cS4mTKDg+RYpwSnWSciyGNxDOeKaoU0RjNRc+d06+gT1b3 1FxItGOw2GqdRP4E4UcBo0GbAbvV4Y0psgkuzUalH67kDa0N08Y/ZgiEem2gbWboUZBr AdGwoXl3tVta/PsBOVFM/z/pfBU6r8PAXyUbeMVY8rNmrNrF9Z/EW8BL/MT9Sv4o9WyC 8ZZw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=70+pbfsr9FN7xUzI6QuNzJ56yN8wNBmmQK3zMq55CLc=; b=f1iRbA5xORgT+oAI+W+ShcHAwetbl/MfCMwlwLbodM/71UA0/AEsWwUr/aa1E/7cs1 4VR2gQAJYB3nw9Dj27B7u+uQe3qlYTkWZvj2dNT9FXyJM5OBlWv/pBRmlYGcxTLj5W8C vdDsA1K/j3hOi+J4UVFpsH5u4Rhih4hjBZvP5fFkhBZiHLY/TT5acM1E6+KrxBFt0qUB h71wHY8e27mSFWt+7Fnv0sD9mCrwVzPP778UVbmcDWqy9IIy7P21JDwACKlcO85ixfRS Q3PHhAIbhlwfCM2qbsz7KzAGUi+NEEr6bOWJzJ7AduJ7cgSdxKG12rGQVjj70HIBOKpt ALzA== X-Gm-Message-State: AOAM531v85qwQ1UORLkkrur2QFjUbwAjfi+c1d/bETS96RYs2JuHvRNK AdYSobaJK4AzkVHeovpVmt+EboyYRPU= X-Google-Smtp-Source: ABdhPJw0wUHUyLIyV84zNUK5PAtf+fBteAA09FyMWMOjiUWzcaumAVi1oNZhRU3hAx+V+kxeA4J903pJaRo= Sender: "seanjc via sendgmr" X-Received: from 
seanjc798194.pdx.corp.google.com ([2620:15c:f:10:9857:be95:97a2:e91c]) (user=seanjc job=sendgmr) by 2002:a0c:ef11:: with SMTP id t17mr6672405qvr.21.1614906687093; Thu, 04 Mar 2021 17:11:27 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 4 Mar 2021 17:10:53 -0800 In-Reply-To: <20210305011101.3597423-1-seanjc@google.com> Message-Id: <20210305011101.3597423-10-seanjc@google.com> Mime-Version: 1.0 References: <20210305011101.3597423-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH v2 09/17] KVM: x86/mmu: Use '0' as the one and only value for an invalid PAE root From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon , Brijesh Singh , Tom Lendacky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use '0' to denote an invalid pae_root instead of '0' or INVALID_PAGE. Unlike root_hpa, the pae_roots hold permission bits and thus are guaranteed to be non-zero. Having to deal with both values leads to bugs, e.g. failing to set back to INVALID_PAGE, warning on the wrong value, etc... Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b82c1b0d6d6e..dbf7f0395e4b 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3197,11 +3197,14 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) { mmu_free_root_page(kvm, &mmu->root_hpa, &invalid_list); } else if (mmu->pae_root) { - for (i = 0; i < 4; ++i) - if (mmu->pae_root[i] != 0) - mmu_free_root_page(kvm, - &mmu->pae_root[i], - &invalid_list); + for (i = 0; i < 4; ++i) { + if (!mmu->pae_root[i]) + continue; + + mmu_free_root_page(kvm, &mmu->pae_root[i], + &invalid_list); + mmu->pae_root[i] = 0; + } } mmu->root_hpa = INVALID_PAGE; mmu->root_pgd = 0; @@ -3250,8 +3253,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) mmu->root_hpa = root; } else if (shadow_root_level == PT32E_ROOT_LEVEL) { for (i = 0; i < 4; ++i) { - WARN_ON_ONCE(mmu->pae_root[i] && - VALID_PAGE(mmu->pae_root[i])); + WARN_ON_ONCE(mmu->pae_root[i]); root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT), i << 30, PT32_ROOT_LEVEL, true); @@ -3316,7 +3318,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) } for (i = 0; i < 4; ++i) { - WARN_ON_ONCE(mmu->pae_root[i] && VALID_PAGE(mmu->pae_root[i])); + WARN_ON_ONCE(mmu->pae_root[i]); if (mmu->root_level == PT32E_ROOT_LEVEL) { if (!(pdptrs[i] & PT_PRESENT_MASK)) { @@ -3438,7 +3440,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu) for (i = 0; i < 4; ++i) { hpa_t root = vcpu->arch.mmu->pae_root[i]; - if (root && VALID_PAGE(root)) { + if (root && !WARN_ON_ONCE(!VALID_PAGE(root))) { root &= PT64_BASE_ADDR_MASK; sp = to_shadow_page(root); mmu_sync_children(vcpu, sp); @@ -5296,7 +5298,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu) mmu->pae_root = page_address(page); for (i = 0; i < 4; ++i) - mmu->pae_root[i] = INVALID_PAGE; + mmu->pae_root[i] = 0; return 0; } From patchwork Fri Mar 5 01:10:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12117327 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org 
X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4AD03C433E0 for ; Fri, 5 Mar 2021 01:11:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 25FF465016 for ; Fri, 5 Mar 2021 01:11:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230134AbhCEBLd (ORCPT ); Thu, 4 Mar 2021 20:11:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60922 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230211AbhCEBLa (ORCPT ); Thu, 4 Mar 2021 20:11:30 -0500 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 31966C061756 for ; Thu, 4 Mar 2021 17:11:30 -0800 (PST) Received: by mail-yb1-xb4a.google.com with SMTP id v62so660711ybb.15 for ; Thu, 04 Mar 2021 17:11:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=n1g7YcfBRqOyuIXQ+DlBk/rdIXAZ4MTBjh9mK1MAaXk=; b=LDFQJveapvxPIH8Hc01b156PY3r2JYw4k5ngF4S8WpkiyW1fiJXfMwTHSjeIJ1dAms f1Qwk0mF3tIAx4iBI9PnlYNVQr4ypv/40b5dbfPwpD92HUUq7fo4WOz345/HU+tDgdut budOqXEnUTo6uW/OGFgHRI5PwPzBCEx46YugRPKfO6QCOGsyK7Cp6zRYZ6WCrYsRvEPp da/4mqv1rdMjid+3mXZfuXYK8IK2eB0s6L/A3XVBNFemGAdAzHV6C+T9WCyumhypTdlq vwrjeDg7U3Qe3X12eCPsvQp5Xg5Asc5mzO0ibxXxtzvlxjQs2/4+OVYWgsjf9MCmXkeM RGAQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=n1g7YcfBRqOyuIXQ+DlBk/rdIXAZ4MTBjh9mK1MAaXk=; b=qfcg/bzzbv1TSbe5TQ8C9IcTP/CXpVX6TSqTMtM4P9xxdUXlpliy1sdJ1e6y9tMcj9 NNPYrB5T0/OnFVMIbFG1ZdGP0zIMtw+ZMyUZflI7JZjfcDa2X30Zpk+wzYGMFW5r2lj+ ClJY6+wqfF/PNidmCpEvutCVhf6+SEVX7NjwHEG3FCqITl0Gw/AYi5rXrzyv83MAJ85F RK9FG4fpDG9JzdLdx4goYmpmjq5MyILq/xA2TCkRw0R9LR83IotJ0QSrm7CJnzUZY/SW nvwyBapqZrDac5cZ258v0g6Ijy7LiWN3gQTXVES+w3l7WsIeyUtgEdyhlHTjl+1xwfqf wONw== X-Gm-Message-State: AOAM530c3qzcO0PMn79jzywscrRSNtuhtAkayS4FNoVTzt3aQvueifJ4 z6vgd/YcP5sRYX8WEVunLp6+VuMjVJQ= X-Google-Smtp-Source: ABdhPJxgkDxhai+YpmKJqz17AH7bQtZteIjFGAKP9YfF8FUCjyTPQoPpaAozUutt5kH0YAiYyfyaTn+NGUE= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:9857:be95:97a2:e91c]) (user=seanjc job=sendgmr) by 2002:a25:9706:: with SMTP id d6mr10302184ybo.139.1614906689458; Thu, 04 Mar 2021 17:11:29 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 4 Mar 2021 17:10:54 -0800 In-Reply-To: <20210305011101.3597423-1-seanjc@google.com> Message-Id: <20210305011101.3597423-11-seanjc@google.com> Mime-Version: 1.0 References: <20210305011101.3597423-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH v2 10/17] KVM: x86/mmu: Set the C-bit in the PDPTRs and LM pseudo-PDPTRs From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon , 
Brijesh Singh , Tom Lendacky
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

Set the C-bit in SPTEs that are set outside of the normal MMU flows,
specifically the PDPTRs and the handful of special-cased "LM root"
entries, all of which are shadow paging only.

Note, the direct-mapped-root PDPTR handling is needed for the scenario
where paging is disabled in the guest, in which case KVM uses a direct
mapped MMU even though TDP is disabled.

Fixes: d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")
Cc: stable@vger.kernel.org
Cc: Brijesh Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dbf7f0395e4b..09310c35fcf4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3257,7 +3257,8 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 			root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT),
 					      i << 30, PT32_ROOT_LEVEL, true);
-			mmu->pae_root[i] = root | PT_PRESENT_MASK;
+			mmu->pae_root[i] = root | PT_PRESENT_MASK |
+					   shadow_me_mask;
 		}
 		mmu->root_hpa = __pa(mmu->pae_root);
 	} else
@@ -3310,7 +3311,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	 * or a PAE 3-level page table. In either case we need to be aware that
 	 * the shadow page table may be a PAE or a long mode page table.
 	 */
-	pm_mask = PT_PRESENT_MASK;
+	pm_mask = PT_PRESENT_MASK | shadow_me_mask;
 	if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
 		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;

From patchwork Fri Mar 5 01:10:55 2021
X-Patchwork-Id: 12117329
Date: Thu, 4 Mar 2021 17:10:55 -0800
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Message-Id: <20210305011101.3597423-12-seanjc@google.com>
Subject: [PATCH v2 11/17] KVM: x86/mmu: Mark the PAE roots as decrypted for shadow paging
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson ,
 Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon , Brijesh Singh , Tom Lendacky
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

Mark the PAE roots used for shadow paging as decrypted to play nice
with SME. Explicitly skip setting the C-bit when loading CR3 for PAE
shadow paging, even though it's completely ignored by the CPU. The
extra documentation is nice to have.

Note, there are several subtleties at play with NPT. In addition to
legacy shadow paging, the PAE roots are used for SVM's NPT when either
KVM is 32-bit (uses PAE paging) or KVM is 64-bit and shadowing 32-bit
NPT. However, 32-bit Linux, and thus KVM, doesn't support SME. And
64-bit KVM can happily set the C-bit in CR3. This also means that
keeping __sme_set(root) for 32-bit KVM when NPT is enabled is
conceptually wrong, but functionally ok since SME is 64-bit only.
Leave it as is to avoid unnecessary pollution.
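For illustration only, a minimal sketch of the allocate-then-decrypt
pattern this change relies on; it assumes the kernel's alloc_page(),
page_address() and set_memory_decrypted() helpers and KVM's tdp_enabled
flag, and the helper name below is made up, it is not the applied hunk:

static int mmu_alloc_pae_root_sketch(struct kvm_mmu *mmu)
{
	struct page *page;

	/* One zeroed page holds the four PAE PDPTEs. */
	page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	if (!page)
		return -ENOMEM;

	mmu->pae_root = page_address(page);

	/*
	 * With shadow paging, CR3 is a 32-bit PAE pointer and cannot carry
	 * the C-bit, so the CPU walks the PDPTEs unencrypted; decrypt the
	 * backing page so KVM's writes and the CPU's reads agree.
	 */
	if (!tdp_enabled)
		set_memory_decrypted((unsigned long)mmu->pae_root, 1);

	return 0;
}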
Fixes: d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM") Cc: stable@vger.kernel.org Cc: Brijesh Singh Cc: Tom Lendacky Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 24 +++++++++++++++++++++++- arch/x86/kvm/svm/svm.c | 7 +++++-- 2 files changed, 28 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 09310c35fcf4..fa1aca21f6eb 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -48,6 +48,7 @@ #include #include #include +#include #include #include #include "trace.h" @@ -3377,7 +3378,10 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu) if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->lm_root)) return -EIO; - /* Unlike 32-bit NPT, the PDP table doesn't need to be in low mem. */ + /* + * Unlike 32-bit NPT, the PDP table doesn't need to be in low mem, and + * doesn't need to be decrypted. + */ pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT); if (!pae_root) return -ENOMEM; @@ -5264,6 +5268,8 @@ slot_handle_leaf(struct kvm *kvm, struct kvm_memory_slot *memslot, static void free_mmu_pages(struct kvm_mmu *mmu) { + if (!tdp_enabled && mmu->pae_root) + set_memory_encrypted((unsigned long)mmu->pae_root, 1); free_page((unsigned long)mmu->pae_root); free_page((unsigned long)mmu->lm_root); } @@ -5301,6 +5307,22 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu) for (i = 0; i < 4; ++i) mmu->pae_root[i] = 0; + /* + * CR3 is only 32 bits when PAE paging is used, thus it's impossible to + * get the CPU to treat the PDPTEs as encrypted. Decrypt the page so + * that KVM's writes and the CPU's reads get along. Note, this is + * only necessary when using shadow paging, as 64-bit NPT can get at + * the C-bit even when shadowing 32-bit NPT, and SME isn't supported + * by 32-bit kernels (when KVM itself uses 32-bit NPT). + */ + if (!tdp_enabled) + set_memory_decrypted((unsigned long)mmu->pae_root, 1); + else + WARN_ON_ONCE(shadow_me_mask); + + for (i = 0; i < 4; ++i) + mmu->pae_root[i] = 0; + return 0; } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 54610270f66a..4769cf8bf2fd 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3908,15 +3908,18 @@ static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root, struct vcpu_svm *svm = to_svm(vcpu); unsigned long cr3; - cr3 = __sme_set(root); if (npt_enabled) { - svm->vmcb->control.nested_cr3 = cr3; + svm->vmcb->control.nested_cr3 = __sme_set(root); vmcb_mark_dirty(svm->vmcb, VMCB_NPT); /* Loading L2's CR3 is handled by enter_svm_guest_mode. 
*/ if (!test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail)) return; cr3 = vcpu->arch.cr3; + } else if (vcpu->arch.mmu->shadow_root_level >= PT64_ROOT_4LEVEL) { + cr3 = __sme_set(root); + } else { + cr3 = root; } svm->vmcb->save.cr3 = cr3; From patchwork Fri Mar 5 01:10:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12117335 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9E4ADC433E0 for ; Fri, 5 Mar 2021 01:11:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7A21764F67 for ; Fri, 5 Mar 2021 01:11:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230437AbhCEBLn (ORCPT ); Thu, 4 Mar 2021 20:11:43 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60974 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230140AbhCEBLf (ORCPT ); Thu, 4 Mar 2021 20:11:35 -0500 Received: from mail-qt1-x84a.google.com (mail-qt1-x84a.google.com [IPv6:2607:f8b0:4864:20::84a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BEDE4C061760 for ; Thu, 4 Mar 2021 17:11:34 -0800 (PST) Received: by mail-qt1-x84a.google.com with SMTP id i16so258640qtv.18 for ; Thu, 04 Mar 2021 17:11:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=UoSJXw/PVEOjBFScqgNo0bQzSNuTFSWEQtCbZglmTq0=; b=kQAMJObEqqL+h/6HsZlsplPO1BCJpjh9pP+48cHLs3OobIOB6CgTtrh6EZUMu3bXFi SHbHfRmqn8dUDqWqCLuyGL1qpn/iSCISWig3ZEmUqIOSOJf84eA7mq/rxUqwRENuEaS1 bMeL/YM/DTmkCvKXJsxa8oH6zJQE+GEHRs++0WStbmPTpCCY4ix/suKB5VIiC8F2mFfm gVvvGp1a4BcXKYtQE/1E+t2IlypiBh1StC6Fx3FSEBYdV4TrwK8+IM67QaO/+AmjsMYu 7w6gmYHZ7zvChYP1fmGyniTRJ06L8LCTWj1APum5SrUWMTrUkp5ID1aQAB1TXbCAYsLS /bOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=UoSJXw/PVEOjBFScqgNo0bQzSNuTFSWEQtCbZglmTq0=; b=uC/JzvwMZ7vyqjUaG9sGztxKHYEtsoZXTvO6btSSuEMq29ky7AQuArphmWifalyuL2 9/Cn7AinEIdC3qbwPrCy9tJc3U0UCwoSPfsOJXfh6A5odcDv1ib2AdrurT4JsL7GUfgD yqoXcCRcELQpIH/wJuHgwgPiXR664myGJPS50Qhi8JYL6m22CZRXlKHF0ypLh12kX9GL ahqB5JU88wOwlRk12zHCjbfrqMmkFL0y8earRtMTZ5UpTEznHPRBGXnVJvbpQkJ0O3i9 ZwWre1LKbZFKb2dMIBFbBxWbWrEXY49NATA7u3rVLYVKrdAYNQRMgMtNOfmHK61DR6sK 6fvA== X-Gm-Message-State: AOAM5317r1YAB+hDi3pRMcDPHg+vF/7WXCu2MXoBINFbxUQvyBFZfONb XBteFCgkbKfOSP1KEle2E8Fyh8mppm4= X-Google-Smtp-Source: ABdhPJzYlWm8aGDZV4ZezLV91M6AacXjOom4mKjt8Lg/YvFhNfe1l4qG1CS/p0mg+dc/jjRPcvsiw64XfYE= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:9857:be95:97a2:e91c]) (user=seanjc job=sendgmr) by 2002:a0c:e5c9:: with SMTP id u9mr6738397qvm.55.1614906693998; Thu, 04 Mar 2021 17:11:33 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 4 
Mar 2021 17:10:56 -0800
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Message-Id: <20210305011101.3597423-13-seanjc@google.com>
Subject: [PATCH v2 12/17] KVM: SVM: Don't strip the C-bit from CR2 on #PF interception
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson ,
 Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon , Brijesh Singh , Tom Lendacky
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

Don't strip the C-bit from the faulting address on an intercepted #PF;
the address is a virtual address, not a physical address.

Fixes: 0ede79e13224 ("KVM: SVM: Clear C-bit from the page fault address")
Cc: stable@vger.kernel.org
Cc: Brijesh Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4769cf8bf2fd..dfc8fe231e8b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1907,7 +1907,7 @@ static int pf_interception(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);

-	u64 fault_address = __sme_clr(svm->vmcb->control.exit_info_2);
+	u64 fault_address = svm->vmcb->control.exit_info_2;
 	u64 error_code = svm->vmcb->control.exit_info_1;

 	return kvm_handle_page_fault(vcpu, error_code, fault_address,

From patchwork Fri Mar 5 01:10:57 2021
X-Patchwork-Id: 12117337
Date: Thu, 4 Mar 2021 17:10:57 -0800
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Message-Id: <20210305011101.3597423-14-seanjc@google.com>
Subject: [PATCH v2 13/17] KVM: nVMX: Defer the MMU reload to the normal path on an EPTP switch
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson ,
 Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon , Brijesh Singh , Tom Lendacky
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

Defer reloading the MMU after a successful EPTP switch. The VMFUNC
instruction itself is executed in the previous EPTP context, so any
side effects, e.g. updating RIP, should occur in the old context.
Practically speaking, this bug is benign as VMX doesn't touch the MMU
when skipping an emulated instruction, nor does queuing a single-step
#DB. No other post-switch side effects exist.

Fixes: 41ab93727467 ("KVM: nVMX: Emulate EPTP switching for the L1 hypervisor")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index fdd80dd8e781..81f609886c8b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5473,16 +5473,11 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
 		if (!nested_vmx_check_eptp(vcpu, new_eptp))
 			return 1;

-		kvm_mmu_unload(vcpu);
 		mmu->ept_ad = accessed_dirty;
 		mmu->mmu_role.base.ad_disabled = !accessed_dirty;
 		vmcs12->ept_pointer = new_eptp;
-		/*
-		 * TODO: Check what's the correct approach in case
-		 * mmu reload fails.
Currently, we just let the next - * reload potentially fail - */ - kvm_mmu_reload(vcpu); + + kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu); } return 0; From patchwork Fri Mar 5 01:10:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12117339 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 79613C433E0 for ; Fri, 5 Mar 2021 01:11:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 52CAB64FEE for ; Fri, 5 Mar 2021 01:11:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230458AbhCEBLq (ORCPT ); Thu, 4 Mar 2021 20:11:46 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32770 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230423AbhCEBLk (ORCPT ); Thu, 4 Mar 2021 20:11:40 -0500 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3CBBC061756 for ; Thu, 4 Mar 2021 17:11:39 -0800 (PST) Received: by mail-yb1-xb49.google.com with SMTP id 6so687314ybq.7 for ; Thu, 04 Mar 2021 17:11:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=hUIu7sVBeZPHOKAKqkQEBvI6/Cyva1cSCDUAiAjyS7A=; b=q44fMswbrj4G/uOTNQh4DPWIDZUCkPsS8SfokrIsEzd0Sh05bH4+ukcQa7cf2hczmN C/MB7qk27lN9hdkKAbP9QsSqjdRsL5EsW5YiYEO9JBT5zrJPm+ncTeqILqU4mY5OAqiA Z1QZjc0Y4/+T/PYnn2Psx8raSFMpDjxblzN0LlSvniDL993Ft2sUPyzEn/5S8JcImmO5 GZSHCTH425OhbVAN9Pymg0/JOSZ6f7l3ACzTUqeZvEQnGcP0mD+ONHuYhCn2Oc9plOr+ rlUEOxEiaSA/PSvGWwNVYZWTsHxyT8ktjyCSccwvtWx9AMhzcrqMecN00hy+i3TwDTFw z/1g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=hUIu7sVBeZPHOKAKqkQEBvI6/Cyva1cSCDUAiAjyS7A=; b=nwqOvpn2Fzp6ZG9hcdfT6e8r6nEqNGHUhZ+fkjF9mxgOZWQw7veMDGYJfW9mE7ikWe UQZxAC9IsaUoQlRX6LnR77O60FkN2H2JFq/RNlyg9uDmlb+OLxjOoPZ42WEWlgH8zXPk 4p04v/yb+6OPsp4yDx3D7Q6O4K9TvklVDHrpueP+nR90jDq6dfXT/lGuJP4cVgPqSXuc UrucTIZAxQViytLMt/N/9597WUModzfoICNx0J19dSemOJA6VN7o3znPyFBpkDh1wSjl adi78cdEi98bpoUD4Zx+A8rpUhyU8bewkuXE8WTCqivwGrzO6wU6NS0sZIKmr3qSDplo idhw== X-Gm-Message-State: AOAM5320yo6F32Xj97WvXjKw8TCKuRwJJijSBMbcj7zk4T3UNa9d2Dn1 UqcArhp/Q/dOXcz4GGV0ON2ouAeSdHQ= X-Google-Smtp-Source: ABdhPJxKz/X1UR84yOmR5Dk4kEeRWgaX6x8vaKIwjASikS9HDq95CD7YKEeHICZbOpRMX3tDki/GnFoYwnw= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:9857:be95:97a2:e91c]) (user=seanjc job=sendgmr) by 2002:a25:254a:: with SMTP id l71mr10220487ybl.125.1614906698911; Thu, 04 Mar 2021 17:11:38 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 4 Mar 2021 17:10:58 -0800 In-Reply-To: <20210305011101.3597423-1-seanjc@google.com> Message-Id: 
<20210305011101.3597423-15-seanjc@google.com>
References: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 14/17] KVM: x86: Defer the MMU unload to the normal path on a global INVPCID
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson ,
 Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon , Brijesh Singh , Tom Lendacky
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

Defer unloading the MMU after an INVPCID until the instruction
emulation has completed, i.e. until after RIP has been updated.

On VMX, this is a benign bug as VMX doesn't touch the MMU when skipping
an emulated instruction. However, on SVM, if nrip is disabled, the
emulator is used to skip an instruction, which would lead to fireworks
if the emulator were invoked without a valid MMU.

Fixes: eb4b248e152d ("kvm: vmx: Support INVPCID in shadow paging mode")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 828de7d65074..7b0adebec1ef 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11531,7 +11531,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
 		fallthrough;
 	case INVPCID_TYPE_ALL_INCL_GLOBAL:
-		kvm_mmu_unload(vcpu);
+		kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
 		return kvm_skip_emulated_instruction(vcpu);

 	default:

From patchwork Fri Mar 5 01:10:59 2021
X-Patchwork-Id: 12117343
Date: Thu, 4 Mar 2021 17:10:59 -0800
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Message-Id: <20210305011101.3597423-16-seanjc@google.com>
Subject: [PATCH v2 15/17] KVM: x86/mmu: Unexport MMU load/unload functions
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson ,
 Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon , Brijesh Singh , Tom Lendacky
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

Unexport the MMU load and unload helpers now that they are no longer
used (incorrectly) in vendor code.

Opportunistically move the kvm_mmu_sync_roots() declaration into mmu.h;
it should not be exposed to vendor code.

No functional change intended.
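For reference, a rough sketch of what the x86-private mmu.h carries
after this change, mirroring the diff below; the kvm_mmu_reload()
inline is the pre-existing wrapper, reproduced only to show why
kvm_mmu_load() must stay visible to mmu.h users:

int kvm_mmu_load(struct kvm_vcpu *vcpu);
void kvm_mmu_unload(struct kvm_vcpu *vcpu);
void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu);

static inline int kvm_mmu_reload(struct kvm_vcpu *vcpu)
{
	/* Reload only when the current root has been invalidated. */
	if (likely(vcpu->arch.mmu->root_hpa != INVALID_PAGE))
		return 0;

	return kvm_mmu_load(vcpu);
}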
Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm_host.h | 3 --- arch/x86/kvm/mmu.h | 4 ++++ arch/x86/kvm/mmu/mmu.c | 2 -- 3 files changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 6db60ea8ee5b..2da6c9f5935a 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1592,9 +1592,6 @@ void kvm_update_dr7(struct kvm_vcpu *vcpu); int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn); void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu); -int kvm_mmu_load(struct kvm_vcpu *vcpu); -void kvm_mmu_unload(struct kvm_vcpu *vcpu); -void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu); void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, ulong roots_to_free); gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access, diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 72b0f66073dc..67e8c7c7a6ce 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -74,6 +74,10 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu); int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code, u64 fault_address, char *insn, int insn_len); +int kvm_mmu_load(struct kvm_vcpu *vcpu); +void kvm_mmu_unload(struct kvm_vcpu *vcpu); +void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu); + static inline int kvm_mmu_reload(struct kvm_vcpu *vcpu) { if (likely(vcpu->arch.mmu->root_hpa != INVALID_PAGE)) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index fa1aca21f6eb..4f66ca0f5f68 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4859,7 +4859,6 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu) out: return r; } -EXPORT_SYMBOL_GPL(kvm_mmu_load); void kvm_mmu_unload(struct kvm_vcpu *vcpu) { @@ -4868,7 +4867,6 @@ void kvm_mmu_unload(struct kvm_vcpu *vcpu) kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL); WARN_ON(VALID_PAGE(vcpu->arch.guest_mmu.root_hpa)); } -EXPORT_SYMBOL_GPL(kvm_mmu_unload); static bool need_remote_flush(u64 old, u64 new) { From patchwork Fri Mar 5 01:11:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12117345 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8FEAC433E6 for ; Fri, 5 Mar 2021 01:11:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 73FD765022 for ; Fri, 5 Mar 2021 01:11:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230509AbhCEBLv (ORCPT ); Thu, 4 Mar 2021 20:11:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32810 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230457AbhCEBLp (ORCPT ); Thu, 4 Mar 2021 20:11:45 -0500 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 893EAC0613D7 for ; Thu, 4 Mar 2021 17:11:44 -0800 (PST) Received: by 
mail-yb1-xb49.google.com with SMTP id 194so689832ybl.5 for ; Thu, 04 Mar 2021 17:11:44 -0800 (PST)
Reply-To: Sean Christopherson
Date: Thu, 4 Mar 2021 17:11:00 -0800
In-Reply-To: <20210305011101.3597423-1-seanjc@google.com>
Message-Id: <20210305011101.3597423-17-seanjc@google.com>
References: <20210305011101.3597423-1-seanjc@google.com>
Subject: [PATCH v2 16/17] KVM: x86/mmu: Sync roots after MMU load iff load is successful
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson ,
 Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon , Brijesh Singh , Tom Lendacky
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

For clarity, explicitly skip syncing roots if the MMU load failed
instead of relying on the !VALID_PAGE check in kvm_mmu_sync_roots().
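As an editorial sketch of the control flow this aims for inside
kvm_mmu_load() (the direct vs. shadow split is assumed from the
surrounding code; only the relocated error check is the point):

	if (vcpu->arch.mmu->direct_map)
		r = mmu_alloc_direct_roots(vcpu);
	else
		r = mmu_alloc_shadow_roots(vcpu);
	write_unlock(&vcpu->kvm->mmu_lock);
	if (r)
		goto out;		/* never sync roots that failed to load */

	kvm_mmu_sync_roots(vcpu);

	kvm_mmu_load_pgd(vcpu);
	static_call(kvm_x86_tlb_flush_current)(vcpu);
out:
	return r;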
Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4f66ca0f5f68..bceff7d815c3 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4850,10 +4850,11 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu) else r = mmu_alloc_shadow_roots(vcpu); write_unlock(&vcpu->kvm->mmu_lock); + if (r) + goto out; kvm_mmu_sync_roots(vcpu); - if (r) - goto out; + kvm_mmu_load_pgd(vcpu); static_call(kvm_x86_tlb_flush_current)(vcpu); out: From patchwork Fri Mar 5 01:11:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12117347 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B0D9C433DB for ; Fri, 5 Mar 2021 01:11:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3234D64FEE for ; Fri, 5 Mar 2021 01:11:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230444AbhCEBLx (ORCPT ); Thu, 4 Mar 2021 20:11:53 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32826 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230198AbhCEBLr (ORCPT ); Thu, 4 Mar 2021 20:11:47 -0500 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E9A89C061761 for ; Thu, 4 Mar 2021 17:11:46 -0800 (PST) Received: by mail-yb1-xb49.google.com with SMTP id f81so676995yba.8 for ; Thu, 04 Mar 2021 17:11:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=1d75g85xgtYE6EG7cfU2REgYRmx185ZHiOuo3S9KNAA=; b=txRUg8YkvyfqIJ0/XuPt5R/AlheVjwdXFDUJapIwiRlZfuO1aTxi7EME12Kt4vEcGi hHumJL9TnwIbyLcyuFKqpeLtXJL5WWf+5pc1CcI0IH/lziTzBSDxuj0p+3hjvVJO4dqI eTvVqADPE/JcQamgBZ9NfUe8/C7ww/mTuimP96Lavhvgo6rhz+YUEdM+taue2t+xlZbG ZRN9PPVOZdlckrxevuXpR2bLlPnBad9AwoWUF4JObK+/+sIJTF52TcULbf4W03kwTaQS Nf/fgJQyepDxAY1KjPj2QutFZr37DCSyif1Ff9w1pqTMiChnONWfsJ7BVjjja/6yEtRC Z+Ww== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=1d75g85xgtYE6EG7cfU2REgYRmx185ZHiOuo3S9KNAA=; b=Tzy1mJnukeV/GYFVJBM2e+9BQAPQM2Hcl30X/TThvNqCf+s1Ovbs+qtfe3pnjmnvSd tQXw7T27RTWtt4WfCys3iVjcYXT/vM88zMdiCRFBzuZoIl7WE265XpEXt1jGFQY0SAkZ mA3XJnZF0TZ/vzPYUiYqPuK3QIKNEbxvRfGT8h+lbTcg5bjAcZMw+POxCOM/qI2lSeK1 n7WB4aEMh4i5RWHc42fxdjGGFRHYmS4Pjrl9pbXMROn0tgs+DZ6GlUTES5EMC3iIKks+ 6f++OMzqAl1C/8edEzi/LA0o2MUMWp0y68wcjRq/0fef5TdvNGCxSKwmajU0PcqkPaTn 3/xw== X-Gm-Message-State: AOAM530GiTC+3S56DYlApsGRCkN8/xQNYFMjIpu0WU89LGBx/V51drqR Q/Z9tjpj3s8uDIGFp6fjo/SCJVQy+pw= X-Google-Smtp-Source: 
ABdhPJyML4KoyC8fISaQe2/KDV43JeBYfE/oli0AyNxL8l43JmeZozRuqeue98etV5vlZnI0RdCiqZp3a0k= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:9857:be95:97a2:e91c]) (user=seanjc job=sendgmr) by 2002:a25:3417:: with SMTP id b23mr10520532yba.257.1614906706188; Thu, 04 Mar 2021 17:11:46 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 4 Mar 2021 17:11:01 -0800 In-Reply-To: <20210305011101.3597423-1-seanjc@google.com> Message-Id: <20210305011101.3597423-18-seanjc@google.com> Mime-Version: 1.0 References: <20210305011101.3597423-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH v2 17/17] KVM: x86/mmu: WARN on NULL pae_root or lm_root, or bad shadow root level From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon , Brijesh Singh , Tom Lendacky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org WARN if KVM is about to dereference a NULL pae_root or lm_root when loading an MMU, and convert the BUG() on a bad shadow_root_level into a WARN (now that errors are handled cleanly). With nested NPT, botching the level and sending KVM down the wrong path is all too easy, and the on-demand allocation of pae_root and lm_root means bugs crash the host. Obviously, KVM could unconditionally allocate the roots, but that's arguably a worse failure mode as it would potentially corrupt the guest instead of crashing it. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index bceff7d815c3..eb9dd8144fa5 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3253,6 +3253,9 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true); mmu->root_hpa = root; } else if (shadow_root_level == PT32E_ROOT_LEVEL) { + if (WARN_ON_ONCE(!mmu->pae_root)) + return -EIO; + for (i = 0; i < 4; ++i) { WARN_ON_ONCE(mmu->pae_root[i]); @@ -3262,8 +3265,10 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) shadow_me_mask; } mmu->root_hpa = __pa(mmu->pae_root); - } else - BUG(); + } else { + WARN_ONCE(1, "Bad TDP root level = %d\n", shadow_root_level); + return -EIO; + } /* root_pgd is ignored for direct MMUs. */ mmu->root_pgd = 0; @@ -3307,6 +3312,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) goto set_root_pgd; } + if (WARN_ON_ONCE(!mmu->pae_root)) + return -EIO; + /* * We shadow a 32 bit page table. This may be a legacy 2-level * or a PAE 3-level page table. In either case we need to be aware that @@ -3316,6 +3324,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) { pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK; + if (WARN_ON_ONCE(!mmu->lm_root)) + return -EIO; + mmu->lm_root[0] = __pa(mmu->pae_root) | pm_mask; }