From patchwork Fri Dec 10 09:25:03 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12669083
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini, Sean Christopherson
Cc: Lai Jiangshan, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [RFC PATCH 1/6] KVM: X86: Check root_level only in fast_pgd_switch()
Date: Fri, 10 Dec 2021 17:25:03 +0800
Message-Id: <20211210092508.7185-2-jiangshanlai@gmail.com>
In-Reply-To: <20211210092508.7185-1-jiangshanlai@gmail.com>
References: <20211210092508.7185-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

If root_level >= 4, shadow_root_level must be >= 4 too, so checking
root_level alone is sufficient and saves a comparison.

Signed-off-by: Lai Jiangshan
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 11b06d536cc9..846a2e426e0b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4136,8 +4136,7 @@ static bool fast_pgd_switch(struct kvm_vcpu *vcpu, gpa_t new_pgd,
 	 * having to deal with PDPTEs. We may add support for 32-bit hosts/VMs
 	 * later if necessary.
 	 */
-	if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
-	    mmu->root_level >= PT64_ROOT_4LEVEL)
+	if (mmu->root_level >= PT64_ROOT_4LEVEL)
 		return cached_root_available(vcpu, new_pgd, new_role);
 
 	return false;
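
[Illustration, not part of the patch: a stand-alone sketch of the assumption the change relies on; plain C with made-up names, not KVM code.]

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define PT64_ROOT_4LEVEL 4

/*
 * Model of the commit message's claim: the shadow table never has
 * fewer levels than the guest's, so root_level >= 4 already implies
 * shadow_root_level >= 4 and the dropped comparison was redundant.
 */
static bool fast_switch_possible(int shadow_root_level, int root_level)
{
	assert(shadow_root_level >= root_level);
	return root_level >= PT64_ROOT_4LEVEL;
}

int main(void)
{
	printf("%d\n", fast_switch_possible(5, 4));	/* 1 */
	printf("%d\n", fast_switch_possible(3, 2));	/* 0 */
	return 0;
}
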
From patchwork Fri Dec 10 09:25:04 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12669085
From: Lai Jiangshan
Subject: [RFC PATCH 2/6] KVM: X86: Walk shadow page starting with shadow_root_level
Date: Fri, 10 Dec 2021 17:25:04 +0800
Message-Id: <20211210092508.7185-3-jiangshanlai@gmail.com>
In-Reply-To: <20211210092508.7185-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Walking from the root page of the shadow page table should start at the
level of the shadow page table itself: shadow_root_level.

Also fix a small defect in audit_mappings(): the current walking level
is more valuable to print than the guest's root_level.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu_audit.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_audit.c b/arch/x86/kvm/mmu/mmu_audit.c
index 9e7dcf999f08..6bbbf85b3e46 100644
--- a/arch/x86/kvm/mmu/mmu_audit.c
+++ b/arch/x86/kvm/mmu/mmu_audit.c
@@ -63,7 +63,7 @@ static void mmu_spte_walk(struct kvm_vcpu *vcpu, inspect_spte_fn fn)
 		hpa_t root = vcpu->arch.mmu->root_hpa;
 
 		sp = to_shadow_page(root);
-		__mmu_spte_walk(vcpu, sp, fn, vcpu->arch.mmu->root_level);
+		__mmu_spte_walk(vcpu, sp, fn, vcpu->arch.mmu->shadow_root_level);
 		return;
 	}
 
@@ -119,8 +119,7 @@ static void audit_mappings(struct kvm_vcpu *vcpu, u64 *sptep, int level)
 	hpa =  pfn << PAGE_SHIFT;
 
 	if ((*sptep & PT64_BASE_ADDR_MASK) != hpa)
 		audit_printk(vcpu->kvm, "levels %d pfn %llx hpa %llx "
-			     "ent %llxn", vcpu->arch.mmu->root_level, pfn,
-			     hpa, *sptep);
+			     "ent %llxn", level, pfn, hpa, *sptep);
 }
 
 static void inspect_spte_has_rmap(struct kvm *kvm, u64 *sptep)
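
[Illustration, not part of the patch: a minimal user-space sketch of why the recursive walk has to be seeded with the shadow table's own level rather than the guest's root_level; names are simplified stand-ins for KVM's walker.]

#include <stdio.h>

/* Simplified stand-in for KVM's recursive SPTE walk. */
static void walk(int level)
{
	printf("visiting table at level %d\n", level);
	if (level > 1)
		walk(level - 1);
}

int main(void)
{
	int root_level = 4;		/* levels the guest builds */
	int shadow_root_level = 5;	/* levels the host built */

	/*
	 * Seeding with root_level (4) would start one level below the
	 * real shadow root whenever the two differ; the shadow table's
	 * own level is the correct starting point.
	 */
	walk(shadow_root_level);
	(void)root_level;
	return 0;
}
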
From patchwork Fri Dec 10 09:25:05 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12669087
From: Lai Jiangshan
Subject: [RFC PATCH 3/6] KVM: X86: Add arguments gfn and role to kvm_mmu_alloc_page()
Date: Fri, 10 Dec 2021 17:25:05 +0800
Message-Id: <20211210092508.7185-4-jiangshanlai@gmail.com>
In-Reply-To: <20211210092508.7185-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

kvm_mmu_alloc_page() will access more bits of the role, so pass gfn
and the full role in and set them at allocation time.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 846a2e426e0b..54e7cbc15380 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1734,13 +1734,13 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
 	mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, gfn_t gfn, union kvm_mmu_page_role role)
 {
 	struct kvm_mmu_page *sp;
 
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
-	if (!direct)
+	if (!role.direct)
 		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
@@ -1752,6 +1752,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
 	sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
 	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
 	kvm_mod_used_mmu_pages(vcpu->kvm, +1);
+	sp->gfn = gfn;
+	sp->role = role;
 
 	return sp;
 }
@@ -2138,10 +2140,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_page(vcpu, direct);
-
-	sp->gfn = gfn;
-	sp->role = role;
+	sp = kvm_mmu_alloc_page(vcpu, gfn, role);
 	hlist_add_head(&sp->hash_link, sp_list);
 	if (!direct) {
 		account_shadowed(vcpu->kvm, sp);
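
[Illustration, not part of the patch: a user-space sketch of the calling convention this change moves to; the types below are simplified stand-ins for KVM's, not actual kernel structures.]

#include <stdint.h>
#include <stdlib.h>

typedef uint64_t gfn_t;

union page_role {		/* stand-in for kvm_mmu_page_role */
	uint32_t word;
	struct {
		unsigned level:4;
		unsigned direct:1;
	};
};

struct mmu_page {		/* stand-in for kvm_mmu_page */
	gfn_t gfn;
	union page_role role;
	uint64_t *gfns;		/* only needed for indirect pages */
};

/*
 * After the patch the allocator receives gfn and the full role and
 * fills the fields itself, instead of the caller patching them in
 * after allocation; later changes can then consult more role bits
 * (e.g. whether a gfns array is needed at all) in one place.
 */
static struct mmu_page *alloc_page_sketch(gfn_t gfn, union page_role role)
{
	struct mmu_page *sp = calloc(1, sizeof(*sp));

	if (!role.direct)
		sp->gfns = calloc(512, sizeof(*sp->gfns));
	sp->gfn = gfn;
	sp->role = role;
	return sp;
}
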
From patchwork Fri Dec 10 09:25:06 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12669089
From: Lai Jiangshan
Subject: [RFC PATCH 4/6] KVM: X86: Introduce role.level_promoted
Date: Fri, 10 Dec 2021 17:25:06 +0800
Message-Id: <20211210092508.7185-5-jiangshanlai@gmail.com>
In-Reply-To: <20211210092508.7185-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Level promotion occurs when mmu->shadow_root_level > mmu->root_level.

Several cases can cause level promotion:

shadow mmu (shadow paging for 32 bit guest):
	case1: gCR0_PG=1,gEFER_LMA=0,gCR4_PSE=0

shadow nested NPT (for 32bit L1 hypervisor):
	case2: gCR0_PG=1,gEFER_LMA=0,gCR4_PSE=0,hEFER_LMA=0
	case3: gCR0_PG=1,gEFER_LMA=0,hEFER_LMA=1

shadow nested NPT (for 64bit L1 hypervisor):
	case4: gEFER_LMA=1,gCR4_LA57=0,hEFER_LMA=1,hCR4_LA57=1

When level promotion occurs (32bit guest, case1-3), special roots are
often used. But case4 does not use special roots; it uses shadow pages
without being fully aware of the specialty. It might work accidentally:

  1) The root page (root_sp->spt) is allocated with level = 5, and
     root_sp->spt[0] is allocated with the same gfn and the same role
     except role.level = 4. Luckily, they are different shadow pages.
  2) FNAME(walk_addr_generic) sets walker->table_gfn[4] and
     walker->pt_access[4], which are normally unused when
     mmu->shadow_root_level == mmu->root_level == 4, so FNAME(fetch)
     can use them to allocate a shadow page for root_sp->spt[0] and
     link them when shadow_root_level == 5.

But it has problems. If the guest switches from gCR4_LA57=0 to
gCR4_LA57=1 (or vice versa) and uses the same gfn as the root of the
nNPT before and after switching gCR4_LA57, the host (hCR4_LA57=1)
would use the same root_sp for the guest even though the guest
switched gCR4_LA57. The guest would see unexpected pages mapped and
L2 can hurt L1. It is lucky that the problem can't hurt L0.

The root_sp should behave like role.direct=1 in some respects: its
contents are not backed by gptes and root_sp->gfns is meaningless.
For a normal high-level sp, sp->gfns is often unused and kept zero,
but it could be relevant and meaningful when it is used, because then
it is backed by concrete gptes. For the level-promoted root_sp
described above, root_sp is just a portal to contribute
root_sp->spt[0]; root_sp should not have root_sp->gfns, and
root_sp->spt[0] should not be dropped if gpte[0] of the root gfn is
changed.

This patch adds role.level_promoted to address the two problems.
role.level_promoted is set when shadow paging is used and role.level >
gMMU.level.

An alternative way to fix the problem of case4 is to also use the
special root pml5_root for it. But that would require changing many
other places, because it is assumed that special roots are only used
for 32bit guests.

This patch also paves the way to use level-promoted shadow pages for
case1-3, but that requires special handling of PAE paging, so their
extensive usage is not included here.

Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/mmu/mmu.c          | 15 +++++++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h  |  1 +
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 88ecf53f0d2b..6465c83794fc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -334,7 +334,8 @@ union kvm_mmu_page_role {
 		unsigned smap_andnot_wp:1;
 		unsigned ad_disabled:1;
 		unsigned guest_mode:1;
-		unsigned :6;
+		unsigned level_promoted:1;
+		unsigned :5;
 
 		/*
 		 * This is left at the top of the word so that
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 54e7cbc15380..4769253e9024 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -767,6 +767,9 @@ static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
 
 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 {
+	if (sp->role.level_promoted)
+		return sp->gfn;
+
 	if (!sp->role.direct)
 		return sp->gfns[index];
 
@@ -776,6 +779,8 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn)
 {
 	if (!sp->role.direct) {
+		if (WARN_ON_ONCE(sp->role.level_promoted && gfn != sp->gfn))
+			return;
 		sp->gfns[index] = gfn;
 		return;
 	}
@@ -1702,7 +1707,7 @@ static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
 	hlist_del(&sp->hash_link);
 	list_del(&sp->link);
 	free_page((unsigned long)sp->spt);
-	if (!sp->role.direct)
+	if (!sp->role.direct && !sp->role.level_promoted)
 		free_page((unsigned long)sp->gfns);
 	kmem_cache_free(mmu_page_header_cache, sp);
 }
@@ -1740,7 +1745,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
-	if (!role.direct)
+	if (!(role.direct || role.level_promoted))
 		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
@@ -2084,6 +2089,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
 		role.quadrant = quadrant;
 	}
+	if (role.level_promoted && (level <= vcpu->arch.mmu->root_level))
+		role.level_promoted = 0;
 
 	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
@@ -4836,6 +4843,8 @@ kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu,
 	role.base.direct = false;
 	role.base.level = kvm_mmu_get_tdp_level(vcpu);
+	if (role.base.level > role_regs_to_root_level(regs))
+		role.base.level_promoted = 1;
 
 	return role;
 }
@@ -5228,6 +5237,8 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
 	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
+		if (sp->role.level_promoted)
+			continue;
 		if (detect_write_misaligned(sp, gpa, bytes) ||
 		      detect_write_flooding(sp)) {
 			kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5c78300fc7d9..16ac276d342a 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1043,6 +1043,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		.level = 0xf,
 		.access = 0x7,
 		.quadrant = 0x3,
+		.level_promoted = 0x1,
 	};
 
 	/*
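
[Illustration, not part of the patch: a stand-alone sketch of the rule the new bit encodes; register names and level numbers are simplified stand-ins, not KVM's helpers.]

#include <stdbool.h>
#include <stdio.h>

struct guest_regs {
	bool cr0_pg;	/* guest paging enabled */
	bool efer_lma;	/* guest long mode */
	bool cr4_la57;	/* guest 5-level paging */
};

/* Rough guest root level: 0 = no paging, 2 = legacy/PAE (simplified), 4/5 = long mode. */
static int guest_root_level(const struct guest_regs *r)
{
	if (!r->cr0_pg)
		return 0;
	if (!r->efer_lma)
		return 2;
	return r->cr4_la57 ? 5 : 4;
}

/* The bit is set only when the shadow table has more levels than the guest's. */
static bool level_promoted(int shadow_root_level, const struct guest_regs *r)
{
	return shadow_root_level > guest_root_level(r);
}

int main(void)
{
	/* case4 from the commit message: 4-level guest NPT shadowed by 5-level host NPT. */
	struct guest_regs g = { .cr0_pg = true, .efer_lma = true, .cr4_la57 = false };

	printf("level_promoted = %d\n", level_promoted(5, &g));
	return 0;
}
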
From patchwork Fri Dec 10 09:25:07 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12669091
From: Lai Jiangshan
Subject: [RFC PATCH 5/6] KVM: X86: Alloc pae_root shadow page
Date: Fri, 10 Dec 2021 17:25:07 +0800
Message-Id: <20211210092508.7185-6-jiangshanlai@gmail.com>
In-Reply-To: <20211210092508.7185-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Currently pae_root is a special root page. This patch adds the
facility to allow using kvm_mmu_get_page() to allocate a pae_root
shadow page. When kvm_mmu_get_page() is called with
level == PT32E_ROOT_LEVEL and
vcpu->arch.mmu->shadow_root_level == PT32E_ROOT_LEVEL, it returns a
DMA32 root page with the default PAE pdptes installed.

The pae_root bit is needed in the page role because:
  - the page is required to be a DMA32 page.
  - its first 4 sptes are initialized with default_pae_pdpte.

default_pae_pdpte is needed because the CPU expects the PAE pdptes to
be present at VM-entry. default_pae_pdpte is designed to have no
SPTE_MMU_PRESENT_MASK, so that it is present in the view of the CPU
but not present in the view of shadow paging, and the page fault
handler will replace it with a real present shadow page.

When changing from default_pae_pdpte to a present spte, no TLB flush
is requested, although both are present in the view of the CPU. The
reason is that default_pae_pdpte points to the zero page, so no pte is
present even if the paging structure is cached.

No functionality is changed, since this code is not yet activated:
when vcpu->arch.mmu->shadow_root_level == PT32E_ROOT_LEVEL,
kvm_mmu_get_page() is currently only called for level == 1 or 2.

Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |   4 +-
 arch/x86/kvm/mmu/mmu.c          | 113 +++++++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/paging_tmpl.h  |   1 +
 3 files changed, 114 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6465c83794fc..82a8844f80ac 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -335,7 +335,8 @@ union kvm_mmu_page_role {
 		unsigned ad_disabled:1;
 		unsigned guest_mode:1;
 		unsigned level_promoted:1;
-		unsigned :5;
+		unsigned pae_root:1;
+		unsigned :4;
 
 		/*
 		 * This is left at the top of the word so that
@@ -695,6 +696,7 @@ struct kvm_vcpu_arch {
 	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
 	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
+	unsigned long mmu_pae_root_cache;
 
 	/*
 	 * QEMU userspace and the guest each have their own FPU state.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4769253e9024..0d2976dad863 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -724,6 +724,67 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu) } } +static u64 default_pae_pdpte; + +static void free_default_pae_pdpte(void) +{ + free_page((unsigned long)__va(default_pae_pdpte & PAGE_MASK)); + default_pae_pdpte = 0; +} + +static int alloc_default_pae_pdpte(void) +{ + unsigned long p = __get_free_page(GFP_KERNEL | __GFP_ZERO); + + if (!p) + return -ENOMEM; + default_pae_pdpte = __pa(p) | PT_PRESENT_MASK | shadow_me_mask; + if (WARN_ON(is_shadow_present_pte(default_pae_pdpte) || + is_mmio_spte(default_pae_pdpte))) { + free_default_pae_pdpte(); + return -EINVAL; + } + return 0; +} + +static int alloc_pae_root(struct kvm_vcpu *vcpu) +{ + struct page *page; + unsigned long pae_root; + u64* pdpte; + + if (vcpu->arch.mmu->shadow_root_level != PT32E_ROOT_LEVEL) + return 0; + if (vcpu->arch.mmu_pae_root_cache) + return 0; + + page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_DMA32); + if (!page) + return -ENOMEM; + + pae_root = (unsigned long)page_address(page); + + /* + * CR3 is only 32 bits when PAE paging is used, thus it's impossible to + * get the CPU to treat the PDPTEs as encrypted. Decrypt the page so + * that KVM's writes and the CPU's reads get along. Note, this is + * only necessary when using shadow paging, as 64-bit NPT can get at + * the C-bit even when shadowing 32-bit NPT, and SME isn't supported + * by 32-bit kernels (when KVM itself uses 32-bit NPT). + */ + if (!tdp_enabled) + set_memory_decrypted(pae_root, 1); + else + WARN_ON_ONCE(shadow_me_mask); + vcpu->arch.mmu_pae_root_cache = pae_root; + pdpte = (void *)pae_root; + pdpte[0] = default_pae_pdpte; + pdpte[1] = default_pae_pdpte; + pdpte[2] = default_pae_pdpte; + pdpte[3] = default_pae_pdpte; + return 0; +} + static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect) { int r; @@ -735,6 +796,9 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect) return r; r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache, PT64_ROOT_MAX_LEVEL); + if (r) + return r; + r = alloc_pae_root(vcpu); if (r) return r; if (maybe_indirect) { @@ -753,6 +817,10 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache); + if (!tdp_enabled && vcpu->arch.mmu_pae_root_cache) + set_memory_encrypted(vcpu->arch.mmu_pae_root_cache, 1); + free_page(vcpu->arch.mmu_pae_root_cache); + vcpu->arch.mmu_pae_root_cache = 0; } static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_vcpu *vcpu) @@ -1706,6 +1774,8 @@ static void kvm_mmu_free_page(struct kvm_mmu_page *sp) MMU_WARN_ON(!is_empty_shadow_page(sp->spt)); hlist_del(&sp->hash_link); list_del(&sp->link); + if (!tdp_enabled && sp->role.pae_root) + set_memory_encrypted((unsigned long)sp->spt, 1); free_page((unsigned long)sp->spt); if (!sp->role.direct && !sp->role.level_promoted) free_page((unsigned long)sp->gfns); @@ -1735,8 +1805,13 @@ static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp, static void drop_parent_pte(struct kvm_mmu_page *sp, u64 *parent_pte) { + struct kvm_mmu_page *parent_sp = sptep_to_sp(parent_pte); + mmu_page_remove_parent_pte(sp, parent_pte); - mmu_spte_clear_no_track(parent_pte); + if (!parent_sp->role.pae_root) + 
mmu_spte_clear_no_track(parent_pte); + else + __update_clear_spte_fast(parent_pte, default_pae_pdpte); } static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, gfn_t gfn, union kvm_mmu_page_role role) @@ -1744,7 +1819,12 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_mmu_page *sp; sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); - sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); + if (!role.pae_root) { + sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); + } else { + sp->spt = (void *)vcpu->arch.mmu_pae_root_cache; + vcpu->arch.mmu_pae_root_cache = 0; + } if (!(role.direct || role.level_promoted)) sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); @@ -2091,6 +2171,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, } if (role.level_promoted && (level <= vcpu->arch.mmu->root_level)) role.level_promoted = 0; + if (role.pae_root && (level < PT32E_ROOT_LEVEL)) + role.pae_root = 0; sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)]; for_each_valid_sp(vcpu->kvm, sp, sp_list) { @@ -2226,14 +2308,27 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator) __shadow_walk_next(iterator, *iterator->sptep); } +static u64 make_pae_pdpte(u64 *child_pt) +{ + u64 spte = __pa(child_pt) | PT_PRESENT_MASK; + + /* The only ignore bits in PDPTE are 11:9. */ + BUILD_BUG_ON(!(GENMASK(11,9) & SPTE_MMU_PRESENT_MASK)); + return spte | SPTE_MMU_PRESENT_MASK | shadow_me_mask; +} + static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, struct kvm_mmu_page *sp) { + struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep); u64 spte; BUILD_BUG_ON(VMX_EPT_WRITABLE_MASK != PT_WRITABLE_MASK); - spte = make_nonleaf_spte(sp->spt, sp_ad_disabled(sp)); + if (!parent_sp->role.pae_root) + spte = make_nonleaf_spte(sp->spt, sp_ad_disabled(sp)); + else + spte = make_pae_pdpte(sp->spt); mmu_spte_set(sptep, spte); @@ -4733,6 +4828,8 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, role.base.level = kvm_mmu_get_tdp_level(vcpu); role.base.direct = true; role.base.has_4_byte_gpte = false; + if (role.base.level == PT32E_ROOT_LEVEL) + role.base.pae_root = 1; return role; } @@ -4798,6 +4895,9 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, else role.base.level = PT64_ROOT_4LEVEL; + if (!____is_cr0_pg(regs) || !____is_efer_lma(regs)) + role.base.pae_root = 1; + return role; } @@ -4845,6 +4945,8 @@ kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu, role.base.level = kvm_mmu_get_tdp_level(vcpu); if (role.base.level > role_regs_to_root_level(regs)) role.base.level_promoted = 1; + if (role.base.level == PT32E_ROOT_LEVEL) + role.base.pae_root = 1; return role; } @@ -6133,6 +6235,10 @@ int kvm_mmu_module_init(void) if (ret) goto out; + ret = alloc_default_pae_pdpte(); + if (ret) + goto out; + return 0; out: @@ -6174,6 +6280,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu) void kvm_mmu_module_exit(void) { + free_default_pae_pdpte(); mmu_destroy_caches(); percpu_counter_destroy(&kvm_total_used_mmu_pages); unregister_shrinker(&mmu_shrinker); diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 16ac276d342a..014136e15b26 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -1044,6 +1044,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) .access = 0x7, .quadrant = 0x3, 
 		.level_promoted = 0x1,
+		.pae_root = 0x1,
 	};
 
 	/*

From patchwork Fri Dec 10 09:25:08 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12669093
From: Lai Jiangshan
Subject: [RFC PATCH 6/6] KVM: X86: Use level_promoted and pae_root shadow page for 32bit guests
Date: Fri, 10 Dec 2021 17:25:08 +0800
Message-Id: <20211210092508.7185-7-jiangshanlai@gmail.com>
In-Reply-To: <20211210092508.7185-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Use role.pae_root = 1 for shadow_root_level == 3, no matter whether it
is a shadow MMU or whether the level is promoted.

Use role.level_promoted = 1 for a promoted shadow page when it is a
shadow MMU and the level is promoted.

And remove the now-unneeded special roots. All root pages and page
tables pointed to by a present spte in kvm_mmu are now backed by a
struct kvm_mmu_page, and to_shadow_page() is guaranteed to be non-NULL.
shadow_walk() and the initialization of shadow pages are much
simplified since there are no special roots.

Affected cases:

direct mmu (nonpaging for 32 bit guest):
	gCR0_PG=0 (pae_root=1)

shadow mmu (shadow paging for 32 bit guest):
	gCR0_PG=1,gEFER_LMA=0,gCR4_PSE=0 (pae_root=1,level_promoted=1)
	gCR0_PG=1,gEFER_LMA=0,gCR4_PSE=1 (pae_root=1,level_promoted=0)

direct mmu (NPT for 32bit host):
	hEFER_LMA=0 (pae_root=1)

shadow nested NPT (for 32bit L1 hypervisor):
	gCR0_PG=1,gEFER_LMA=0,gCR4_PSE=0,hEFER_LMA=0 (pae_root=1,level_promoted=1)
	gCR0_PG=1,gEFER_LMA=0,gCR4_PSE=1,hEFER_LMA=0 (pae_root=1,level_promoted=0)
	gCR0_PG=1,gEFER_LMA=0,gCR4_PSE={0|1},hEFER_LMA=1,hCR4_LA57={0|1} (pae_root=0,level_promoted=1)
	(default_pae_pdpte is not used even when the guest is using PAE paging)

Shadow nested NPT for a 64bit L1 hypervisor has already been handled:
	gEFER_LMA=1,gCR4_LA57=0,hEFER_LMA=1,hCR4_LA57=1 (pae_root=0,level_promoted=1)

FNAME(walk_addr_generic) adds initialization code for shadow nested
NPT for a 32bit L1 hypervisor, for when the level increment might be
more than one, for example 2->4, 2->5, 3->5.

After this patch, the PAE Page-Directory-Pointer-Table is also
write-protected (including NPT's).

Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |   4 -
 arch/x86/kvm/mmu/mmu.c          | 302 ++------------------------------
 arch/x86/kvm/mmu/mmu_audit.c    |  23 +--
 arch/x86/kvm/mmu/paging_tmpl.h  |  13 +-
 arch/x86/kvm/mmu/tdp_mmu.h      |   7 +-
 5 files changed, 30 insertions(+), 319 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 82a8844f80ac..d4ab6f53ab00 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -466,10 +466,6 @@ struct kvm_mmu {
 	 */
 	u32 pkru_mask;
 
-	u64 *pae_root;
-	u64 *pml4_root;
-	u64 *pml5_root;
-
 	/*
 	 * check zero bits on shadow page table entries, these
 	 * bits include not only hardware reserved bits but also
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0d2976dad863..fd2bc851b700 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2252,26 +2252,6 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterato
 	iterator->addr = addr;
 	iterator->shadow_addr = root;
 	iterator->level = vcpu->arch.mmu->shadow_root_level;
-
-	if (iterator->level >= PT64_ROOT_4LEVEL &&
-	    vcpu->arch.mmu->root_level < PT64_ROOT_4LEVEL &&
-	    !vcpu->arch.mmu->direct_map)
-		iterator->level = PT32E_ROOT_LEVEL;
-
-	if (iterator->level == PT32E_ROOT_LEVEL) {
-		/*
-		 * prev_root is currently only used for 64-bit hosts. So only
-		 * the active root_hpa is valid here.
- */ - BUG_ON(root != vcpu->arch.mmu->root_hpa); - - iterator->shadow_addr - = vcpu->arch.mmu->pae_root[(addr >> 30) & 3]; - iterator->shadow_addr &= PT64_BASE_ADDR_MASK; - --iterator->level; - if (!iterator->shadow_addr) - iterator->level = 0; - } } static void shadow_walk_init(struct kvm_shadow_walk_iterator *iterator, @@ -3375,19 +3355,7 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, &invalid_list); if (free_active_root) { - if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL && - (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) { - mmu_free_root_page(kvm, &mmu->root_hpa, &invalid_list); - } else if (mmu->pae_root) { - for (i = 0; i < 4; ++i) { - if (!IS_VALID_PAE_ROOT(mmu->pae_root[i])) - continue; - - mmu_free_root_page(kvm, &mmu->pae_root[i], - &invalid_list); - mmu->pae_root[i] = INVALID_PAE_ROOT; - } - } + mmu_free_root_page(kvm, &mmu->root_hpa, &invalid_list); mmu->root_hpa = INVALID_PAGE; mmu->root_pgd = 0; } @@ -3452,7 +3420,6 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) struct kvm_mmu *mmu = vcpu->arch.mmu; u8 shadow_root_level = mmu->shadow_root_level; hpa_t root; - unsigned i; int r; write_lock(&vcpu->kvm->mmu_lock); @@ -3463,24 +3430,9 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) if (is_tdp_mmu_enabled(vcpu->kvm)) { root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu); mmu->root_hpa = root; - } else if (shadow_root_level >= PT64_ROOT_4LEVEL) { + } else if (shadow_root_level >= PT32E_ROOT_LEVEL) { root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true); mmu->root_hpa = root; - } else if (shadow_root_level == PT32E_ROOT_LEVEL) { - if (WARN_ON_ONCE(!mmu->pae_root)) { - r = -EIO; - goto out_unlock; - } - - for (i = 0; i < 4; ++i) { - WARN_ON_ONCE(IS_VALID_PAE_ROOT(mmu->pae_root[i])); - - root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT), - i << 30, PT32_ROOT_LEVEL, true); - mmu->pae_root[i] = root | PT_PRESENT_MASK | - shadow_me_mask; - } - mmu->root_hpa = __pa(mmu->pae_root); } else { WARN_ONCE(1, "Bad TDP root level = %d\n", shadow_root_level); r = -EIO; @@ -3558,10 +3510,8 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm) static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) { struct kvm_mmu *mmu = vcpu->arch.mmu; - u64 pdptrs[4], pm_mask; gfn_t root_gfn, root_pgd; hpa_t root; - unsigned i; int r; root_pgd = mmu->get_guest_pgd(vcpu); @@ -3570,21 +3520,6 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) if (mmu_check_root(vcpu, root_gfn)) return 1; - /* - * On SVM, reading PDPTRs might access guest memory, which might fault - * and thus might sleep. Grab the PDPTRs before acquiring mmu_lock. - */ - if (mmu->root_level == PT32E_ROOT_LEVEL) { - for (i = 0; i < 4; ++i) { - pdptrs[i] = mmu->get_pdptr(vcpu, i); - if (!(pdptrs[i] & PT_PRESENT_MASK)) - continue; - - if (mmu_check_root(vcpu, pdptrs[i] >> PAGE_SHIFT)) - return 1; - } - } - r = mmu_first_shadow_root_alloc(vcpu->kvm); if (r) return r; @@ -3594,146 +3529,14 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) if (r < 0) goto out_unlock; - /* - * Do we shadow a long mode page table? If so we need to - * write-protect the guests page table root. - */ - if (mmu->root_level >= PT64_ROOT_4LEVEL) { - root = mmu_alloc_root(vcpu, root_gfn, 0, - mmu->shadow_root_level, false); - mmu->root_hpa = root; - goto set_root_pgd; - } - - if (WARN_ON_ONCE(!mmu->pae_root)) { - r = -EIO; - goto out_unlock; - } - - /* - * We shadow a 32 bit page table. This may be a legacy 2-level - * or a PAE 3-level page table. 
In either case we need to be aware that - * the shadow page table may be a PAE or a long mode page table. - */ - pm_mask = PT_PRESENT_MASK | shadow_me_mask; - if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL) { - pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK; - - if (WARN_ON_ONCE(!mmu->pml4_root)) { - r = -EIO; - goto out_unlock; - } - mmu->pml4_root[0] = __pa(mmu->pae_root) | pm_mask; - - if (mmu->shadow_root_level == PT64_ROOT_5LEVEL) { - if (WARN_ON_ONCE(!mmu->pml5_root)) { - r = -EIO; - goto out_unlock; - } - mmu->pml5_root[0] = __pa(mmu->pml4_root) | pm_mask; - } - } - - for (i = 0; i < 4; ++i) { - WARN_ON_ONCE(IS_VALID_PAE_ROOT(mmu->pae_root[i])); - - if (mmu->root_level == PT32E_ROOT_LEVEL) { - if (!(pdptrs[i] & PT_PRESENT_MASK)) { - mmu->pae_root[i] = INVALID_PAE_ROOT; - continue; - } - root_gfn = pdptrs[i] >> PAGE_SHIFT; - } - - root = mmu_alloc_root(vcpu, root_gfn, i << 30, - PT32_ROOT_LEVEL, false); - mmu->pae_root[i] = root | pm_mask; - } - - if (mmu->shadow_root_level == PT64_ROOT_5LEVEL) - mmu->root_hpa = __pa(mmu->pml5_root); - else if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) - mmu->root_hpa = __pa(mmu->pml4_root); - else - mmu->root_hpa = __pa(mmu->pae_root); - -set_root_pgd: + root = mmu_alloc_root(vcpu, root_gfn, 0, + mmu->shadow_root_level, false); + mmu->root_hpa = root; mmu->root_pgd = root_pgd; out_unlock: write_unlock(&vcpu->kvm->mmu_lock); - return 0; -} - -static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu) -{ - struct kvm_mmu *mmu = vcpu->arch.mmu; - bool need_pml5 = mmu->shadow_root_level > PT64_ROOT_4LEVEL; - u64 *pml5_root = NULL; - u64 *pml4_root = NULL; - u64 *pae_root; - - /* - * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP - * tables are allocated and initialized at root creation as there is no - * equivalent level in the guest's NPT to shadow. Allocate the tables - * on demand, as running a 32-bit L1 VMM on 64-bit KVM is very rare. - */ - if (mmu->direct_map || mmu->root_level >= PT64_ROOT_4LEVEL || - mmu->shadow_root_level < PT64_ROOT_4LEVEL) - return 0; - - /* - * NPT, the only paging mode that uses this horror, uses a fixed number - * of levels for the shadow page tables, e.g. all MMUs are 4-level or - * all MMus are 5-level. Thus, this can safely require that pml5_root - * is allocated if the other roots are valid and pml5 is needed, as any - * prior MMU would also have required pml5. - */ - if (mmu->pae_root && mmu->pml4_root && (!need_pml5 || mmu->pml5_root)) - return 0; - - /* - * The special roots should always be allocated in concert. Yell and - * bail if KVM ends up in a state where only one of the roots is valid. - */ - if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->pml4_root || - (need_pml5 && mmu->pml5_root))) - return -EIO; - - /* - * Unlike 32-bit NPT, the PDP table doesn't need to be in low mem, and - * doesn't need to be decrypted. 
- */ - pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT); - if (!pae_root) - return -ENOMEM; - -#ifdef CONFIG_X86_64 - pml4_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT); - if (!pml4_root) - goto err_pml4; - - if (need_pml5) { - pml5_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT); - if (!pml5_root) - goto err_pml5; - } -#endif - - mmu->pae_root = pae_root; - mmu->pml4_root = pml4_root; - mmu->pml5_root = pml5_root; - - return 0; - -#ifdef CONFIG_X86_64 -err_pml5: - free_page((unsigned long)pml4_root); -err_pml4: - free_page((unsigned long)pae_root); - return -ENOMEM; -#endif + return r; } static bool is_unsync_root(hpa_t root) @@ -3765,46 +3568,23 @@ static bool is_unsync_root(hpa_t root) void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu) { - int i; - struct kvm_mmu_page *sp; + hpa_t root = vcpu->arch.mmu->root_hpa; if (vcpu->arch.mmu->direct_map) return; - if (!VALID_PAGE(vcpu->arch.mmu->root_hpa)) + if (!VALID_PAGE(root)) return; vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY); - if (vcpu->arch.mmu->root_level >= PT64_ROOT_4LEVEL) { - hpa_t root = vcpu->arch.mmu->root_hpa; - sp = to_shadow_page(root); - - if (!is_unsync_root(root)) - return; - - write_lock(&vcpu->kvm->mmu_lock); - kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC); - - mmu_sync_children(vcpu, sp, true); - - kvm_mmu_audit(vcpu, AUDIT_POST_SYNC); - write_unlock(&vcpu->kvm->mmu_lock); + if (!is_unsync_root(root)) return; - } write_lock(&vcpu->kvm->mmu_lock); kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC); - for (i = 0; i < 4; ++i) { - hpa_t root = vcpu->arch.mmu->pae_root[i]; - - if (IS_VALID_PAE_ROOT(root)) { - root &= PT64_BASE_ADDR_MASK; - sp = to_shadow_page(root); - mmu_sync_children(vcpu, sp, true); - } - } + mmu_sync_children(vcpu, to_shadow_page(root), true); kvm_mmu_audit(vcpu, AUDIT_POST_SYNC); write_unlock(&vcpu->kvm->mmu_lock); @@ -4895,8 +4675,11 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, else role.base.level = PT64_ROOT_4LEVEL; - if (!____is_cr0_pg(regs) || !____is_efer_lma(regs)) + if (!____is_cr0_pg(regs) || !____is_efer_lma(regs)) { role.base.pae_root = 1; + if (____is_cr0_pg(regs) && !____is_cr4_pse(regs)) + role.base.level_promoted = 1; + } return role; } @@ -5161,9 +4944,6 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu) int r; r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->direct_map); - if (r) - goto out; - r = mmu_alloc_special_roots(vcpu); if (r) goto out; if (vcpu->arch.mmu->direct_map) @@ -5580,65 +5360,14 @@ slot_handle_level_4k(struct kvm *kvm, const struct kvm_memory_slot *memslot, PG_LEVEL_4K, flush_on_yield); } -static void free_mmu_pages(struct kvm_mmu *mmu) -{ - if (!tdp_enabled && mmu->pae_root) - set_memory_encrypted((unsigned long)mmu->pae_root, 1); - free_page((unsigned long)mmu->pae_root); - free_page((unsigned long)mmu->pml4_root); - free_page((unsigned long)mmu->pml5_root); -} - static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu) { - struct page *page; int i; mmu->root_hpa = INVALID_PAGE; mmu->root_pgd = 0; for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID; - - /* vcpu->arch.guest_mmu isn't used when !tdp_enabled. */ - if (!tdp_enabled && mmu == &vcpu->arch.guest_mmu) - return 0; - - /* - * When using PAE paging, the four PDPTEs are treated as 'root' pages, - * while the PDP table is a per-vCPU construct that's allocated at MMU - * creation. When emulating 32-bit mode, cr3 is only 32 bits even on - * x86_64. Therefore we need to allocate the PDP table in the first - * 4GB of memory, which happens to fit the DMA32 zone. 
TDP paging - * generally doesn't use PAE paging and can skip allocating the PDP - * table. The main exception, handled here, is SVM's 32-bit NPT. The - * other exception is for shadowing L1's 32-bit or PAE NPT on 64-bit - * KVM; that horror is handled on-demand by mmu_alloc_special_roots(). - */ - if (tdp_enabled && kvm_mmu_get_tdp_level(vcpu) > PT32E_ROOT_LEVEL) - return 0; - - page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_DMA32); - if (!page) - return -ENOMEM; - - mmu->pae_root = page_address(page); - - /* - * CR3 is only 32 bits when PAE paging is used, thus it's impossible to - * get the CPU to treat the PDPTEs as encrypted. Decrypt the page so - * that KVM's writes and the CPU's reads get along. Note, this is - * only necessary when using shadow paging, as 64-bit NPT can get at - * the C-bit even when shadowing 32-bit NPT, and SME isn't supported - * by 32-bit kernels (when KVM itself uses 32-bit NPT). - */ - if (!tdp_enabled) - set_memory_decrypted((unsigned long)mmu->pae_root, 1); - else - WARN_ON_ONCE(shadow_me_mask); - - for (i = 0; i < 4; ++i) - mmu->pae_root[i] = INVALID_PAE_ROOT; - return 0; } @@ -5667,7 +5396,6 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu) return ret; fail_allocate_root: - free_mmu_pages(&vcpu->arch.guest_mmu); return ret; } @@ -6273,8 +6001,6 @@ unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm) void kvm_mmu_destroy(struct kvm_vcpu *vcpu) { kvm_mmu_unload(vcpu); - free_mmu_pages(&vcpu->arch.root_mmu); - free_mmu_pages(&vcpu->arch.guest_mmu); mmu_free_memory_caches(vcpu); } diff --git a/arch/x86/kvm/mmu/mmu_audit.c b/arch/x86/kvm/mmu/mmu_audit.c index 6bbbf85b3e46..f5e8dabe13bf 100644 --- a/arch/x86/kvm/mmu/mmu_audit.c +++ b/arch/x86/kvm/mmu/mmu_audit.c @@ -53,31 +53,14 @@ static void __mmu_spte_walk(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, static void mmu_spte_walk(struct kvm_vcpu *vcpu, inspect_spte_fn fn) { - int i; + hpa_t root = vcpu->arch.mmu->root_hpa; struct kvm_mmu_page *sp; if (!VALID_PAGE(vcpu->arch.mmu->root_hpa)) return; - if (vcpu->arch.mmu->root_level >= PT64_ROOT_4LEVEL) { - hpa_t root = vcpu->arch.mmu->root_hpa; - - sp = to_shadow_page(root); - __mmu_spte_walk(vcpu, sp, fn, vcpu->arch.mmu->shadow_root_level); - return; - } - - for (i = 0; i < 4; ++i) { - hpa_t root = vcpu->arch.mmu->pae_root[i]; - - if (IS_VALID_PAE_ROOT(root)) { - root &= PT64_BASE_ADDR_MASK; - sp = to_shadow_page(root); - __mmu_spte_walk(vcpu, sp, fn, 2); - } - } - - return; + sp = to_shadow_page(root); + __mmu_spte_walk(vcpu, sp, fn, vcpu->arch.mmu->shadow_root_level); } typedef void (*sp_handler) (struct kvm *kvm, struct kvm_mmu_page *sp); diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 014136e15b26..d71b562bf8f0 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -365,6 +365,16 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, pte = mmu->get_guest_pgd(vcpu); have_ad = PT_HAVE_ACCESSED_DIRTY(mmu); + /* kvm_mmu_get_page() will uses this values for allocating level + * promoted shadow page. 
+ */ + walker->table_gfn[4] = gpte_to_gfn(pte); + walker->pt_access[4] = ACC_ALL; + walker->table_gfn[3] = gpte_to_gfn(pte); + walker->pt_access[3] = ACC_ALL; + walker->table_gfn[2] = gpte_to_gfn(pte); + walker->pt_access[2] = ACC_ALL; + #if PTTYPE == 64 walk_nx_mask = 1ULL << PT64_NX_SHIFT; if (walker->level == PT32E_ROOT_LEVEL) { @@ -710,7 +720,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, * Verify that the gpte in the page we've just write * protected is still there. */ - if (FNAME(gpte_changed)(vcpu, gw, it.level - 1)) + if (it.level - 1 < top_level && + FNAME(gpte_changed)(vcpu, gw, it.level - 1)) goto out_gpte_changed; if (sp) diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 476b133544dd..822ff5d76b91 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -100,13 +100,8 @@ static inline bool is_tdp_mmu(struct kvm_mmu *mmu) if (WARN_ON(!VALID_PAGE(hpa))) return false; - /* - * A NULL shadow page is legal when shadowing a non-paging guest with - * PAE paging, as the MMU will be direct with root_hpa pointing at the - * pae_root page, not a shadow page. - */ sp = to_shadow_page(hpa); - return sp && is_tdp_mmu_page(sp) && sp->root_count; + return is_tdp_mmu_page(sp) && sp->root_count; } #else static inline bool kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return false; }
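
[Illustration, not part of the patch: a stand-alone sketch of how the two role bits end up being chosen for the cases listed in patch 6; the helpers are simplified stand-ins for KVM's ____is_cr0_pg()/____is_efer_lma()/____is_cr4_pse() checks, not kernel code.]

#include <stdbool.h>
#include <stdio.h>

struct guest_regs {
	bool cr0_pg;
	bool efer_lma;
	bool cr4_pse;
};

struct role_bits {
	bool pae_root;
	bool level_promoted;
};

/*
 * Mirrors the condition patch 6 adds to the shadow-MMU role
 * calculation: a guest that is not in long mode (or not paging) gets a
 * PAE root, and if it is paging with CR4.PSE clear it is additionally
 * marked level-promoted, matching the case list in the commit message.
 */
static struct role_bits calc_role_bits(const struct guest_regs *r)
{
	struct role_bits bits = { false, false };

	if (!r->cr0_pg || !r->efer_lma) {
		bits.pae_root = true;
		if (r->cr0_pg && !r->cr4_pse)
			bits.level_promoted = true;
	}
	return bits;
}

int main(void)
{
	struct guest_regs g = { .cr0_pg = true, .efer_lma = false, .cr4_pse = false };
	struct role_bits b = calc_role_bits(&g);

	printf("pae_root=%d level_promoted=%d\n", b.pae_root, b.level_promoted);
	return 0;
}
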