From patchwork Tue Jun 22 17:56:46 2021
Subject: [PATCH 01/54] KVM: x86/mmu: Remove broken WARN that fires on 32-bit KVM w/ nested EPT
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:56:46 -0700
Message-Id: <20210622175739.3610207-2-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Patchwork-Id: 12338217
X-Mailing-List: kvm@vger.kernel.org

Remove a misguided WARN that attempts to detect the scenario where using
a special A/D tracking flag will set reserved bits on a non-MMIO spte.
The WARN triggers false positives when using EPT with 32-bit KVM because
of the !64-bit clause, which is just flat out wrong.  The whole A/D
tracking goo is specific to EPT, and one of the big selling points of
EPT is that EPT is decoupled from the host's native paging mode.

Drop the WARN instead of trying to salvage the check.  Keeping a check
specific to A/D tracking bits would essentially regurgitate the same
code that led to KVM needing the tracking bits in the first place.

A better approach would be to add a generic WARN on reserved bits being
set, which would naturally cover the A/D tracking bits, work for all
flavors of paging, and be self-documenting to some extent.

Fixes: 8a406c89532c ("KVM: x86/mmu: Rename and document A/D scheme for TDP SPTEs")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 66d43cec0c31..8e8e8da740a0 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -102,13 +102,6 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 	else if (kvm_vcpu_ad_need_write_protect(vcpu))
 		spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK;
 
-	/*
-	 * Bits 62:52 of PAE SPTEs are reserved.  WARN if said bits are set
-	 * if PAE paging may be employed (shadow paging or any 32-bit KVM).
-	 */
-	WARN_ON_ONCE((!tdp_enabled || !IS_ENABLED(CONFIG_X86_64)) &&
-		     (spte & SPTE_TDP_AD_MASK));
-
 	/*
 	 * For the EPT case, shadow_present_mask is 0 if hardware
 	 * supports exec-only page table entries.  In that case,
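For readers of the archive, a minimal sketch of what the "generic WARN on
reserved bits" suggested in the changelog could look like at the tail of
make_spte().  __is_rsvd_bits_set() is KVM's existing reserved-bit test
helper; placing it here, at this exact spot, is an illustration of the
idea rather than anything this patch adds:

	/* Sketch only: flag any non-MMIO SPTE that sets bits the current
	 * MMU considers reserved.  This naturally covers the A/D tracking
	 * bits and works for every paging flavor. */
	WARN_ON_ONCE(__is_rsvd_bits_set(&vcpu->arch.mmu->shadow_zero_check,
					spte, level));
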
From patchwork Tue Jun 22 17:56:47 2021
Subject: [PATCH 02/54] KVM: x86/mmu: Treat NX as used (not reserved) for all !TDP shadow MMUs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:56:47 -0700
Message-Id: <20210622175739.3610207-3-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Patchwork-Id: 12338219
X-Mailing-List: kvm@vger.kernel.org

Mark NX as being used for all non-nested shadow MMUs, as KVM will set
the NX bit for huge SPTEs if the iTLB multi-hit mitigation is enabled.
Checking the mitigation itself is not sufficient, as it can be toggled
on at any time and KVM doesn't reset MMU contexts when that happens.
KVM could reset the contexts, but that would require purging all SPTEs
in all MMUs, for no real benefit.  And, KVM already forces EFER.NX=1
when TDP is disabled (for WP=0, SMEP=1, NX=0), so technically NX is
never reserved for shadow MMUs.

Fixes: b8e8c8303ff2 ("kvm: mmu: ITLB_MULTIHIT mitigation")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 84d48a33e38b..0db12f461c9d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4221,7 +4221,15 @@ static inline u64 reserved_hpa_bits(void)
 void
 reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 {
-	bool uses_nx = context->nx ||
+	/*
+	 * KVM uses NX when TDP is disabled to handle a variety of scenarios,
+	 * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and
+	 * to generate correct permissions for CR0.WP=0/CR4.SMEP=1/EFER.NX=0.
+	 * The iTLB multi-hit workaround can be toggled at any time, so assume
+	 * NX can be used by any non-nested shadow MMU to avoid having to reset
+	 * MMU contexts.  Note, KVM forces EFER.NX=1 when TDP is disabled.
+	 */
+	bool uses_nx = context->nx || !tdp_enabled ||
 		context->mmu_role.base.smep_andnot_wp;
 	struct rsvd_bits_validate *shadow_zero_check;
 	int i;
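To make the changelog's point concrete, a compressed sketch contrasting
the two approaches.  is_nx_huge_page_enabled() is the mmu.c helper of
this era that reads the nx_huge_pages module parameter; the "racy"
variant below never existed and is shown only for contrast:

	/* Racy sketch: nx_huge_pages can be toggled at runtime after this
	 * MMU's reserved-bit masks are computed, so a later huge SPTE may
	 * set an NX bit that the masks still treat as reserved. */
	bool uses_nx = context->nx || is_nx_huge_page_enabled();

	/* Robust (what the patch does): assume NX is in play for any
	 * non-nested shadow MMU whenever TDP is disabled, regardless of
	 * the mitigation knob's current state. */
	bool uses_nx = context->nx || !tdp_enabled ||
		       context->mmu_role.base.smep_andnot_wp;
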
From patchwork Tue Jun 22 17:56:48 2021
Subject: [PATCH 03/54] KVM: x86: Properly reset MMU context at vCPU RESET/INIT
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:56:48 -0700
Message-Id: <20210622175739.3610207-4-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Patchwork-Id: 12338223
X-Mailing-List: kvm@vger.kernel.org

Reset the MMU context at vCPU INIT (and RESET for good measure) if
CR0.PG was set prior to INIT.  Simply re-initializing the current MMU
is not sufficient as the current root HPA may not be usable in the new
context.  E.g. if TDP is disabled and INIT arrives while the vCPU is in
long mode, KVM will fail to switch to the 32-bit pae_root and bomb on
the next VM-Enter due to running with a 64-bit CR3 in 32-bit mode.

This bug was papered over in both VMX and SVM, but still managed to rear
its head in the MMU role on VMX.  Because EFER.LMA=1 requires CR0.PG=1,
kvm_calc_shadow_mmu_root_page_role() checks for EFER.LMA without first
checking CR0.PG.  VMX's RESET/INIT flow writes CR0 before EFER, and so
an INIT with the vCPU in 64-bit mode will cause the hack-a-fix to
generate the wrong MMU role.

In VMX, the INIT issue is specific to running without unrestricted
guest, since unrestricted guest is available if and only if EPT is
enabled.  Commit 8668a3c468ed ("KVM: VMX: Reset mmu context when
entering real mode") resolved the issue by forcing a reset when entering
emulated real mode.

In SVM, commit ebae871a509d ("kvm: svm: reset mmu on VCPU reset") forced
an MMU reset on every INIT to work around the flaw in common x86.  Note,
at the time the bug was fixed, the SVM problem was exacerbated by a
complete lack of a CR4 update.

The vendor resets will be reverted in future patches, primarily to aid
bisection in case there are non-INIT flows that rely on the existing VMX
logic.

Because CR0.PG is unconditionally cleared on INIT, and because CR0.WP
and all CR4/EFER paging bits are ignored if CR0.PG=0, simply checking
that CR0.PG was '1' prior to INIT/RESET is sufficient to detect a
required MMU context reset.

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 76dae88cf524..42608b515ce4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10735,6 +10735,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 {
+	unsigned long old_cr0 = kvm_read_cr0(vcpu);
+
 	kvm_lapic_reset(vcpu, init_event);
 
 	vcpu->arch.hflags = 0;
@@ -10803,6 +10805,17 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->arch.ia32_xss = 0;
 
 	static_call(kvm_x86_vcpu_reset)(vcpu, init_event);
+
+	/*
+	 * Reset the MMU context if paging was enabled prior to INIT (which is
+	 * implied if CR0.PG=1 as CR0 will be '0' prior to RESET).  Unlike the
+	 * standard CR0/CR4/EFER modification paths, only CR0.PG needs to be
+	 * checked because it is unconditionally cleared on INIT and all other
+	 * paging related bits are ignored if paging is disabled, i.e. CR0.WP,
+	 * CR4, and EFER changes are all irrelevant if CR0.PG was '0'.
+	 */
+	if (old_cr0 & X86_CR0_PG)
+		kvm_mmu_reset_context(vcpu);
 }
 
 void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
From patchwork Tue Jun 22 17:56:49 2021
Subject: [PATCH 04/54] KVM: x86/mmu: Use MMU's role to detect CR4.SMEP value in nested NPT walk
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:56:49 -0700
Message-Id: <20210622175739.3610207-5-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Patchwork-Id: 12338221
X-Mailing-List: kvm@vger.kernel.org

Use the MMU's role to get its effective SMEP value when injecting a
fault into the guest.  When walking L1's (nested) NPT while L2 is
active, vCPU state will reflect L2, whereas NPT uses the host's (L1 in
this case) CR0, CR4, EFER, etc...  If L1 and L2 have different settings
for SMEP and L1 does not have EFER.NX=1, this can result in an incorrect
PFEC.FETCH when injecting #NPF.

Fixes: e57d4a356ad3 ("KVM: Add instruction fetch checking when walking guest page table")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 823a5919f9fa..52fffd68b522 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -471,8 +471,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 
 error:
 	errcode |= write_fault | user_fault;
-	if (fetch_fault && (mmu->nx ||
-			    kvm_read_cr4_bits(vcpu, X86_CR4_SMEP)))
+	if (fetch_fault && (mmu->nx || mmu->mmu_role.ext.cr4_smep))
 		errcode |= PFERR_FETCH_MASK;
 
 	walker->fault.vector = PF_VECTOR;
From patchwork Tue Jun 22 17:56:50 2021
Subject: [PATCH 05/54] Revert "KVM: x86/mmu: Drop kvm_mmu_extended_role.cr4_la57 hack"
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:56:50 -0700
Message-Id: <20210622175739.3610207-6-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Patchwork-Id: 12338225
X-Mailing-List: kvm@vger.kernel.org

Restore CR4.LA57 to the mmu_role to fix an amusing edge case with
nested virtualization.  When KVM (L0) is using TDP, CR4.LA57 is not
reflected in mmu_role.base.level because that tracks the shadow root
level, i.e. TDP level.  Normally, this is not an issue because LA57
can't be toggled while long mode is active, i.e. the guest has to first
disable paging, then toggle LA57, then re-enable paging, thus ensuring
an MMU reinitialization.

But if L1 is crafty, it can load a new CR4 on VM-Exit and toggle LA57
without having to bounce through an unpaged section.  L1 can also load
a new CR3 on exit, i.e. it doesn't even need to play crazy paging games,
a single entry PML5 is sufficient.  Such shenanigans are only
problematic if L0 and L1 use TDP, otherwise L1 and L2 share an MMU that
gets reinitialized on nested VM-Enter/VM-Exit due to
mmu_role.base.guest_mode.

Note, in the L2 case with nested TDP, even though L1 can switch between
L2s with different LA57 settings, thus bypassing the paging requirement,
in that case KVM's nested_mmu will track LA57 in base.level.

This reverts commit 8053f924cad30bf9f9a24e02b6c8ddfabf5202ea.

Fixes: 8053f924cad3 ("KVM: x86/mmu: Drop kvm_mmu_extended_role.cr4_la57 hack")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu/mmu.c          | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e11d64aa0bcd..916e0f89fdfc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -320,6 +320,7 @@ union kvm_mmu_extended_role {
 		unsigned int cr4_pke:1;
 		unsigned int cr4_smap:1;
 		unsigned int cr4_smep:1;
+		unsigned int cr4_la57:1;
 		unsigned int maxphyaddr:6;
 	};
 };

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0db12f461c9d..5024318dec45 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4537,6 +4537,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu)
 	ext.cr4_smap = !!kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
 	ext.cr4_pse = !!is_pse(vcpu);
 	ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
+	ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57);
 	ext.maxphyaddr = cpuid_maxphyaddr(vcpu);
 
 	ext.valid = 1;
From patchwork Tue Jun 22 17:56:51 2021
Subject: [PATCH 06/54] KVM: x86: Force all MMUs to reinitialize if guest CPUID is modified
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:56:51 -0700
Message-Id: <20210622175739.3610207-7-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Patchwork-Id: 12338227
X-Mailing-List: kvm@vger.kernel.org

Invalidate all MMUs' roles after a CPUID update to force
reinitialization of the MMU context/helpers.  Despite the efforts of
commit de3ccd26fafc ("KVM: MMU: record maximum physical address width
in kvm_mmu_extended_role"), there are still a handful of CPUID-based
properties that affect MMU behavior but are not incorporated into
mmu_role.  E.g. 1gb hugepage support, AMD vs. Intel handling of bit 8,
and SEV's C-Bit location all factor into the guest's reserved PTE bits.

The obvious alternative would be to add all such properties to mmu_role,
but doing so provides no benefit over simply forcing a reinitialization
on every CPUID update, as setting guest CPUID is a rare operation.

Note, reinitializing all MMUs after a CPUID update does not fix all of
KVM's woes.  Specifically, kvm_mmu_page_role doesn't track the CPUID
properties, which means that a vCPU can reuse shadow pages that should
not exist for the new vCPU model, e.g. that map GPAs that are now
illegal (due to MAXPHYADDR changes) or that set bits that are now
reserved (PAGE_SIZE for 1gb pages), etc...

Tracking the relevant CPUID properties in kvm_mmu_page_role would
address the majority of problems, but fully tracking that much state in
the shadow page role comes with an unpalatable cost as it would require
a non-trivial increase in KVM's memory footprint.  The GBPAGES case is
even worse, as neither Intel nor AMD provides a way to disable 1gb
hugepage support in the hardware page walker, i.e. it's a virtualization
hole that can't be closed when using TDP.

In other words, resetting the MMU after a CPUID update is largely a
superficial fix.  But, it will allow reverting the tracking of
MAXPHYADDR in the mmu_role, and that case in particular needs to mostly
work because KVM's shadow_root_level depends on guest MAXPHYADDR when
5-level paging is supported.  For cases where KVM botches guest
behavior, the damage is limited to that guest.  But for the
shadow_root_level, a misconfigured MMU can cause KVM to incorrectly
access memory, e.g. due to walking off the end of its shadow page
tables.

Fixes: 7dcd57552008 ("x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed")
Cc: Yu Zhang
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/cpuid.c            |  6 +++---
 arch/x86/kvm/mmu/mmu.c          | 12 ++++++++++++
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 916e0f89fdfc..4ac534766eff 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1501,6 +1501,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu);
 void kvm_mmu_init_vm(struct kvm *kvm);
 void kvm_mmu_uninit_vm(struct kvm *kvm);
 
+void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot,

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index b4da665bb892..c42613cfb5ba 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -202,10 +202,10 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
 
 	/*
-	 * Except for the MMU, which needs to be reset after any vendor
-	 * specific adjustments to the reserved GPA bits.
+	 * Except for the MMU, which needs to do its thing after any vendor
+	 * specific adjustments to the reserved GPA bits.
 	 */
-	kvm_mmu_reset_context(vcpu);
+	kvm_mmu_after_set_cpuid(vcpu);
 }
 
 static int is_efer_nx(void)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5024318dec45..e2668a9b5936 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4903,6 +4903,18 @@ kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu)
 	return role.base;
 }
 
+void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Invalidate all MMU roles to force them to reinitialize as CPUID
+	 * information is factored into reserved bit calculations.
+	 */
+	vcpu->arch.root_mmu.mmu_role.ext.valid = 0;
+	vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
+	vcpu->arch.nested_mmu.mmu_role.ext.valid = 0;
+	kvm_mmu_reset_context(vcpu);
+}
+
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_unload(vcpu);
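A note on why clearing ext.valid is sufficient to force the
reinitialization: kvm_calc_mmu_role_ext() unconditionally sets valid=1
on every freshly computed role (visible in the context of the adjacent
patches), and the MMU (re)init paths compare the packed role words, so a
cached role with valid=0 can never match a new one.  A compressed,
illustrative juxtaposition of the two pieces (names from mmu.c of this
era; the exact call signature is approximate):

	/* kvm_calc_mmu_role_ext() always marks a new role as valid... */
	ext.valid = 1;

	/* ...and (re)init compares the packed 64-bit role words, so a
	 * stored role with ext.valid == 0 always forces a rebuild: */
	if (new_role.as_u64 != context->mmu_role.as_u64)
		shadow_mmu_init_context(vcpu, context, new_role);
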
From patchwork Tue Jun 22 17:56:52 2021
Subject: [PATCH 07/54] KVM: x86: Alert userspace that KVM_SET_CPUID{,2} after KVM_RUN is broken
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:56:52 -0700
Message-Id: <20210622175739.3610207-8-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Patchwork-Id: 12338229
X-Mailing-List: kvm@vger.kernel.org

Warn userspace that KVM_SET_CPUID{,2} after KVM_RUN "may" cause guest
instability.  Initialize last_vmentry_cpu to -1 and use it to detect if
the vCPU has been run at least once when its CPUID model is changed.

KVM does not correctly handle changes to paging related settings in the
guest's vCPU model after KVM_RUN, e.g. MAXPHYADDR, GBPAGES, etc...  KVM
could theoretically zap all shadow pages, but actually making that
happen is a mess due to lock inversion (vcpu->mutex is held).  And even
then, updating paging settings on the fly would only work if all vCPUs
are stopped, updated in concert with identical settings, then restarted.

To support running vCPUs with different vCPU models (that affect
paging), KVM would need to track all relevant information in
kvm_mmu_page_role.  Note, that's the _page_ role, not the full mmu_role.
Updating mmu_role isn't sufficient as a vCPU can reuse a shadow page
translation that was created by a vCPU with different settings and thus
completely skip the reserved bit checks (that are tied to CPUID).

Tracking CPUID state in kvm_mmu_page_role is _extremely_ undesirable as
it would require doubling gfn_track from a u16 to a u32, i.e. would
increase KVM's memory footprint by 2 bytes for every 4kb of guest
memory.  E.g. MAXPHYADDR (6 bits), GBPAGES, AMD vs. INTEL = 1 bit, and
SEV C-BIT would all need to be tracked.

In practice, there is no remotely sane use case for changing any paging
related CPUID entries on the fly, so just sweep it under the rug (after
yelling at userspace).

Signed-off-by: Sean Christopherson
---
 Documentation/virt/kvm/api.rst  | 11 ++++++++---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 18 ++++++++++++++++++
 arch/x86/kvm/x86.c              |  2 ++
 4 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index e328caa35d6c..06e82f07fe54 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -688,9 +688,14 @@ MSRs that have been set successfully.
 Defines the vcpu responses to the cpuid instruction.  Applications
 should use the KVM_SET_CPUID2 ioctl if available.
 
-Note, when this IOCTL fails, KVM gives no guarantees that previous valid CPUID
-configuration (if there is) is not corrupted. Userspace can get a copy of the
-resulting CPUID configuration through KVM_GET_CPUID2 in case.
+Caveat emptor:
+  - If this IOCTL fails, KVM gives no guarantees that previous valid CPUID
+    configuration (if there is) is not corrupted. Userspace can get a copy
+    of the resulting CPUID configuration through KVM_GET_CPUID2 in case.
+  - Using KVM_SET_CPUID{,2} after KVM_RUN, i.e. changing the guest vCPU model
+    after running the guest, may cause guest instability.
+  - Using heterogeneous CPUID configurations, modulo APIC IDs, topology, etc...
+    may cause guest instability.
 
 ::

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4ac534766eff..19c88b445ee0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -840,7 +840,7 @@ struct kvm_vcpu_arch {
 	bool l1tf_flush_l1d;
 
 	/* Host CPU on which VM-entry was most recently attempted */
-	unsigned int last_vmentry_cpu;
+	int last_vmentry_cpu;
 
 	/* AMD MSRC001_0015 Hardware Configuration */
 	u64 msr_hwcr;

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e2668a9b5936..8d97d21d5241 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4913,6 +4913,24 @@ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
 	vcpu->arch.nested_mmu.mmu_role.ext.valid = 0;
 	kvm_mmu_reset_context(vcpu);
+
+	/*
+	 * KVM does not correctly handle changing guest CPUID after KVM_RUN, as
+	 * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't
+	 * tracked in kvm_mmu_page_role.  As a result, KVM may miss guest page
+	 * faults due to reusing SPs/SPTEs.  Alert userspace, but otherwise
+	 * sweep the problem under the rug.
+	 *
+	 * KVM's horrific CPUID ABI makes the problem all but impossible to
+	 * solve, as correctly handling multiple vCPU models (with respect to
+	 * paging and physical address properties) in a single VM would require
+	 * tracking all relevant CPUID information in kvm_mmu_page_role.  That
+	 * is very undesirable as it would double the memory requirements for
+	 * gfn_track (see struct kvm_mmu_page_role comments), and in practice
+	 * no sane VMM mucks with the core vCPU model on the fly.
+	 */
+	if (vcpu->arch.last_vmentry_cpu != -1)
+		pr_warn_ratelimited("KVM: KVM_SET_CPUID{,2} after KVM_RUN may cause guest instability\n");
 }
 
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 42608b515ce4..92b4a9305651 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10583,6 +10583,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	struct page *page;
 	int r;
 
+	vcpu->arch.last_vmentry_cpu = -1;
+
 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 	else
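For archive readers on the userspace side, a minimal sketch of the
ordering the new documentation asks for, using only the standard KVM
ioctls (KVM_GET_SUPPORTED_CPUID, KVM_SET_CPUID2, KVM_RUN).  VM and vCPU
creation plus all error handling are elided, and the fixed 64-entry
buffer is an arbitrary choice for the sketch:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static void set_cpuid_then_run(int kvm_fd, int vcpu_fd)
	{
		struct {
			struct kvm_cpuid2 cpuid;
			struct kvm_cpuid_entry2 entries[64];
		} cpuid = { .cpuid = { .nent = 64 } };

		/* Query the supported vCPU model from /dev/kvm... */
		ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, &cpuid.cpuid);

		/* ...and commit it exactly once, before the vCPU ever runs.
		 * Calling KVM_SET_CPUID2 again after KVM_RUN is the pattern
		 * this patch warns about. */
		ioctl(vcpu_fd, KVM_SET_CPUID2, &cpuid.cpuid);

		ioctl(vcpu_fd, KVM_RUN, 0);
	}
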
From patchwork Tue Jun 22 17:56:53 2021
Subject: [PATCH 08/54] Revert "KVM: MMU: record maximum physical address width in kvm_mmu_extended_role"
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:56:53 -0700
Message-Id: <20210622175739.3610207-9-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Patchwork-Id: 12338231
X-Mailing-List: kvm@vger.kernel.org

Drop MAXPHYADDR from mmu_role now that all MMUs have their role
invalidated after a CPUID update.  Invalidating the role forces all MMUs
to re-evaluate the guest's MAXPHYADDR, and the guest's MAXPHYADDR can be
changed only through a CPUID update.

This reverts commit de3ccd26fafc707b09792d9b633c8b5b48865315.

Cc: Yu Zhang
Signed-off-by: Sean Christopherson
Reviewed-by: Yu Zhang
---
 arch/x86/include/asm/kvm_host.h | 1 -
 arch/x86/kvm/mmu/mmu.c          | 1 -
 2 files changed, 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 19c88b445ee0..cdaff399ed94 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -321,7 +321,6 @@ union kvm_mmu_extended_role {
 		unsigned int cr4_smap:1;
 		unsigned int cr4_smep:1;
 		unsigned int cr4_la57:1;
-		unsigned int maxphyaddr:6;
 	};
 };

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8d97d21d5241..04cab330c445 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4538,7 +4538,6 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu)
 	ext.cr4_pse = !!is_pse(vcpu);
 	ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
 	ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57);
-	ext.maxphyaddr = cpuid_maxphyaddr(vcpu);
 
 	ext.valid = 1;
/yCN0AMJ2C9fvNBwrgPUwHifS4BKYR4e5npQr7dJSbgch5CFCR2UvnMmZQok2+DUb4vl r0bmgRUB5DpwSgBPSMGk7li5OzP2ErzdRLgpTTVRzhE193n4cIQHa9yH3AoRgP5okqeK NTzfnKJi5WtDJFLhR768VhLvE5ZKjmY07SvpXV1Ep7XD2BuR3QqjLnEE9Ub8LkE7PUPv aZWL1RCahr1oBgDMN7iMbRqoeZ88vBB+R2L04xUXY8w208UQIznX8y1ccMrfSCiBWcOt 35Aw== X-Gm-Message-State: AOAM531UvcLMGjhM7tXQrpkV3n4mF9wH1t0AjrLmREEkGHwg7GD724Vo AYHzcRDUZF8wrhIzRk5JfIWnBu9LBF0= X-Google-Smtp-Source: ABdhPJxGL/L9iCuRcW+ybcf0LTSFzUQPLd8NEZi0xJP9P+67r+oO6cLmNTEqbsWhsecmFCudS5bpOsjLyGw= X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:5722:92ce:361f:3832]) (user=seanjc job=sendgmr) by 2002:a25:d694:: with SMTP id n142mr6295564ybg.349.1624384696152; Tue, 22 Jun 2021 10:58:16 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:56:54 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-10-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 09/54] KVM: x86/mmu: Unconditionally zap unsync SPs when creating >4k SP at GFN From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When creating a new upper-level shadow page, zap unsync shadow pages at the same target gfn instead of attempting to sync the pages. This fixes a bug where an unsync shadow page could be sync'd with an incompatible context, e.g. wrong smm, is_guest, etc... flags. In practice, the bug is relatively benign as sync_page() is all but guaranteed to fail its check that the guest's desired gfn (for the to-be-sync'd page) matches the current gfn associated with the shadow page. I.e. kvm_sync_page() would end up zapping the page anyways. Alternatively, __kvm_sync_page() could be modified to explicitly verify the mmu_role of the unsync shadow page is compatible with the current MMU context. But, except for this specific case, __kvm_sync_page() is called iff the page is compatible, e.g. the transient sync in kvm_mmu_get_page() requires an exact role match, and the call from kvm_sync_mmu_roots() is only synchronizing shadow pages from the current MMU (which better be compatible or KVM has problems). And as described above, attempting to sync shadow pages when creating an upper-level shadow page is unlikely to succeed, e.g. zero successful syncs were observed when running Linux guests despite over a million attempts. 
Fixes: 9f1a122f970d ("KVM: MMU: allow more page become unsync at getting sp time")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 50 ++++++++++++++----------------------------
 1 file changed, 16 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 04cab330c445..99d26859021d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1843,24 +1843,6 @@ static bool kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	return __kvm_sync_page(vcpu, sp, invalid_list);
 }

-/* @gfn should be write-protected at the call site */
-static bool kvm_sync_pages(struct kvm_vcpu *vcpu, gfn_t gfn,
-			   struct list_head *invalid_list)
-{
-	struct kvm_mmu_page *s;
-	bool ret = false;
-
-	for_each_gfn_indirect_valid_sp(vcpu->kvm, s, gfn) {
-		if (!s->unsync)
-			continue;
-
-		WARN_ON(s->role.level != PG_LEVEL_4K);
-		ret |= kvm_sync_page(vcpu, s, invalid_list);
-	}
-
-	return ret;
-}
-
 struct mmu_page_path {
 	struct kvm_mmu_page *parent[PT64_ROOT_MAX_LEVEL];
 	unsigned int idx[PT64_ROOT_MAX_LEVEL];
@@ -1990,8 +1972,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	struct hlist_head *sp_list;
 	unsigned quadrant;
 	struct kvm_mmu_page *sp;
-	bool need_sync = false;
-	bool flush = false;
 	int collisions = 0;
 	LIST_HEAD(invalid_list);
@@ -2014,11 +1994,21 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			continue;
 		}

-		if (!need_sync && sp->unsync)
-			need_sync = true;
-
-		if (sp->role.word != role.word)
+		if (sp->role.word != role.word) {
+			/*
+			 * If the guest is creating an upper-level page, zap
+			 * unsync pages for the same gfn.  While it's possible
+			 * the guest is using recursive page tables, in all
+			 * likelihood the guest has stopped using the unsync
+			 * page and is installing a completely unrelated page.
+			 * Unsync pages must not be left as is, because the new
+			 * upper-level page will be write-protected.
+			 */
+			if (level > PG_LEVEL_4K && sp->unsync)
+				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
+							 &invalid_list);
 			continue;
+		}

 		if (direct_mmu)
 			goto trace_get_page;
@@ -2052,22 +2042,14 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	sp->role = role;
 	hlist_add_head(&sp->hash_link, sp_list);
 	if (!direct) {
-		/*
-		 * we should do write protection before syncing pages
-		 * otherwise the content of the synced shadow page may
-		 * be inconsistent with guest page table.
-		 */
 		account_shadowed(vcpu->kvm, sp);
 		if (level == PG_LEVEL_4K && rmap_write_protect(vcpu, gfn))
 			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
-
-		if (level > PG_LEVEL_4K && need_sync)
-			flush |= kvm_sync_pages(vcpu, gfn, &invalid_list);
 	}
 	trace_kvm_mmu_get_page(sp, true);
-
-	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
 out:
+	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+
 	if (collisions > vcpu->kvm->stat.max_mmu_page_hash_collisions)
 		vcpu->kvm->stat.max_mmu_page_hash_collisions = collisions;
 	return sp;

From patchwork Tue Jun 22 17:56:55 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338235
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:56:55 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-11-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 10/54] KVM: x86/mmu: Replace EPT shadow page shenanigans with simpler check
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Replace the hack to identify nested EPT shadow pages with a simple check
that the size of the guest PTEs associated with the shadow page and the
current MMU match, which is the intent of the "8 bytes == PAE" test.
The nested EPT hack existed to avoid a false negative due to the
is_pae() check not matching for 32-bit L2 guests; checking the MMU role
directly avoids the indirect calculation of the guest PTE size entirely.

Note, this should be a glorified nop now that __kvm_sync_page() is
called if and only if the role is an exact match (kvm_mmu_get_page())
or is part of the current MMU context (kvm_mmu_sync_roots()).  A future
commit will convert the likely-pointless check into a meaningful WARN
to enforce that the mmu_roles of the current context and the shadow
page are compatible.
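To see why comparing role bits beats the old indirect calculation,
consider a toy reduction (self-contained user-space C, not the kernel's
actual types; union toy_role and both check functions are stand-ins): a
32-bit, non-PAE L2 running behind EPT has 4-byte guest PTEs, but the EPT
shadow page's gpte_is_8_bytes is 1, so the old is_pae()-based check
misfires without the ept_sp escape hatch, while the role-vs-role
comparison just works.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for union kvm_mmu_page_role: only the bit that matters here. */
union toy_role {
	struct {
		unsigned gpte_is_8_bytes : 1;
		unsigned level           : 4;
	};
	unsigned word;
};

/*
 * Old scheme (simplified): derive the guest PTE size indirectly from vCPU
 * paging mode, with a special-case escape hatch for nested EPT pages.
 */
static bool old_check_ok(bool is_ept_sp, bool sp_gpte_8b, bool vcpu_is_pae)
{
	return is_ept_sp || sp_gpte_8b == vcpu_is_pae;
}

/* New scheme: compare the shadow page's role bit against the MMU's role bit. */
static bool new_check_ok(union toy_role sp, union toy_role mmu)
{
	return sp.gpte_is_8_bytes == mmu.gpte_is_8_bytes;
}

int main(void)
{
	/* 32-bit L2 behind EPT: guest PTEs are 4 bytes, but EPT entries are 8. */
	union toy_role sp = { .gpte_is_8_bytes = 1 };	/* EPT shadow page */
	union toy_role mmu = { .gpte_is_8_bytes = 1 };	/* current EPT MMU */

	/* Without the ept_sp hack, the old indirect check misfires here. */
	printf("old (no hack): %d, new: %d\n",
	       old_check_ok(false, sp.gpte_is_8_bytes, /*is_pae=*/false),
	       new_check_ok(sp, mmu));
	return 0;
}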
Cc: Vitaly Kuznetsov
Signed-off-by: Sean Christopherson
---
 Documentation/virt/kvm/mmu.rst |  3 ---
 arch/x86/kvm/mmu/mmu.c         | 16 +++-------------
 2 files changed, 3 insertions(+), 16 deletions(-)

diff --git a/Documentation/virt/kvm/mmu.rst b/Documentation/virt/kvm/mmu.rst
index 20d85daed395..ddbb23998742 100644
--- a/Documentation/virt/kvm/mmu.rst
+++ b/Documentation/virt/kvm/mmu.rst
@@ -192,9 +192,6 @@ Shadow pages contain the following information:
     Contains the value of cr4.smap && !cr0.wp for which the page is valid
     (pages for which this is true are different from other pages; see the
     treatment of cr0.wp=0 below).
-  role.ept_sp:
-    This is a virtual flag to denote a shadowed nested EPT page.  ept_sp
-    is true if "cr0_wp && smap_andnot_wp", an otherwise invalid combination.
   role.smm:
     Is 1 if the page is valid in system management mode.  This field
     determines which of the kvm_memslots array was used to build this

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 99d26859021d..9f277c5bab76 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1780,16 +1780,13 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 		&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
 		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else

-static inline bool is_ept_sp(struct kvm_mmu_page *sp)
-{
-	return sp->role.cr0_wp && sp->role.smap_andnot_wp;
-}
-
 /* @sp->gfn should be write-protected at the call site */
 static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			    struct list_head *invalid_list)
 {
-	if ((!is_ept_sp(sp) && sp->role.gpte_is_8_bytes != !!is_pae(vcpu)) ||
+	union kvm_mmu_page_role mmu_role = vcpu->arch.mmu->mmu_role.base;
+
+	if (sp->role.gpte_is_8_bytes != mmu_role.gpte_is_8_bytes ||
 	    vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
 		return false;
@@ -4721,13 +4718,6 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
 	role.base.guest_mode = true;
 	role.base.access = ACC_ALL;

-	/*
-	 * WP=1 and NOT_WP=1 is an impossible combination, use WP and the
-	 * SMAP variation to denote shadow EPT entries.
-	 */
-	role.base.cr0_wp = true;
-	role.base.smap_andnot_wp = true;
-
 	role.ext = kvm_calc_mmu_role_ext(vcpu);
 	role.ext.execonly = execonly;

From patchwork Tue Jun 22 17:56:56 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338237
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:56:56 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-12-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 11/54] KVM: x86/mmu: WARN and zap SP when sync'ing if MMU role mismatches
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

When synchronizing a shadow page, WARN and zap the page if its mmu role
isn't compatible with the current MMU context, where "compatible" is an
exact match sans the bits that have no meaning in the overall MMU
context or will be explicitly overwritten during the sync.  Many of the
helpers used by sync_page() are specific to the current context:
updating an SMM vs. non-SMM shadow page would use the wrong memslots,
updating L1 vs. L2 PTEs might work but would be extremely bizarre, and
so on and so forth.

Drop the guard with respect to 8-byte vs. 4-byte PTEs in
__kvm_sync_page(); it was made useless when kvm_mmu_get_page() stopped
trying to sync shadow pages irrespective of the current MMU context.
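The "exact match sans meaningless bits" idea reduces to an XOR plus a
mask.  A toy, self-contained sketch (user-space C; the field widths
mirror the level/access/quadrant masks used in the patch, everything
else is a stand-in invented for this sketch):

#include <stdint.h>
#include <stdio.h>

/* Toy reduction of union kvm_mmu_page_role: a few fields packed into a word. */
union toy_role {
	struct {
		uint32_t level      : 4;
		uint32_t access     : 3;
		uint32_t quadrant   : 2;
		uint32_t smm        : 1;
		uint32_t guest_mode : 1;
	};
	uint32_t word;
};

/*
 * Compatibility check in the style of the patch: XOR the packed words,
 * then mask off the fields that are allowed to differ.
 */
static int roles_compatible(union toy_role sp, union toy_role mmu)
{
	const union toy_role ign = { .level = 0xf, .access = 0x7, .quadrant = 0x3 };

	return !((sp.word ^ mmu.word) & ~ign.word);
}

int main(void)
{
	union toy_role mmu = { .word = 0 }, ok = { .word = 0 }, bad = { .word = 0 };

	mmu.level = 4;			/* MMU tracks the root level */
	ok.level = 1;  ok.access = 7;	/* differs only in ignored fields */
	bad.level = 1; bad.smm = 1;	/* SMM vs. non-SMM: incompatible */

	printf("same ctx, different level/access: %d\n", roles_compatible(ok, mmu));
	printf("SMM vs. non-SMM: %d\n", roles_compatible(bad, mmu));
	return 0;
}

A mismatch in any non-ignored bit means the sync would run with the
wrong memslots, reserved-bit checks, etc., which is exactly what the new
WARN_ON_ONCE() in sync_page() guards against.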
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         |  5 +----
 arch/x86/kvm/mmu/paging_tmpl.h | 27 +++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9f277c5bab76..2e2d66319325 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1784,10 +1784,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			    struct list_head *invalid_list)
 {
-	union kvm_mmu_page_role mmu_role = vcpu->arch.mmu->mmu_role.base;
-
-	if (sp->role.gpte_is_8_bytes != mmu_role.gpte_is_8_bytes ||
-	    vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
+	if (vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
 		return false;
 	}

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 52fffd68b522..b632606a87d6 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1030,13 +1030,36 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gpa_t vaddr,
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
+	union kvm_mmu_page_role mmu_role = vcpu->arch.mmu->mmu_role.base;
 	int i, nr_present = 0;
 	bool host_writable;
 	gpa_t first_pte_gpa;
 	int set_spte_ret = 0;

-	/* direct kvm_mmu_page can not be unsync. */
-	BUG_ON(sp->role.direct);
+	/*
+	 * Ignore various flags when verifying that it's safe to sync a shadow
+	 * page using the current MMU context.
+	 *
+	 *  - level: not part of the overall MMU role and will never match as
+	 *    the MMU's level tracks the root level
+	 *  - access: updated based on the new guest PTE
+	 *  - quadrant: not part of the overall MMU role (similar to level)
+	 */
+	const union kvm_mmu_page_role sync_role_ign = {
+		.level = 0xf,
+		.access = 0x7,
+		.quadrant = 0x3,
+	};
+
+	/*
+	 * Direct pages can never be unsync, and KVM should never attempt to
+	 * sync a shadow page for a different MMU context, e.g. if the role
+	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
+	 * reserved bits checks will be wrong, etc...
+	 */
+	if (WARN_ON_ONCE(sp->role.direct ||
+			 (sp->role.word ^ mmu_role.word) & ~sync_role_ign.word))
+		return 0;

 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);

From patchwork Tue Jun 22 17:56:57 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338239
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:56:57 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-13-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 12/54] KVM: x86/mmu: Drop the intermediate "transient" __kvm_sync_page()
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Move the kvm_unlink_unsync_page() call out of kvm_sync_page() and into
its sole caller, and fold __kvm_sync_page() into kvm_sync_page() since
the latter becomes a pure pass-through.  There really should be no
reason for code to do a complete sync of a shadow page outside of the
full kvm_mmu_sync_roots(), e.g. the one use case that crept in turned
out to be flawed and counter-productive.

Update the comment in kvm_mmu_get_page() regarding its sync_page()
usage, which is anything but obvious.

Drop the stale comment about @sp->gfn needing to be write-protected, as
it directly contradicts the kvm_mmu_get_page() usage.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2e2d66319325..77296ce6215f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1780,18 +1780,6 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 		&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
 		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else

-/* @sp->gfn should be write-protected at the call site */
-static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			    struct list_head *invalid_list)
-{
-	if (vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
-		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
-		return false;
-	}
-
-	return true;
-}
-
 static bool kvm_mmu_remote_flush_or_zap(struct kvm *kvm,
 					struct list_head *invalid_list,
 					bool remote_flush)
@@ -1833,8 +1821,12 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 static bool kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			 struct list_head *invalid_list)
 {
-	kvm_unlink_unsync_page(vcpu->kvm, sp);
-	return __kvm_sync_page(vcpu, sp, invalid_list);
+	if (vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
+		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
+		return false;
+	}
+
+	return true;
 }

 struct mmu_page_path {
@@ -1931,6 +1923,7 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
 		}

 		for_each_sp(pages, sp, parents, i) {
+			kvm_unlink_unsync_page(vcpu->kvm, sp);
 			flush |= kvm_sync_page(vcpu, sp, &invalid_list);
 			mmu_pages_clear_parents(&parents);
 		}
@@ -2008,10 +2001,19 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			goto trace_get_page;

 		if (sp->unsync) {
-			/* The page is good, but __kvm_sync_page might still end
-			 * up zapping it.  If so, break in order to rebuild it.
+			/*
+			 * The page is good, but is stale.  "Sync" the page to
+			 * get the latest guest state, but don't write-protect
+			 * the page and don't mark it synchronized!  KVM needs
+			 * to ensure the mapping is valid, but doesn't need to
+			 * fully sync (write-protect) the page until the guest
+			 * invalidates the TLB mapping.  This allows multiple
+			 * SPs for a single gfn to be unsync.
+			 *
+			 * If the sync fails, the page is zapped.  If so, break
+			 * in order to rebuild it.
 			 */
-			if (!__kvm_sync_page(vcpu, sp, &invalid_list))
+			if (!kvm_sync_page(vcpu, sp, &invalid_list))
 				break;

 			WARN_ON(!list_empty(&invalid_list));

From patchwork Tue Jun 22 17:56:58 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338241
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:56:58 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-14-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 13/54] KVM: x86/mmu: Rename unsync helper and update related comments
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Rename mmu_need_write_protect() to mmu_try_to_unsync_pages() and update
a variety of related, stale comments.  Add several new comments to call
out subtle details, e.g. that upper-level shadow pages are
write-tracked, and that can_unsync is false iff KVM is in the process of
synchronizing pages.

No functional change intended.
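Besides the rename, the helper also switches from returning bool ("need
write protect?") to returning 0/-EPERM ("try to unsync").  A heavily
simplified sketch of the new contract (toy user-space C; the three
boolean parameters compress conditions that the real code derives from
the page-track machinery and the gfn hash walk):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Shape of the renamed helper's contract: 0 means every reachable shadow
 * page was (or could be) marked unsync; -EPERM means the caller must
 * write-protect the new SPTE instead.  Toy stand-ins throughout.
 */
static int toy_try_to_unsync_pages(bool write_tracked, bool can_unsync,
				   bool has_unsync_candidates)
{
	if (write_tracked)		/* upper-level SPs are write-tracked */
		return -EPERM;

	if (has_unsync_candidates && !can_unsync)
		return -EPERM;		/* mid-sync: no new unsync pages */

	return 0;			/* pages marked unsync, writable SPTE ok */
}

int main(void)
{
	printf("plain 4K page: %d\n", toy_try_to_unsync_pages(false, true, true));
	printf("write-tracked: %d\n", toy_try_to_unsync_pages(true, true, true));
	printf("during sync:   %d\n", toy_try_to_unsync_pages(false, false, true));
	return 0;
}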
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 34 ++++++++++++++++++++++++---------
 arch/x86/kvm/mmu/mmu_internal.h |  3 +--
 arch/x86/kvm/mmu/spte.c         | 10 ++++++++--
 3 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 77296ce6215f..0171c245ecc7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2458,17 +2458,33 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	kvm_mmu_mark_parents_unsync(sp);
 }

-bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
-			    bool can_unsync)
+/*
+ * Attempt to unsync any shadow pages that can be reached by the specified gfn,
+ * KVM is creating a writable mapping for said gfn.  Returns 0 if all pages
+ * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must
+ * be write-protected.
+ */
+int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
 {
 	struct kvm_mmu_page *sp;

+	/*
+	 * Force write-protection if the page is being tracked.  Note, the page
+	 * track machinery is used to write-protect upper-level shadow pages,
+	 * i.e. this guards the role.level == 4K assertion below!
+	 */
 	if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_WRITE))
-		return true;
+		return -EPERM;

+	/*
+	 * The page is not write-tracked, mark existing shadow pages unsync
+	 * unless KVM is synchronizing an unsync SP (can_unsync = false).  In
+	 * that case, KVM must complete emulation of the guest TLB flush before
+	 * allowing shadow pages to become unsync (writable by the guest).
+	 */
 	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
 		if (!can_unsync)
-			return true;
+			return -EPERM;

 		if (sp->unsync)
 			continue;
@@ -2499,8 +2515,8 @@ bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 	 *                      2.2 Guest issues TLB flush.
 	 *                          That causes a VM Exit.
 	 *
-	 *                      2.3 kvm_mmu_sync_pages() reads sp->unsync.
-	 *                          Since it is false, so it just returns.
+	 *                      2.3 Walking of unsync pages sees sp->unsync is
+	 *                          false and skips the page.
 	 *
 	 *                      2.4 Guest accesses GVA X.
 	 *                          Since the mapping in the SP was not updated,
@@ -2516,7 +2532,7 @@ bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 	 */
 	smp_wmb();

-	return false;
+	return 0;
 }

 static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
@@ -3461,8 +3477,8 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 	 * flush strictly after those changes are made.  We only need to
 	 * ensure that the other CPU sets these flags before any actual
 	 * changes to the page tables are made.  The comments in
-	 * mmu_need_write_protect() describe what could go wrong if this
-	 * requirement isn't satisfied.
+	 * mmu_try_to_unsync_pages() describe what could go wrong if
+	 * this requirement isn't satisfied.
 	 */
 	if (!smp_load_acquire(&sp->unsync) &&
 	    !smp_load_acquire(&sp->unsync_children))

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 18be103df9d5..35567293c1fd 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -122,8 +122,7 @@ static inline bool is_nx_huge_page_enabled(void)
 	return READ_ONCE(nx_huge_pages);
 }

-bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
-			    bool can_unsync);
+int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync);

 void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 8e8e8da740a0..246e61e0771e 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -147,13 +147,19 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 		/*
 		 * Optimization: for pte sync, if spte was writable the hash
 		 * lookup is unnecessary (and expensive). Write protection
-		 * is responsibility of mmu_get_page / kvm_sync_page.
+		 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
 		 * Same reasoning can be applied to dirty page accounting.
 		 */
 		if (!can_unsync && is_writable_pte(old_spte))
 			goto out;

-		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
+		/*
+		 * Unsync shadow pages that are reachable by the new, writable
+		 * SPTE.  Write-protect the SPTE if the page can't be unsync'd,
+		 * e.g. it's write-tracked (upper-level SPs) or has one or more
+		 * shadow pages and unsync'ing pages is not allowed.
+		 */
+		if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync)) {
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
 			ret |= SET_SPTE_WRITE_PROTECTED_PT;

From patchwork Tue Jun 22 17:56:59 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338243
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:56:59 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-15-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 14/54] KVM: x86: Fix sizes used to pass around CR0, CR4, and EFER
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

When configuring KVM's MMU, pass CR0 and CR4 as unsigned longs, and EFER
as a u64 in various flows (mostly MMU).  Passing the params as u32s is
functionally ok since all of the affected registers reserve bits 63:32
to zero (enforced by KVM), but it's technically wrong.

No functional change intended.
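The hazard being closed off is ordinary integer truncation at the call
boundary.  A minimal sketch (toy user-space C assuming a 64-bit build;
both init_mmu_* functions are stand-ins for the real prototypes): today
KVM enforces that bits 63:32 of these registers are zero, so the
truncation is benign, which is exactly why the changelog calls the old
signatures "technically wrong" rather than a live bug.

#include <stdint.h>
#include <stdio.h>

/* Toy versions of the two prototypes: u32 silently truncates bits 63:32. */
static void init_mmu_u32(uint32_t cr4)     { printf("u32   cr4 = %#x\n",  cr4); }
static void init_mmu_ul(unsigned long cr4) { printf("ulong cr4 = %#lx\n", cr4); }

int main(void)
{
	/* A hypothetical CR4 value with a bit above bit 31 set. */
	unsigned long cr4 = (1UL << 32) | 0x20;

	init_mmu_u32(cr4);	/* high bit lost at the call boundary */
	init_mmu_ul(cr4);	/* full value preserved */
	return 0;
}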
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h        |  4 ++--
 arch/x86/kvm/mmu/mmu.c    | 11 ++++++-----
 arch/x86/kvm/svm/nested.c |  2 +-
 arch/x86/kvm/x86.c        |  2 +-
 4 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index bc11402df83b..47131b92b990 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -66,8 +66,8 @@ void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
				 struct kvm_mmu *context);

 void kvm_init_mmu(struct kvm_vcpu *vcpu);
-void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
-			     gpa_t nested_cr3);
+void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
+			     unsigned long cr4, u64 efer, gpa_t nested_cr3);
 void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
			     bool accessed_dirty, gpa_t new_eptp);
 bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0171c245ecc7..96c16a6e0044 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4659,8 +4659,8 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only)
 }

 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
-				    u32 cr0, u32 cr4, u32 efer,
-				    union kvm_mmu_role new_role)
+				    unsigned long cr0, unsigned long cr4,
+				    u64 efer, union kvm_mmu_role new_role)
 {
 	if (!(cr0 & X86_CR0_PG))
 		nonpaging_init_context(vcpu, context);
@@ -4675,7 +4675,8 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 	reset_shadow_zero_bits_mask(vcpu, context);
 }

-static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer)
+static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
+				unsigned long cr4, u64 efer)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
 	union kvm_mmu_role new_role =
@@ -4697,8 +4698,8 @@ kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu)
 	return role;
 }

-void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
-			     gpa_t nested_cr3)
+void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
+			     unsigned long cr4, u64 efer, gpa_t nested_cr3)
 {
 	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
 	union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu);

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index dca20f949b63..9f0e7ed672b2 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1244,8 +1244,8 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 		&user_kvm_nested_state->data.svm[0];
 	struct vmcb_control_area *ctl;
 	struct vmcb_save_area *save;
+	unsigned long cr0;
 	int ret;
-	u32 cr0;

 	BUILD_BUG_ON(sizeof(struct vmcb_control_area) + sizeof(struct vmcb_save_area) >
		     KVM_STATE_NESTED_SVM_VMCB_SIZE);

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 92b4a9305651..2d3b9f10b14a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9076,8 +9076,8 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 {
 	struct kvm_segment cs, ds;
 	struct desc_ptr dt;
+	unsigned long cr0;
 	char buf[512];
-	u32 cr0;

 	memset(buf, 0, 512);

 #ifdef CONFIG_X86_64

From patchwork Tue Jun 22 17:57:00 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338245
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:57:00 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-16-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 15/54] KVM: nSVM: Add a comment to document why nNPT uses vmcb01, not vCPU state
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Add a comment in the nested NPT initialization flow to call out that it
intentionally uses vmcb01 instead of current vCPU state to get the
effective hCR4 and hEFER for L1's NPT context.

Note, despite nSVM's efforts to handle the case where vCPU state doesn't
reflect L1 state, the MMU may still do the wrong thing due to pulling
state from the vCPU instead of the passed in CR0/CR4/EFER values.  This
will be addressed in future commits.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 9f0e7ed672b2..33b2f9337e26 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -98,6 +98,12 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu)
 	WARN_ON(mmu_is_nested(vcpu));

 	vcpu->arch.mmu = &vcpu->arch.guest_mmu;
+
+	/*
+	 * L1's CR4 and EFER are stuffed into vmcb01 by the caller.  Note, when
+	 * called via KVM_SET_NESTED_STATE, that state may _not_ match current
+	 * vCPU state.  CR0.WP is explicitly ignored, while CR0.PG is required.
+	 */
 	kvm_init_shadow_npt_mmu(vcpu, X86_CR0_PG, svm->vmcb01.ptr->save.cr4,
				svm->vmcb01.ptr->save.efer,
				svm->nested.ctl.nested_cr3);

From patchwork Tue Jun 22 17:57:01 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338247
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:57:01 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-17-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 16/54] KVM: x86/mmu: Drop smep_andnot_wp check from "uses NX" for shadow MMUs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Drop the smep_andnot_wp role check from the "uses NX" calculation now
that all non-nested shadow MMUs treat NX as used via the !TDP check.

The shadow MMU for nested NPT, which shares the helper, does not need to
deal with SMEP (or WP) as NPT walks are always "user" accesses and WP is
explicitly noted as being ignored:

  "Table walks for guest page tables are always treated as user writes
   at the nested page table level."

  "A table walk for the guest page itself is always treated as a user
   access at the nested page table level."

  "The host hCR0.WP bit is ignored under nested paging."
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 96c16a6e0044..ca7680d1ea24 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4223,8 +4223,7 @@ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 	 * NX can be used by any non-nested shadow MMU to avoid having to reset
 	 * MMU contexts.  Note, KVM forces EFER.NX=1 when TDP is disabled.
 	 */
-	bool uses_nx = context->nx || !tdp_enabled ||
-		context->mmu_role.base.smep_andnot_wp;
+	bool uses_nx = context->nx || !tdp_enabled;
 	struct rsvd_bits_validate *shadow_zero_check;
 	int i;

From patchwork Tue Jun 22 17:57:02 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338249
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:57:02 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-18-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 17/54] KVM: x86: Read and pass all CR0/CR4 role bits to shadow MMU helper
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Grab all CR0/CR4 MMU role bits from current vCPU state when initializing
a non-nested shadow MMU.  Extract the masks from kvm_post_set_cr{0,4}(),
as the CR0/CR4 update masks must exactly match the mmu_role bits, with
one exception (see below).  The "full" CR0/CR4 will be used by future
commits to initialize the MMU and its role, as opposed to the current
approach of pulling everything from vCPU, which is incorrect for certain
flows, e.g. nested NPT.

CR4.LA57 is an exception, as it can be toggled on VM-Exit (for L1's MMU)
but can't be toggled via MOV CR4 while long mode is active.  I.e. LA57
needs to be in the mmu_role, but technically doesn't need to be checked
by kvm_post_set_cr4().  However, the extra check is completely benign as
the hardware restrictions simply mean LA57 will never be _the_ cause of
a MMU reset during MOV CR4.
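Defining the mask once and using it on both sides is what keeps
kvm_post_set_cr{0,4}() and the role calculation from drifting apart.  A
toy sketch of the pattern (self-contained user-space C; the TOY_CR4_*
constants are stand-ins for the kernel's X86_CR4_* definitions):

#include <stdbool.h>
#include <stdio.h>

/* Toy CR4 bits; the real X86_CR4_* values live in the kernel headers. */
#define TOY_CR4_PSE  (1u << 4)
#define TOY_CR4_PAE  (1u << 5)
#define TOY_CR4_SMEP (1u << 20)

/* One shared mask, used both to read role bits and to decide on a reset. */
#define TOY_MMU_CR4_ROLE_BITS (TOY_CR4_PSE | TOY_CR4_PAE | TOY_CR4_SMEP)

static bool needs_mmu_reset(unsigned long old_cr4, unsigned long cr4)
{
	return (cr4 ^ old_cr4) & TOY_MMU_CR4_ROLE_BITS;
}

int main(void)
{
	unsigned long cr4 = TOY_CR4_PAE;

	/* Toggling a role bit must reset the MMU... */
	printf("toggle SMEP:  reset=%d\n", needs_mmu_reset(cr4, cr4 | TOY_CR4_SMEP));
	/* ...toggling a non-role bit must not. */
	printf("toggle bit 3: reset=%d\n", needs_mmu_reset(cr4, cr4 | (1u << 3)));
	return 0;
}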
| X86_CR4_PKE; - - if (((cr4 ^ old_cr4) & mmu_role_bits) || + if (((cr4 ^ old_cr4) & KVM_MMU_CR4_ROLE_BITS) || (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE))) kvm_mmu_reset_context(vcpu); } From patchwork Tue Jun 22 17:57:03 2021 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338251 Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:03 -0700 In-Reply-To:
<20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-19-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 18/54] KVM: x86/mmu: Move nested NPT reserved bit calculation into MMU proper From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move nested NPT's invocation of reset_shadow_zero_bits_mask() into the MMU proper and unexport said function. Aside from dropping an export, this is a baby step toward eliminating the call entirely by fixing the shadow_root_level confusion. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu.h | 3 --- arch/x86/kvm/mmu/mmu.c | 11 ++++++++--- arch/x86/kvm/svm/nested.c | 1 - 3 files changed, 8 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 4e926f4935b0..62844bacd13f 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -68,9 +68,6 @@ static __always_inline u64 rsvd_bits(int s, int e) void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask); void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only); -void -reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context); - void kvm_init_mmu(struct kvm_vcpu *vcpu); void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, unsigned long cr4, u64 efer, gpa_t nested_cr3); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 02c54426e7a2..5a46a87b23b0 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4212,8 +4212,8 @@ static inline u64 reserved_hpa_bits(void) * table in guest or amd nested guest, its mmu features completely * follow the features in guest. */ -void -reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context) +static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, + struct kvm_mmu *context) { /* * KVM uses NX when TDP is disabled to handle a variety of scenarios, @@ -4247,7 +4247,6 @@ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context) } } -EXPORT_SYMBOL_GPL(reset_shadow_zero_bits_mask); static inline bool boot_cpu_is_amd(void) { @@ -4714,6 +4713,12 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, */ context->shadow_root_level = new_role.base.level; } + + /* + * Redo the shadow bits, the reset done by shadow_mmu_init_context() + * (above) may use the wrong shadow_root_level. 
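(Illustrative aside, not part of the patch: the ordering hazard the comment above describes can be modeled in isolation. The names toy_mmu and compute_rsvd_mask below are invented for this sketch; only the order of operations mirrors the kernel flow, where the reserved-bits mask is derived from a root level that nested NPT subsequently overrides.)

#include <assert.h>
#include <stdint.h>

struct toy_mmu {
	int shadow_root_level;
	uint64_t rsvd_mask;
};

/*
 * Pretend the reserved-bits mask depends on the root level, as the real
 * reset_shadow_zero_bits_mask() effectively does.
 */
static uint64_t compute_rsvd_mask(int level)
{
	/* e.g. PAE roots (level 3) treat bits 62:52 as reserved */
	return level == 3 ? 0x7ff0000000000000ull : 0;
}

int main(void)
{
	struct toy_mmu mmu = { .shadow_root_level = 3 };

	mmu.rsvd_mask = compute_rsvd_mask(mmu.shadow_root_level);
	mmu.shadow_root_level = 4;	/* nested-NPT style override */

	/* Stale: still derived from level 3, hence the redo below. */
	assert(mmu.rsvd_mask != compute_rsvd_mask(mmu.shadow_root_level));

	mmu.rsvd_mask = compute_rsvd_mask(mmu.shadow_root_level);
	return 0;
}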
+ */ + reset_shadow_zero_bits_mask(vcpu, context); } EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu); diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 33b2f9337e26..927e545591c3 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -110,7 +110,6 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu) vcpu->arch.mmu->get_guest_pgd = nested_svm_get_tdp_cr3; vcpu->arch.mmu->get_pdptr = nested_svm_get_tdp_pdptr; vcpu->arch.mmu->inject_page_fault = nested_svm_inject_npf_exit; - reset_shadow_zero_bits_mask(vcpu, vcpu->arch.mmu); vcpu->arch.walk_mmu = &vcpu->arch.nested_mmu; } From patchwork Tue Jun 22 17:57:04 2021 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338253 Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:04 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-20-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 19/54] KVM: x86/mmu: Grab shadow root level from mmu_role for shadow MMUs From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use the mmu_role to initialize the shadow root level instead of assuming that the level of KVM's shadow root (host) is the same as that of the guest root, or, in the case of 32-bit non-PAE paging, that KVM forces PAE paging. For nested NPT, the shadow root level cannot be adapted to L1's NPT root level and is instead always the TDP root level because NPT uses the current host CR0/CR4/EFER, e.g. 64-bit KVM can't drop into 32-bit PAE to shadow L1's NPT. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 18 +++++------------- 1 file changed, 5 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 5a46a87b23b0..5e3ee4aba2ff 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3898,7 +3898,6 @@ static void nonpaging_init_context(struct kvm_vcpu *vcpu, context->sync_page = nonpaging_sync_page; context->invlpg = NULL; context->root_level = 0; - context->shadow_root_level = PT32E_ROOT_LEVEL; context->direct_map = true; context->nx = false; } @@ -4466,10 +4465,10 @@ static void update_last_nonleaf_level(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu static void paging64_init_context_common(struct kvm_vcpu *vcpu, struct kvm_mmu *context, - int level) + int root_level) { context->nx = is_nx(vcpu); - context->root_level = level; + context->root_level = root_level; reset_rsvds_bits_mask(vcpu, context); update_permission_bitmask(vcpu, context, false); @@ -4481,7 +4480,6 @@ static void paging64_init_context_common(struct kvm_vcpu *vcpu, context->gva_to_gpa = paging64_gva_to_gpa; context->sync_page = paging64_sync_page; context->invlpg = paging64_invlpg; - context->shadow_root_level = level; context->direct_map = false; } @@ -4509,7 +4507,6 @@ static void paging32_init_context(struct kvm_vcpu *vcpu, context->gva_to_gpa = paging32_gva_to_gpa; context->sync_page = paging32_sync_page; context->invlpg = paging32_invlpg; - context->shadow_root_level = PT32E_ROOT_LEVEL; context->direct_map = false; } @@ -4669,6 +4666,8 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte else paging32_init_context(vcpu, context); + context->shadow_root_level = new_role.base.level; + context->mmu_role.as_u64 = new_role.as_u64; reset_shadow_zero_bits_mask(vcpu, context); } @@ -4704,16 +4703,9 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, __kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base); - if (new_role.as_u64 != context->mmu_role.as_u64) { + if (new_role.as_u64 != context->mmu_role.as_u64)
shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role); - /* - * Override the level set by the common init helper, nested TDP - * always uses the host's TDP configuration. - */ - context->shadow_root_level = new_role.base.level; - } - /* * Redo the shadow bits, the reset done by shadow_mmu_init_context() (above) may use the wrong shadow_root_level. From patchwork Tue Jun 22 17:57:05 2021 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338255 Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:05 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-21-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 20/54] KVM: x86/mmu: Add struct and helpers to retrieve MMU role bits from regs From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Introduce "struct kvm_mmu_role_regs" to hold the register state that is incorporated into the mmu_role. For nested TDP, the register state that is factored into the MMU isn't vCPU state; the dedicated struct will be used to propagate the correct state throughout the flows without having to pass multiple params, and also provides helpers for the various flag accessors. Intentionally make the new helpers cumbersome/ugly by prepending four underscores. In the not-too-distant future, it will be preferable to use the mmu_role to query bits as the mmu_role can drop irrelevant bits without creating contradictions, e.g. clearing CR4 bits when CR0.PG=0. Reserve the clean helper names (no underscores) for the mmu_role. Add a helper for vCPU conversion, which is the common case. No functional change intended. Signed-off-by: Sean Christopherson Reported-by: kernel test robot --- arch/x86/kvm/mmu/mmu.c | 66 +++++++++++++++++++++++++++++++++--------- 1 file changed, 53 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 5e3ee4aba2ff..3616c3b7618e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -176,9 +176,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte); static union kvm_mmu_page_role kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu); +struct kvm_mmu_role_regs { + const unsigned long cr0; + const unsigned long cr4; + const u64 efer; +}; + #define CREATE_TRACE_POINTS #include "mmutrace.h" +/* + * Yes, lots of underscores. They're a hint that you probably shouldn't be + * reading from the role_regs. Once the mmu_role is constructed, it becomes + * the single source of truth for the MMU's state.
+ */ +#define BUILD_MMU_ROLE_REGS_ACCESSOR(reg, name, flag) \ +static inline bool ____is_##reg##_##name(struct kvm_mmu_role_regs *regs)\ +{ \ + return !!(regs->reg & flag); \ +} +BUILD_MMU_ROLE_REGS_ACCESSOR(cr0, pg, X86_CR0_PG); +BUILD_MMU_ROLE_REGS_ACCESSOR(cr0, wp, X86_CR0_WP); +BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, pse, X86_CR4_PSE); +BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, pae, X86_CR4_PAE); +BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, smep, X86_CR4_SMEP); +BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, smap, X86_CR4_SMAP); +BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, pke, X86_CR4_PKE); +BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, la57, X86_CR4_LA57); +BUILD_MMU_ROLE_REGS_ACCESSOR(efer, nx, EFER_NX); +BUILD_MMU_ROLE_REGS_ACCESSOR(efer, lma, EFER_LMA); + +struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu) +{ + struct kvm_mmu_role_regs regs = { + .cr0 = kvm_read_cr0_bits(vcpu, KVM_MMU_CR0_ROLE_BITS), + .cr4 = kvm_read_cr4_bits(vcpu, KVM_MMU_CR4_ROLE_BITS), + .efer = vcpu->arch.efer, + }; + + return regs; +} static inline bool kvm_available_flush_tlb_with_range(void) { @@ -4654,14 +4691,14 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only) } static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context, - unsigned long cr0, unsigned long cr4, - u64 efer, union kvm_mmu_role new_role) + struct kvm_mmu_role_regs *regs, + union kvm_mmu_role new_role) { - if (!(cr0 & X86_CR0_PG)) + if (!____is_cr0_pg(regs)) nonpaging_init_context(vcpu, context); - else if (efer & EFER_LMA) + else if (____is_efer_lma(regs)) paging64_init_context(vcpu, context); - else if (cr4 & X86_CR4_PAE) + else if (____is_cr4_pae(regs)) paging32E_init_context(vcpu, context); else paging32_init_context(vcpu, context); @@ -4672,15 +4709,15 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte reset_shadow_zero_bits_mask(vcpu, context); } -static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, - unsigned long cr4, u64 efer) +static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, + struct kvm_mmu_role_regs *regs) { struct kvm_mmu *context = &vcpu->arch.root_mmu; union kvm_mmu_role new_role = kvm_calc_shadow_mmu_root_page_role(vcpu, false); if (new_role.as_u64 != context->mmu_role.as_u64) - shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role); + shadow_mmu_init_context(vcpu, context, regs, new_role); } static union kvm_mmu_role @@ -4699,12 +4736,17 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, unsigned long cr4, u64 efer, gpa_t nested_cr3) { struct kvm_mmu *context = &vcpu->arch.guest_mmu; + struct kvm_mmu_role_regs regs = { + .cr0 = cr0, + .cr4 = cr4, + .efer = efer, + }; union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu); __kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base); if (new_role.as_u64 != context->mmu_role.as_u64) - shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role); + shadow_mmu_init_context(vcpu, context, &regs, new_role); /* * Redo the shadow bits, the reset done by shadow_mmu_init_context() @@ -4773,11 +4815,9 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu); static void init_kvm_softmmu(struct kvm_vcpu *vcpu) { struct kvm_mmu *context = &vcpu->arch.root_mmu; + struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu); - kvm_init_shadow_mmu(vcpu, - kvm_read_cr0_bits(vcpu, KVM_MMU_CR0_ROLE_BITS), - kvm_read_cr4_bits(vcpu, KVM_MMU_CR4_ROLE_BITS), - vcpu->arch.efer); + kvm_init_shadow_mmu(vcpu, &regs); context->get_guest_pgd = get_cr3; context->get_pdptr = kvm_pdptr_read;
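To make the token pasting above concrete: hand-expanding one invocation of the builder macro from this patch, BUILD_MMU_ROLE_REGS_ACCESSOR(cr0, pg, X86_CR0_PG), yields

static inline bool ____is_cr0_pg(struct kvm_mmu_role_regs *regs)
{
	/* true iff CR0.PG is set in the captured register snapshot */
	return !!(regs->cr0 & X86_CR0_PG);
}

i.e. one uniformly named predicate per CR0/CR4/EFER bit, which is the four-underscore naming convention the commit message reserves for role_regs readers.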
From patchwork Tue Jun 22 17:57:06 2021 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338257 Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:06 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-22-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject:
[PATCH 21/54] KVM: x86/mmu: Consolidate misc updates into shadow_mmu_init_context() From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Consolidate the MMU metadata update calls to deduplicate code, and to prep for future cleanup. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3616c3b7618e..241408e6576d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4507,11 +4507,6 @@ static void paging64_init_context_common(struct kvm_vcpu *vcpu, context->nx = is_nx(vcpu); context->root_level = root_level; - reset_rsvds_bits_mask(vcpu, context); - update_permission_bitmask(vcpu, context, false); - update_pkru_bitmask(vcpu, context, false); - update_last_nonleaf_level(vcpu, context); - MMU_WARN_ON(!is_pae(vcpu)); context->page_fault = paging64_page_fault; context->gva_to_gpa = paging64_gva_to_gpa; @@ -4534,12 +4529,6 @@ static void paging32_init_context(struct kvm_vcpu *vcpu, { context->nx = false; context->root_level = PT32_ROOT_LEVEL; - - reset_rsvds_bits_mask(vcpu, context); - update_permission_bitmask(vcpu, context, false); - update_pkru_bitmask(vcpu, context, false); - update_last_nonleaf_level(vcpu, context); - context->page_fault = paging32_page_fault; context->gva_to_gpa = paging32_gva_to_gpa; context->sync_page = paging32_sync_page; @@ -4703,6 +4692,12 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte else paging32_init_context(vcpu, context); + if (____is_cr0_pg(regs)) { + reset_rsvds_bits_mask(vcpu, context); + update_permission_bitmask(vcpu, context, false); + update_pkru_bitmask(vcpu, context, false); + update_last_nonleaf_level(vcpu, context); + } context->shadow_root_level = new_role.base.level; context->mmu_role.as_u64 = new_role.as_u64; From patchwork Tue Jun 22 17:57:07 2021 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338259 Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:07 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-23-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 22/54] KVM: x86/mmu: Ignore CR0 and CR4 bits in nested EPT MMU role From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Do not incorporate CR0/CR4 bits into the role for the nested EPT MMU, as EPT behavior is not influenced by CR0/CR4. Note, this is the guest_mmu (L1's EPT), not the nested_mmu (L2's IA32 paging); the nested_mmu does need CR0/CR4, and is initialized in a separate flow. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 241408e6576d..84a40488eba7 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4767,8 +4767,10 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty, role.base.guest_mode = true; role.base.access = ACC_ALL; - role.ext = kvm_calc_mmu_role_ext(vcpu); + /* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER.
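/*
 * (Illustrative aside, not KVM code: the role is compared as a raw
 * word, e.g. new_role.as_u64 != context->mmu_role.as_u64, so any stale
 * CR0/CR4 bits left in ext would make two otherwise-identical EPT
 * configurations look different and defeat role-based reuse.  Zeroing
 * ext first guarantees bit-identical roles:
 *
 *     union kvm_mmu_extended_role a = {0}, b = {0};
 *     a.execonly = b.execonly = 1;
 *     a.valid = b.valid = 1;
 *     // a.word == b.word, independent of any guest CR0/CR4 state
 */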
*/ + role.ext.word = 0; role.ext.execonly = execonly; + role.ext.valid = 1; return role; } From patchwork Tue Jun 22 17:57:08 2021 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338261 Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:08 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-24-seanjc@google.com> Mime-Version: 1.0
References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 23/54] KVM: x86/mmu: Use MMU's role_regs, not vCPU state, to compute mmu_role From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use the provided role_regs to calculate the mmu_role instead of pulling bits from current vCPU state. For some flows, e.g. nested TDP, the vCPU state may not be correct (or relevant). Cc: Maxim Levitsky Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 92 ++++++++++++++++++++++++------------------ 1 file changed, 52 insertions(+), 40 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 84a40488eba7..896e92eac28b 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4542,17 +4542,18 @@ static void paging32E_init_context(struct kvm_vcpu *vcpu, paging64_init_context_common(vcpu, context, PT32E_ROOT_LEVEL); } -static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu) +static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu, + struct kvm_mmu_role_regs *regs) { union kvm_mmu_extended_role ext = {0}; - ext.cr0_pg = !!is_paging(vcpu); - ext.cr4_pae = !!is_pae(vcpu); - ext.cr4_smep = !!kvm_read_cr4_bits(vcpu, X86_CR4_SMEP); - ext.cr4_smap = !!kvm_read_cr4_bits(vcpu, X86_CR4_SMAP); - ext.cr4_pse = !!is_pse(vcpu); - ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE); - ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57); + ext.cr0_pg = ____is_cr0_pg(regs); + ext.cr4_pae = ____is_cr4_pae(regs); + ext.cr4_smep = ____is_cr4_smep(regs); + ext.cr4_smap = ____is_cr4_smap(regs); + ext.cr4_pse = ____is_cr4_pse(regs); + ext.cr4_pke = ____is_cr4_pke(regs); + ext.cr4_la57 = ____is_cr4_la57(regs); ext.valid = 1; @@ -4560,20 +4561,21 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu) } static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu, + struct kvm_mmu_role_regs *regs, bool base_only) { union kvm_mmu_role role = {0}; role.base.access = ACC_ALL; - role.base.nxe = !!is_nx(vcpu); - role.base.cr0_wp = is_write_protection(vcpu); + role.base.nxe = ____is_efer_nx(regs); + role.base.cr0_wp = ____is_cr0_wp(regs); role.base.smm = is_smm(vcpu); role.base.guest_mode = is_guest_mode(vcpu); if (base_only) return role; - role.ext = kvm_calc_mmu_role_ext(vcpu); + role.ext = kvm_calc_mmu_role_ext(vcpu, regs); return role; } @@ -4588,9 +4590,10 @@ static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu) } static union kvm_mmu_role -kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only) +kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, + struct kvm_mmu_role_regs *regs, bool base_only) { - union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, base_only); + union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs, base_only); role.base.ad_disabled = (shadow_accessed_mask == 0); role.base.level = kvm_mmu_get_tdp_level(vcpu); @@ -4603,8 +4606,9 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only) static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu) { struct kvm_mmu *context = &vcpu->arch.root_mmu; + struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu); union kvm_mmu_role new_role = - kvm_calc_tdp_mmu_root_page_role(vcpu, false); + 
kvm_calc_tdp_mmu_root_page_role(vcpu, &regs, false); if (new_role.as_u64 == context->mmu_role.as_u64) return; @@ -4648,30 +4652,30 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu) } static union kvm_mmu_role -kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu, bool base_only) +kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu, + struct kvm_mmu_role_regs *regs, bool base_only) { - union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, base_only); + union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs, base_only); - role.base.smep_andnot_wp = role.ext.cr4_smep && - !is_write_protection(vcpu); - role.base.smap_andnot_wp = role.ext.cr4_smap && - !is_write_protection(vcpu); - role.base.gpte_is_8_bytes = !!is_pae(vcpu); + role.base.smep_andnot_wp = role.ext.cr4_smep && !____is_cr0_wp(regs); + role.base.smap_andnot_wp = role.ext.cr4_smap && !____is_cr0_wp(regs); + role.base.gpte_is_8_bytes = ____is_cr4_pae(regs); return role; } static union kvm_mmu_role -kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only) +kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, + struct kvm_mmu_role_regs *regs, bool base_only) { union kvm_mmu_role role = - kvm_calc_shadow_root_page_role_common(vcpu, base_only); + kvm_calc_shadow_root_page_role_common(vcpu, regs, base_only); - role.base.direct = !is_paging(vcpu); + role.base.direct = !____is_cr0_pg(regs); - if (!is_long_mode(vcpu)) + if (!____is_efer_lma(regs)) role.base.level = PT32E_ROOT_LEVEL; - else if (is_la57_mode(vcpu)) + else if (____is_cr4_la57(regs)) role.base.level = PT64_ROOT_5LEVEL; else role.base.level = PT64_ROOT_4LEVEL; @@ -4709,17 +4713,18 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, { struct kvm_mmu *context = &vcpu->arch.root_mmu; union kvm_mmu_role new_role = - kvm_calc_shadow_mmu_root_page_role(vcpu, false); + kvm_calc_shadow_mmu_root_page_role(vcpu, regs, false); if (new_role.as_u64 != context->mmu_role.as_u64) shadow_mmu_init_context(vcpu, context, regs, new_role); } static union kvm_mmu_role -kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu) +kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu, + struct kvm_mmu_role_regs *regs) { union kvm_mmu_role role = - kvm_calc_shadow_root_page_role_common(vcpu, false); + kvm_calc_shadow_root_page_role_common(vcpu, regs, false); role.base.direct = false; role.base.level = kvm_mmu_get_tdp_level(vcpu); @@ -4736,7 +4741,9 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, .cr4 = cr4, .efer = efer, }; - union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu); + union kvm_mmu_role new_role; + + new_role = kvm_calc_shadow_npt_root_page_role(vcpu, &regs); __kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base); @@ -4821,9 +4828,12 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu) context->inject_page_fault = kvm_inject_page_fault; } -static union kvm_mmu_role kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu) +static union kvm_mmu_role +kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, struct kvm_mmu_role_regs *regs) { - union kvm_mmu_role role = kvm_calc_shadow_root_page_role_common(vcpu, false); + union kvm_mmu_role role; + + role = kvm_calc_shadow_root_page_role_common(vcpu, regs, false); /* * Nested MMUs are used only for walking L2's gva->gpa, they never have @@ -4832,12 +4842,12 @@ static union kvm_mmu_role kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu) */ role.base.direct = true; - if (!is_paging(vcpu)) + if (!____is_cr0_pg(regs)) role.base.level = 0; - else if (is_long_mode(vcpu)) -
role.base.level = is_la57_mode(vcpu) ? PT64_ROOT_5LEVEL : - PT64_ROOT_4LEVEL; - else if (is_pae(vcpu)) + else if (____is_efer_lma(regs)) + role.base.level = ____is_cr4_la57(regs) ? PT64_ROOT_5LEVEL : + PT64_ROOT_4LEVEL; + else if (____is_cr4_pae(regs)) role.base.level = PT32E_ROOT_LEVEL; else role.base.level = PT32_ROOT_LEVEL; @@ -4847,7 +4857,8 @@ static union kvm_mmu_role kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu) static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu) { - union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu); + struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu); + union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu, &regs); struct kvm_mmu *g_context = &vcpu->arch.nested_mmu; if (new_role.as_u64 == g_context->mmu_role.as_u64) @@ -4913,12 +4924,13 @@ EXPORT_SYMBOL_GPL(kvm_init_mmu); static union kvm_mmu_page_role kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu) { + struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu); union kvm_mmu_role role; if (tdp_enabled) - role = kvm_calc_tdp_mmu_root_page_role(vcpu, true); + role = kvm_calc_tdp_mmu_root_page_role(vcpu, &regs, true); else - role = kvm_calc_shadow_mmu_root_page_role(vcpu, true); + role = kvm_calc_shadow_mmu_root_page_role(vcpu, &regs, true); return role.base; } From patchwork Tue Jun 22 17:57:09 2021 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338263 Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:09 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-25-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 24/54] KVM: x86/mmu: Rename "nxe" role bit to "efer_nx" for macro shenanigans From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Rename "nxe" to "efer_nx" so that future macro magic can use the pattern <reg>_<bit> for all CR0, CR4, and EFER bits that are included in the role. Using "efer_nx" also makes it clear that the role bit reflects EFER.NX, not the NX bit in the corresponding PTE. Signed-off-by: Sean Christopherson --- Documentation/virt/kvm/mmu.rst | 4 ++-- arch/x86/include/asm/kvm_host.h | 4 ++-- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/mmutrace.h | 2 +- tools/lib/traceevent/plugins/plugin_kvm.c | 4 ++-- 5 files changed, 8 insertions(+), 8 deletions(-) diff --git a/Documentation/virt/kvm/mmu.rst b/Documentation/virt/kvm/mmu.rst index ddbb23998742..f60f5488e121 100644 --- a/Documentation/virt/kvm/mmu.rst +++ b/Documentation/virt/kvm/mmu.rst @@ -180,8 +180,8 @@ Shadow pages contain the following information: role.gpte_is_8_bytes: Reflects the size of the guest PTE for which the page is valid, i.e. '1' if 64-bit gptes are in use, '0' if 32-bit gptes are in use. - role.nxe: - Contains the value of efer.nxe for which the page is valid. + role.efer_nx: + Contains the value of efer.nx for which the page is valid. role.cr0_wp: Contains the value of cr0.wp for which the page is valid. role.smep_andnot_wp: diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index cdaff399ed94..8aa798c75e9a 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -274,7 +274,7 @@ struct kvm_kernel_irq_routing_entry; * by indirect shadow page can not be more than 15 bits. * * Currently, we used 14 bits that are @level, @gpte_is_8_bytes, @quadrant, @access, - * @nxe, @cr0_wp, @smep_andnot_wp and @smap_andnot_wp. + * @efer_nx, @cr0_wp, @smep_andnot_wp and @smap_andnot_wp.
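/*
 * (Illustrative aside, not part of the patch: the payoff of the
 * <reg>_<bit> spelling is mechanical token pasting.  A later patch in
 * this series generates role accessors via
 *
 *     BUILD_MMU_ROLE_ACCESSOR(base, efer, nx)   ->   is_efer_nx(mmu)
 *
 * which only works if every role bit is named exactly after the
 * register and bit it mirrors; the old "nxe" spelling broke the
 * pattern.)
 */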
*/ union kvm_mmu_page_role { u32 word; @@ -285,7 +285,7 @@ union kvm_mmu_page_role { unsigned direct:1; unsigned access:3; unsigned invalid:1; - unsigned nxe:1; + unsigned efer_nx:1; unsigned cr0_wp:1; unsigned smep_andnot_wp:1; unsigned smap_andnot_wp:1; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 896e92eac28b..7bc5b1a8fca5 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4567,7 +4567,7 @@ static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu, union kvm_mmu_role role = {0}; role.base.access = ACC_ALL; - role.base.nxe = ____is_efer_nx(regs); + role.base.efer_nx = ____is_efer_nx(regs); role.base.cr0_wp = ____is_cr0_wp(regs); role.base.smm = is_smm(vcpu); role.base.guest_mode = is_guest_mode(vcpu); diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index e798489b56b5..efbad33a0645 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -40,7 +40,7 @@ role.direct ? " direct" : "", \ access_str[role.access], \ role.invalid ? " invalid" : "", \ - role.nxe ? "" : "!", \ + role.efer_nx ? "" : "!", \ role.ad_disabled ? "!" : "", \ __entry->root_count, \ __entry->unsync ? "unsync" : "sync", 0); \ diff --git a/tools/lib/traceevent/plugins/plugin_kvm.c b/tools/lib/traceevent/plugins/plugin_kvm.c index 51ceeb9147eb..9ce7b4b68e3f 100644 --- a/tools/lib/traceevent/plugins/plugin_kvm.c +++ b/tools/lib/traceevent/plugins/plugin_kvm.c @@ -366,7 +366,7 @@ union kvm_mmu_page_role { unsigned direct:1; unsigned access:3; unsigned invalid:1; - unsigned nxe:1; + unsigned efer_nx:1; unsigned cr0_wp:1; unsigned smep_and_not_wp:1; unsigned smap_and_not_wp:1; @@ -403,7 +403,7 @@ static int kvm_mmu_print_role(struct trace_seq *s, struct tep_record *record, access_str[role.access], role.invalid ? " invalid" : "", role.cr4_pae ? "" : "!", - role.nxe ? "" : "!", + role.efer_nx ? "" : "!", role.cr0_wp ? "" : "!", role.smep_and_not_wp ? " smep" : "", role.smap_and_not_wp ? 
" smap" : "", From patchwork Tue Jun 22 17:57:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338265 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1AEB4C2B9F4 for ; Tue, 22 Jun 2021 18:00:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 03CF460E0B for ; Tue, 22 Jun 2021 18:00:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232296AbhFVSCm (ORCPT ); Tue, 22 Jun 2021 14:02:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37696 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232719AbhFVSCN (ORCPT ); Tue, 22 Jun 2021 14:02:13 -0400 Received: from mail-qk1-x74a.google.com (mail-qk1-x74a.google.com [IPv6:2607:f8b0:4864:20::74a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83F5FC0611FB for ; Tue, 22 Jun 2021 10:58:54 -0700 (PDT) Received: by mail-qk1-x74a.google.com with SMTP id t144-20020a3746960000b02903ad9c5e94baso18997564qka.16 for ; Tue, 22 Jun 2021 10:58:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=kQmirOy+CFqTjeIK11GchlgwLFqpy8qC6wsHe7x5/AA=; b=BbZKxsWTRk7inBXx75lpGni6nxDEt9V69Z1I82wmBj1CJBLiwZnnQ15qPdWYB1SVmx xMTpnaMrP70Hy/PeAhw5CkiwfLW6JHcZxDu9IrNI6uIyykS7A98rsHFirRIQ4p1RYYA0 2ZsEZmQW+/uXKyTuuGHMGlk7ix6xBKMC0R/Ae88m9zkCnU4Bpjvh6j1B54mo+d8iACG1 7CMJ5a0058s+16l76A5N2QQmPTPUkUl80KfB9NA6eGfnh4WUEYufhj/jtyPwKZjnKFrU XGr891d+dQbNWW3yoi7kCJUpneLv3phrDiakH7Ik3G7qVXZ82Xyf5xo1fSLa4xoKj9TT laBQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=kQmirOy+CFqTjeIK11GchlgwLFqpy8qC6wsHe7x5/AA=; b=t3+Clcbg6D6wq6JQdAKujREpd7w7e9nl+FCb1yqHS//q8PSiIw4g6vEaQNQd6e06mo 8Yn8GwahkqNbEV8v4KPm9lbycsl0zqun2L19CiAzFHqCYum2YnoWhb9zsAivKtMUTBK3 D3rLBtqctyI8IAiHD8QzYQBPFiottkKU3/R0lQI6niOYdpG2E0mTIB+w9K73SSqAJBnu IMS+2a77cP3nnRajEeHclNEQ6dZkHl7Mmys2Chw9Wf7DTj3x94FU91Udf/EvjHQP9ktg kvHgPd5nbdYWI46EC79o8hRcwIgTCZGNOYz3gGbdGtCYurNgIyFuBWxGtQ4MzXiiuX5P Qrsw== X-Gm-Message-State: AOAM533eptLvGD3PKPqBxD+V1z4z+KirRaSldWR043rd8azPA5s89/eC wblO7LJLUVt5s+6fPz6UCg55hPwtG2I= X-Google-Smtp-Source: ABdhPJzcaI2hEhNLBOEWavqTVhbiNePzyQoDyK2+oJIAFALaSPLGw+VmD8l1PRjxtXdd/cBuy0ZTLcHl7IY= X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:5722:92ce:361f:3832]) (user=seanjc job=sendgmr) by 2002:a25:2d55:: with SMTP id s21mr3501239ybe.338.1624384733636; Tue, 22 Jun 2021 10:58:53 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:10 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-26-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: 
git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 25/54] KVM: x86/mmu: Add helpers to query mmu_role bits From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add helpers via a builder macro for all mmu_role bits that track a CR0, CR4, or EFER bit. Digging out the bits manually is not exactly the most readable code. Future commits will switch to using mmu_role instead of vCPU state to configure the MMU, i.e. there are about to be a large number of users. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 21 +++++++++++++++++++++ arch/x86/kvm/mmu/paging_tmpl.h | 2 +- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 7bc5b1a8fca5..be95595b30c7 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -206,6 +206,27 @@ BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, la57, X86_CR4_LA57); BUILD_MMU_ROLE_REGS_ACCESSOR(efer, nx, EFER_NX); BUILD_MMU_ROLE_REGS_ACCESSOR(efer, lma, EFER_LMA); +/* + * The MMU itself (with a valid role) is the single source of truth for the + * MMU. Do not use the regs used to build the MMU/role, nor the vCPU. The + * regs don't account for dependencies, e.g. clearing CR4 bits if CR0.PG=1, + * and the vCPU may be incorrect/irrelevant. + */ +#define BUILD_MMU_ROLE_ACCESSOR(base_or_ext, reg, name) \ +static inline bool is_##reg##_##name(struct kvm_mmu *mmu) \ +{ \ + return !!(mmu->mmu_role. base_or_ext . reg##_##name); \ +} +BUILD_MMU_ROLE_ACCESSOR(ext, cr0, pg); +BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp); +BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse); +BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pae); +BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep); +BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap); +BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke); +BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57); +BUILD_MMU_ROLE_ACCESSOR(base, efer, nx); + struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu) { struct kvm_mmu_role_regs regs = { diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index b632606a87d6..5cf36eb96ee2 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -471,7 +471,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, error: errcode |= write_fault | user_fault; - if (fetch_fault && (mmu->nx || mmu->mmu_role.ext.cr4_smep)) + if (fetch_fault && (mmu->nx || is_cr4_smep(mmu))) errcode |= PFERR_FETCH_MASK; walker->fault.vector = PF_VECTOR; From patchwork Tue Jun 22 17:57:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338267 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EFE9FC2B9F4 for ; Tue, 22 Jun 2021 18:00:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 
From patchwork Tue Jun 22 17:57:11 2021
Subject: [PATCH 26/54] KVM: x86/mmu: Do not set paging-related bits in MMU role if CR0.PG=0
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:11 -0700
Message-Id: <20210622175739.3610207-27-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>

Don't set CR0/CR4/EFER bits in the MMU role if paging is disabled;
paging modifiers are irrelevant if there is no paging in the first
place.

Somewhat arbitrarily clear gpte_is_8_bytes for shadow paging if paging
is disabled in the guest.  Again, there are no guest PTEs to process,
so the size is meaningless.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index be95595b30c7..0eb77a45f1ff 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4568,13 +4568,15 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
 {
         union kvm_mmu_extended_role ext = {0};
 
-        ext.cr0_pg = ____is_cr0_pg(regs);
-        ext.cr4_pae = ____is_cr4_pae(regs);
-        ext.cr4_smep = ____is_cr4_smep(regs);
-        ext.cr4_smap = ____is_cr4_smap(regs);
-        ext.cr4_pse = ____is_cr4_pse(regs);
-        ext.cr4_pke = ____is_cr4_pke(regs);
-        ext.cr4_la57 = ____is_cr4_la57(regs);
+        if (____is_cr0_pg(regs)) {
+                ext.cr0_pg = 1;
+                ext.cr4_pae = ____is_cr4_pae(regs);
+                ext.cr4_smep = ____is_cr4_smep(regs);
+                ext.cr4_smap = ____is_cr4_smap(regs);
+                ext.cr4_pse = ____is_cr4_pse(regs);
+                ext.cr4_pke = ____is_cr4_pke(regs);
+                ext.cr4_la57 = ____is_cr4_la57(regs);
+        }
 
         ext.valid = 1;
 
@@ -4588,8 +4590,10 @@ static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
         union kvm_mmu_role role = {0};
 
         role.base.access = ACC_ALL;
-        role.base.efer_nx = ____is_efer_nx(regs);
-        role.base.cr0_wp = ____is_cr0_wp(regs);
+        if (____is_cr0_pg(regs)) {
+                role.base.efer_nx = ____is_efer_nx(regs);
+                role.base.cr0_wp = ____is_cr0_wp(regs);
+        }
         role.base.smm = is_smm(vcpu);
         role.base.guest_mode = is_guest_mode(vcpu);
 
@@ -4680,7 +4684,7 @@ kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu,
 
         role.base.smep_andnot_wp = role.ext.cr4_smep && !____is_cr0_wp(regs);
         role.base.smap_andnot_wp = role.ext.cr4_smap && !____is_cr0_wp(regs);
-        role.base.gpte_is_8_bytes = ____is_cr4_pae(regs);
+        role.base.gpte_is_8_bytes = ____is_cr0_pg(regs) && ____is_cr4_pae(regs);
 
         return role;
 }
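As a sketch of the payoff (register values invented for illustration; the field names are assumed from the series' struct kvm_mmu_role_regs): two states that differ only in a paging modifier now collapse to the same extended role once CR0.PG=0, so the role comparison in the init paths can short-circuit instead of rebuilding the MMU.

    /* Hypothetical register states, paging disabled in both. */
    struct kvm_mmu_role_regs a = { .cr0 = 0, .cr4 = X86_CR4_SMEP };
    struct kvm_mmu_role_regs b = { .cr0 = 0, .cr4 = 0 };

    /*
     * Before this patch: ext.cr4_smep differs => role mismatch => reinit.
     * After this patch: both states yield an extended role with only the
     * .valid bit set, so a cached context built for one serves the other.
     */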
From patchwork Tue Jun 22 17:57:12 2021
Subject: [PATCH 27/54] KVM: x86/mmu: Set CR4.PKE/LA57 in MMU role iff long mode is active
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:12 -0700
Message-Id: <20210622175739.3610207-28-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>

Don't set cr4_pke or cr4_la57 in the MMU role if long mode isn't
active, which is required for protection keys and 5-level paging to be
fully enabled.  Ignoring the bits avoids unnecessary reconfiguration on
reuse, and also means consumers of mmu_role don't need to manually
check for long mode.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0eb77a45f1ff..31662283dac7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4574,8 +4574,10 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
                 ext.cr4_smep = ____is_cr4_smep(regs);
                 ext.cr4_smap = ____is_cr4_smap(regs);
                 ext.cr4_pse = ____is_cr4_pse(regs);
-                ext.cr4_pke = ____is_cr4_pke(regs);
-                ext.cr4_la57 = ____is_cr4_la57(regs);
+
+                /* PKEY and LA57 are active iff long mode is active. */
+                ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
+                ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
         }
 
         ext.valid = 1;
<20210622175739.3610207-29-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 28/54] KVM: x86/mmu: Always Set new mmu_role immediately after checking old role From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Refactor shadow MMU initialization to immediately set its new mmu_role after verifying it differs from the old role, and so that all flavors of MMU initialization share the same check-and-set pattern. Immediately setting the role will allow future commits to use mmu_role to configure the MMU without consuming stale state. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 31662283dac7..337a3e571db6 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4714,6 +4714,11 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte struct kvm_mmu_role_regs *regs, union kvm_mmu_role new_role) { + if (new_role.as_u64 == context->mmu_role.as_u64) + return; + + context->mmu_role.as_u64 = new_role.as_u64; + if (!____is_cr0_pg(regs)) nonpaging_init_context(vcpu, context); else if (____is_efer_lma(regs)) @@ -4731,7 +4736,6 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte } context->shadow_root_level = new_role.base.level; - context->mmu_role.as_u64 = new_role.as_u64; reset_shadow_zero_bits_mask(vcpu, context); } @@ -4742,8 +4746,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, union kvm_mmu_role new_role = kvm_calc_shadow_mmu_root_page_role(vcpu, regs, false); - if (new_role.as_u64 != context->mmu_role.as_u64) - shadow_mmu_init_context(vcpu, context, regs, new_role); + shadow_mmu_init_context(vcpu, context, regs, new_role); } static union kvm_mmu_role @@ -4774,8 +4777,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, __kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base); - if (new_role.as_u64 != context->mmu_role.as_u64) - shadow_mmu_init_context(vcpu, context, ®s, new_role); + shadow_mmu_init_context(vcpu, context, ®s, new_role); /* * Redo the shadow bits, the reset done by shadow_mmu_init_context() @@ -4823,6 +4825,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly, if (new_role.as_u64 == context->mmu_role.as_u64) return; + context->mmu_role.as_u64 = new_role.as_u64; + context->shadow_root_level = level; context->nx = true; @@ -4833,7 +4837,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly, context->invlpg = ept_invlpg; context->root_level = level; context->direct_map = false; - context->mmu_role.as_u64 = new_role.as_u64; update_permission_bitmask(vcpu, context, true); update_pkru_bitmask(vcpu, context, true); From patchwork Tue Jun 22 17:57:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338273 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, 
From patchwork Tue Jun 22 17:57:14 2021
Subject: [PATCH 29/54] KVM: x86/mmu: Don't grab CR4.PSE for calculating shadow reserved bits
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:14 -0700
Message-Id: <20210622175739.3610207-30-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>

Unconditionally pass pse=false when calculating reserved bits for
shadow PTEs.  CR4.PSE is only relevant for 32-bit non-PAE paging, which
KVM does not use for shadow paging (including nested NPT).

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 337a3e571db6..ffcaede019e4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4281,19 +4281,22 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
          * MMU contexts.  Note, KVM forces EFER.NX=1 when TDP is disabled.
          */
         bool uses_nx = context->nx || !tdp_enabled;
+
+        /* @amd adds a check on bit of SPTEs, which KVM shouldn't use anyways. */
+        bool is_amd = true;
+        /* KVM doesn't use 2-level page tables for the shadow MMU. */
+        bool is_pse = false;
         struct rsvd_bits_validate *shadow_zero_check;
         int i;
 
-        /*
-         * Passing "true" to the last argument is okay; it adds a check
-         * on bit 8 of the SPTEs which KVM doesn't use anyway.
-         */
+        WARN_ON_ONCE(context->shadow_root_level < PT32E_ROOT_LEVEL);
+
         shadow_zero_check = &context->shadow_zero_check;
         __reset_rsvds_bits_mask(vcpu, shadow_zero_check,
                                 reserved_hpa_bits(),
                                 context->shadow_root_level, uses_nx,
                                 guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
-                                is_pse(vcpu), true);
+                                is_pse, is_amd);
 
         if (!shadow_me_mask)
                 return;
@@ -4329,7 +4332,7 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
                                 reserved_hpa_bits(),
                                 context->shadow_root_level, false,
                                 boot_cpu_has(X86_FEATURE_GBPAGES),
-                                true, true);
+                                false, true);
         else
                 __reset_rsvds_bits_mask_ept(shadow_zero_check,
                                             reserved_hpa_bits(), false);
From patchwork Tue Jun 22 17:57:15 2021
Subject: [PATCH 30/54] KVM: x86/mmu: Use MMU's role to get CR4.PSE for computing rsvd bits
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:15 -0700
Message-Id: <20210622175739.3610207-31-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>

Use the MMU's role to get CR4.PSE when calculating reserved bits for
the guest's PTEs.  Practically speaking, this is a glorified nop as the
role always comes from vCPU state for the relevant flows, but
converting to the roles will provide consistency once everything else
is converted, and will Just Work if the "always comes from vCPU"
behavior were ever to change (unlikely).

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ffcaede019e4..e912d9a83e22 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4216,7 +4216,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
                                 vcpu->arch.reserved_gpa_bits,
                                 context->root_level, context->nx,
                                 guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
-                                is_pse(vcpu),
+                                is_cr4_pse(context),
                                 guest_cpuid_is_amd_or_hygon(vcpu));
 }
From patchwork Tue Jun 22 17:57:16 2021
Subject: [PATCH 31/54] KVM: x86/mmu: Drop vCPU param from reserved bits calculator
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:16 -0700
Message-Id: <20210622175739.3610207-32-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>

Drop the vCPU param from __reset_rsvds_bits_mask() as it's now unused,
and ideally will remain unused in the future.  Any information that's
needed by the low-level helper should be explicitly provided as it's
used for both shadow/host MMUs and guest MMUs, i.e. vCPU state may be
meaningless or simply wrong.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e912d9a83e22..c3bf5d4186e9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4119,8 +4119,7 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
 #undef PTTYPE
 
 static void
-__reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
-                        struct rsvd_bits_validate *rsvd_check,
+__reset_rsvds_bits_mask(struct rsvd_bits_validate *rsvd_check,
                         u64 pa_bits_rsvd, int level, bool nx,
                         bool gbpages, bool pse, bool amd)
 {
@@ -4212,7 +4211,7 @@ __reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
                                   struct kvm_mmu *context)
 {
-        __reset_rsvds_bits_mask(vcpu, &context->guest_rsvd_check,
+        __reset_rsvds_bits_mask(&context->guest_rsvd_check,
                                 vcpu->arch.reserved_gpa_bits,
                                 context->root_level, context->nx,
                                 guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
@@ -4292,8 +4291,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
         WARN_ON_ONCE(context->shadow_root_level < PT32E_ROOT_LEVEL);
 
         shadow_zero_check = &context->shadow_zero_check;
-        __reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-                                reserved_hpa_bits(),
+        __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(),
                                 context->shadow_root_level, uses_nx,
                                 guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
                                 is_pse, is_amd);
@@ -4328,8 +4326,7 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
         shadow_zero_check = &context->shadow_zero_check;
         if (boot_cpu_is_amd())
-                __reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-                                        reserved_hpa_bits(),
+                __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(),
                                         context->shadow_root_level, false,
                                         boot_cpu_has(X86_FEATURE_GBPAGES),
                                         false, true);
         else
                 __reset_rsvds_bits_mask_ept(shadow_zero_check,
                                             reserved_hpa_bits(), false);
From patchwork Tue Jun 22 17:57:17 2021
Subject: [PATCH 32/54] KVM: x86/mmu: Use MMU's role to compute permission bitmask
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:17 -0700
Message-Id: <20210622175739.3610207-33-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>

Use the MMU's role to generate the permission bitmasks for the MMU.
For some flows, the vCPU state may not be correct (or relevant), e.g.
the nested NPT MMU can be initialized with incoherent vCPU state.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c3bf5d4186e9..bd412e082356 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4365,8 +4365,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
         (7 & (access) ? 128 : 0))
 
-static void update_permission_bitmask(struct kvm_vcpu *vcpu,
-                                      struct kvm_mmu *mmu, bool ept)
+static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
 {
         unsigned byte;
@@ -4374,9 +4373,9 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
         const u8 w = BYTE_MASK(ACC_WRITE_MASK);
         const u8 u = BYTE_MASK(ACC_USER_MASK);
 
-        bool cr4_smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP) != 0;
-        bool cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP) != 0;
-        bool cr0_wp = is_write_protection(vcpu);
+        bool cr4_smep = is_cr4_smep(mmu);
+        bool cr4_smap = is_cr4_smap(mmu);
+        bool cr0_wp = is_cr0_wp(mmu);
 
         for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
                 unsigned pfec = byte << 1;
@@ -4672,7 +4671,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
                 context->gva_to_gpa = paging32_gva_to_gpa;
         }
 
-        update_permission_bitmask(vcpu, context, false);
+        update_permission_bitmask(context, false);
         update_pkru_bitmask(vcpu, context, false);
         update_last_nonleaf_level(vcpu, context);
         reset_tdp_shadow_zero_bits_mask(vcpu, context);
@@ -4730,7 +4729,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 
         if (____is_cr0_pg(regs)) {
                 reset_rsvds_bits_mask(vcpu, context);
-                update_permission_bitmask(vcpu, context, false);
+                update_permission_bitmask(context, false);
                 update_pkru_bitmask(vcpu, context, false);
                 update_last_nonleaf_level(vcpu, context);
         }
@@ -4838,7 +4837,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
         context->root_level = level;
         context->direct_map = false;
 
-        update_permission_bitmask(vcpu, context, true);
+        update_permission_bitmask(context, true);
         update_pkru_bitmask(vcpu, context, true);
         update_last_nonleaf_level(vcpu, context);
         reset_rsvds_bits_mask_ept(vcpu, context, execonly);
@@ -4935,7 +4934,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
                 g_context->gva_to_gpa = paging32_gva_to_gpa_nested;
         }
 
-        update_permission_bitmask(vcpu, g_context, false);
+        update_permission_bitmask(g_context, false);
         update_pkru_bitmask(vcpu, g_context, false);
         update_last_nonleaf_level(vcpu, g_context);
 }
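For context, a simplified sketch of how the precomputed table is consumed at fault time; the real consumer, permission_fault(), takes additional inputs (e.g. protection keys and implicit accesses), so treat this as conceptual only:

    /*
     * Each byte of mmu->permissions[] is indexed by the page-fault error
     * code shifted right by one (the update loop computes pfec as
     * byte << 1), and each bit within the byte by the ACC_* access mask
     * of the translation.  A set bit means the access is denied.
     */
    static inline bool would_fault(struct kvm_mmu *mmu, unsigned pfec,
                                   unsigned pte_access)
    {
            return (mmu->permissions[pfec >> 1] >> pte_access) & 1;
    }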
From patchwork Tue Jun 22 17:57:18 2021
Subject: [PATCH 33/54] KVM: x86/mmu: Use MMU's role to compute PKRU bitmask
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:18 -0700
Message-Id: <20210622175739.3610207-34-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>

Use the MMU's role to calculate the Protection Keys (Restrict
Userspace) bitmask instead of pulling bits from current vCPU state.
For some flows, the vCPU state may not be correct (or relevant), e.g.
EPT doesn't interact with PKRU.  Case in point, the "ept" param simply
disappears.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bd412e082356..dcde7514358b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4460,24 +4460,17 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
  * away both AD and WD.  For all reads or if the last condition holds, WD
  * only will be masked away.
  */
-static void update_pkru_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-                                bool ept)
+static void update_pkru_bitmask(struct kvm_mmu *mmu)
 {
         unsigned bit;
         bool wp;
 
-        if (ept) {
+        if (!is_cr4_pke(mmu)) {
                 mmu->pkru_mask = 0;
                 return;
         }
 
-        /* PKEY is enabled only if CR4.PKE and EFER.LMA are both set. */
-        if (!kvm_read_cr4_bits(vcpu, X86_CR4_PKE) || !is_long_mode(vcpu)) {
-                mmu->pkru_mask = 0;
-                return;
-        }
-
-        wp = is_write_protection(vcpu);
+        wp = is_cr0_wp(mmu);
 
         for (bit = 0; bit < ARRAY_SIZE(mmu->permissions); ++bit) {
                 unsigned pfec, pkey_bits;
@@ -4672,7 +4665,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
         }
 
         update_permission_bitmask(context, false);
-        update_pkru_bitmask(vcpu, context, false);
+        update_pkru_bitmask(context);
         update_last_nonleaf_level(vcpu, context);
         reset_tdp_shadow_zero_bits_mask(vcpu, context);
 }
@@ -4730,7 +4723,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
         if (____is_cr0_pg(regs)) {
                 reset_rsvds_bits_mask(vcpu, context);
                 update_permission_bitmask(context, false);
-                update_pkru_bitmask(vcpu, context, false);
+                update_pkru_bitmask(context);
                 update_last_nonleaf_level(vcpu, context);
         }
         context->shadow_root_level = new_role.base.level;
@@ -4838,8 +4831,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
         context->direct_map = false;
 
         update_permission_bitmask(context, true);
-        update_pkru_bitmask(vcpu, context, true);
         update_last_nonleaf_level(vcpu, context);
+        update_pkru_bitmask(context);
         reset_rsvds_bits_mask_ept(vcpu, context, execonly);
         reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
 }
@@ -4935,7 +4928,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
         }
 
         update_permission_bitmask(g_context, false);
-        update_pkru_bitmask(vcpu, g_context, false);
+        update_pkru_bitmask(g_context);
         update_last_nonleaf_level(vcpu, g_context);
 }
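Taken together with patch 27, which folded EFER.LMA into the role's cr4_pke bit, the net simplification of this path looks roughly like the following before/after sketch (conditions condensed for illustration, not verbatim kernel code):

    /* Before: callers passed "ept" and the vCPU state was re-checked. */
    if (ept || !kvm_read_cr4_bits(vcpu, X86_CR4_PKE) || !is_long_mode(vcpu)) {
            mmu->pkru_mask = 0;
            return;
    }

    /* After: a single role-based check; per the commit message, EPT
     * doesn't interact with PKRU, so EPT roles never set cr4_pke. */
    if (!is_cr4_pke(mmu)) {
            mmu->pkru_mask = 0;
            return;
    }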
From patchwork Tue Jun 22 17:57:19 2021
Subject: [PATCH 34/54] KVM: x86/mmu: Use MMU's roles to compute last non-leaf level
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:19 -0700
Message-Id: <20210622175739.3610207-35-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>

Use the MMU's role to get CR4.PSE when determining the last level at
which the guest _cannot_ create a non-leaf PTE, i.e. cannot create a
huge page.

Note, the existing logic is arguably wrong when considering 5-level
paging and the case where 1gb pages aren't supported.  In practice, the
logic is confusing but not broken, because except for 32-bit non-PAE
paging, the PAGE_SIZE bit is reserved when a huge page isn't supported
at that level.  I.e. PAGE_SIZE=1 will terminate the guest walk one way
or another.  Furthermore, last_nonleaf_level is only consulted after
KVM has verified there are no reserved bits set.

All that confusion will be addressed in a future patch by dropping
last_nonleaf_level entirely.  For now, massage the code to continue the
march toward using mmu_role for (almost) all MMU computations.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dcde7514358b..67aa19ab628d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4504,12 +4504,12 @@ static void update_pkru_bitmask(struct kvm_mmu *mmu)
         }
 }
 
-static void update_last_nonleaf_level(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
+static void update_last_nonleaf_level(struct kvm_mmu *mmu)
 {
         unsigned root_level = mmu->root_level;
 
         mmu->last_nonleaf_level = root_level;
-        if (root_level == PT32_ROOT_LEVEL && is_pse(vcpu))
+        if (root_level == PT32_ROOT_LEVEL && is_cr4_pse(mmu))
                 mmu->last_nonleaf_level++;
 }
 
@@ -4666,7 +4666,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 
         update_permission_bitmask(context, false);
         update_pkru_bitmask(context);
-        update_last_nonleaf_level(vcpu, context);
+        update_last_nonleaf_level(context);
         reset_tdp_shadow_zero_bits_mask(vcpu, context);
 }
 
@@ -4724,7 +4724,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
                 reset_rsvds_bits_mask(vcpu, context);
                 update_permission_bitmask(context, false);
                 update_pkru_bitmask(context);
-                update_last_nonleaf_level(vcpu, context);
+                update_last_nonleaf_level(context);
         }
 
         context->shadow_root_level = new_role.base.level;
@@ -4831,7 +4831,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
         context->direct_map = false;
 
         update_permission_bitmask(context, true);
-        update_last_nonleaf_level(vcpu, context);
+        update_last_nonleaf_level(context);
         update_pkru_bitmask(context);
         reset_rsvds_bits_mask_ept(vcpu, context, execonly);
         reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
@@ -4929,7 +4929,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 
         update_permission_bitmask(g_context, false);
         update_pkru_bitmask(g_context);
-        update_last_nonleaf_level(vcpu, g_context);
+        update_last_nonleaf_level(g_context);
 }
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu)
mail-qv1-xf4a.google.com with SMTP id r15-20020a0562140c4fb0290262f40bf4bcso18350008qvj.11 for ; Tue, 22 Jun 2021 10:59:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=/fOnrQt0b+P3Ity+GO+jt22k738kvVjgqmFqrCLqebY=; b=L4RgXrf9+KUDS4xiUlAqkS4rxjbz53wPTpgtwoP6J2RbPVhFt3wPIvtLheHnG2WCaI dyFYt7+27Uq6gyTerEpPSOjrGsZIH5FO68fDv5aXPp2EkixEinfpuzf+YoPQx8lvlUQO TKOfTXBjXkwNWAvLWnLjhVpNipFboI4b3PP9gfiyAuvO+FgbQbGbQChPmRiHAZmis45x sr/1YxJOtzRQ5uW1vpNdBcQPLjPwsPsCeDMluHNPQmHZQr+cgUrYwTdUcB/pCIBxNT+n xRAAkriPZ8Fx47e+D/zvt3CdngWL4HscSAjhveSl2YjxIQ9AWEdtZ4abdG3ci42ROwgY YCjw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=/fOnrQt0b+P3Ity+GO+jt22k738kvVjgqmFqrCLqebY=; b=YtViFydIrqaUIRuadJR1eHEKq6w9yV0MpkB1E+/bPHN5pPdLdkxE0LPwPxwMDuoR8i C/N3rBuUofYyG+etamTtWEExmSq8on3XcYPyZFhBbufDH+vzPEKXokfbIeIICtz2YoA7 OE1VW3krh+nvwYxRcTARIdo17Wyi2vITIm2daV8HNwNMF12V6Qmrhzja+TmisLM0UEyI 95Ear/7MLl+QNxbwFRnepHYZlKsSXBN1hvo0zy/rXz+9QUUM/xOQgq8aVg1h6uzbSAm/ v93cpz2xXFzPCPIAawrcKAh33OxxbezybJIB5KZPAJZjdCuO8ErsZvi8W14uNbRNNQ9w fdWg== X-Gm-Message-State: AOAM530ECYlexa3KnnUhEf+7tuIftihUraSaI3OyP2nXIeTqOmdUidbg 2/T7PhA6zqUcaEn64IinvkjusSeUPMw= X-Google-Smtp-Source: ABdhPJwyKNqTgWBLpleYfWkDsfKxvljdyXHQcnLrZnIzrjrXorasmn/thSn2YFBKL92PkohQu22bCtSyG7Q= X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:5722:92ce:361f:3832]) (user=seanjc job=sendgmr) by 2002:a0c:d7c4:: with SMTP id g4mr26103996qvj.23.1624384756086; Tue, 22 Jun 2021 10:59:16 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:20 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-36-seanjc@google.com> Mime-Version: 1.0 References: <20210622175739.3610207-1-seanjc@google.com> X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 35/54] KVM: x86/mmu: Use MMU's role to detect EFER.NX in guest page walk From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use the NX bit from the MMU's role instead of the MMU itself so that the redundant, dedicated "nx" flag can be dropped. No functional change intended. 
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5cf36eb96ee2..c92e712607b6 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -471,7 +471,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 error:
 	errcode |= write_fault | user_fault;
 
-	if (fetch_fault && (mmu->nx || is_cr4_smep(mmu)))
+	if (fetch_fault && (is_efer_nx(mmu) || is_cr4_smep(mmu)))
 		errcode |= PFERR_FETCH_MASK;
 
 	walker->fault.vector = PF_VECTOR;
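The behavioral contract of that one-line change, restated as a
self-contained sketch (PFERR_FETCH_MASK matches the kernel's constant; the
rest of the scaffolding is assumed for illustration):

#include <stdbool.h>
#include <stdint.h>

#define PFERR_FETCH_MASK (1u << 4)

/* Mirrors the walker's error path: a failed instruction fetch is only
 * reported as a fetch fault if NX is in effect for this MMU or SMEP is
 * enabled, because otherwise execute permission cannot have been the
 * failing check.  The NX input now comes from the MMU's role snapshot
 * (is_efer_nx()) instead of the soon-to-be-dropped mmu->nx flag. */
static uint32_t walker_error_code(uint32_t errcode, bool fetch_fault,
				  bool efer_nx, bool cr4_smep)
{
	if (fetch_fault && (efer_nx || cr4_smep))
		errcode |= PFERR_FETCH_MASK;
	return errcode;
}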
From patchwork Tue Jun 22 17:57:21 2021
Subject: [PATCH 36/54] KVM: x86/mmu: Use MMU's role/role_regs to compute context's metadata
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:21 -0700
Message-Id: <20210622175739.3610207-37-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Use the MMU's role and role_regs to calculate the MMU's guest root level
and NX bit.  For some flows, the vCPU state may not be correct (or
relevant), e.g. EPT doesn't interact with EFER.NX and nested NPT will
configure the guest_mmu with possibly-stale vCPU state.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 36 ++++++++++++++++--------------------
 1 file changed, 16 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 67aa19ab628d..30cbc6cdb0db 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3948,8 +3948,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 				 max_level, true);
 }
 
-static void nonpaging_init_context(struct kvm_vcpu *vcpu,
-				   struct kvm_mmu *context)
+static void nonpaging_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = nonpaging_page_fault;
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
@@ -4513,14 +4512,13 @@ static void update_last_nonleaf_level(struct kvm_mmu *mmu)
 		mmu->last_nonleaf_level++;
 }
 
-static void paging64_init_context_common(struct kvm_vcpu *vcpu,
-					 struct kvm_mmu *context,
+static void paging64_init_context_common(struct kvm_mmu *context,
 					 int root_level)
 {
-	context->nx = is_nx(vcpu);
+	context->nx = is_efer_nx(context);
 	context->root_level = root_level;
 
-	MMU_WARN_ON(!is_pae(vcpu));
+	WARN_ON_ONCE(!is_cr4_pae(context));
 	context->page_fault = paging64_page_fault;
 	context->gva_to_gpa = paging64_gva_to_gpa;
 	context->sync_page = paging64_sync_page;
@@ -4528,17 +4526,16 @@ static void paging64_init_context_common(struct kvm_vcpu *vcpu,
 	context->direct_map = false;
 }
 
-static void paging64_init_context(struct kvm_vcpu *vcpu,
-				  struct kvm_mmu *context)
+static void paging64_init_context(struct kvm_mmu *context,
+				  struct kvm_mmu_role_regs *regs)
 {
-	int root_level = is_la57_mode(vcpu) ?
-			 PT64_ROOT_5LEVEL : PT64_ROOT_4LEVEL;
+	int root_level = ____is_cr4_la57(regs) ? PT64_ROOT_5LEVEL :
+						 PT64_ROOT_4LEVEL;
 
-	paging64_init_context_common(vcpu, context, root_level);
+	paging64_init_context_common(context, root_level);
 }
 
-static void paging32_init_context(struct kvm_vcpu *vcpu,
-				  struct kvm_mmu *context)
+static void paging32_init_context(struct kvm_mmu *context)
 {
 	context->nx = false;
 	context->root_level = PT32_ROOT_LEVEL;
@@ -4549,10 +4546,9 @@ static void paging32_init_context(struct kvm_vcpu *vcpu,
 	context->direct_map = false;
 }
 
-static void paging32E_init_context(struct kvm_vcpu *vcpu,
-				   struct kvm_mmu *context)
+static void paging32E_init_context(struct kvm_mmu *context)
 {
-	paging64_init_context_common(vcpu, context, PT32E_ROOT_LEVEL);
+	paging64_init_context_common(context, PT32E_ROOT_LEVEL);
 }
 
 static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
@@ -4712,13 +4708,13 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 	context->mmu_role.as_u64 = new_role.as_u64;
 
 	if (!____is_cr0_pg(regs))
-		nonpaging_init_context(vcpu, context);
+		nonpaging_init_context(context);
 	else if (____is_efer_lma(regs))
-		paging64_init_context(vcpu, context);
+		paging64_init_context(context, regs);
 	else if (____is_cr4_pae(regs))
-		paging32E_init_context(vcpu, context);
+		paging32E_init_context(context);
 	else
-		paging32_init_context(vcpu, context);
+		paging32_init_context(context);
 
 	if (____is_cr0_pg(regs)) {
 		reset_rsvds_bits_mask(vcpu, context);
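For readers following the ____is_cr4_la57(regs) change above: the root level
of a 64-bit guest walk is purely a function of the snapshotted registers. A
compilable sketch, with a stand-in regs struct (the real kvm_mmu_role_regs
holds the snapshotted CR0/CR4/EFER values):

#define PT64_ROOT_4LEVEL 4
#define PT64_ROOT_5LEVEL 5

#define X86_CR4_LA57 (1ul << 12)	/* architectural CR4.LA57 bit */

/* Stand-in for the snapshot taken when the MMU role is computed. */
struct kvm_mmu_role_regs {
	unsigned long cr0, cr4, efer;
};

/* 5-level vs. 4-level paging is decided by CR4.LA57 from the snapshot,
 * never by current vCPU state, which may be stale for nested setups. */
static int paging64_root_level(const struct kvm_mmu_role_regs *regs)
{
	return (regs->cr4 & X86_CR4_LA57) ? PT64_ROOT_5LEVEL : PT64_ROOT_4LEVEL;
}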
From patchwork Tue Jun 22 17:57:22 2021
Subject: [PATCH 37/54] KVM: x86/mmu: Use MMU's role to get EFER.NX during MMU configuration
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:22 -0700
Message-Id: <20210622175739.3610207-38-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Get the MMU's effective EFER.NX from its role instead of using the
one-off, dedicated flag.  This will allow dropping said flag in a
future commit.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 30cbc6cdb0db..eb6386bcc2ef 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4212,7 +4212,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 {
 	__reset_rsvds_bits_mask(&context->guest_rsvd_check,
 				vcpu->arch.reserved_gpa_bits,
-				context->root_level, context->nx,
+				context->root_level, is_efer_nx(context),
 				guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
 				is_cr4_pse(context),
 				guest_cpuid_is_amd_or_hygon(vcpu));
@@ -4278,7 +4278,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 	 * NX can be used by any non-nested shadow MMU to avoid having to reset
 	 * MMU contexts.  Note, KVM forces EFER.NX=1 when TDP is disabled.
 	 */
-	bool uses_nx = context->nx || !tdp_enabled;
+	bool uses_nx = is_efer_nx(context) || !tdp_enabled;
 	/* @amd adds a check on bit of SPTEs, which KVM shouldn't use anyways. */
 	bool is_amd = true;
@@ -4375,6 +4375,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
 	bool cr4_smep = is_cr4_smep(mmu);
 	bool cr4_smap = is_cr4_smap(mmu);
 	bool cr0_wp = is_cr0_wp(mmu);
+	bool efer_nx = is_efer_nx(mmu);
 
 	for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
 		unsigned pfec = byte << 1;
@@ -4400,7 +4401,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
 			u8 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
 
 			/* Not really needed: !nx will cause pte.nx to fault */
-			if (!mmu->nx)
+			if (!efer_nx)
 				ff = 0;
 
 			/* Allow supervisor writes if !cr0.wp */
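The new local neatly documents the dependency: the fetch-fault ("ff") leg of
the permission bitmask is meaningful only when EFER.NX is enabled. A tiny
sketch of that leg in isolation (names assumed for illustration):

#include <stdbool.h>
#include <stdint.h>

/* Without EFER.NX there is no NX bit in the guest PTEs, so a fetch can
 * never fault on execute permission by itself and the fetch-fault mask
 * collapses to zero, exactly as "if (!efer_nx) ff = 0;" does above. */
static uint8_t fetch_fault_mask(bool efer_nx, uint8_t not_executable)
{
	return efer_nx ? not_executable : 0;
}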
From patchwork Tue Jun 22 17:57:23 2021
Subject: [PATCH 38/54] KVM: x86/mmu: Drop "nx" from MMU context now that there are no readers
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:23 -0700
Message-Id: <20210622175739.3610207-39-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Drop kvm_mmu.nx as there are no consumers left.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  2 --
 arch/x86/kvm/mmu/mmu.c          | 17 -----------------
 2 files changed, 19 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8aa798c75e9a..be7088fb0594 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -423,8 +423,6 @@ struct kvm_mmu {
 	/* Can have large pages at levels 2..last_nonleaf_level-1. */
 	u8 last_nonleaf_level;
 
-	bool nx;
-
 	u64 pdptrs[4]; /* pae */
 };

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index eb6386bcc2ef..6c4655c356b7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -322,11 +322,6 @@ static int is_cpuid_PSE36(void)
 	return 1;
 }
 
-static int is_nx(struct kvm_vcpu *vcpu)
-{
-	return vcpu->arch.efer & EFER_NX;
-}
-
 static gfn_t pse36_gfn_delta(u32 gpte)
 {
 	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;
@@ -3956,7 +3951,6 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 	context->invlpg = NULL;
 	context->root_level = 0;
 	context->direct_map = true;
-	context->nx = false;
 }
 
 static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
@@ -4516,7 +4510,6 @@ static void update_last_nonleaf_level(struct kvm_mmu *mmu)
 static void paging64_init_context_common(struct kvm_mmu *context,
 					 int root_level)
 {
-	context->nx = is_efer_nx(context);
 	context->root_level = root_level;
 
 	WARN_ON_ONCE(!is_cr4_pae(context));
@@ -4538,7 +4531,6 @@ static void paging64_init_context(struct kvm_mmu *context,
 
 static void paging32_init_context(struct kvm_mmu *context)
 {
-	context->nx = false;
 	context->root_level = PT32_ROOT_LEVEL;
 	context->page_fault = paging32_page_fault;
 	context->gva_to_gpa = paging32_gva_to_gpa;
@@ -4640,22 +4632,18 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->inject_page_fault = kvm_inject_page_fault;
 
 	if (!is_paging(vcpu)) {
-		context->nx = false;
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
 		context->root_level = 0;
 	} else if (is_long_mode(vcpu)) {
-		context->nx = is_nx(vcpu);
 		context->root_level = is_la57_mode(vcpu) ?
 				PT64_ROOT_5LEVEL : PT64_ROOT_4LEVEL;
 		reset_rsvds_bits_mask(vcpu, context);
 		context->gva_to_gpa = paging64_gva_to_gpa;
 	} else if (is_pae(vcpu)) {
-		context->nx = is_nx(vcpu);
 		context->root_level = PT32E_ROOT_LEVEL;
 		reset_rsvds_bits_mask(vcpu, context);
 		context->gva_to_gpa = paging64_gva_to_gpa;
 	} else {
-		context->nx = false;
 		context->root_level = PT32_ROOT_LEVEL;
 		reset_rsvds_bits_mask(vcpu, context);
 		context->gva_to_gpa = paging32_gva_to_gpa;
@@ -4818,7 +4806,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 
 	context->shadow_root_level = level;
 
-	context->nx = true;
 	context->ept_ad = accessed_dirty;
 	context->page_fault = ept_page_fault;
 	context->gva_to_gpa = ept_gva_to_gpa;
@@ -4903,22 +4890,18 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 	 * the gva_to_gpa functions between mmu and nested_mmu are swapped.
 	 */
 	if (!is_paging(vcpu)) {
-		g_context->nx = false;
 		g_context->root_level = 0;
 		g_context->gva_to_gpa = nonpaging_gva_to_gpa_nested;
 	} else if (is_long_mode(vcpu)) {
-		g_context->nx = is_nx(vcpu);
 		g_context->root_level = is_la57_mode(vcpu) ?
 				PT64_ROOT_5LEVEL : PT64_ROOT_4LEVEL;
 		reset_rsvds_bits_mask(vcpu, g_context);
 		g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
 	} else if (is_pae(vcpu)) {
-		g_context->nx = is_nx(vcpu);
 		g_context->root_level = PT32E_ROOT_LEVEL;
 		reset_rsvds_bits_mask(vcpu, g_context);
 		g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
 	} else {
-		g_context->nx = false;
 		g_context->root_level = PT32_ROOT_LEVEL;
 		reset_rsvds_bits_mask(vcpu, g_context);
 		g_context->gva_to_gpa = paging32_gva_to_gpa_nested;
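The shape of the cleanup, in miniature: a cached boolean that every init
path had to keep in sync is replaced by an accessor over the role snapshot,
leaving a single source of truth. A sketch with assumed names:

#include <stdbool.h>

struct mmu_sketch {
	unsigned int role_efer_nx : 1;	/* role snapshot: kept */
	/* bool nx; */			/* dedicated cache: dropped here */
};

/* With no readers of mmu->nx left, the role bit is authoritative and
 * there is nothing for the init paths to forget to update. */
static bool mmu_efer_nx(const struct mmu_sketch *mmu)
{
	return mmu->role_efer_nx;
}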
From patchwork Tue Jun 22 17:57:24 2021
Subject: [PATCH 39/54] KVM: x86/mmu: Get nested MMU's root level from the MMU's role
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:24 -0700
Message-Id: <20210622175739.3610207-40-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Initialize the MMU's (guest) root_level using its mmu_role instead of
redoing the calculations.  The role_regs used to calculate the mmu_role
are initialized from the vCPU, i.e. this should be a complete nop.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6c4655c356b7..6418b50d33ca 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4874,6 +4874,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 	g_context->get_guest_pgd = get_cr3;
 	g_context->get_pdptr = kvm_pdptr_read;
 	g_context->inject_page_fault = kvm_inject_page_fault;
+	g_context->root_level = new_role.base.level;
 
 	/*
 	 * L2 page tables are never shadowed, so there is no need to sync
@@ -4890,19 +4891,14 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 	 * nested page tables as the second level of translation. Basically
 	 * the gva_to_gpa functions between mmu and nested_mmu are swapped.
 	 */
 	if (!is_paging(vcpu)) {
-		g_context->root_level = 0;
 		g_context->gva_to_gpa = nonpaging_gva_to_gpa_nested;
 	} else if (is_long_mode(vcpu)) {
-		g_context->root_level = is_la57_mode(vcpu) ?
-				PT64_ROOT_5LEVEL : PT64_ROOT_4LEVEL;
 		reset_rsvds_bits_mask(vcpu, g_context);
 		g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
 	} else if (is_pae(vcpu)) {
-		g_context->root_level = PT32E_ROOT_LEVEL;
 		reset_rsvds_bits_mask(vcpu, g_context);
 		g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
 	} else {
-		g_context->root_level = PT32_ROOT_LEVEL;
 		reset_rsvds_bits_mask(vcpu, g_context);
 		g_context->gva_to_gpa = paging32_gva_to_gpa_nested;
 	}
From patchwork Tue Jun 22 17:57:25 2021
Subject: [PATCH 40/54] KVM: x86/mmu: Use MMU role_regs to get LA57, and drop vCPU LA57 helper
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:25 -0700
Message-Id: <20210622175739.3610207-41-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Get LA57 from the role_regs, which are initialized from the vCPU even
when TDP is enabled, instead of pulling the value directly from the vCPU
when computing the guest's root_level for TDP MMUs.  Note, the check is
inside an is_long_mode() statement, so that requirement is not lost.

Use role_regs even though the MMU's role is available and arguably
"better".  A future commit will consolidate the guest root level logic,
and it needs access to EFER.LMA, which is not tracked in the role (it
can't be toggled on VM-Exit, unlike LA57).

Drop is_la57_mode() as there are no remaining users, and to discourage
pulling MMU state from the vCPU (in the future).

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c |  2 +-
 arch/x86/kvm/x86.h     | 10 ----------
 2 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6418b50d33ca..30557b3e5c37 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4635,7 +4635,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
 		context->root_level = 0;
 	} else if (is_long_mode(vcpu)) {
-		context->root_level = is_la57_mode(vcpu) ?
+		context->root_level = ____is_cr4_la57(&regs) ?
 				PT64_ROOT_5LEVEL : PT64_ROOT_4LEVEL;
 		reset_rsvds_bits_mask(vcpu, context);
 		context->gva_to_gpa = paging64_gva_to_gpa;

diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 521f74e5bbf2..44ae10312740 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -157,16 +157,6 @@ static inline bool is_64_bit_mode(struct kvm_vcpu *vcpu)
 	return cs_l;
 }
 
-static inline bool is_la57_mode(struct kvm_vcpu *vcpu)
-{
-#ifdef CONFIG_X86_64
-	return (vcpu->arch.efer & EFER_LMA) &&
-		kvm_read_cr4_bits(vcpu, X86_CR4_LA57);
-#else
-	return 0;
-#endif
-}
-
 static inline bool x86_exception_has_error_code(unsigned int vector)
 {
 	static u32 exception_has_error_code = BIT(DF_VECTOR) | BIT(TS_VECTOR) |
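The dropped helper's semantics survive in role_regs form; the sketch below
mirrors what is_la57_mode() computed, using a stand-in snapshot struct
(EFER_LMA and X86_CR4_LA57 match the architectural bit positions, the rest
is assumed):

#include <stdbool.h>

#define EFER_LMA	(1ull << 10)	/* long mode active */
#define X86_CR4_LA57	(1ul << 12)	/* 57-bit linear addresses */

struct kvm_mmu_role_regs {
	unsigned long cr0, cr4;
	unsigned long long efer;
};

/* LA57 only matters in long mode, hence the EFER.LMA guard; LMA itself
 * is deliberately read from the raw register snapshot because it is not
 * tracked in the MMU role (it cannot toggle on VM-Exit, unlike LA57). */
static bool regs_la57_active(const struct kvm_mmu_role_regs *regs)
{
	return (regs->efer & EFER_LMA) && (regs->cr4 & X86_CR4_LA57);
}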
From patchwork Tue Jun 22 17:57:26 2021
Subject: [PATCH 41/54] KVM: x86/mmu: Consolidate reset_rsvds_bits_mask() calls
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:26 -0700
Message-Id: <20210622175739.3610207-42-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Move calls to reset_rsvds_bits_mask() out of the various mode statements
and under a single, more generic CR0.PG check.  This will allow for
additional code consolidation in the future.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 30557b3e5c37..52311c2efd5d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4637,18 +4637,18 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	} else if (is_long_mode(vcpu)) {
 		context->root_level = ____is_cr4_la57(&regs) ?
 				PT64_ROOT_5LEVEL : PT64_ROOT_4LEVEL;
-		reset_rsvds_bits_mask(vcpu, context);
 		context->gva_to_gpa = paging64_gva_to_gpa;
 	} else if (is_pae(vcpu)) {
 		context->root_level = PT32E_ROOT_LEVEL;
-		reset_rsvds_bits_mask(vcpu, context);
 		context->gva_to_gpa = paging64_gva_to_gpa;
 	} else {
 		context->root_level = PT32_ROOT_LEVEL;
-		reset_rsvds_bits_mask(vcpu, context);
 		context->gva_to_gpa = paging32_gva_to_gpa;
 	}
 
+	if (is_cr0_pg(context))
+		reset_rsvds_bits_mask(vcpu, context);
+
 	update_permission_bitmask(context, false);
 	update_pkru_bitmask(context);
 	update_last_nonleaf_level(context);
@@ -4890,18 +4890,17 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 	 * nested page tables as the second level of translation. Basically
 	 * the gva_to_gpa functions between mmu and nested_mmu are swapped.
 	 */
-	if (!is_paging(vcpu)) {
+	if (!is_paging(vcpu))
 		g_context->gva_to_gpa = nonpaging_gva_to_gpa_nested;
-	} else if (is_long_mode(vcpu)) {
-		reset_rsvds_bits_mask(vcpu, g_context);
+	else if (is_long_mode(vcpu))
 		g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
-	} else if (is_pae(vcpu)) {
-		reset_rsvds_bits_mask(vcpu, g_context);
+	else if (is_pae(vcpu))
 		g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
-	} else {
-		reset_rsvds_bits_mask(vcpu, g_context);
+	else
 		g_context->gva_to_gpa = paging32_gva_to_gpa_nested;
-	}
+
+	if (is_cr0_pg(g_context))
+		reset_rsvds_bits_mask(vcpu, g_context);
 
 	update_permission_bitmask(g_context, false);
 	update_pkru_bitmask(g_context);
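The control-flow refactor above follows a generic pattern worth spelling
out: keep the per-mode selection minimal, then hoist the shared,
mode-independent work behind one predicate. A small stand-alone sketch (all
names here are stubs, not KVM's):

typedef unsigned long (*xlate_fn)(unsigned long gva);

static unsigned long xlate_nonpaging(unsigned long gva) { return gva; }
static unsigned long xlate_paging64(unsigned long gva)  { return gva; }
static unsigned long xlate_paging32(unsigned long gva)  { return gva; }

struct walker_ctx {
	int cr0_pg;		/* 1 if guest paging is enabled */
	int long_mode;
	xlate_fn gva_to_gpa;
};

static void reset_rsvd_bits(struct walker_ctx *ctx) { (void)ctx; /* stub */ }

static void init_walker(struct walker_ctx *ctx)
{
	/* Mode selection stays a bare if/else ladder... */
	if (!ctx->cr0_pg)
		ctx->gva_to_gpa = xlate_nonpaging;
	else if (ctx->long_mode)
		ctx->gva_to_gpa = xlate_paging64;
	else
		ctx->gva_to_gpa = xlate_paging32;

	/* ...and the reserved-bits setup runs once, behind a single
	 * CR0.PG check, instead of being duplicated in every branch. */
	if (ctx->cr0_pg)
		reset_rsvd_bits(ctx);
}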
From patchwork Tue Jun 22 17:57:27 2021
Subject: [PATCH 42/54] KVM: x86/mmu: Don't update nested guest's paging bitmasks if CR0.PG=0
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:27 -0700
Message-Id: <20210622175739.3610207-43-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Don't bother updating the bitmasks and last non-leaf level information if
paging is disabled as the metadata will never be used.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 52311c2efd5d..30eb1364fc20 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4646,12 +4646,12 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 		context->gva_to_gpa = paging32_gva_to_gpa;
 	}
 
-	if (is_cr0_pg(context))
+	if (is_cr0_pg(context)) {
 		reset_rsvds_bits_mask(vcpu, context);
-
-	update_permission_bitmask(context, false);
-	update_pkru_bitmask(context);
-	update_last_nonleaf_level(context);
+		update_permission_bitmask(context, false);
+		update_pkru_bitmask(context);
+		update_last_nonleaf_level(context);
+	}
 
 	reset_tdp_shadow_zero_bits_mask(vcpu, context);
 }
@@ -4899,12 +4899,12 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 	else
 		g_context->gva_to_gpa = paging32_gva_to_gpa_nested;
 
-	if (is_cr0_pg(g_context))
+	if (is_cr0_pg(g_context)) {
 		reset_rsvds_bits_mask(vcpu, g_context);
-
-	update_permission_bitmask(g_context, false);
-	update_pkru_bitmask(g_context);
-	update_last_nonleaf_level(g_context);
+		update_permission_bitmask(g_context, false);
+		update_pkru_bitmask(g_context);
+		update_last_nonleaf_level(g_context);
+	}
 }
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu)
From patchwork Tue Jun 22 17:57:28 2021
Subject: [PATCH 43/54] KVM: x86/mmu: Add helper to update paging metadata
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:28 -0700
Message-Id: <20210622175739.3610207-44-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Consolidate MMU guest metadata updates into a common helper for TDP,
shadow, and nested MMUs.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 30eb1364fc20..a79871fe5b01 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4507,6 +4507,18 @@ static void update_last_nonleaf_level(struct kvm_mmu *mmu)
 		mmu->last_nonleaf_level++;
 }
 
+static void reset_guest_paging_metadata(struct kvm_vcpu *vcpu,
+					struct kvm_mmu *mmu)
+{
+	if (!is_cr0_pg(mmu))
+		return;
+
+	reset_rsvds_bits_mask(vcpu, mmu);
+	update_permission_bitmask(mmu, false);
+	update_pkru_bitmask(mmu);
+	update_last_nonleaf_level(mmu);
+}
+
 static void paging64_init_context_common(struct kvm_mmu *context,
 					 int root_level)
 {
@@ -4646,12 +4658,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 		context->gva_to_gpa = paging32_gva_to_gpa;
 	}
 
-	if (is_cr0_pg(context)) {
-		reset_rsvds_bits_mask(vcpu, context);
-		update_permission_bitmask(context, false);
-		update_pkru_bitmask(context);
-		update_last_nonleaf_level(context);
-	}
+	reset_guest_paging_metadata(vcpu, context);
 
 	reset_tdp_shadow_zero_bits_mask(vcpu, context);
 }
@@ -4705,12 +4712,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 	else
 		paging32_init_context(context);
 
-	if (____is_cr0_pg(regs)) {
-		reset_rsvds_bits_mask(vcpu, context);
-		update_permission_bitmask(context, false);
-		update_pkru_bitmask(context);
-		update_last_nonleaf_level(context);
-	}
+	reset_guest_paging_metadata(vcpu, context);
 
 	context->shadow_root_level = new_role.base.level;
 	reset_shadow_zero_bits_mask(vcpu, context);
@@ -4899,12 +4901,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 	else
 		g_context->gva_to_gpa = paging32_gva_to_gpa_nested;
 
-	if (is_cr0_pg(g_context)) {
-		reset_rsvds_bits_mask(vcpu, g_context);
-		update_permission_bitmask(g_context, false);
-		update_pkru_bitmask(g_context);
-		update_last_nonleaf_level(g_context);
-	}
+	reset_guest_paging_metadata(vcpu, g_context);
 }
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu)
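With the early return on !CR0.PG inside the helper, the three callers
collapse to a single unconditional call. The guard-clause pattern, sketched
minimally with stub types and names (not KVM's):

struct mmu_stub { int cr0_pg; };

static void reset_rsvd(struct mmu_stub *m)         { (void)m; /* stub */ }
static void update_permissions(struct mmu_stub *m) { (void)m; /* stub */ }

/* The CR0.PG check lives in exactly one place... */
static void reset_guest_paging_metadata_sketch(struct mmu_stub *mmu)
{
	if (!mmu->cr0_pg)
		return;

	reset_rsvd(mmu);
	update_permissions(mmu);
}

/* ...so TDP, shadow, and nested init paths can all call it blindly. */
static void init_any_mmu(struct mmu_stub *mmu)
{
	reset_guest_paging_metadata_sketch(mmu);
}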
From patchwork Tue Jun 22 17:57:29 2021
Subject: [PATCH 44/54] KVM: x86/mmu: Add a helper to calculate root from role_regs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Date: Tue, 22 Jun 2021 10:57:29 -0700
Message-Id: <20210622175739.3610207-45-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Add a helper to calculate the level for non-EPT page tables from the
MMU's role_regs.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 60 ++++++++++++++++++------------------
 1 file changed, 25 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a79871fe5b01..b83fd635e1f2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -238,6 +238,19 @@ struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 	return regs;
 }
 
+static int role_regs_to_root_level(struct kvm_mmu_role_regs *regs)
+{
+	if (!____is_cr0_pg(regs))
+		return 0;
+	else if (____is_efer_lma(regs))
+		return ____is_cr4_la57(regs) ? PT64_ROOT_5LEVEL :
+					       PT64_ROOT_4LEVEL;
+	else if (____is_cr4_pae(regs))
+		return PT32E_ROOT_LEVEL;
+	else
+		return PT32_ROOT_LEVEL;
+}
+
 static inline bool kvm_available_flush_tlb_with_range(void)
 {
 	return kvm_x86_ops.tlb_remote_flush_with_range;
@@ -3949,7 +3962,6 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = NULL;
-	context->root_level = 0;
 	context->direct_map = true;
 }
 
@@ -4519,11 +4531,8 @@ static void reset_guest_paging_metadata(struct kvm_vcpu *vcpu,
 	update_last_nonleaf_level(mmu);
 }
 
-static void paging64_init_context_common(struct kvm_mmu *context,
-					 int root_level)
+static void paging64_init_context_common(struct kvm_mmu *context)
 {
-	context->root_level = root_level;
-
 	WARN_ON_ONCE(!is_cr4_pae(context));
 	context->page_fault = paging64_page_fault;
 	context->gva_to_gpa = paging64_gva_to_gpa;
@@ -4532,18 +4541,13 @@ static void paging64_init_context_common(struct kvm_mmu *context,
 	context->direct_map = false;
 }
 
-static void paging64_init_context(struct kvm_mmu *context,
-				  struct kvm_mmu_role_regs *regs)
+static void paging64_init_context(struct kvm_mmu *context)
 {
-	int root_level = ____is_cr4_la57(regs) ? PT64_ROOT_5LEVEL :
-						 PT64_ROOT_4LEVEL;
-
-	paging64_init_context_common(context, root_level);
+	paging64_init_context_common(context);
 }
 
 static void paging32_init_context(struct kvm_mmu *context)
 {
-	context->root_level = PT32_ROOT_LEVEL;
 	context->page_fault = paging32_page_fault;
 	context->gva_to_gpa = paging32_gva_to_gpa;
 	context->sync_page = paging32_sync_page;
@@ -4553,7 +4557,7 @@ static void paging32_init_context(struct kvm_mmu *context)
 
 static void paging32E_init_context(struct kvm_mmu *context)
 {
-	paging64_init_context_common(context, PT32E_ROOT_LEVEL);
+	paging64_init_context_common(context);
 }
 
 static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
@@ -4642,21 +4646,16 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
+	context->root_level = role_regs_to_root_level(&regs);
 
-	if (!is_paging(vcpu)) {
+	if (!is_paging(vcpu))
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
-		context->root_level = 0;
-	} else if (is_long_mode(vcpu)) {
-		context->root_level = ____is_cr4_la57(&regs) ?
-				PT64_ROOT_5LEVEL : PT64_ROOT_4LEVEL;
+	else if (is_long_mode(vcpu))
 		context->gva_to_gpa = paging64_gva_to_gpa;
-	} else if (is_pae(vcpu)) {
-		context->root_level = PT32E_ROOT_LEVEL;
+	else if (is_pae(vcpu))
 		context->gva_to_gpa = paging64_gva_to_gpa;
-	} else {
-		context->root_level = PT32_ROOT_LEVEL;
+	else
 		context->gva_to_gpa = paging32_gva_to_gpa;
-	}
 
 	reset_guest_paging_metadata(vcpu, context);
 	reset_tdp_shadow_zero_bits_mask(vcpu, context);
@@ -4706,11 +4705,12 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 	if (!____is_cr0_pg(regs))
 		nonpaging_init_context(context);
 	else if (____is_efer_lma(regs))
-		paging64_init_context(context, regs);
+		paging64_init_context(context);
 	else if (____is_cr4_pae(regs))
 		paging32E_init_context(context);
 	else
 		paging32_init_context(context);
+	context->root_level = role_regs_to_root_level(regs);
 
 	reset_guest_paging_metadata(vcpu, context);
 	context->shadow_root_level = new_role.base.level;
@@ -4849,17 +4849,7 @@ kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, struct kvm_mmu_role_regs *regs)
 	 * to "true" to try to detect bogus usage of the nested MMU.
*/ role.base.direct = true; - - if (!____is_cr0_pg(regs)) - role.base.level = 0; - else if (____is_efer_lma(regs)) - role.base.level = ____is_cr4_la57(regs) ? PT64_ROOT_5LEVEL : - PT64_ROOT_4LEVEL; - else if (____is_cr4_pae(regs)) - role.base.level = PT32E_ROOT_LEVEL; - else - role.base.level = PT32_ROOT_LEVEL; - + role.base.level = role_regs_to_root_level(regs); return role; } From patchwork Tue Jun 22 17:57:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338305 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 827F2C2B9F4 for ; Tue, 22 Jun 2021 18:03:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6DA6560E0B for ; Tue, 22 Jun 2021 18:03:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233068AbhFVSF4 (ORCPT ); Tue, 22 Jun 2021 14:05:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37706 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232902AbhFVSFM (ORCPT ); Tue, 22 Jun 2021 14:05:12 -0400 Received: from mail-qt1-x849.google.com (mail-qt1-x849.google.com [IPv6:2607:f8b0:4864:20::849]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E10AEC0698D8 for ; Tue, 22 Jun 2021 10:59:40 -0700 (PDT) Received: by mail-qt1-x849.google.com with SMTP id z4-20020ac87f840000b02902488809b6d6so82383qtj.9 for ; Tue, 22 Jun 2021 10:59:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=k+Hy19n7FuHwFcs3JayYH/6a6QV1H82GfzCUZvXtG9E=; b=kc43GkOoFjhxBlR2WnQizL7y/R6T20mYKUPc4GWABbmYOlM+hIYTLQly6uwfypaxcN hbqSMfJGPmbRYC0pc6LMOeCo2Q5wyUWTehdI26c7fwa28fuVht/I8l3tCycOpeMhGbqR HIvG+LqzkKzq7y2iEHXyjiW9wyKm8/017obkkSMOpK2yDqz5mgxc2qFkFiXJhgcfhzo3 2KbnRJCbw3z50LzyZ34L258jbdmuFgi1ZkZmbxDLohfETk33DzmzjwwyCqqcQKhXOlBg NZgkGjCSNYK/y0II5mcoczplHqFfr+fZarq5qcPGjZkgL+sqE8I1sTaUrBOMCKxLcYex Qn6g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=k+Hy19n7FuHwFcs3JayYH/6a6QV1H82GfzCUZvXtG9E=; b=Syha37GmNt98Ejegl4zj7HlcblmYNxYeezK99tr6N9MD2Bos1V5PbZE4MrMcGpbsTb AwxXVxQlwAbYJy+Fwn2uoujCl7Z9FBX+Jk5JCcZWhR/NbM0Mq13z+eVzLPxa3V50bYzJ wCaToWmaSREgtV3tV9r50MQbh4Yp4XoveiaOTvACpNCyTHabjPMnlw7iwkExxQR0zE0D 2MWCEgCQDEBm3oIZ/SpsrlrNiosMkKfYaVt7xUiyA8l5Hmv8XuPLNYF7xLJ9iDCS9rnr 3TmCfk43Sl8P+5j7zNnpHn3BJu01PnfkK1iHc+eEVLq1UtWdfChkZwKs8YJRZgkB6UXJ 9aew== X-Gm-Message-State: AOAM533SZUOgRokierwWCY5vfiWfBdjaCEXrUA4pXIr/IArSx8D9kSAu KbWqsxAoL1Dspst6HlZZJucIOfBJ1SM= X-Google-Smtp-Source: ABdhPJzDV7t+Mio030wqwE3ZWqwSFDniRvJIygSpTKs4WI9yL6JICXgMf3/orZizQZF8JOXx/lMAOkL+95U= X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:5722:92ce:361f:3832]) (user=seanjc job=sendgmr) by 2002:a25:ca10:: with 
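The decision ladder in role_regs_to_root_level() is a pure function of four
paging bits, which makes it easy to sanity-check outside the kernel. A minimal
userspace C sketch (illustrative names and constants, not kernel code):

#include <stdio.h>
#include <stdbool.h>

/* Stand-ins for KVM's PT*_ROOT_LEVEL values and ____is_*() accessors. */
enum {
	NONPAGING_LEVEL = 0,	/* CR0.PG=0: no guest page tables */
	PT32_LEVEL      = 2,	/* legacy 32-bit paging */
	PT32E_LEVEL     = 3,	/* PAE: 4-entry PDPT root */
	PT64_4_LEVEL    = 4,	/* long mode */
	PT64_5_LEVEL    = 5,	/* long mode + LA57 */
};

static int root_level(bool cr0_pg, bool efer_lma, bool cr4_la57, bool cr4_pae)
{
	if (!cr0_pg)
		return NONPAGING_LEVEL;
	if (efer_lma)
		return cr4_la57 ? PT64_5_LEVEL : PT64_4_LEVEL;
	if (cr4_pae)
		return PT32E_LEVEL;
	return PT32_LEVEL;
}

int main(void)
{
	/* A 64-bit guest without LA57 resolves to a 4-level root. */
	printf("root level = %d\n", root_level(true, true, false, true));
	return 0;
}

Centralizing the ladder is what lets the three call sites in the patch above
collapse to one-line assignments.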
From patchwork Tue Jun 22 17:57:30 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338305
Date: Tue, 22 Jun 2021 10:57:30 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-46-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 45/54] KVM: x86/mmu: Collapse 32-bit PAE and 64-bit statements for helpers
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Skip paging32E_init_context() and paging64_init_context_common() and go
directly to paging64_init_context() (was the common version) now that
the relevant flows don't need to distinguish between 64-bit and 32-bit
PAE for other reasons.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 19 ++-----------------
 1 file changed, 2 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b83fd635e1f2..4e11cb284006 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4531,9 +4531,8 @@ static void reset_guest_paging_metadata(struct kvm_vcpu *vcpu,
 	update_last_nonleaf_level(mmu);
 }
 
-static void paging64_init_context_common(struct kvm_mmu *context)
+static void paging64_init_context(struct kvm_mmu *context)
 {
-	WARN_ON_ONCE(!is_cr4_pae(context));
 	context->page_fault = paging64_page_fault;
 	context->gva_to_gpa = paging64_gva_to_gpa;
 	context->sync_page = paging64_sync_page;
@@ -4541,11 +4540,6 @@ static void paging64_init_context_common(struct kvm_mmu *context)
 	context->direct_map = false;
 }
 
-static void paging64_init_context(struct kvm_mmu *context)
-{
-	paging64_init_context_common(context);
-}
-
 static void paging32_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = paging32_page_fault;
@@ -4555,11 +4549,6 @@ static void paging32_init_context(struct kvm_mmu *context)
 	context->direct_map = false;
 }
 
-static void paging32E_init_context(struct kvm_mmu *context)
-{
-	paging64_init_context_common(context);
-}
-
 static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
 							 struct kvm_mmu_role_regs *regs)
 {
@@ -4650,8 +4639,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 
 	if (!is_paging(vcpu))
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
-	else if (is_long_mode(vcpu))
-		context->gva_to_gpa = paging64_gva_to_gpa;
 	else if (is_pae(vcpu))
 		context->gva_to_gpa = paging64_gva_to_gpa;
 	else
@@ -4704,10 +4691,8 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
 
 	if (!____is_cr0_pg(regs))
 		nonpaging_init_context(context);
-	else if (____is_efer_lma(regs))
+	else if (____is_cr4_pae(regs))
 		paging64_init_context(context);
-	else if (____is_cr4_pae(regs))
-		paging32E_init_context(context);
 	else
 		paging32_init_context(context);
 	context->root_level = role_regs_to_root_level(regs);
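The collapse works because of an architectural invariant: EFER.LMA=1 requires
CR4.PAE=1, so every mode that uses 8-byte PTEs is caught by a single PAE
check. A toy sketch of the reduced ladder (hypothetical helper names, not the
kernel's):

#include <stdio.h>
#include <stdbool.h>

static const char *fmt_8byte(void) { return "8-byte PTEs (PAE and 64-bit)"; }
static const char *fmt_4byte(void) { return "4-byte PTEs (legacy 32-bit)"; }

static const char *pte_format(bool cr0_pg, bool cr4_pae)
{
	if (!cr0_pg)
		return "nonpaging";
	/* EFER.LMA=1 implies CR4.PAE=1, so one PAE check covers both. */
	return cr4_pae ? fmt_8byte() : fmt_4byte();
}

int main(void)
{
	printf("%s\n", pte_format(true, true));
	return 0;
}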
From patchwork Tue Jun 22 17:57:31 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338307
Date: Tue, 22 Jun 2021 10:57:31 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-47-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 46/54] KVM: x86/mmu: Use MMU's role to determine PTTYPE
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Use the MMU's role instead of vCPU state or role_regs to determine the
PTTYPE, i.e. which helpers to wire up.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4e11cb284006..92260cf48d5e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4637,9 +4637,9 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->inject_page_fault = kvm_inject_page_fault;
 	context->root_level = role_regs_to_root_level(&regs);
 
-	if (!is_paging(vcpu))
+	if (!is_cr0_pg(context))
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
-	else if (is_pae(vcpu))
+	else if (is_cr4_pae(context))
 		context->gva_to_gpa = paging64_gva_to_gpa;
 	else
 		context->gva_to_gpa = paging32_gva_to_gpa;
@@ -4689,9 +4689,9 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
 
 	context->mmu_role.as_u64 = new_role.as_u64;
 
-	if (!____is_cr0_pg(regs))
+	if (!is_cr0_pg(context))
 		nonpaging_init_context(context);
-	else if (____is_cr4_pae(regs))
+	else if (is_cr4_pae(context))
 		paging64_init_context(context);
 	else
 		paging32_init_context(context);
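The pattern being introduced here, snapshotting the relevant CR0/CR4 bits into
the MMU context at init time and querying the snapshot afterwards, can be
sketched in a few lines of standalone C (all names invented for illustration):

#include <stdio.h>
#include <stdbool.h>

/* Miniature of an MMU role snapshot: the bits are captured once when
 * the context is initialized; later queries read the snapshot rather
 * than live (possibly L2) vCPU state. */
struct mmu_role {
	bool cr0_pg;
	bool cr4_pae;
};

struct mmu_ctx {
	struct mmu_role role;
};

static void mmu_init(struct mmu_ctx *mmu, bool cr0_pg, bool cr4_pae)
{
	mmu->role.cr0_pg = cr0_pg;	/* snapshot at init time */
	mmu->role.cr4_pae = cr4_pae;
}

static bool query_cr4_pae(const struct mmu_ctx *mmu)
{
	return mmu->role.cr4_pae;	/* query the role, not the vCPU */
}

int main(void)
{
	struct mmu_ctx mmu;

	mmu_init(&mmu, true, true);
	printf("PAE per role: %d\n", query_cr4_pae(&mmu));
	return 0;
}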
From patchwork Tue Jun 22 17:57:32 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338309
Date: Tue, 22 Jun 2021 10:57:32 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-48-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 47/54] KVM: x86/mmu: Add helpers to do full reserved SPTE checks w/ generic MMU
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Extract the reserved SPTE check and print helpers in get_mmio_spte() to
new helpers so that KVM can also WARN on reserved badness when making a
SPTE.

Tag the checking helper with __always_inline to improve the probability
of the compiler generating optimal code for the checking loop, e.g. gcc
appears to avoid using %rbp when the helper is tagged with a vanilla
"inline".

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c  | 23 ++---------------------
 arch/x86/kvm/mmu/spte.h | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 92260cf48d5e..34e7a489e71b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3594,19 +3594,6 @@ static gpa_t nonpaging_gva_to_gpa_nested(struct kvm_vcpu *vcpu, gpa_t vaddr,
 	return vcpu->arch.nested_mmu.translate_gpa(vcpu, vaddr, access, exception);
 }
 
-static bool
-__is_rsvd_bits_set(struct rsvd_bits_validate *rsvd_check, u64 pte, int level)
-{
-	int bit7 = (pte >> 7) & 1;
-
-	return pte & rsvd_check->rsvd_bits_mask[bit7][level-1];
-}
-
-static bool __is_bad_mt_xwr(struct rsvd_bits_validate *rsvd_check, u64 pte)
-{
-	return rsvd_check->bad_mt_xwr & BIT_ULL(pte & 0x3f);
-}
-
 static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 {
 	/*
@@ -3684,13 +3671,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	rsvd_check = &vcpu->arch.mmu->shadow_zero_check;
 
 	for (level = root; level >= leaf; level--)
-		/*
-		 * Use a bitwise-OR instead of a logical-OR to aggregate the
-		 * reserved bit and EPT's invalid memtype/XWR checks to avoid
-		 * adding a Jcc in the loop.
-		 */
-		reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level]) |
-			    __is_rsvd_bits_set(rsvd_check, sptes[level], level);
+		reserved |= is_rsvd_spte(rsvd_check, sptes[level], level);
 
 	if (reserved) {
 		pr_err("%s: reserved bits set on MMU-present spte, addr 0x%llx, hierarchy:\n",
@@ -3698,7 +3679,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 		for (level = root; level >= leaf; level--)
 			pr_err("------ spte = 0x%llx level = %d, rsvd bits = 0x%llx",
 			       sptes[level], level,
-			       rsvd_check->rsvd_bits_mask[(sptes[level] >> 7) & 1][level-1]);
+			       get_rsvd_bits(rsvd_check, sptes[level], level));
 	}
 
 	return reserved;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index bca0ba11cccf..47e10dd9352d 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -293,6 +293,38 @@ static inline bool is_dirty_spte(u64 spte)
 	return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK;
 }
 
+static inline u64 get_rsvd_bits(struct rsvd_bits_validate *rsvd_check, u64 pte,
+				int level)
+{
+	int bit7 = (pte >> 7) & 1;
+
+	return rsvd_check->rsvd_bits_mask[bit7][level-1];
+}
+
+static inline bool __is_rsvd_bits_set(struct rsvd_bits_validate *rsvd_check,
+				      u64 pte, int level)
+{
+	return pte & get_rsvd_bits(rsvd_check, pte, level);
+}
+
+static inline bool __is_bad_mt_xwr(struct rsvd_bits_validate *rsvd_check,
+				   u64 pte)
+{
+	return rsvd_check->bad_mt_xwr & BIT_ULL(pte & 0x3f);
+}
+
+static __always_inline bool is_rsvd_spte(struct rsvd_bits_validate *rsvd_check,
+					 u64 spte, int level)
+{
+	/*
+	 * Use a bitwise-OR instead of a logical-OR to aggregate the reserved
+	 * bits and EPT's invalid memtype/XWR checks to avoid an extra Jcc
+	 * (this is used in hot paths).
+	 */
+	return __is_bad_mt_xwr(rsvd_check, spte) |
+	       __is_rsvd_bits_set(rsvd_check, spte, level);
+}
+
 static inline bool spte_can_locklessly_be_made_writable(u64 spte)
 {
 	return (spte & shadow_host_writable_mask) &&
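A userspace replica of the extracted helpers, with the kernel types
simplified, shows the branchless aggregation in isolation (the mask values in
main() are arbitrary test inputs, not real reserved-bit layouts):

#include <stdio.h>
#include <stdint.h>

struct rsvd_bits_validate {
	uint64_t rsvd_bits_mask[2][5];	/* [pte bit 7][level - 1] */
	uint64_t bad_mt_xwr;		/* EPT memtype/XWR bitmap */
};

static uint64_t get_rsvd_bits(const struct rsvd_bits_validate *rc,
			      uint64_t pte, int level)
{
	int bit7 = (pte >> 7) & 1;

	return rc->rsvd_bits_mask[bit7][level - 1];
}

static int is_rsvd_spte(const struct rsvd_bits_validate *rc,
			uint64_t spte, int level)
{
	/* Bitwise-OR on purpose: both checks always run, no extra branch. */
	return (int)((rc->bad_mt_xwr >> (spte & 0x3f)) & 1) |
	       !!(spte & get_rsvd_bits(rc, spte, level));
}

int main(void)
{
	struct rsvd_bits_validate rc = {0};

	rc.rsvd_bits_mask[0][0] = 1ull << 51;		/* fake reserved bit */
	printf("%d\n", is_rsvd_spte(&rc, 1ull << 51, 1));	/* prints 1 */
	return 0;
}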
From patchwork Tue Jun 22 17:57:33 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338311
Date: Tue, 22 Jun 2021 10:57:33 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-49-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 48/54] KVM: x86/mmu: WARN on any reserved SPTE value when making a valid SPTE
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Replace make_spte()'s WARN on a collision with the magic MMIO value with
a generic WARN on reserved bits being set (including EPT's reserved WX
combination). Warning on any reserved bits covers MMIO, A/D tracking
bits with PAE paging, and in theory any future goofs that are
introduced.

Opportunistically convert to ONCE behavior to avoid spamming the kernel
log; odds are very good that if KVM screws up one SPTE, it will botch
all SPTEs for the same MMU.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 246e61e0771e..3e97cdb13eb7 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -175,7 +175,10 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 		spte = mark_spte_for_access_track(spte);
 
 out:
-	WARN_ON(is_mmio_spte(spte));
+	WARN_ONCE(is_rsvd_spte(&vcpu->arch.mmu->shadow_zero_check, spte, level),
+		  "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
+		  get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
+
 	*new_spte = spte;
 	return ret;
 }
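For reference, the ONCE behavior relied on above can be approximated in
userspace with a GCC-style statement expression; this is a rough sketch of the
semantics, not the kernel's actual macro:

#include <stdio.h>

/* Evaluate the condition every time, but print at most once. */
#define WARN_ONCE(cond, fmt, ...) ({					\
	static int __warned;						\
	int __ret = !!(cond);						\
	if (__ret && !__warned) {					\
		__warned = 1;						\
		fprintf(stderr, "WARNING: " fmt "\n", ##__VA_ARGS__);	\
	}								\
	__ret;								\
})

int main(void)
{
	for (int i = 0; i < 3; i++)
		WARN_ONCE(1, "bad spte %d", i);	/* fires only for i == 0 */
	return 0;
}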
From patchwork Tue Jun 22 17:57:34 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338313
Date: Tue, 22 Jun 2021 10:57:34 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-50-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 49/54] KVM: x86: Enhance comments for MMU roles and nested transition trickiness
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Expand the comments for the MMU roles. The interactions with gfn_track
and PGD reuse in particular are hairy.

Regarding PGD reuse, add comments in the nested virtualization flows to
call out why kvm_init_mmu() is unconditionally called even when nested
TDP is used.

Cc: Vitaly Kuznetsov
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 59 +++++++++++++++++++++++++++------
 arch/x86/kvm/svm/nested.c       |  1 +
 arch/x86/kvm/vmx/nested.c       |  1 +
 3 files changed, 50 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index be7088fb0594..2da8b5ddbd6a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -269,12 +269,36 @@ enum x86_intercept_stage;
 struct kvm_kernel_irq_routing_entry;
 
 /*
- * the pages used as guest page table on soft mmu are tracked by
- * kvm_memory_slot.arch.gfn_track which is 16 bits, so the role bits used
- * by indirect shadow page can not be more than 15 bits.
+ * kvm_mmu_page_role tracks the properties of a shadow page (where shadow page
+ * also includes TDP pages) to determine whether or not a page can be used in
+ * the given MMU context.  This is a subset of the overall kvm_mmu_role to
+ * minimize the size of kvm_memory_slot.arch.gfn_track, i.e. allows allocating
+ * 2 bytes per gfn instead of 4 bytes per gfn.
  *
- * Currently, we used 14 bits that are @level, @gpte_is_8_bytes, @quadrant, @access,
- * @efer_nx, @cr0_wp, @smep_andnot_wp and @smap_andnot_wp.
+ * Indirect upper-level shadow pages are tracked for write-protection via
+ * gfn_track.  As above, gfn_track is a 16 bit counter, so KVM must not create
+ * more than 2^16-1 upper-level shadow pages at a single gfn, otherwise
+ * gfn_track will overflow and explosions will ensue.
+ *
+ * A unique shadow page (SP) for a gfn is created if and only if an existing SP
+ * cannot be reused.  The ability to reuse a SP is tracked by its role, which
+ * incorporates various mode bits and properties of the SP.  Roughly speaking,
+ * the number of unique SPs that can theoretically be created is 2^n, where n
+ * is the number of bits that are used to compute the role.
+ *
+ * But, even though there are 18 bits in the mask below, not all combinations
+ * of modes and flags are possible.  The maximum number of possible upper-level
+ * shadow pages for a single gfn is in the neighborhood of 2^13.
+ *
+ *  - invalid shadow pages are not accounted.
+ *  - level is effectively limited to four combinations, not 16 as the number
+ *    of bits would imply, as 4k SPs are not tracked (allowed to go unsync).
+ *  - level is effectively unused for non-PAE paging because there is exactly
+ *    one upper level (see 4k SP exception above).
+ *  - quadrant is used only for non-PAE paging and is exclusive with
+ *    gpte_is_8_bytes.
+ *  - execonly and ad_disabled are used only for nested EPT, which makes it
+ *    exclusive with quadrant.
 */
 union kvm_mmu_page_role {
 	u32 word;
@@ -303,13 +327,26 @@ union kvm_mmu_page_role {
 	};
 };
 
+/*
+ * kvm_mmu_extended_role complements kvm_mmu_page_role, tracking properties
+ * relevant to the current MMU configuration.  When loading CR0, CR4, or EFER,
+ * including on nested transitions, if nothing in the full role changes then
+ * MMU re-configuration can be skipped. @valid bit is set on first usage so we
+ * don't treat all-zero structure as valid data.
+ *
+ * The properties that are tracked in the extended role but not the page role
+ * are for things that either (a) do not affect the validity of the shadow page
+ * or (b) are indirectly reflected in the shadow page's role.  For example,
+ * CR4.PKE only affects permission checks for software walks of the guest page
+ * tables (because KVM doesn't support Protection Keys with shadow paging), and
+ * CR0.PG, CR4.PAE, and CR4.PSE are indirectly reflected in role.level.
+ *
+ * Note, SMEP and SMAP are not redundant with sm*p_andnot_wp in the page role.
+ * If CR0.WP=1, KVM can reuse shadow pages for the guest regardless of SMEP and
+ * SMAP, but the MMU's permission checks for software walks need to be SMEP and
+ * SMAP aware regardless of CR0.WP.
+ */
 union kvm_mmu_extended_role {
-/*
- * This structure complements kvm_mmu_page_role caching everything needed for
- * MMU configuration. If nothing in both these structures changed, MMU
- * re-configuration can be skipped. @valid bit is set on first usage so we don't
- * treat all-zero structure as valid data.
- */
 	u32 word;
 	struct {
 		unsigned int valid:1;
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 927e545591c3..94389f974ba9 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -424,6 +424,7 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 	vcpu->arch.cr3 = cr3;
 	kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);
 
+	/* Re-initialize the MMU, e.g. to pick up CR4 MMU role changes. */
 	kvm_init_mmu(vcpu);
 
 	return 0;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 183fd9d62fc5..77fc51a852cf 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1098,6 +1098,7 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 	vcpu->arch.cr3 = cr3;
 	kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);
 
+	/* Re-initialize the MMU, e.g. to pick up CR4 MMU role changes. */
 	kvm_init_mmu(vcpu);
 
 	return 0;
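The accounting argument in the new comment reduces to simple arithmetic: the
worst-case number of distinct upper-level roles per gfn must stay below the
16-bit gfn_track limit. A trivial standalone check (the 2^13 figure is taken
from the comment above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* gfn_track is a 16-bit counter per gfn... */
	uint32_t max_track = (1u << 16) - 1;	/* 65535 */
	/* ...while ~13 effective role bits bound the distinct upper-level
	 * SPs a single gfn can need, comfortably below the counter. */
	uint32_t max_roles = 1u << 13;		/* 8192 */

	printf("gfn_track limit: %u, worst-case roles: %u\n",
	       max_track, max_roles);
	return max_roles < max_track ? 0 : 1;
}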
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog Subject: [PATCH 50/54] KVM: x86/mmu: Optimize and clean up so called "last nonleaf level" logic From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang , Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Drop the pre-computed last_nonleaf_level, which is arguably wrong and at best confusing. Per the comment: Can have large pages at levels 2..last_nonleaf_level-1. the intent of the variable would appear to be to track what levels can _legally_ have large pages, but that intent doesn't align with reality. The computed value will be wrong for 5-level paging, or if 1gb pages are not supported. The flawed code is not a problem in practice, because except for 32-bit PSE paging, bit 7 is reserved if large pages aren't supported at the level. Take advantage of this invariant and simply omit the level magic math for 64-bit page tables (including PAE). For 32-bit paging (non-PAE), the adjustments are needed purely because bit 7 is ignored if PSE=0. Retain that logic as is, but make is_last_gpte() unique per PTTYPE so that the PSE check is avoided for PAE and EPT paging. In the spirit of avoiding branches, bump the "last nonleaf level" for 32-bit PSE paging by adding the PSE bit itself. Note, bit 7 is ignored or has other meaning in CR3/EPTP, but despite FNAME(walk_addr_generic) briefly grabbing CR3/EPTP in "pte", they are not PTEs and will blow up all the other gpte helpers. Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm_host.h | 3 --- arch/x86/kvm/mmu/mmu.c | 31 ------------------------------- arch/x86/kvm/mmu/paging_tmpl.h | 31 ++++++++++++++++++++++++++++++- 3 files changed, 30 insertions(+), 35 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 2da8b5ddbd6a..c97b83cf8381 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -457,9 +457,6 @@ struct kvm_mmu { struct rsvd_bits_validate guest_rsvd_check; - /* Can have large pages at levels 2..last_nonleaf_level-1. */ - u8 last_nonleaf_level; - u64 pdptrs[4]; /* pae */ }; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 34e7a489e71b..7849f53fd874 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4071,26 +4071,6 @@ static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn, return false; } -static inline bool is_last_gpte(struct kvm_mmu *mmu, - unsigned level, unsigned gpte) -{ - /* - * The RHS has bit 7 set iff level < mmu->last_nonleaf_level. - * If it is clear, there are no large pages at this level, so clear - * PT_PAGE_SIZE_MASK in gpte if that is the case. - */ - gpte &= level - mmu->last_nonleaf_level; - - /* - * PG_LEVEL_4K always terminates. The RHS has bit 7 set - * iff level <= PG_LEVEL_4K, which for our purpose means - * level == PG_LEVEL_4K; set PT_PAGE_SIZE_MASK in gpte then. 
- */ - gpte |= level - PG_LEVEL_4K - 1; - - return gpte & PT_PAGE_SIZE_MASK; -} - #define PTTYPE_EPT 18 /* arbitrary */ #define PTTYPE PTTYPE_EPT #include "paging_tmpl.h" @@ -4491,15 +4471,6 @@ static void update_pkru_bitmask(struct kvm_mmu *mmu) } } -static void update_last_nonleaf_level(struct kvm_mmu *mmu) -{ - unsigned root_level = mmu->root_level; - - mmu->last_nonleaf_level = root_level; - if (root_level == PT32_ROOT_LEVEL && is_cr4_pse(mmu)) - mmu->last_nonleaf_level++; -} - static void reset_guest_paging_metadata(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu) { @@ -4509,7 +4480,6 @@ static void reset_guest_paging_metadata(struct kvm_vcpu *vcpu, reset_rsvds_bits_mask(vcpu, mmu); update_permission_bitmask(mmu, false); update_pkru_bitmask(mmu); - update_last_nonleaf_level(mmu); } static void paging64_init_context(struct kvm_mmu *context) @@ -4783,7 +4753,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly, context->direct_map = false; update_permission_bitmask(context, true); - update_last_nonleaf_level(context); update_pkru_bitmask(context); reset_rsvds_bits_mask_ept(vcpu, context, execonly); reset_ept_shadow_zero_bits_mask(vcpu, context, execonly); diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index c92e712607b6..ec1de57f3572 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -305,6 +305,35 @@ static inline unsigned FNAME(gpte_pkeys)(struct kvm_vcpu *vcpu, u64 gpte) return pkeys; } +static inline bool FNAME(is_last_gpte)(struct kvm_mmu *mmu, + unsigned int level, unsigned int gpte) +{ + /* + * For EPT and PAE paging (both variants), bit 7 is either reserved at + * all level or indicates a huge page (ignoring CR3/EPTP). In either + * case, bit 7 being set terminates the walk. + */ +#if PTTYPE == 32 + /* + * 32-bit paging requires special handling because bit 7 is ignored if + * CR4.PSE=0, not reserved. Clear bit 7 in the gpte if the level is + * greater than the last level for which bit 7 is the PAGE_SIZE bit. + * + * The RHS has bit 7 set iff level < (2 + PSE). If it is clear, bit 7 + * is not reserved and does not indicate a large page at this level, + * so clear PT_PAGE_SIZE_MASK in gpte if that is the case. + */ + gpte &= level - (PT32_ROOT_LEVEL + !!mmu->mmu_role.ext.cr4_pse); +#endif + /* + * PG_LEVEL_4K always terminates. The RHS has bit 7 set + * iff level <= PG_LEVEL_4K, which for our purpose means + * level == PG_LEVEL_4K; set PT_PAGE_SIZE_MASK in gpte then. + */ + gpte |= level - PG_LEVEL_4K - 1; + + return gpte & PT_PAGE_SIZE_MASK; +} /* * Fetch a guest pte for a guest virtual address, or for an L2's GPA. */ @@ -421,7 +450,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, /* Convert to ACC_*_MASK flags for struct guest_walker. */ walker->pt_access[walker->level - 1] = FNAME(gpte_access)(pt_access ^ walk_nx_mask); - } while (!is_last_gpte(mmu, walker->level, pte)); + } while (!FNAME(is_last_gpte)(mmu, walker->level, pte)); pte_pkey = FNAME(gpte_pkeys)(vcpu, pte); accessed_dirty = have_ad ? 
pte_access & PT_GUEST_ACCESSED_MASK : 0; From patchwork Tue Jun 22 17:57:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12338317 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 91F6CC2B9F4 for ; Tue, 22 Jun 2021 18:04:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 72FF461002 for ; Tue, 22 Jun 2021 18:04:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232673AbhFVSHO (ORCPT ); Tue, 22 Jun 2021 14:07:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38956 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232878AbhFVSGL (ORCPT ); Tue, 22 Jun 2021 14:06:11 -0400 Received: from mail-qt1-x84a.google.com (mail-qt1-x84a.google.com [IPv6:2607:f8b0:4864:20::84a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 88307C03C191 for ; Tue, 22 Jun 2021 10:59:53 -0700 (PDT) Received: by mail-qt1-x84a.google.com with SMTP id k18-20020ac847520000b029024ec8734412so105519qtp.4 for ; Tue, 22 Jun 2021 10:59:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=rUs+966f+CDP8VHde1NoisS+8ww2aUm1j0f8NYlZtqE=; b=Fl+C43avI6SiI4WTsJKZYi1XdqV8GHBrHMpvS/ibNPFWN1hA2wusCpPlf5pWbQwNDw Psp0XUxCPO55t2Li1b5nvreIrIP93otSJBs8gTJUr+VutLMuOwpKPAtySTDllyGVJT8t FPuoLdcXrBl4HE+I6cMmkY1VAuCMPuLr+VsMwb+VN3TZJKgG+35I0R2VXzCnM+mF+Xqe 6dRRk5DMbiXVc5P3XTcgwYcTKRf62KVxhPyGhVJzGPlqcWvSyZW5Zb76Fw+//rzevQZT JAgk7Jn8YrWX9jDpPWk/HwZoi8XH7dWENAAJ5EI8mLPo1PYVDJkNPc0D3IqrgUNcvOnP uVYw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=rUs+966f+CDP8VHde1NoisS+8ww2aUm1j0f8NYlZtqE=; b=hKnxeKSR5aEX/RFBSqq1/7J2NWQcOfMCokE+C7Sxy4coXv20InpYZ9sG0vv3Wr5R5y 7YIE/wdHnZ3GWIGZDWtuqTOyn9be2FEd3MxSmmdNOQvCfU/68gUJhxLDUK4fbND96/rL frv3pfB+CSeTYu0r8xnGQDzVxAN8QcFtFBwK1bp7EXKmZX6zOshb70iSvgwXjafLXDum mSwIC5iNlwZUfbbJ9Vk+OCuRUwZ3bi/ZgZ0v36leTmqEu8Xhu8iG1P7zNfusR7operOS Ox56vApLmmbOxsfDc3rD9djzWh4tgZcgeGx5Ynqv07dFAk1ptcONYb5T5klh41Bmq+JX cARg== X-Gm-Message-State: AOAM531TPQccHSwQTeGqwlxEtcv1f7Hl+28hdXpW/KmRKh08nIgiY8MS fT42VK7w3swk+B2/pSrAUWyjVzyjiu0= X-Google-Smtp-Source: ABdhPJwg+s3kNjwriwzqMO3mMxkbDvv/AnG8NwcyEcukQUoufCwMdy4XzMqIuM/miw5QWP8OB3hoX3k4BOs= X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:5722:92ce:361f:3832]) (user=seanjc job=sendgmr) by 2002:a25:c0d7:: with SMTP id c206mr5187941ybf.369.1624384792699; Tue, 22 Jun 2021 10:59:52 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 22 Jun 2021 10:57:36 -0700 In-Reply-To: <20210622175739.3610207-1-seanjc@google.com> Message-Id: <20210622175739.3610207-52-seanjc@google.com> Mime-Version: 1.0 References: 
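The unsigned-wraparound trick that survives in the 32-bit flavor is compact
enough to verify standalone. A userspace replica (the constants mirror the
kernel's values; the function name is invented):

#include <stdio.h>

#define PT_PAGE_SIZE_MASK (1u << 7)
#define PG_LEVEL_4K	  1
#define PT32_ROOT_LEVEL	  2

/* All arithmetic is unsigned, so "level - threshold" wraps to a value
 * with bit 7 set exactly when level < threshold. */
static unsigned int is_last_gpte32(unsigned int gpte, unsigned int level,
				   int cr4_pse)
{
	/* Keep bit 7 only where it's a PAGE_SIZE bit (level 2 w/ PSE=1). */
	gpte &= level - (PT32_ROOT_LEVEL + !!cr4_pse);

	/* Force bit 7 on at the last level; 4K entries always terminate. */
	gpte |= level - PG_LEVEL_4K - 1;

	return gpte & PT_PAGE_SIZE_MASK;
}

int main(void)
{
	/* PSE=1, level 2, PS bit set -> large page terminates the walk. */
	printf("%u\n", is_last_gpte32(1u << 7, 2, 1) != 0);
	/* PSE=0, level 2, bit 7 is ignored -> the walk continues. */
	printf("%u\n", is_last_gpte32(1u << 7, 2, 0) != 0);
	return 0;
}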
From patchwork Tue Jun 22 17:57:36 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338317
Date: Tue, 22 Jun 2021 10:57:36 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-52-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 51/54] KVM: x86/mmu: Drop redundant rsvd bits reset for nested NPT
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Drop the extra reset of shadow_zero_bits in the nested NPT flow now
that shadow_mmu_init_context computes the correct level for nested NPT.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7849f53fd874..d4969ac98a4b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4693,12 +4693,6 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
 		__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base);
 
 	shadow_mmu_init_context(vcpu, context, &regs, new_role);
-
-	/*
-	 * Redo the shadow bits, the reset done by shadow_mmu_init_context()
-	 * (above) may use the wrong shadow_root_level.
-	 */
-	reset_shadow_zero_bits_mask(vcpu, context);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
From patchwork Tue Jun 22 17:57:37 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338319
Date: Tue, 22 Jun 2021 10:57:37 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-53-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 52/54] KVM: x86/mmu: Get CR0.WP from MMU, not vCPU, in shadow page fault
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Use the current MMU instead of vCPU state to query CR0.WP when handling
a page fault.  In the nested NPT case, the current CR0.WP reflects L2,
whereas the page fault is shadowing L1's NPT.  Practically speaking,
this is a nop, as NPT walks are always user faults, but fix it up for
consistency.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h             | 5 -----
 arch/x86/kvm/mmu/paging_tmpl.h | 5 ++---
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 62844bacd13f..83e6c6965f1e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -165,11 +165,6 @@ static inline bool is_writable_pte(unsigned long pte)
 	return pte & PT_WRITABLE_MASK;
 }
 
-static inline bool is_write_protection(struct kvm_vcpu *vcpu)
-{
-	return kvm_read_cr0_bits(vcpu, X86_CR0_WP);
-}
-
 /*
  * Check if a given access (described through the I/D, W/R and U/S bits of a
  * page fault error code pfec) causes a permission fault with the given PTE
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index ec1de57f3572..260a9c06d764 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -795,7 +795,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
 	bool self_changed = false;
 
 	if (!(walker->pte_access & ACC_WRITE_MASK ||
-	      (!is_write_protection(vcpu) && !user_fault)))
+	      (!is_cr0_wp(vcpu->arch.mmu) && !user_fault)))
 		return false;
 
 	for (level = walker->level; level <= walker->max_level; level++) {
@@ -893,8 +893,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	 * we will cache the incorrect access into mmio spte.
	 */
	if (write_fault && !(walker.pte_access & ACC_WRITE_MASK) &&
-	    !is_write_protection(vcpu) && !user_fault &&
-	    !is_noslot_pfn(pfn)) {
+	    !is_cr0_wp(vcpu->arch.mmu) && !user_fault && !is_noslot_pfn(pfn)) {
		walker.pte_access |= ACC_WRITE_MASK;
		walker.pte_access &= ~ACC_USER_MASK;
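The L1-versus-L2 distinction that motivates this change can be made concrete
with a small sketch: the MMU caches the CR0.WP that governed the page tables
being shadowed, and the fault path must consult that cached bit, not the live
one (all types and names here are invented for illustration):

#include <stdio.h>
#include <stdbool.h>

struct vcpu { bool live_cr0_wp; };	/* reflects L2 while L2 runs */
struct mmu  { bool role_cr0_wp; };	/* captured from L1 at MMU init */

/* Mirrors the "!is_cr0_wp(mmu) && !user_fault" shape in the patch. */
static bool can_force_writable(const struct mmu *mmu, bool user_fault)
{
	return !mmu->role_cr0_wp && !user_fault;
}

int main(void)
{
	struct vcpu v = { .live_cr0_wp = true };   /* L2 has WP=1... */
	struct mmu  m = { .role_cr0_wp = false };  /* ...but L1 had WP=0 */

	(void)v;	/* the live value is deliberately not consulted */
	printf("forcible: %d\n", can_force_writable(&m, false));
	return 0;
}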
From patchwork Tue Jun 22 17:57:38 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338321
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:57:38 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-54-seanjc@google.com>
Mime-Version: 1.0
References: <20210622175739.3610207-1-seanjc@google.com>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
Subject: [PATCH 53/54] KVM: x86/mmu: Get CR4.SMEP from MMU, not vCPU, in shadow page fault
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Yu Zhang, Maxim Levitsky
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Use the current MMU instead of vCPU state to query CR4.SMEP when handling
a page fault.  In the nested NPT case, the current CR4.SMEP reflects L2,
whereas the page fault is shadowing L1's NPT, which uses L1's hCR4.
Practically speaking, this is a nop, as NPT walks are always user faults,
i.e. this code will never be reached, but fix it up for consistency.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 260a9c06d764..a79353fc6efd 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -903,7 +903,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 		 * then we should prevent the kernel from executing it
 		 * if SMEP is enabled.
 		 */
-		if (kvm_read_cr4_bits(vcpu, X86_CR4_SMEP))
+		if (is_cr4_smep(vcpu->arch.mmu))
 			walker.pte_access &= ~ACC_EXEC_MASK;
 	}
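For context on where is_cr0_wp() and is_cr4_smep() come from: earlier
patches in this series add accessors that read paging bits from the MMU's
cached role rather than from live vCPU registers.  A hedged sketch of that
accessor pattern follows; the macro shape and field names are assumptions
patterned on the series' direction, not verbatim kernel code.

/* Sketch (assumed names): generate is_<reg>_<bit>() helpers that query
 * the MMU role cache instead of live vCPU state. */
#define BUILD_MMU_ROLE_ACCESSOR(base_or_ext, reg, name)			\
static inline bool is_##reg##_##name(struct kvm_mmu *mmu)		\
{									\
	return !!(mmu->mmu_role.base_or_ext.reg##_##name);		\
}

BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);		/* is_cr0_wp(mmu) */
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep);	/* is_cr4_smep(mmu) */

The payoff is that every shadow-paging permission check keys off the same
snapshot that built the shadow page tables, rather than whatever CR0/CR4
happens to be loaded at fault time.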
From patchwork Tue Jun 22 17:57:39 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12338323
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:57:39 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-55-seanjc@google.com>
Mime-Version: 1.0
References: <20210622175739.3610207-1-seanjc@google.com>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
Subject: [PATCH 54/54] KVM: x86/mmu: Let guest use GBPAGES if supported in hardware and TDP is on
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Yu Zhang, Maxim Levitsky
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Let the guest use 1g hugepages if TDP is enabled and the host supports
GBPAGES; KVM can't actively prevent the guest from using 1g pages in this
case since they can't be disabled in the hardware page walker.  While
injecting a page fault if a bogus 1g page is encountered during a software
page walk is perfectly reasonable since KVM is simply honoring userspace's
vCPU model, doing so arguably doesn't provide any meaningful value, and at
worst will be horribly confusing as the guest will see inconsistent
behavior and seemingly spurious page faults.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d4969ac98a4b..684255defb33 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4174,13 +4174,28 @@ __reset_rsvds_bits_mask(struct rsvd_bits_validate *rsvd_check,
 	}
 }
 
+static bool guest_can_use_gbpages(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * If TDP is enabled, let the guest use GBPAGES if they're supported in
+	 * hardware.  The hardware page walker doesn't let KVM disable GBPAGES,
+	 * i.e. won't treat them as reserved, and KVM doesn't redo the GVA->GPA
+	 * walk for performance and complexity reasons.  Not to mention KVM
+	 * _can't_ solve the problem because GVA->GPA walks aren't visible to
+	 * KVM once a TDP translation is installed.  Mimic hardware behavior so
+	 * that KVM's behavior is at least consistent, i.e.
+	 * doesn't randomly inject #PF.
+	 */
+	return tdp_enabled ? boot_cpu_has(X86_FEATURE_GBPAGES) :
+			     guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES);
+}
+
 static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 				  struct kvm_mmu *context)
 {
 	__reset_rsvds_bits_mask(&context->guest_rsvd_check,
 				vcpu->arch.reserved_gpa_bits,
 				context->root_level, is_efer_nx(context),
-				guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
+				guest_can_use_gbpages(vcpu),
 				is_cr4_pse(context),
 				guest_cpuid_is_amd_or_hygon(vcpu));
 }
@@ -4259,8 +4274,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 	shadow_zero_check = &context->shadow_zero_check;
 	__reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(),
 				context->shadow_root_level, uses_nx,
-				guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
-				is_pse, is_amd);
+				guest_can_use_gbpages(vcpu), is_pse, is_amd);
 
 	if (!shadow_me_mask)
 		return;
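To see how guest_can_use_gbpages() feeds the reserved-bit machinery: when
1g pages are disallowed, the software walker treats PDPTE.PS (bit 7) as
reserved, so a guest 1g mapping takes a reserved-bit #PF on the walk.
Below is a minimal standalone model of just that one bit; it is simplified
and illustrative, and the real mask logic in __reset_rsvds_bits_mask
covers far more cases.

/* Standalone model (simplified, illustrative): the gbpages decision turns
 * PDPTE.PS into a reserved bit, which is what converts a guest 1 GiB
 * mapping into a reserved-bit #PF on the software page walk. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PDPTE_PS (1ULL << 7)	/* page-size bit: 1 GiB page when set */

static uint64_t pdpte_rsvd_mask(bool gbpages_allowed)
{
	return gbpages_allowed ? 0 : PDPTE_PS;
}

int main(void)
{
	uint64_t pdpte = PDPTE_PS;	/* guest maps a 1 GiB page */

	/* TDP on + host GBPAGES: mimic hardware, never fault on PS. */
	printf("tdp+gbpages: rsvd fault=%d\n",
	       !!(pdpte & pdpte_rsvd_mask(true)));

	/* Shadow paging, guest CPUID lacks GBPAGES: PS is reserved. */
	printf("shadow, no gbpages: rsvd fault=%d\n",
	       !!(pdpte & pdpte_rsvd_mask(false)));
	return 0;
}

With TDP enabled the mask always permits PS, matching the hardware walker
the guest actually runs on, which is exactly the consistency argument the
commit message makes.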