From patchwork Fri Apr 23 00:06:34 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12219265
Subject: [PATCH 1/4] KVM: nVMX: Drop obsolete (and pointless) pdptrs_changed() check
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 22 Apr 2021 17:06:34 -0700
Message-Id: <20210423000637.3692951-2-seanjc@google.com>
In-Reply-To: <20210423000637.3692951-1-seanjc@google.com>

Remove the pdptrs_changed() check when loading
L2's CR3.  The set of available registers is always reset when switching
VMCSes (see commit e5d03de5937e, "KVM: nVMX: Reset register cache
(available and dirty masks) on VMCS switch"), thus the "are PDPTRs
available" check will always fail.  And even if it didn't fail, reading
guest memory to check the PDPTRs is just as expensive as reading guest
memory to load 'em.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 00339d624c92..eece7fff0441 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1118,11 +1118,9 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
 	 * must not be dereferenced.
 	 */
 	if (!nested_ept && is_pae_paging(vcpu) &&
-	    (cr3 != kvm_read_cr3(vcpu) || pdptrs_changed(vcpu))) {
-		if (CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))) {
-			*entry_failure_code = ENTRY_FAIL_PDPTE;
-			return -EINVAL;
-		}
+	    CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))) {
+		*entry_failure_code = ENTRY_FAIL_PDPTE;
+		return -EINVAL;
 	}
 
 	/*

From patchwork Fri Apr 23 00:06:35 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12219267
Subject: [PATCH 2/4] KVM: nSVM: Drop pointless pdptrs_changed() check on nested transition
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 22 Apr 2021 17:06:35 -0700
Message-Id: <20210423000637.3692951-3-seanjc@google.com>
In-Reply-To: <20210423000637.3692951-1-seanjc@google.com>

Remove the "PDPTRs unchanged" check that skips PDPTR loading during
nested SVM transitions; it's not an optimization at all.  Reading guest
memory to get the PDPTRs isn't magically cheaper by doing it in
pdptrs_changed(), and if the PDPTRs did change, KVM will end up doing
the read twice.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 540d43ba2cf4..9cc95895866a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -391,10 +391,8 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 		return -EINVAL;
 
 	if (!nested_npt && is_pae_paging(vcpu) &&
-	    (cr3 != kvm_read_cr3(vcpu) || pdptrs_changed(vcpu))) {
-		if (CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3)))
-			return -EINVAL;
-	}
+	    CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3)))
+		return -EINVAL;
 
 	/*
 	 * TODO: optimize unconditional TLB flush/MMU sync here and in

From patchwork Fri Apr 23 00:06:36 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12219269
Subject: [PATCH 3/4] KVM: x86: Always load PDPTRs on CR3 load for SVM w/o NPT and a PAE guest
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 22 Apr 2021 17:06:36 -0700
Message-Id: <20210423000637.3692951-4-seanjc@google.com>
In-Reply-To: <20210423000637.3692951-1-seanjc@google.com>

Kill off pdptrs_changed() and instead go through the full kvm_set_cr3()
for a PAE guest, even if the new CR3 is the same as the current CR3.
For VMX, and for SVM with NPT enabled, the PDPTRs are unconditionally
marked as unavailable after VM-Exit, i.e. the optimization is dead code
except for SVM without NPT.

In the unlikely scenario that anyone cares about SVM without NPT _and_
a PAE guest, they've got bigger problems if their guest is loading the
same CR3 so frequently that the performance of kvm_set_cr3() is
notable, especially since KVM's fast PGD switching means reloading the
same CR3 does not require a full rebuild.  Given that PAE and PCID are
mutually exclusive, i.e. a sync and flush are guaranteed in any case,
the actual benefits of the pdptrs_changed() optimization are marginal
at best.
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/x86.c              | 34 ++-------------------------------
 2 files changed, 2 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3e5fc80a35c8..30e95c52769c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1475,7 +1475,6 @@ unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
 
 int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);
-bool pdptrs_changed(struct kvm_vcpu *vcpu);
 
 int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
 			const void *val, int bytes);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3bf52ba5f2bb..d099d6e54a6f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -751,13 +751,6 @@ int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_page_mmu);
 
-static int kvm_read_nested_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
-				      void *data, int offset, int len, u32 access)
-{
-	return kvm_read_guest_page_mmu(vcpu, vcpu->arch.walk_mmu, gfn,
-				       data, offset, len, access);
-}
-
 static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.reserved_gpa_bits | rsvd_bits(5, 8) | rsvd_bits(1, 2);
@@ -799,30 +792,6 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 }
 EXPORT_SYMBOL_GPL(load_pdptrs);
 
-bool pdptrs_changed(struct kvm_vcpu *vcpu)
-{
-	u64 pdpte[ARRAY_SIZE(vcpu->arch.walk_mmu->pdptrs)];
-	int offset;
-	gfn_t gfn;
-	int r;
-
-	if (!is_pae_paging(vcpu))
-		return false;
-
-	if (!kvm_register_is_available(vcpu, VCPU_EXREG_PDPTR))
-		return true;
-
-	gfn = (kvm_read_cr3(vcpu) & 0xffffffe0ul) >> PAGE_SHIFT;
-	offset = (kvm_read_cr3(vcpu) & 0xffffffe0ul) & (PAGE_SIZE - 1);
-	r = kvm_read_nested_guest_page(vcpu, gfn, pdpte, offset, sizeof(pdpte),
-				       PFERR_USER_MASK | PFERR_WRITE_MASK);
-	if (r < 0)
-		return true;
-
-	return memcmp(pdpte, vcpu->arch.walk_mmu->pdptrs, sizeof(pdpte)) != 0;
-}
-EXPORT_SYMBOL_GPL(pdptrs_changed);
-
 void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0)
 {
 	unsigned long update_bits = X86_CR0_PG | X86_CR0_WP;
@@ -1069,7 +1038,8 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 	}
 #endif
 
-	if (cr3 == kvm_read_cr3(vcpu) && !pdptrs_changed(vcpu)) {
+	/* PDPTRs are always reloaded for PAE paging. */
+	if (cr3 == kvm_read_cr3(vcpu) && !is_pae_paging(vcpu)) {
 		if (!skip_tlb_flush) {
 			kvm_mmu_sync_roots(vcpu);
 			kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);

From patchwork Fri Apr 23 00:06:37 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12219271
Subject: [PATCH 4/4] KVM: x86: Unexport kvm_read_guest_page_mmu()
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 22 Apr 2021 17:06:37 -0700
Message-Id: <20210423000637.3692951-5-seanjc@google.com>
In-Reply-To: <20210423000637.3692951-1-seanjc@google.com>

Unexport kvm_read_guest_page_mmu(); its only current user is the PDPTR
load path, and with luck, KVM will not have to support similar insanity
in the future.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 3 ---
 arch/x86/kvm/x86.c              | 7 +++----
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 30e95c52769c..be271fdf584e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1614,9 +1614,6 @@ void kvm_requeue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code)
 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault);
 bool kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
 				    struct x86_exception *fault);
-int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			    gfn_t gfn, void *data, int offset, int len,
-			    u32 access);
 
 bool kvm_require_cpl(struct kvm_vcpu *vcpu, int required_cpl);
 bool kvm_require_dr(struct kvm_vcpu *vcpu, int dr);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d099d6e54a6f..06bc59c3abb9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -732,9 +732,9 @@ EXPORT_SYMBOL_GPL(kvm_require_dr);
  * running guest. The difference to kvm_vcpu_read_guest_page is that this function
  * can read from guest physical or from the guest's guest physical memory.
  */
-int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			    gfn_t ngfn, void *data, int offset, int len,
-			    u32 access)
+static int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+				   gfn_t ngfn, void *data, int offset, int len,
+				   u32 access)
 {
 	struct x86_exception exception;
 	gfn_t real_gfn;
@@ -749,7 +749,6 @@ int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 
 	return kvm_vcpu_read_guest_page(vcpu, real_gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_page_mmu);
 
 static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
 {