From patchwork Thu Apr 29 04:12:26 2021
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12230423
From: Kai Huang
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, bgardon@google.com, seanjc@google.com,
    vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com,
    joro@8bytes.org, Kai Huang
Subject: [PATCH] KVM: x86/mmu: Avoid unnecessary page table allocation in
 kvm_tdp_mmu_map()
Date: Thu, 29 Apr 2021 16:12:26 +1200
Message-Id: <20210429041226.50279-1-kai.huang@intel.com>
X-Mailer: git-send-email 2.30.2
X-Mailing-List: kvm@vger.kernel.org

In kvm_tdp_mmu_map(), while iterating over TDP MMU page table entries,
it is possible that an SPTE has already been frozen by another thread
but the freeze is not yet complete, for instance when the other thread
is still in the middle of zapping a large page. In that case, the
!is_shadow_present_pte() check on the old SPTE in
tdp_mmu_for_each_pte() can evaluate to true, and allocating a new page
table is unnecessary: tdp_mmu_set_spte_atomic() will return false
later and the freshly allocated page table will have to be freed
again. Add an is_removed_spte() check before allocating the new page
table to avoid this.
Signed-off-by: Kai Huang
Reviewed-by: Ben Gardon
---
 arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 83cbdbe5de5a..84ee1a76a79d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1009,6 +1009,14 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		}
 
 		if (!is_shadow_present_pte(iter.old_spte)) {
+			/*
+			 * If SPTE has been frozen by another thread, just
+			 * give up and retry, avoiding unnecessary page table
+			 * allocation and free.
+			 */
+			if (is_removed_spte(iter.old_spte))
+				break;
+
 			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
 			child_pt = sp->spt;
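
[Editor's note: for readers unfamiliar with the frozen-SPTE convention
the patch relies on, below is a minimal, self-contained C sketch of the
decision the fault handler makes. It assumes a REMOVED_SPTE marker like
the one the TDP MMU uses to freeze an entry during a multi-part update;
the marker value and the presence test are simplified stand-ins, not
the exact kernel definitions.]

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/*
 * Marker written to an SPTE to "freeze" it while another thread is in
 * the middle of a multi-part update (e.g. zapping a large page). It
 * must read as non-present, yet be distinguishable from a genuinely
 * empty (zero) SPTE. The value here is illustrative.
 */
#define REMOVED_SPTE 0x5a0ULL

/* Simplified presence check: bit 0 stands in for the present bit. */
static bool is_shadow_present_pte(u64 spte)
{
	return spte & 1;
}

static bool is_removed_spte(u64 spte)
{
	return spte == REMOVED_SPTE;
}

/*
 * Decision made by the page-fault walker for a given old SPTE. A
 * frozen SPTE is non-present, so without the is_removed_spte() check
 * the walker would allocate a child page table that the subsequent
 * atomic update (tdp_mmu_set_spte_atomic() in the kernel) is
 * guaranteed to reject, forcing the allocation to be freed again.
 */
static bool should_allocate_child(u64 old_spte)
{
	if (!is_shadow_present_pte(old_spte)) {
		if (is_removed_spte(old_spte))
			return false;	/* frozen: give up, retry the fault */
		return true;		/* truly empty: allocate a child table */
	}
	return false;			/* already present: nothing to allocate */
}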