From patchwork Tue Nov  7 14:56:02 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13448959
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com
Subject: [PATCH v17 036/116] KVM: x86/mmu: Disallow fast page fault on private GPA
Date: Tue,  7 Nov 2023 06:56:02 -0800

From: Isaku Yamahata

TDX requires a TDX SEAMCALL to operate on Secure EPT instead of direct
memory access, and a TDX SEAMCALL is a heavy operation. Fast page fault
on a private GPA therefore doesn't make sense. Disallow fast page fault
on private GPA.

Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f0bc0395831e..53ad71a930e8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3343,8 +3343,16 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
 	return RET_PF_CONTINUE;
 }
 
-static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
+static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
 {
+	/*
+	 * TDX private mapping doesn't support fast page fault because the EPT
+	 * entry is read/written with TDX SEAMCALLs instead of direct memory
+	 * access.
+	 */
+	if (kvm_is_private_gpa(kvm, fault->addr))
+		return false;
+
 	/*
 	 * Page faults with reserved bits set, i.e. faults on MMIO SPTEs, only
 	 * reach the common page fault handler if the SPTE has an invalid MMIO
@@ -3454,7 +3462,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	u64 *sptep;
 	uint retry_count = 0;
 
-	if (!page_fault_can_be_fast(fault))
+	if (!page_fault_can_be_fast(vcpu->kvm, fault))
 		return ret;
 
 	walk_shadow_page_lockless_begin(vcpu);
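For reference, kvm_is_private_gpa() is introduced earlier in this series:
a GPA is treated as private when the VM has a shared-GPA bit configured
and that bit is clear in the faulting address. A minimal sketch of the
idea (the helper and the gfn_shared_mask field follow the series'
naming, but take this as illustrative rather than the exact patch code):

static inline gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
{
	/* Set by the TDX code at VM creation; zero for non-TDX VMs. */
	return kvm->arch.gfn_shared_mask;
}

static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
{
	gfn_t mask = kvm_gfn_shared_mask(kvm);

	/* Private iff a shared mask exists and its bit is clear in the GPA. */
	return mask && !(gpa_to_gfn(gpa) & mask);
}

Since kvm_gfn_shared_mask() is zero for non-TDX VMs, the new check in
page_fault_can_be_fast() is a no-op for conventional guests and the fast
page fault path is unchanged for them.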