From patchwork Tue Nov 7 14:56:44 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449048
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com
Subject: [PATCH v17 078/116] KVM: TDX: Implement methods to inject NMI
Date: Tue, 7 Nov 2023 06:56:44 -0800
Message-Id: <5d6f5d97e651b178ae85f1a5d7d5200d45a8717c.1699368322.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1

From: Isaku Yamahata

The TDX vcpu control structure defines one bit for a pending NMI: the
VMM requests NMI injection simply by setting that bit, without knowing
the NMI state of the TDX vcpu.  Because the vcpu state is protected,
the VMM cannot observe the NMI state of a TDX vcpu; the TDX module
performs the actual injection and handles the NMI state transitions.
Add the NMI-related methods and treat NMIs as always injectable.
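To illustrate the mechanism for readers without the rest of the series at
hand, the sketch below models the dispatch this patch adds in vmx/main.c as
a standalone, userspace-compilable C program.  The struct vcpu, is_td_vcpu()
and the vmx_*/tdx_* stubs are hypothetical stand-ins, not the kernel's
kvm_vcpu or the real callbacks; the sketch only shows the idea that an NMI
request for a TD vcpu is reduced to setting a pending-NMI bit, while its
mask state is reported as always unmasked.

/*
 * Illustrative sketch only: "struct vcpu", is_td_vcpu() and the stubs below
 * are hypothetical stand-ins for kvm_vcpu, vmx_inject_nmi(), tdx_inject_nmi()
 * and vmx_get_nmi_mask(); they model the behavior described above.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu {
	bool is_td;		/* TDX (protected) guest? */
	int pend_nmi;		/* models the TD_VCPU_PEND_NMI bit */
	bool nmi_masked;	/* VMX-only bookkeeping */
};

static bool is_td_vcpu(struct vcpu *v) { return v->is_td; }

/* VMX backend stubs: a real VMM would program the VMCS here. */
static void vmx_inject_nmi(struct vcpu *v)   { printf("VMX: inject NMI\n"); }
static bool vmx_get_nmi_mask(struct vcpu *v) { return v->nmi_masked; }

static void tdx_inject_nmi(struct vcpu *v)
{
	/* The VMM's only knob: set the pending-NMI bit, the TDX module injects. */
	v->pend_nmi = 1;
}

static void vt_inject_nmi(struct vcpu *v)
{
	if (is_td_vcpu(v)) {
		tdx_inject_nmi(v);
		return;
	}
	vmx_inject_nmi(v);
}

static bool vt_get_nmi_mask(struct vcpu *v)
{
	/* NMI state of a TD is invisible to the VMM, so report "unmasked". */
	if (is_td_vcpu(v))
		return false;
	return vmx_get_nmi_mask(v);
}

int main(void)
{
	struct vcpu td = { .is_td = true };
	struct vcpu vmx = { .is_td = false };

	vt_inject_nmi(&td);	/* sets td.pend_nmi = 1, no VMCS access */
	vt_inject_nmi(&vmx);	/* prints "VMX: inject NMI" */
	printf("td: pend_nmi=%d masked=%d\n", td.pend_nmi, vt_get_nmi_mask(&td));
	return 0;
}

In the actual patch the same split is expressed by routing the kvm_x86_ops
NMI callbacks through vt_* wrappers, with tdx_inject_nmi() writing
TD_VCPU_PEND_NMI via td_management_write8().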
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/vmx/main.c    | 64 +++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/vmx/tdx.c     |  6 ++++
 arch/x86/kvm/vmx/x86_ops.h |  2 ++
 3 files changed, 67 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 5c26787bc1e0..ad3feb48d9ec 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -315,6 +315,60 @@ static void vt_flush_tlb_guest(struct kvm_vcpu *vcpu)
 	vmx_flush_tlb_guest(vcpu);
 }
 
+static void vt_inject_nmi(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_inject_nmi(vcpu);
+		return;
+	}
+
+	vmx_inject_nmi(vcpu);
+}
+
+static int vt_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	/*
+	 * The TDX module manages NMI windows and NMI reinjection, and hides NMI
+	 * blocking, all KVM can do is throw an NMI over the wall.
+	 */
+	if (is_td_vcpu(vcpu))
+		return true;
+
+	return vmx_nmi_allowed(vcpu, for_injection);
+}
+
+static bool vt_get_nmi_mask(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Assume NMIs are always unmasked. KVM could query PEND_NMI and treat
+	 * NMIs as masked if a previous NMI is still pending, but SEAMCALLs are
+	 * expensive and the end result is unchanged as the only relevant usage
+	 * of get_nmi_mask() is to limit the number of pending NMIs, i.e. it
+	 * only changes whether KVM or the TDX module drops an NMI.
+	 */
+	if (is_td_vcpu(vcpu))
+		return false;
+
+	return vmx_get_nmi_mask(vcpu);
+}
+
+static void vt_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_set_nmi_mask(vcpu, masked);
+}
+
+static void vt_enable_nmi_window(struct kvm_vcpu *vcpu)
+{
+	/* Refer the comment in vt_get_nmi_mask(). */
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_enable_nmi_window(vcpu);
+}
+
 static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			    int pgd_level)
 {
@@ -493,14 +547,14 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.get_interrupt_shadow = vt_get_interrupt_shadow,
 	.patch_hypercall = vmx_patch_hypercall,
 	.inject_irq = vt_inject_irq,
-	.inject_nmi = vmx_inject_nmi,
+	.inject_nmi = vt_inject_nmi,
 	.inject_exception = vmx_inject_exception,
 	.cancel_injection = vt_cancel_injection,
 	.interrupt_allowed = vt_interrupt_allowed,
-	.nmi_allowed = vmx_nmi_allowed,
-	.get_nmi_mask = vmx_get_nmi_mask,
-	.set_nmi_mask = vmx_set_nmi_mask,
-	.enable_nmi_window = vmx_enable_nmi_window,
+	.nmi_allowed = vt_nmi_allowed,
+	.get_nmi_mask = vt_get_nmi_mask,
+	.set_nmi_mask = vt_set_nmi_mask,
+	.enable_nmi_window = vt_enable_nmi_window,
 	.enable_irq_window = vt_enable_irq_window,
 	.update_cr8_intercept = vmx_update_cr8_intercept,
 	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c63e87ed8f03..8d21e902b34a 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -848,6 +848,12 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 	return EXIT_FASTPATH_NONE;
 }
 
+void tdx_inject_nmi(struct kvm_vcpu *vcpu)
+{
+	++vcpu->stat.nmi_injections;
+	td_management_write8(to_tdx(vcpu), TD_VCPU_PEND_NMI, 1);
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 801c39b294f7..c0ffb8dda750 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -159,6 +159,7 @@ u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 
 void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 			   int trig_mode, int vector);
+void tdx_inject_nmi(struct kvm_vcpu *vcpu);
 
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -195,6 +196,7 @@ static inline u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 static inline void tdx_deliver_interrupt(struct kvm_lapic *apic,
 					 int delivery_mode, int trig_mode,
 					 int vector) {}
+static inline void tdx_inject_nmi(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }