Message ID: 71e4c19d1dff8135792e6c5a17d3a483bc99875b.1656366338.git.isaku.yamahata@intel.com (mailing list archive)
State: New, archived
Series: KVM TDX basic feature support
On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> Explicitly check for an MMIO spte in the fast page fault flow. TDX will
> use a not-present entry for MMIO sptes, which can be mistaken for an
> access-tracked spte since both have SPTE_SPECIAL_MASK set.

SPTE_SPECIAL_MASK has been removed in the latest KVM code. The changelog needs
updating.

In fact, if I understand correctly, I don't think this changelog is correct:
the existing code doesn't check is_mmio_spte() because:

1) If MMIO caching is enabled, an MMIO fault is always handled in
handle_mmio_page_fault() before reaching here;

2) If MMIO caching is disabled, is_shadow_present_pte() always returns false
for an MMIO spte, and is_mmio_spte() also always returns false for an MMIO
spte, so there's no need to check here.

"A non-present entry for MMIO spte" doesn't necessarily mean
is_shadow_present_pte() will return true for it, and there's no explanation at
all of why, for a TDX guest, an MMIO spte could reach here with
is_shadow_present_pte() returning true for it.

If this patch is ever needed, it should come with or after the patch (or
patches) that handle MMIO faults for TD guests.

Hi Sean, Paolo,

Did I miss anything?

>
> MMIO sptes are handled in handle_mmio_page_fault for non-TDX VMs, so this
> patch does not affect them. TDX will handle MMIO emulation through a
> hypercall instead.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 17252f39bd7c..51306b80f47c 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3163,7 +3163,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	else
>  		sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
>
> -	if (!is_shadow_present_pte(spte))
> +	if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
>  		break;
>
>  	sp = sptep_to_sp(sptep);
On Thu, Jun 30, 2022 at 11:37:15PM +1200, Kai Huang <kai.huang@intel.com> wrote:
> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> >
> > Explicitly check for an MMIO spte in the fast page fault flow. TDX will
> > use a not-present entry for MMIO sptes, which can be mistaken for an
> > access-tracked spte since both have SPTE_SPECIAL_MASK set.
>
> SPTE_SPECIAL_MASK has been removed in the latest KVM code. The changelog
> needs updating.

It was renamed to SPTE_TDP_AD_MASK, not removed.

> In fact, if I understand correctly, I don't think this changelog is correct:
> the existing code doesn't check is_mmio_spte() because:
>
> 1) If MMIO caching is enabled, an MMIO fault is always handled in
> handle_mmio_page_fault() before reaching here;
>
> 2) If MMIO caching is disabled, is_shadow_present_pte() always returns false
> for an MMIO spte, and is_mmio_spte() also always returns false for an MMIO
> spte, so there's no need to check here.
>
> "A non-present entry for MMIO spte" doesn't necessarily mean
> is_shadow_present_pte() will return true for it, and there's no explanation
> at all of why, for a TDX guest, an MMIO spte could reach here with
> is_shadow_present_pte() returning true for it.

Although it was needed at one point, I noticed that the following commit made
this patch unnecessary, so I'll drop it. Kudos to Sean.

edea7c4fc215c7ee1cc98363b016ad505cbac9f7
"KVM: x86/mmu: Use a dedicated bit to track shadow/MMU-present SPTEs"
>
> Although it was needed at one point, I noticed that the following commit
> made this patch unnecessary, so I'll drop it. Kudos to Sean.
>
> edea7c4fc215c7ee1cc98363b016ad505cbac9f7
> "KVM: x86/mmu: Use a dedicated bit to track shadow/MMU-present SPTEs"
>

Yes, is_shadow_present_pte() always returns false for MMIO sptes, so this
patch isn't needed anymore.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 17252f39bd7c..51306b80f47c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3163,7 +3163,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	else
 		sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
 
-	if (!is_shadow_present_pte(spte))
+	if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
 		break;
 
 	sp = sptep_to_sp(sptep);