Message ID | 20210406162550.3732490-1-pbonzini@redhat.com
---|---
State | New, archived
Series | KVM: x86/mmu: preserve pending TLB flush across calls to kvm_tdp_mmu_zap_sp
On Tue, Apr 06, 2021 at 12:25:50PM -0400, Paolo Bonzini wrote:
> Right now, if a call to kvm_tdp_mmu_zap_sp returns false, the caller
> will skip the TLB flush, which is wrong. There are two ways to fix
> it:
>
> - since kvm_tdp_mmu_zap_sp will not yield and therefore will not flush
>   the TLB itself, we could change the call to kvm_tdp_mmu_zap_sp to
>   use "flush |= ..."
>
> - or we can chain the flush argument through kvm_tdp_mmu_zap_sp down
>   to __kvm_tdp_mmu_zap_gfn_range.
>
> This patch does the former to simplify application to stable kernels.
>
> Cc: seanjc@google.com
> Fixes: 048f49809c526 ("KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping")
> Cc: <stable@vger.kernel.org> # 5.10.x: 048f49809c: KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
> Cc: <stable@vger.kernel.org> # 5.10.x: 33a3164161: KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Is this for only the stable kernels, or is it addressed toward upstream
merges?

Confused,

greg k-h
On 06/04/21 20:25, Greg KH wrote:
> On Tue, Apr 06, 2021 at 12:25:50PM -0400, Paolo Bonzini wrote:
>> Right now, if a call to kvm_tdp_mmu_zap_sp returns false, the caller
>> will skip the TLB flush, which is wrong. There are two ways to fix
>> it:
>>
>> - since kvm_tdp_mmu_zap_sp will not yield and therefore will not flush
>>   the TLB itself, we could change the call to kvm_tdp_mmu_zap_sp to
>>   use "flush |= ..."
>>
>> - or we can chain the flush argument through kvm_tdp_mmu_zap_sp down
>>   to __kvm_tdp_mmu_zap_gfn_range.
>>
>> This patch does the former to simplify application to stable kernels.
>>
>> Cc: seanjc@google.com
>> Fixes: 048f49809c526 ("KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping")
>> Cc: <stable@vger.kernel.org> # 5.10.x: 048f49809c: KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
>> Cc: <stable@vger.kernel.org> # 5.10.x: 33a3164161: KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
>> Cc: <stable@vger.kernel.org>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>  arch/x86/kvm/mmu/mmu.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> Is this for only the stable kernels, or is it addressed toward upstream
> merges?
>
> Confused,

It's for upstream. I'll include it (with the expected "[ Upstream commit
abcd ]" header) when I post the complete backport. I'll send this patch
to Linus as soon as I get a review even if I don't have anything else in
the queue, so (as a general idea) the full backport should be sent and
tested on Thursday-Friday.

Paolo
On Tue, Apr 06, 2021, Paolo Bonzini wrote:
> Right now, if a call to kvm_tdp_mmu_zap_sp returns false, the caller
> will skip the TLB flush, which is wrong. There are two ways to fix
> it:
>
> - since kvm_tdp_mmu_zap_sp will not yield and therefore will not flush
>   the TLB itself, we could change the call to kvm_tdp_mmu_zap_sp to
>   use "flush |= ..."
>
> - or we can chain the flush argument through kvm_tdp_mmu_zap_sp down
>   to __kvm_tdp_mmu_zap_gfn_range.
>
> This patch does the former to simplify application to stable kernels.

Eh, that and passing flush down the stack is pointless because
kvm_tdp_mmu_zap_sp() will never yield. If you want to justify |= over
passing flush, it probably makes sense to link to the discussion that
led to me changing from passing flush to accumulating the result (well,
tried to, doh).

https://lkml.kernel.org/r/20210319232006.3468382-3-seanjc@google.com

> Cc: seanjc@google.com
> Fixes: 048f49809c526 ("KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping")
> Cc: <stable@vger.kernel.org> # 5.10.x: 048f49809c: KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
> Cc: <stable@vger.kernel.org> # 5.10.x: 33a3164161: KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Sean Christopherson <seanjc@google.com>

> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 486aa94ecf1d..951dae4e7175 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5906,7 +5906,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
> 						      lpage_disallowed_link);
> 		WARN_ON_ONCE(!sp->lpage_disallowed);
> 		if (is_tdp_mmu_page(sp)) {
> -			flush = kvm_tdp_mmu_zap_sp(kvm, sp);
> +			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
> 		} else {
> 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
> 			WARN_ON_ONCE(sp->lpage_disallowed);
> --
> 2.26.2
On Tue, Apr 06, 2021 at 08:35:55PM +0200, Paolo Bonzini wrote:
> On 06/04/21 20:25, Greg KH wrote:
> > On Tue, Apr 06, 2021 at 12:25:50PM -0400, Paolo Bonzini wrote:
> > > Right now, if a call to kvm_tdp_mmu_zap_sp returns false, the caller
> > > will skip the TLB flush, which is wrong. There are two ways to fix
> > > it:
> > >
> > > - since kvm_tdp_mmu_zap_sp will not yield and therefore will not flush
> > >   the TLB itself, we could change the call to kvm_tdp_mmu_zap_sp to
> > >   use "flush |= ..."
> > >
> > > - or we can chain the flush argument through kvm_tdp_mmu_zap_sp down
> > >   to __kvm_tdp_mmu_zap_gfn_range.
> > >
> > > This patch does the former to simplify application to stable kernels.
> > >
> > > Cc: seanjc@google.com
> > > Fixes: 048f49809c526 ("KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping")
> > > Cc: <stable@vger.kernel.org> # 5.10.x: 048f49809c: KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
> > > Cc: <stable@vger.kernel.org> # 5.10.x: 33a3164161: KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
> > > Cc: <stable@vger.kernel.org>
> > > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > > ---
> > >  arch/x86/kvm/mmu/mmu.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > Is this for only the stable kernels, or is it addressed toward upstream
> > merges?
> >
> > Confused,
>
> It's for upstream. I'll include it (with the expected "[ Upstream commit
> abcd ]" header) when I post the complete backport. I'll send this patch
> to Linus as soon as I get a review even if I don't have anything else in
> the queue, so (as a general idea) the full backport should be sent and
> tested on Thursday-Friday.

Ah, ok, thanks, got confused there.

greg k-h
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 486aa94ecf1d..951dae4e7175 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5906,7 +5906,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 						      lpage_disallowed_link);
 		WARN_ON_ONCE(!sp->lpage_disallowed);
 		if (is_tdp_mmu_page(sp)) {
-			flush = kvm_tdp_mmu_zap_sp(kvm, sp);
+			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
 		} else {
 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 			WARN_ON_ONCE(sp->lpage_disallowed);
Right now, if a call to kvm_tdp_mmu_zap_sp returns false, the caller
will skip the TLB flush, which is wrong. There are two ways to fix
it:

- since kvm_tdp_mmu_zap_sp will not yield and therefore will not flush
  the TLB itself, we could change the call to kvm_tdp_mmu_zap_sp to
  use "flush |= ..."

- or we can chain the flush argument through kvm_tdp_mmu_zap_sp down
  to __kvm_tdp_mmu_zap_gfn_range.

This patch does the former to simplify application to stable kernels.

Cc: seanjc@google.com
Fixes: 048f49809c526 ("KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping")
Cc: <stable@vger.kernel.org> # 5.10.x: 048f49809c: KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
Cc: <stable@vger.kernel.org> # 5.10.x: 33a3164161: KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
Cc: <stable@vger.kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)