KVM: MMU: protect TDP MMU pages only down to required level

Message ID 20210402121704.3424115-1-pbonzini@redhat.com (mailing list archive)
State New, archived

Commit Message

Paolo Bonzini April 2, 2021, 12:17 p.m. UTC
When using manual protection of dirty pages, it is not necessary
to protect nested page tables down to the 4K level; instead KVM
can protect only hugepages in order to split them lazily, and
delay write protection at 4K granularity until KVM_CLEAR_DIRTY_LOG.
This was overlooked in the TDP MMU, so do it there as well.

Fixes: a6a0b05da9f37 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
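
For context on where start_level comes from: with the KVM_DIRTY_LOG_INITIALLY_SET
manual-protect mode, the memslot-flags path requests write protection only from
the 2M level upward, deferring 4K-level protection to KVM_CLEAR_DIRTY_LOG. Below
is a simplified sketch of that caller, modeled on the 5.12-era
kvm_mmu_slot_apply_flags() in arch/x86/kvm/x86.c (abbreviated; the helper and
constant names are best-effort recollections, not verbatim kernel code):

/*
 * Sketch of the caller (abbreviated): with KVM_DIRTY_LOG_INITIALLY_SET,
 * small pages start out reported as dirty, so only hugepages need write
 * protection up front; they are split lazily on the first write fault.
 */
static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
				     struct kvm_memory_slot *new)
{
	int level = kvm_dirty_log_manual_protect_and_init_set(kvm) ?
		    PG_LEVEL_2M : PG_LEVEL_4K;

	kvm_mmu_slot_remove_write_access(kvm, new, level);
}

With this patch, the TDP MMU receives that same level instead of unconditionally
walking down to PG_LEVEL_4K.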

Comments

zhukeqian April 6, 2021, 6:26 a.m. UTC | #1
Hi Paolo,

I was just about to fix this issue, and found that you have already done it ;-)
Please feel free to add:

Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>

Thanks,
Keqian

On 2021/4/2 20:17, Paolo Bonzini wrote:
> [...]
Sean Christopherson April 6, 2021, 11:38 p.m. UTC | #2
On Tue, Apr 06, 2021, Keqian Zhu wrote:
> Hi Paolo,
> 
> I was just about to fix this issue, and found that you have already done it ;-)

Ha, and meanwhile I'm having a serious case of déjà vu[1].  It even received a
variant of the magic "Queued, thanks"[2].  It doesn't appear in either of the 5.12
pull requests, though; it must have gotten lost along the way.

[1] https://lkml.kernel.org/r/20210213005015.1651772-3-seanjc@google.com
[2] https://lkml.kernel.org/r/b5ab72f2-970f-64bd-891c-48f1c303548d@redhat.com

zhukeqian April 7, 2021, 1:31 a.m. UTC | #3
On 2021/4/7 7:38, Sean Christopherson wrote:
> On Tue, Apr 06, 2021, Keqian Zhu wrote:
>> Hi Paolo,
>>
>> I was just about to fix this issue, and found that you have already done it ;-)
> 
> Ha, and meanwhile I'm having a serious case of déjà vu[1].  It even received a
> variant of the magic "Queued, thanks"[2].  It doesn't appear in either of the 5.12
> pull requests, though; it must have gotten lost along the way.
Good job. We should pick them up :)

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index efb41f31e80a..0d92a269c5fa 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5538,7 +5538,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
 				start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
 	if (is_tdp_mmu_enabled(kvm))
-		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, PG_LEVEL_4K);
+		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
 	write_unlock(&kvm->mmu_lock);
 
 	/*
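
The reason passing start_level is both correct and cheaper on the TDP MMU side is
that the write-protect walk only visits SPTEs at min_level and above, so with
PG_LEVEL_2M it never touches the (potentially very numerous) 4K entries. A
simplified sketch, modeled on the 5.12-era wrprot_gfn_range() in
arch/x86/kvm/mmu/tdp_mmu.c (locking, RCU protection and reschedule yields
omitted; treat the details as approximate):

/*
 * Clear the writable bit on all present leaf SPTEs in [start, end) whose
 * level is at least min_level.  Starting the iterator at min_level is
 * what makes min_level = PG_LEVEL_2M cheaper than PG_LEVEL_4K: 4K SPTEs
 * are never visited when only hugepages need protection.
 */
static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
			     gfn_t start, gfn_t end, int min_level)
{
	struct tdp_iter iter;
	bool spte_set = false;

	for_each_tdp_pte_min_level(iter, root->spt, root->role.level,
				   min_level, start, end) {
		/* Skip non-present, non-leaf and already read-only SPTEs. */
		if (!is_shadow_present_pte(iter.old_spte) ||
		    !is_last_spte(iter.old_spte, iter.level) ||
		    !(iter.old_spte & PT_WRITABLE_MASK))
			continue;

		tdp_mmu_set_spte(kvm, &iter,
				 iter.old_spte & ~PT_WRITABLE_MASK);
		spte_set = true;
	}

	return spte_set;
}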