
[v2,1/2] KVM: arm64: Do not transfer page refcount for THP adjustment

Message ID 20230928173205.2826598-2-vdonnefort@google.com (mailing list archive)
State New, archived
Series KVM: arm64: Use folio for THP support

Commit Message

Vincent Donnefort Sept. 28, 2023, 5:32 p.m. UTC
GUP takes a reference on a refcount that is common to all the pages
forming the THP. There is therefore no need to move that refcount from
a tail page to the head page: under the hood, doing so decrements and
increments the same counter.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
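
For context, the pair of calls being removed was effectively a no-op on a
compound page: get_page() and put_page() on a tail page both resolve to the
head page's refcount. An annotated sketch of the removed sequence,
reconstructed from the diff below (illustration only):

		kvm_release_pfn_clean(pfn);	/* put_page(tail): head refcount-- */
		pfn &= ~(PTRS_PER_PMD - 1);	/* switch pfn to the head page */
		get_page(pfn_to_page(pfn));	/* get_page(head): head refcount++ */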

Comments

Gavin Shan Sept. 29, 2023, 6:59 a.m. UTC | #1
On 9/29/23 03:32, Vincent Donnefort wrote:
> GUP takes a reference on a refcount that is common to all the pages
> forming the THP. There is therefore no need to move that refcount from
> a tail page to the head page: under the hood, doing so decrements and
> increments the same counter.
> 
> Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
> 

Reviewed-by: Gavin Shan <gshan@redhat.com>

> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 587a104f66c3..de5e5148ef5d 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1295,28 +1295,8 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
>   		if (sz < PMD_SIZE)
>   			return PAGE_SIZE;
>   
> -		/*
> -		 * The address we faulted on is backed by a transparent huge
> -		 * page.  However, because we map the compound huge page and
> -		 * not the individual tail page, we need to transfer the
> -		 * refcount to the head page.  We have to be careful that the
> -		 * THP doesn't start to split while we are adjusting the
> -		 * refcounts.
> -		 *
> -		 * We are sure this doesn't happen, because mmu_invalidate_retry
> -		 * was successful and we are holding the mmu_lock, so if this
> -		 * THP is trying to split, it will be blocked in the mmu
> -		 * notifier before touching any of the pages, specifically
> -		 * before being able to call __split_huge_page_refcount().
> -		 *
> -		 * We can therefore safely transfer the refcount from PG_tail
> -		 * to PG_head and switch the pfn from a tail page to the head
> -		 * page accordingly.
> -		 */
>   		*ipap &= PMD_MASK;
> -		kvm_release_pfn_clean(pfn);
>   		pfn &= ~(PTRS_PER_PMD - 1);
> -		get_page(pfn_to_page(pfn));
>   		*pfnp = pfn;
>   
>   		return PMD_SIZE;

The local variable @pfn can be dropped as well:

                 *pfnp &= ~(PTRS_PER_PMD - 1);

Thanks,
Gavin
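
With Gavin's suggestion applied, the hunk would shrink further; a sketch
(not part of the posted series) of what the PMD-backed case of
transparent_hugepage_adjust() would then look like:

		*ipap &= PMD_MASK;
		*pfnp &= ~(PTRS_PER_PMD - 1);

		return PMD_SIZE;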
Vincent Donnefort Sept. 29, 2023, 12:47 p.m. UTC | #2
On Fri, Sep 29, 2023 at 04:59:20PM +1000, Gavin Shan wrote:
> [...]
>
> The local variable @pfn can be dropped as well:

I would like to keep it for the following patch, which needs it for pfn_to_folio(pfn);

> 
>                 *pfnp &= ~(PTRS_PER_PMD - 1);
> 
> Thanks,
> Gavin
>
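
As a rough illustration of why keeping @pfn is convenient, assuming the
follow-up folio patch resolves the backing folio from it (pfn_folio() is the
upstream helper; the actual 2/2 code is not reproduced here):

		struct folio *folio = pfn_folio(pfn);	/* head folio backing this pfn */

		/* Hypothetical check that the folio really spans a PMD. */
		if (folio_size(folio) >= PMD_SIZE) {
			*ipap &= PMD_MASK;
			*pfnp = pfn & ~(PTRS_PER_PMD - 1);
			return PMD_SIZE;
		}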

Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 587a104f66c3..de5e5148ef5d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1295,28 +1295,8 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
 		if (sz < PMD_SIZE)
 			return PAGE_SIZE;
 
-		/*
-		 * The address we faulted on is backed by a transparent huge
-		 * page.  However, because we map the compound huge page and
-		 * not the individual tail page, we need to transfer the
-		 * refcount to the head page.  We have to be careful that the
-		 * THP doesn't start to split while we are adjusting the
-		 * refcounts.
-		 *
-		 * We are sure this doesn't happen, because mmu_invalidate_retry
-		 * was successful and we are holding the mmu_lock, so if this
-		 * THP is trying to split, it will be blocked in the mmu
-		 * notifier before touching any of the pages, specifically
-		 * before being able to call __split_huge_page_refcount().
-		 *
-		 * We can therefore safely transfer the refcount from PG_tail
-		 * to PG_head and switch the pfn from a tail page to the head
-		 * page accordingly.
-		 */
 		*ipap &= PMD_MASK;
-		kvm_release_pfn_clean(pfn);
 		pfn &= ~(PTRS_PER_PMD - 1);
-		get_page(pfn_to_page(pfn));
 		*pfnp = pfn;
 
 		return PMD_SIZE;
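
After the patch, the PMD-backed path of transparent_hugepage_adjust()
therefore reduces to (reconstructed from the context lines above):

		if (sz < PMD_SIZE)
			return PAGE_SIZE;

		*ipap &= PMD_MASK;
		pfn &= ~(PTRS_PER_PMD - 1);
		*pfnp = pfn;

		return PMD_SIZE;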