
[v2,5/6] KVM: arm64: Use get_page() instead of kvm_get_pfn()

Message ID 20210726153552.1535838-6-maz@kernel.org
State New, archived
Series KVM: Remove kvm_is_transparent_hugepage() and friends

Commit Message

Marc Zyngier July 26, 2021, 3:35 p.m. UTC
When mapping a THP, we are guaranteed that the page isn't reserved,
and we can safely avoid the kvm_is_reserved_pfn() call.

Replace kvm_get_pfn() with get_page(pfn_to_page()).

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
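
For reference, the helper being replaced takes the page reference only when
the pfn is not reserved. A sketch of its shape at the time of this series
(paraphrased from virt/kvm/kvm_main.c; not part of this patch):

	void kvm_get_pfn(kvm_pfn_t pfn)
	{
		/* Skip the reference for reserved pfns (e.g. MMIO). */
		if (!kvm_is_reserved_pfn(pfn))
			get_page(pfn_to_page(pfn));
	}

Since a THP is never reserved, calling get_page(pfn_to_page(pfn)) directly
simply elides the redundant kvm_is_reserved_pfn() check.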

Comments

Alexandru Elisei July 27, 2021, 5:46 p.m. UTC | #1
Hi Marc,

On 7/26/21 4:35 PM, Marc Zyngier wrote:
> When mapping a THP, we are guaranteed that the page isn't reserved,
> and we can safely avoid the kvm_is_reserved_pfn() call.
>
> Replace kvm_get_pfn() with get_page(pfn_to_page()).
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index ebb28dd4f2c9..b303aa143592 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -840,7 +840,7 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  		*ipap &= PMD_MASK;
>  		kvm_release_pfn_clean(pfn);
>  		pfn &= ~(PTRS_PER_PMD - 1);
> -		kvm_get_pfn(pfn);
> +		get_page(pfn_to_page(pfn));
>  		*pfnp = pfn;
>  
>  		return PMD_SIZE;

I am not very familiar with the mm subsystem, but I did my best to review this change.

kvm_get_pfn() calls get_page(pfn_to_page(pfn)) if !PageReserved(pfn_to_page(pfn)).
I looked at the documentation for the PG_reserved page flag, and for normal
memory the most probable situation I could find where that flag is set on a
transparent hugepage is the zero page. I then looked at mm/huge_memory.c:
huge_zero_pfn is allocated via alloc_pages(__GFP_ZERO) (among other flags),
which doesn't call SetPageReserved().
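
For illustration, the huge zero page allocation looks roughly like this
(abridged from get_huge_zero_page() in mm/huge_memory.c; the exact flags may
differ between kernel versions):

	/* Plain page allocator call; no SetPageReserved() on this path. */
	zero_page = alloc_pages((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
				HPAGE_PMD_ORDER);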

I also looked at how a huge page can be mapped from handle_mm_fault() and from
khugepaged, and it looks like both use alloc_pages() to allocate a new
hugepage.
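
The anonymous fault path, for instance, allocates along these lines (abridged
from do_huge_pmd_anonymous_page() in mm/huge_memory.c; helper names may vary
between versions):

	/* alloc_hugepage_vma() is a thin wrapper around alloc_pages_vma(), so
	 * the hugepage comes straight from the page allocator, again with no
	 * SetPageReserved() anywhere on the path.
	 */
	gfp = alloc_hugepage_direct_gfpmask(vma);
	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);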

I also grepped for SetPageReserved(), and it is called in very few places,
none of which look like they have anything to do with hugepages.

As far as I can tell, this change is correct, but I think someone who is familiar
with mm would be better suited for reviewing this patch.

Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ebb28dd4f2c9..b303aa143592 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -840,7 +840,7 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
 		*ipap &= PMD_MASK;
 		kvm_release_pfn_clean(pfn);
 		pfn &= ~(PTRS_PER_PMD - 1);
-		kvm_get_pfn(pfn);
+		get_page(pfn_to_page(pfn));
 		*pfnp = pfn;
 
 		return PMD_SIZE;