
[v4,56/61] arm64: kvm: avoid CONFIG_PGTABLE_LEVELS for runtime levels

Message ID 20230912141549.278777-119-ardb@google.com (mailing list archive)
State New, archived
Series arm64: Add support for LPA2 at stage1 and WXN

Commit Message

Ard Biesheuvel Sept. 12, 2023, 2:16 p.m. UTC
From: Ard Biesheuvel <ardb@kernel.org>

get_user_mapping_size() uses vabits_actual and CONFIG_PGTABLE_LEVELS to
provide the starting point for a table walk. This is fine for LVA, as
the number of translation levels is the same regardless of whether LVA
is enabled. However, with LPA2, this will no longer be the case, so
let's derive the number of levels from the number of VA bits directly.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

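For reference, here is a minimal standalone sketch of the level derivation the patch switches to, using the ARM64_HW_PGTABLE_LEVELS() definition from arch/arm64/include/asm/pgtable-hwdef.h. The 4k PAGE_SHIFT and the example VA widths are assumptions chosen for illustration; in the kernel, pgt.ia_bits is filled in from vabits_actual at runtime.

/*
 * Minimal standalone sketch (not part of the patch): reproduces the level
 * calculation that the new start_level expression relies on.
 * ARM64_HW_PGTABLE_LEVELS() follows the definition in
 * arch/arm64/include/asm/pgtable-hwdef.h; PAGE_SHIFT is pinned to 12
 * (4k pages) and the VA widths below are assumptions for illustration.
 */
#include <stdio.h>

#define PAGE_SHIFT		12	/* assumption: 4k pages */
#define KVM_PGTABLE_MAX_LEVELS	4	/* maximum number of walk levels used by KVM here */

/* number of lookup levels required to translate va_bits of virtual address */
#define ARM64_HW_PGTABLE_LEVELS(va_bits)	(((va_bits) - 4) / (PAGE_SHIFT - 3))

int main(void)
{
	unsigned int va_bits[] = { 39, 48 };
	unsigned int i;

	for (i = 0; i < sizeof(va_bits) / sizeof(va_bits[0]); i++) {
		unsigned int levels = ARM64_HW_PGTABLE_LEVELS(va_bits[i]);

		printf("va_bits=%u -> levels=%u, start_level=%u\n",
		       va_bits[i], levels, KVM_PGTABLE_MAX_LEVELS - levels);
	}
	return 0;
}

With a 39-bit VA this yields 3 levels (start level 1) and with a 48-bit VA 4 levels (start level 0), so the start level tracks the VA width observed at runtime rather than the compile-time CONFIG_PGTABLE_LEVELS, which is what the commit message calls for once LPA2 lets the two diverge.
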
Comments

Marc Zyngier Oct. 23, 2023, 6:42 p.m. UTC | #1
On Tue, 12 Sep 2023 15:16:46 +0100,
Ard Biesheuvel <ardb@google.com> wrote:
> 
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> get_user_mapping_size() uses vabits_actual and CONFIG_PGTABLE_LEVELS to
> provide the starting point for a table walk. This is fine for LVA, as
> the number of translation levels is the same regardless of whether LVA
> is enabled. However, with LPA2, this will no longer be the case, so
> let's derive the number of levels from the number of VA bits directly.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/kvm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index beee6408534d..a4c5d7f44e32 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -802,7 +802,7 @@ static int get_user_mapping_size(struct kvm *kvm, u64 addr)
>  		.pgd		= (kvm_pteref_t)kvm->mm->pgd,
>  		.ia_bits	= vabits_actual,
>  		.start_level	= (KVM_PGTABLE_MAX_LEVELS -
> -				   CONFIG_PGTABLE_LEVELS),
> +				   ARM64_HW_PGTABLE_LEVELS(pgt.ia_bits)),
>  		.mm_ops		= &kvm_user_mm_ops,
>  	};
>  	unsigned long flags;

Note: this walker is going away in 6.7 and we will rely on the folio
infrastructure to avoid the userspace walk.

Acked-by: Marc Zyngier <maz@kernel.org>

	M.
Oliver Upton Oct. 24, 2023, 6:55 a.m. UTC | #2
On Tue, Sep 12, 2023 at 02:16:46PM +0000, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> get_user_mapping_size() uses vabits_actual and CONFIG_PGTABLE_LEVELS to
> provide the starting point for a table walk. This is fine for LVA, as
> the number of translation levels is the same regardless of whether LVA
> is enabled. However, with LPA2, this will no longer be the case, so
> let's derive the number of levels from the number of VA bits directly.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Acked-by: Oliver Upton <oliver.upton@linux.dev>

Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index beee6408534d..a4c5d7f44e32 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -802,7 +802,7 @@ static int get_user_mapping_size(struct kvm *kvm, u64 addr)
 		.pgd		= (kvm_pteref_t)kvm->mm->pgd,
 		.ia_bits	= vabits_actual,
 		.start_level	= (KVM_PGTABLE_MAX_LEVELS -
-				   CONFIG_PGTABLE_LEVELS),
+				   ARM64_HW_PGTABLE_LEVELS(pgt.ia_bits)),
 		.mm_ops		= &kvm_user_mm_ops,
 	};
 	unsigned long flags;