
[v3,55/60] arm64: kvm: avoid CONFIG_PGTABLE_LEVELS for runtime levels

Message ID 20230307140522.2311461-56-ardb@kernel.org (mailing list archive)
State New, archived
Series arm64: Add support for LPA2 at stage1 and WXN

Commit Message

Ard Biesheuvel March 7, 2023, 2:05 p.m. UTC
get_user_mapping_size() uses vabits_actual and CONFIG_PGTABLE_LEVELS to
provide the starting point for a table walk. This is fine for LVA, as
the number of translation levels is the same regardless of whether LVA
is enabled. However, with LPA2, this will no longer be the case, so
let's derive the number of levels from the number of VA bits directly.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
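
As a sanity check on the reasoning above, here is a small standalone sketch (illustrative only, not kernel code) of how the level count follows from the VA size: each translation level resolves PAGE_SHIFT - 3 bits and the leaf level covers PAGE_SHIFT bits, which is the arithmetic the ARM64_HW_PGTABLE_LEVELS() macro boils down to.

/*
 * Standalone illustration, not kernel code: derive the number of
 * translation levels from the VA size, i.e. the same arithmetic as
 * ARM64_HW_PGTABLE_LEVELS(va_bits) == ((va_bits) - 4) / (PAGE_SHIFT - 3).
 */
#include <stdio.h>

static int hw_pgtable_levels(int va_bits, int page_shift)
{
	/*
	 * levels = DIV_ROUND_UP(va_bits - page_shift, page_shift - 3)
	 *        = (va_bits - 4) / (page_shift - 3)
	 */
	return (va_bits - 4) / (page_shift - 3);
}

int main(void)
{
	/* 4k granule: 39-bit VA -> 3 levels, 48-bit -> 4, 52-bit (LPA2) -> 5 */
	printf("4k:  %d %d %d\n",
	       hw_pgtable_levels(39, 12),
	       hw_pgtable_levels(48, 12),
	       hw_pgtable_levels(52, 12));
	/* 64k granule (LVA): 48-bit and 52-bit VAs both need 3 levels */
	printf("64k: %d %d\n",
	       hw_pgtable_levels(48, 16),
	       hw_pgtable_levels(52, 16));
	return 0;
}

With LVA (64k pages), 48-bit and 52-bit VAs both resolve to 3 levels, which is why CONFIG_PGTABLE_LEVELS was sufficient there; with LPA2 and 4k pages, the 48-bit fallback and the 52-bit case differ (4 vs 5 levels), so the runtime vabits_actual value has to be used instead.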

Comments

Ryan Roberts April 18, 2023, 2:29 p.m. UTC | #1
On 07/03/2023 14:05, Ard Biesheuvel wrote:
> get_user_mapping_size() uses vabits_actual and CONFIG_PGTABLE_LEVELS to
> provide the starting point for a table walk. This is fine for LVA, as
> the number of translation levels is the same regardless of whether LVA
> is enabled. However, with LPA2, this will no longer be the case, so
> let's derive the number of levels from the number of VA bits directly.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/kvm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index d64be7b5f6692e8b..4e7c0f9a9c286c09 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -663,7 +663,7 @@ static int get_user_mapping_size(struct kvm *kvm, u64 addr)
>  		.pgd		= (kvm_pteref_t)kvm->mm->pgd,
>  		.ia_bits	= vabits_actual,
>  		.start_level	= (KVM_PGTABLE_MAX_LEVELS -
> -				   CONFIG_PGTABLE_LEVELS),
> +				   ARM64_HW_PGTABLE_LEVELS(pgt.ia_bits)),
>  		.mm_ops		= &kvm_user_mm_ops,
>  	};
>  	kvm_pte_t pte = 0;	/* Keep GCC quiet... */

The problem here is that the KVM pgtable library (which isn't LPA2 aware) is walking a kernel page table that may now be in LPA2 format. I think this works out OK as long as there are no physical addresses above 48 bits in the page table, but otherwise I doubt it works out very well...
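
To make the concern concrete, the sketch below (illustrative only, simplified helpers, not the actual KVM pgtable code) contrasts how a pre-LPA2 walker and an LPA2-aware walker would decode a descriptor's output address. As I understand the LPA2 descriptor format for the 4k/16k granules, bits [9:8] (previously the shareability field) carry PA bits [51:50] and bits [49:48] carry PA bits [49:48], so a walker that assumes the old layout silently drops any physical address bits above 47.

/*
 * Illustrative only -- simplified, not the KVM pgtable code.  Shows why
 * a pre-LPA2 walker goes wrong once the kernel writes LPA2-format
 * descriptors (4k/16k granules) for physical addresses above 48 bits.
 */
#include <stdio.h>
#include <stdint.h>

#define GENMASK_ULL(h, l)	((~0ULL >> (63 - (h))) & (~0ULL << (l)))

/* Pre-LPA2 view (4k granule): the output address is simply bits [47:12]. */
static uint64_t pte_to_phys_legacy(uint64_t pte)
{
	return pte & GENMASK_ULL(47, 12);
}

/* LPA2 view (4k granule): bits [49:12] in place, PA[51:50] taken from bits [9:8]. */
static uint64_t pte_to_phys_lpa2(uint64_t pte)
{
	uint64_t pa = pte & GENMASK_ULL(49, 12);

	pa |= (pte & GENMASK_ULL(9, 8)) << 42;	/* descriptor [9:8] -> PA [51:50] */
	return pa;
}

int main(void)
{
	/* LPA2-format descriptor for PA = (1ULL << 50) | 0x80000000, attributes omitted. */
	uint64_t pte = 0x80000000ULL | (1ULL << 8);

	printf("legacy: %#llx\n", (unsigned long long)pte_to_phys_legacy(pte));
	printf("lpa2:   %#llx\n", (unsigned long long)pte_to_phys_lpa2(pte));
	return 0;
}

As long as every physical address in the walked table fits in 48 bits, descriptor bits [49:48] and [9:8] are zero and the two decodings agree, which is the "works out OK" case described above.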

Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d64be7b5f6692e8b..4e7c0f9a9c286c09 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -663,7 +663,7 @@  static int get_user_mapping_size(struct kvm *kvm, u64 addr)
 		.pgd		= (kvm_pteref_t)kvm->mm->pgd,
 		.ia_bits	= vabits_actual,
 		.start_level	= (KVM_PGTABLE_MAX_LEVELS -
-				   CONFIG_PGTABLE_LEVELS),
+				   ARM64_HW_PGTABLE_LEVELS(pgt.ia_bits)),
 		.mm_ops		= &kvm_user_mm_ops,
 	};
 	kvm_pte_t pte = 0;	/* Keep GCC quiet... */
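
For completeness, a rough check of what the new start_level expression evaluates to, assuming a 4k granule and KVM_PGTABLE_MAX_LEVELS == 4 as in the pre-LPA2 code:

	vabits_actual = 39: ARM64_HW_PGTABLE_LEVELS(39) = (39 - 4) / 9 = 3  ->  start_level = 4 - 3 = 1
	vabits_actual = 48: ARM64_HW_PGTABLE_LEVELS(48) = (48 - 4) / 9 = 4  ->  start_level = 4 - 4 = 0

Both match the old KVM_PGTABLE_MAX_LEVELS - CONFIG_PGTABLE_LEVELS value for the corresponding fixed-VA configurations, so the change should only make a difference when the runtime VA size differs from the compile-time one, which is the LPA2 fallback case the commit message describes.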