@@ -1532,6 +1532,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (logging_active) {
force_pte = true;
vma_shift = PAGE_SHIFT;
+ } else if (kvm_is_realm(kvm)) {
+ // Force PTE level mappings for realms
+ force_pte = true;
+ vma_shift = PAGE_SHIFT;
} else {
vma_shift = get_vma_page_shift(vma, hva);
}
@@ -847,7 +847,9 @@ static int populate_par_region(struct kvm *kvm,
break;
}
- if (is_vm_hugetlb_page(vma))
+ // FIXME: To avoid the overmapping issue (see below comment)
+ // force the use of 4k pages
+ if (is_vm_hugetlb_page(vma) && 0)
vma_shift = huge_page_shift(hstate_vma(vma));
else
vma_shift = PAGE_SHIFT;
Always split up huge pages to avoid problems managing huge pages. There
are two issues currently:

1. The uABI for the VMM allows populating memory on 4k boundaries even
   if the underlying allocator (e.g. hugetlbfs) is using a larger page
   size. Using a memfd for private allocations will push this issue
   onto the VMM as it will need to respect the granularity of the
   allocator.

2. The guest is able to request arbitrary ranges to be remapped as
   shared. Again with a memfd approach it will be up to the VMM to deal
   with the complexity and either overmap (need the huge mapping and
   add an additional 'overlapping' shared mapping) or reject the
   request as invalid due to the use of a huge page allocator.

For now just break everything down to 4k pages in the RMM controlled
stage 2.

Signed-off-by: Steven Price <steven.price@arm.com>
---
 arch/arm64/kvm/mmu.c | 4 ++++
 arch/arm64/kvm/rme.c | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)
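
To make issue 2 concrete, here is a small, self-contained sketch of the arithmetic behind the overmapping problem, under assumed sizes only (2M huge pages, 4k base pages, not taken from the patch): if the guest flips a single 4k page inside a 2M block mapping to shared, the block either has to be split into base-page entries or shadowed by an overlapping shared mapping, which is exactly what forcing 4k mappings up front avoids.

#include <stdio.h>

int main(void)
{
	const unsigned long huge_size = 2UL << 20;	/* 2 MiB block mapping (assumed) */
	const unsigned long base_size = 4UL << 10;	/* 4 KiB base page */

	/* Splitting one block mapping replaces it with this many PTEs. */
	printf("PTEs after splitting one 2M block: %lu\n",
	       huge_size / base_size);

	/*
	 * With 4k mappings forced from the start there is nothing to
	 * split: a guest request to share one page touches exactly one
	 * stage-2 entry.
	 */
	printf("entries touched when already at 4k: %d\n", 1);
	return 0;
}
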