diff mbox series

[1/2] KVM: arm64: Update page shift if stage 2 block mapping not supported

Message ID 20200901133357.52640-2-alexandru.elisei@arm.com (mailing list archive)
State New, archived
Headers show
Series KVM: arm64: user_mem_abort() improvements | expand

Commit Message

Alexandru Elisei Sept. 1, 2020, 1:33 p.m. UTC
Commit 196f878a7ac2e ("KVM: arm/arm64: Signal SIGBUS when stage2 discovers
hwpoison memory") modified user_mem_abort() to send a SIGBUS signal when
the fault IPA maps to a hwpoisoned page. Commit 1559b7583ff6 ("KVM:
arm/arm64: Re-check VMA on detecting a poisoned page") changed
kvm_send_hwpoison_signal() to use the page shift instead of the VMA because
at that point the code had already released the mmap lock, which means
userspace could have modified the VMA.

If userspace uses hugetlbfs for the VM memory, user_mem_abort() tries to
map the guest fault IPA using block mappings at stage 2. That is not always
possible: for example, when userspace uses dirty page logging for the VM,
the fault must be mapped with a single page instead. Update the page shift
accordingly when we downgrade the stage 2 entry from a block mapping to a
page.

Fixes: 1559b7583ff6 ("KVM: arm/arm64: Re-check VMA on detecting a poisoned page")
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kvm/mmu.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Gavin Shan Sept. 2, 2020, 12:57 a.m. UTC | #1
On 9/1/20 11:33 PM, Alexandru Elisei wrote:
> Commit 196f878a7ac2e (" KVM: arm/arm64: Signal SIGBUS when stage2 discovers
> hwpoison memory") modifies user_mem_abort() to send a SIGBUS signal when
> the fault IPA maps to a hwpoisoned page. Commit 1559b7583ff6 ("KVM:
> arm/arm64: Re-check VMA on detecting a poisoned page") changed
> kvm_send_hwpoison_signal() to use the page shift instead of the VMA because
> at that point the code had already released the mmap lock, which means
> userspace could have modified the VMA.
> 
> If userspace uses hugetlbfs for the VM memory, user_mem_abort() tries to
> map the guest fault IPA using block mappings in stage 2. That is not always
> possible, if, for example, userspace uses dirty page logging for the VM.
> Update the page shift appropriately in those cases when we downgrade the
> stage 2 entry from a block mapping to a page.
> 
> Fixes: 1559b7583ff6 ("KVM: arm/arm64: Re-check VMA on detecting a poisoned page")
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---

Reviewed-by: Gavin Shan <gshan@redhat.com>

>   arch/arm64/kvm/mmu.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index ba00bcc0c884..25e7dc52c086 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1877,6 +1877,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   	    !fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
>   		force_pte = true;
>   		vma_pagesize = PAGE_SIZE;
> +		vma_shift = PAGE_SHIFT;
>   	}
>   
>   	/*
>
diff mbox series

Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ba00bcc0c884..25e7dc52c086 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1877,6 +1877,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	    !fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
 		force_pte = true;
 		vma_pagesize = PAGE_SIZE;
+		vma_shift = PAGE_SHIFT;
 	}
 
 	/*