
[v2,05/22] KVM: x86: Retry to-be-emulated insn in "slow" unprotect path iff sp is zapped

Message ID 20240831001538.336683-6-seanjc@google.com (mailing list archive)
State New
Series KVM: x86: Fix multiple #PF RO infinite loop bugs

Commit Message

Sean Christopherson Aug. 31, 2024, 12:15 a.m. UTC
Resume the guest and thus skip emulation of a non-PTE-writing instruction
if and only if unprotecting the gfn actually zapped at least one shadow
page.  If the gfn is write-protected for some reason other than shadow
paging, attempting to unprotect the gfn will effectively fail, and thus
retrying the instruction is all but guaranteed to be pointless.  This bug
has existed for a long time, but was effectively fudged around by the
retry RIP+address anti-loop detection.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
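
For readers skimming the archive: the fix hinges on kvm_mmu_unprotect_page() returning true only when it actually zapped at least one shadow page for the gfn, per the commit message above. Below is a simplified, annotated sketch of the patched tail of retry_instruction(); it mirrors the hunk in the patch further down, with the earlier retry-eligibility checks in the function omitted.

	/*
	 * For a shadow (non-direct) MMU, the faulting address is a GVA, so
	 * translate it to a GPA before looking up the gfn to unprotect.
	 */
	if (!vcpu->arch.mmu->root_role.direct)
		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);

	/*
	 * Don't retry the instruction if unprotecting the gfn didn't zap any
	 * shadow pages, i.e. if the gfn is write-protected for some reason
	 * other than shadow paging; re-executing would simply fault again.
	 */
	if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
		return false;

	/* Record the retry so a future fault on the same RIP+address bails. */
	vcpu->arch.last_retry_eip = ctxt->eip;
	vcpu->arch.last_retry_addr = cr2_or_gpa;
	return true;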

Comments

Yuan Yao Sept. 6, 2024, 8:17 a.m. UTC | #1
On Fri, Aug 30, 2024 at 05:15:20PM -0700, Sean Christopherson wrote:
> Resume the guest and thus skip emulation of a non-PTE-writing instruction
> if and only if unprotecting the gfn actually zapped at least one shadow
> page.  If the gfn is write-protected for some reason other than shadow
> paging, attempting to unprotect the gfn will effectively fail, and thus
> retrying the instruction is all but guaranteed to be pointless.  This bug
> has existed for a long time, but was effectively fudged around by the
> retry RIP+address anti-loop detection.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/x86.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 966fb301d44b..c4cb6c6d605b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8961,14 +8961,14 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
>  	if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa)
>  		return false;
>
> +	if (!vcpu->arch.mmu->root_role.direct)
> +		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
> +
> +	if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
> +		return false;
> +

Reviewed-by: Yuan Yao <yuan.yao@intel.com>

>  	vcpu->arch.last_retry_eip = ctxt->eip;
>  	vcpu->arch.last_retry_addr = cr2_or_gpa;
> -
> -	if (!vcpu->arch.mmu->root_role.direct)
> -		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
> -
> -	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> -
>  	return true;
>  }
>
> --
> 2.46.0.469.g59c65b2a67-goog
>

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 966fb301d44b..c4cb6c6d605b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8961,14 +8961,14 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
 	if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa)
 		return false;
 
+	if (!vcpu->arch.mmu->root_role.direct)
+		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
+
+	if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
+		return false;
+
 	vcpu->arch.last_retry_eip = ctxt->eip;
 	vcpu->arch.last_retry_addr = cr2_or_gpa;
-
-	if (!vcpu->arch.mmu->root_role.direct)
-		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
-
-	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
-
 	return true;
 }