
KVM: VMX: Skip #PF(RSVD) intercepts when emulating smaller maxphyaddr

Message ID 20210618235941.1041604-1-jmattson@google.com (mailing list archive)
State New, archived
Series KVM: VMX: Skip #PF(RSVD) intercepts when emulating smaller maxphyaddr

Commit Message

Jim Mattson June 18, 2021, 11:59 p.m. UTC
As part of smaller maxphyaddr emulation, KVM needs to intercept
present page faults to see if it needs to add the RSVD flag (bit 3) to
the error code. However, there is no need to intercept page faults
that already have the RSVD flag set. When setting up the page fault
intercept, add the RSVD flag into the #PF error code mask field (but
not the #PF error code match field) to skip the intercept when the
RSVD flag is already set.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)
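The mask/match fields the patch programs follow the hardware's #PF filtering rule: the CPU computes whether (error code & PFEC_MASK) == PFEC_MATCH and XORs the result with the #PF bit of the exception bitmap to decide whether to VM-exit. A minimal sketch of that decision (illustrative only, not KVM code; the helper name is hypothetical, the PFERR_* values mirror the kernel's definitions):

```c
#include <stdbool.h>
#include <stdint.h>

/* Page-fault error-code bits, as defined in the kernel. */
#define PFERR_PRESENT_MASK (1u << 0)
#define PFERR_RSVD_MASK    (1u << 3)

/*
 * Sketch of the hardware's #PF VM-exit decision: if the #PF bit in the
 * exception bitmap is 1, the fault exits when the error code matches;
 * if the bit is 0, it exits when the error code does NOT match.
 */
static bool pf_causes_vmexit(uint32_t error_code, uint32_t pfec_mask,
			     uint32_t pfec_match, bool eb_pf_bit)
{
	bool match = (error_code & pfec_mask) == pfec_match;

	return eb_pf_bit ? match : !match;
}
```

With the patch's settings (mask = PRESENT | RSVD, match = PRESENT) and the #PF bit set, a plain present fault still exits, but a fault whose error code already carries RSVD fails the match and is delivered to the guest without a VM exit, which is exactly the intercept being skipped.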

Comments

Paolo Bonzini June 21, 2021, 4:39 p.m. UTC | #1
On 19/06/21 01:59, Jim Mattson wrote:
> As part of smaller maxphyaddr emulation, kvm needs to intercept
> present page faults to see if it needs to add the RSVD flag (bit 3) to
> the error code. However, there is no need to intercept page faults
> that already have the RSVD flag set. When setting up the page fault
> intercept, add the RSVD flag into the #PF error code mask field (but
> not the #PF error code match field) to skip the intercept when the
> RSVD flag is already set.
> 
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
>   arch/x86/kvm/vmx/vmx.c | 23 ++++++++++++++---------
>   1 file changed, 14 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 68a72c80bd3f..1fc28d8b72c7 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -747,16 +747,21 @@ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu)
>   	if (is_guest_mode(vcpu))
>   		eb |= get_vmcs12(vcpu)->exception_bitmap;
>           else {
> -		/*
> -		 * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched
> -		 * between guest and host.  In that case we only care about present
> -		 * faults.  For vmcs02, however, PFEC_MASK and PFEC_MATCH are set in
> -		 * prepare_vmcs02_rare.
> -		 */
> -		bool selective_pf_trap = enable_ept && (eb & (1u << PF_VECTOR));
> -		int mask = selective_pf_trap ? PFERR_PRESENT_MASK : 0;
> +		int mask = 0, match = 0;
> +
> +		if (enable_ept && (eb & (1u << PF_VECTOR))) {
> +			/*
> +			 * If EPT is enabled, #PF is currently only intercepted
> +			 * if MAXPHYADDR is smaller on the guest than on the
> +			 * host.  In that case we only care about present,
> +			 * non-reserved faults.  For vmcs02, however, PFEC_MASK
> +			 * and PFEC_MATCH are set in prepare_vmcs02_rare.
> +			 */
> +			mask = PFERR_PRESENT_MASK | PFERR_RSVD_MASK;
> +			match = PFERR_PRESENT_MASK;
> +		}
>   		vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, mask);
> -		vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, mask);
> +		vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, match);
>   	}
>   
>   	vmcs_write32(EXCEPTION_BITMAP, eb);
> 

Queued, thanks.

Paolo

Patch

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 68a72c80bd3f..1fc28d8b72c7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -747,16 +747,21 @@ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu)
 	if (is_guest_mode(vcpu))
 		eb |= get_vmcs12(vcpu)->exception_bitmap;
         else {
-		/*
-		 * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched
-		 * between guest and host.  In that case we only care about present
-		 * faults.  For vmcs02, however, PFEC_MASK and PFEC_MATCH are set in
-		 * prepare_vmcs02_rare.
-		 */
-		bool selective_pf_trap = enable_ept && (eb & (1u << PF_VECTOR));
-		int mask = selective_pf_trap ? PFERR_PRESENT_MASK : 0;
+		int mask = 0, match = 0;
+
+		if (enable_ept && (eb & (1u << PF_VECTOR))) {
+			/*
+			 * If EPT is enabled, #PF is currently only intercepted
+			 * if MAXPHYADDR is smaller on the guest than on the
+			 * host.  In that case we only care about present,
+			 * non-reserved faults.  For vmcs02, however, PFEC_MASK
+			 * and PFEC_MATCH are set in prepare_vmcs02_rare.
+			 */
+			mask = PFERR_PRESENT_MASK | PFERR_RSVD_MASK;
+			match = PFERR_PRESENT_MASK;
+		}
 		vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, mask);
-		vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, mask);
+		vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, match);
 	}
 
 	vmcs_write32(EXCEPTION_BITMAP, eb);