Message ID: 20200923163629.20168-3-sean.j.christopherson@intel.com (mailing list archive)
State: New, archived
Series: KVM: VMX: Clean up RTIT MAXPHYADDR usage
On 23/09/20 18:36, Sean Christopherson wrote:
> +static inline bool pt_output_base_valid(struct kvm_vcpu *vcpu, u64 base)
> +{
> +	/* The base must be 128-byte aligned and a legal physical address. */
> +	return !(base & (~((1UL << cpuid_maxphyaddr(vcpu)) - 1) | 0x7f));
> +}

The fact that you deemed a comment necessary says something already. :)
What about:

	return !kvm_mmu_is_illegal_gpa(vcpu, base) && !(base & 0x7f);

(where this new usage makes it obvious that mmu should have been vcpu).

Paolo
On Wed, Sep 23, 2020 at 07:07:22PM +0200, Paolo Bonzini wrote:
> On 23/09/20 18:36, Sean Christopherson wrote:
> > +static inline bool pt_output_base_valid(struct kvm_vcpu *vcpu, u64 base)
> > +{
> > +	/* The base must be 128-byte aligned and a legal physical address. */
> > +	return !(base & (~((1UL << cpuid_maxphyaddr(vcpu)) - 1) | 0x7f));
> > +}
>
> The fact that you deemed a comment necessary says something already. :)
> What about:
>
> 	return !kvm_mmu_is_illegal_gpa(vcpu, base) && !(base & 0x7f);
>
> (where this new usage makes it obvious that mmu should have been vcpu).

Ya.  I think it was a sort of sunk cost fallacy.  Dammit, I spent all that
time figuring out what this code does, I'm keeping it!!!

v3 incoming...
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index be82da055fc4..0d41faf63b57 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -146,9 +146,6 @@ module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
 	 RTIT_STATUS_ERROR | RTIT_STATUS_STOPPED | \
 	 RTIT_STATUS_BYTECNT))

-#define MSR_IA32_RTIT_OUTPUT_BASE_MASK \
-	(~((1UL << cpuid_maxphyaddr(vcpu)) - 1) | 0x7f)
-
 /*
  * These 2 parameters are used to config the controls for Pause-Loop Exiting:
  * ple_gap:    upper bound on the amount of time between two successive
@@ -1037,6 +1034,12 @@ static inline bool pt_can_write_msr(struct vcpu_vmx *vmx)
 	       !(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN);
 }

+static inline bool pt_output_base_valid(struct kvm_vcpu *vcpu, u64 base)
+{
+	/* The base must be 128-byte aligned and a legal physical address. */
+	return !(base & (~((1UL << cpuid_maxphyaddr(vcpu)) - 1) | 0x7f));
+}
+
 static inline void pt_load_msr(struct pt_ctx *ctx, u32 addr_range)
 {
 	u32 i;
@@ -2167,7 +2170,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		    !intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_single_range_output))
 			return 1;
-		if (data & MSR_IA32_RTIT_OUTPUT_BASE_MASK)
+		if (!pt_output_base_valid(vcpu, data))
 			return 1;
 		vmx->pt_desc.guest.output_base = data;
 		break;
Replace the subtly not-a-constant MSR_IA32_RTIT_OUTPUT_BASE_MASK with a
proper helper function to check whether or not the specified base is
valid.  Blindly referencing the local 'vcpu' is especially nasty.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)