Message ID | 20241127201929.4005605-7-aaronlewis@google.com |
---|---|
State | New |
Series | Unify MSR intercepts in x86 |
On Wed, Nov 27, 2024, Aaron Lewis wrote:
> From: Anish Ghulati <aghulati@google.com>
>
> For all direct access MSRs, disable the MSR interception explicitly.
> svm_disable_intercept_for_msr() checks the new MSR filter and ensures that
> KVM enables interception if userspace wants to filter the MSR.
>
> This change is similar to the VMX change:
> d895f28ed6da ("KVM: VMX: Skip filter updates for MSRs that KVM is already intercepting")
>
> Adopting in SVM to align the implementations.

Wording and mood are all funky.  Give SVM the same treatment as was given
VMX in commit d895f28ed6da ("KVM: VMX: Skip filter updates for MSRs that
KVM is already intercepting"), and explicitly disable MSR interception when
reacting to an MSR filter change.  There is no need to change anything for
MSRs KVM is already intercepting, and svm_disable_intercept_for_msr()
performs the necessary filter checks.

> Suggested-by: Sean Christopherson <seanjc@google.com>
> Co-developed-by: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Anish Ghulati <aghulati@google.com>

See the docs again.  The order is wrong, and your SoB is missing.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b982729ef7638..37b8683849ed2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1025,17 +1025,21 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
 	u32 i;
 
 	/*
-	 * Set intercept permissions for all direct access MSRs again. They
-	 * will automatically get filtered through the MSR filter, so we are
-	 * back in sync after this.
+	 * Redo intercept permissions for MSRs that KVM is passing through to
+	 * the guest.  Disabling interception will check the new MSR filter and
+	 * ensure that KVM enables interception if userspace wants to filter
+	 * the MSR.  MSRs that KVM is already intercepting don't need to be
+	 * refreshed since KVM is going to intercept them regardless of what
+	 * userspace wants.
 	 */
 	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
 		u32 msr = direct_access_msrs[i].index;
-		u32 read = !test_bit(i, svm->shadow_msr_intercept.read);
-		u32 write = !test_bit(i, svm->shadow_msr_intercept.write);
 
-		/* FIXME: Align the polarity of the bitmaps and params. */
-		set_msr_interception_bitmap(vcpu, svm->msrpm, msr, read, write);
+		if (!test_bit(i, svm->shadow_msr_intercept.read))
+			svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
+
+		if (!test_bit(i, svm->shadow_msr_intercept.write))
+			svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
 	}
 }