Message ID: 1494411564-76243-1-git-send-email-wanpeng.li@hotmail.com (mailing list archive)
State: New, archived
On 10/05/2017 12:19, Wanpeng Li wrote:
> From: Wanpeng Li <wanpeng.li@hotmail.com>
>
> Reported by syzkaller:
>
>   BUG: unable to handle kernel paging request at ffffffffc07f6a2e
>   IP: report_bug+0x94/0x120
>   PGD 348e12067
>   P4D 348e12067
>   PUD 348e14067
>   PMD 3cbd84067
>   PTE 80000003f7e87161
>
>   Oops: 0003 [#1] SMP
>   CPU: 2 PID: 7091 Comm: kvm_load_guest_ Tainted: G OE 4.11.0+ #8
>   task: ffff92fdfb525400 task.stack: ffffbda6c3d04000
>   RIP: 0010:report_bug+0x94/0x120
>   RSP: 0018:ffffbda6c3d07b20 EFLAGS: 00010202
>    do_trap+0x156/0x170
>    do_error_trap+0xa3/0x170
>    ? kvm_load_guest_fpu.part.175+0x12a/0x170 [kvm]
>    ? mark_held_locks+0x79/0xa0
>    ? retint_kernel+0x10/0x10
>    ? trace_hardirqs_off_thunk+0x1a/0x1c
>    do_invalid_op+0x20/0x30
>    invalid_op+0x1e/0x30
>   RIP: 0010:kvm_load_guest_fpu.part.175+0x12a/0x170 [kvm]
>    ? kvm_load_guest_fpu.part.175+0x1c/0x170 [kvm]
>    kvm_arch_vcpu_ioctl_run+0xed6/0x1b70 [kvm]
>    kvm_vcpu_ioctl+0x384/0x780 [kvm]
>    ? kvm_vcpu_ioctl+0x384/0x780 [kvm]
>    ? sched_clock+0x13/0x20
>    ? __do_page_fault+0x2a0/0x550
>    do_vfs_ioctl+0xa4/0x700
>    ? up_read+0x1f/0x40
>    ? __do_page_fault+0x2a0/0x550
>    SyS_ioctl+0x79/0x90
>    entry_SYSCALL_64_fastpath+0x23/0xc2
>
> The SDM states that "The MXCSR has several reserved bits, and attempting
> to write a 1 to any of these bits will cause a general-protection
> exception (#GP) to be generated". The syzkaller fork's testcase
> overwrites the xsave area with random values and sets reserved bits of
> the MXCSR register. The corrupted guest MXCSR value would then be
> restored to the hardware MXCSR register before vmentry. This patch fixes
> it by catching userspace writes that set reserved MXCSR bits and bailing
> out immediately with -EINVAL.
>
> Reported-by: Andrey Konovalov <andreyknvl@google.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: stable@vger.kernel.org

Radim should be able to merge it before -rc2.

Thanks!

Paolo

> ---
>  arch/x86/kvm/x86.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 464da93..5e9e0e7 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3288,11 +3288,14 @@ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
>  	}
>  }
>
> +#define XSAVE_MXCSR_OFFSET 24
> +
>  static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
>  					struct kvm_xsave *guest_xsave)
>  {
>  	u64 xstate_bv =
>  		*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)];
> +	u32 mxcsr = *(u32 *)&guest_xsave->region[XSAVE_MXCSR_OFFSET / sizeof(u32)];
>
>  	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
>  		/*
> @@ -3300,11 +3303,13 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
>  		 * CPUID leaf 0xD, index 0, EDX:EAX.  This is for compatibility
>  		 * with old userspace.
>  		 */
> -		if (xstate_bv & ~kvm_supported_xcr0())
> +		if (xstate_bv & ~kvm_supported_xcr0() ||
> +			mxcsr & ~vcpu->arch.guest_fpu.state.xsave.i387.mxcsr_mask)
>  			return -EINVAL;
>  		load_xsave(vcpu, (u8 *)guest_xsave->region);
>  	} else {
> -		if (xstate_bv & ~XFEATURE_MASK_FPSSE)
> +		if (xstate_bv & ~XFEATURE_MASK_FPSSE ||
> +			mxcsr & ~vcpu->arch.guest_fpu.state.fxsave.mxcsr_mask)
>  			return -EINVAL;
>  		memcpy(&vcpu->arch.guest_fpu.state.fxsave,
>  			guest_xsave->region, sizeof(struct fxregs_state));
On 10/05/2017 12:19, Wanpeng Li wrote:
> 		 * with old userspace.
> 		 */
> -		if (xstate_bv & ~kvm_supported_xcr0())
> +		if (xstate_bv & ~kvm_supported_xcr0() ||
> +			mxcsr & ~vcpu->arch.guest_fpu.state.xsave.i387.mxcsr_mask)
> 			return -EINVAL;
> 		load_xsave(vcpu, (u8 *)guest_xsave->region);
> 	} else {
> -		if (xstate_bv & ~XFEATURE_MASK_FPSSE)
> +		if (xstate_bv & ~XFEATURE_MASK_FPSSE ||
> +			mxcsr & ~vcpu->arch.guest_fpu.state.fxsave.mxcsr_mask)
> 			return -EINVAL;
> 		memcpy(&vcpu->arch.guest_fpu.state.fxsave,
> 			guest_xsave->region, sizeof(struct fxregs_state));

Hmm, thinking more about it, maybe use mxcsr_feature_mask instead of
digging into vcpu->arch.guest_fpu?  If you send v2, please remember to
Cc stable@vger.kernel.org.

Paolo
2017-05-10 23:35 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>
> On 10/05/2017 12:19, Wanpeng Li wrote:
>> 		 * with old userspace.
>> 		 */
>> -		if (xstate_bv & ~kvm_supported_xcr0())
>> +		if (xstate_bv & ~kvm_supported_xcr0() ||
>> +			mxcsr & ~vcpu->arch.guest_fpu.state.xsave.i387.mxcsr_mask)
>> 			return -EINVAL;
>> 		load_xsave(vcpu, (u8 *)guest_xsave->region);
>> 	} else {
>> -		if (xstate_bv & ~XFEATURE_MASK_FPSSE)
>> +		if (xstate_bv & ~XFEATURE_MASK_FPSSE ||
>> +			mxcsr & ~vcpu->arch.guest_fpu.state.fxsave.mxcsr_mask)
>> 			return -EINVAL;
>> 		memcpy(&vcpu->arch.guest_fpu.state.fxsave,
>> 			guest_xsave->region, sizeof(struct fxregs_state));
>
> Hmm, thinking more about it, maybe use mxcsr_feature_mask instead of
> digging into vcpu->arch.guest_fpu?  If you send v2, please remember to

ERROR: "mxcsr_feature_mask" [arch/x86/kvm/kvm.ko] undefined.

So we should dig into vcpu->arch.guest_fpu.

Regards,
Wanpeng Li
On 11/05/2017 02:56, Wanpeng Li wrote:
>> Hmm, thinking more about it, maybe use mxcsr_feature_mask instead of
>> digging into vcpu->arch.guest_fpu?  If you send v2, please remember to
>
> ERROR: "mxcsr_feature_mask" [arch/x86/kvm/kvm.ko] undefined.  So we
> should dig into vcpu->arch.guest_fpu.

Yes, you need to export it.

Paolo
2017-05-11 15:44 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>
> On 11/05/2017 02:56, Wanpeng Li wrote:
>>> Hmm, thinking more about it, maybe use mxcsr_feature_mask instead of
>>> digging into vcpu->arch.guest_fpu?  If you send v2, please remember to
>>
>> ERROR: "mxcsr_feature_mask" [arch/x86/kvm/kvm.ko] undefined.  So we
>> should dig into vcpu->arch.guest_fpu.
>
> Yes, you need to export it.

I will do it in v2. :)

Regards,
Wanpeng Li
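Paolo's suggestion would look roughly like the sketch below. This is an illustration of the proposed v2 direction, not the merged patch: `mxcsr_feature_mask` is defined in arch/x86/kernel/fpu/init.c and needs an export before the kvm module can reference it, after which both branches of the check can use the same global mask instead of reaching into vcpu->arch.guest_fpu.

```diff
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@
 unsigned int mxcsr_feature_mask __read_mostly = 0xffffffffu;
+EXPORT_SYMBOL_GPL(mxcsr_feature_mask);
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@
 		if (xstate_bv & ~kvm_supported_xcr0() ||
-			mxcsr & ~vcpu->arch.guest_fpu.state.xsave.i387.mxcsr_mask)
+			mxcsr & ~mxcsr_feature_mask)
 			return -EINVAL;
```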
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 464da93..5e9e0e7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3288,11 +3288,14 @@ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
 	}
 }
 
+#define XSAVE_MXCSR_OFFSET 24
+
 static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
 					struct kvm_xsave *guest_xsave)
 {
 	u64 xstate_bv =
 		*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)];
+	u32 mxcsr = *(u32 *)&guest_xsave->region[XSAVE_MXCSR_OFFSET / sizeof(u32)];
 
 	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
 		/*
@@ -3300,11 +3303,13 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
 		 * CPUID leaf 0xD, index 0, EDX:EAX.  This is for compatibility
 		 * with old userspace.
 		 */
-		if (xstate_bv & ~kvm_supported_xcr0())
+		if (xstate_bv & ~kvm_supported_xcr0() ||
+			mxcsr & ~vcpu->arch.guest_fpu.state.xsave.i387.mxcsr_mask)
 			return -EINVAL;
 		load_xsave(vcpu, (u8 *)guest_xsave->region);
 	} else {
-		if (xstate_bv & ~XFEATURE_MASK_FPSSE)
+		if (xstate_bv & ~XFEATURE_MASK_FPSSE ||
+			mxcsr & ~vcpu->arch.guest_fpu.state.fxsave.mxcsr_mask)
 			return -EINVAL;
 		memcpy(&vcpu->arch.guest_fpu.state.fxsave,
 			guest_xsave->region, sizeof(struct fxregs_state));