Message ID | 20210121065508.1169585-2-wei.huang2@amd.com
State      | New, archived
Series     | Handle #GP for SVM execution instructions
On Thu, 2021-01-21 at 01:55 -0500, Wei Huang wrote:
> Move the instruction decode part out of x86_emulate_instruction() for it
> to be used in other places. Also kvm_clear_exception_queue() is moved
> inside the if-statement as it doesn't apply when KVM are coming back from
> userspace.
>
> Co-developed-by: Bandan Das <bsd@redhat.com>
> Signed-off-by: Bandan Das <bsd@redhat.com>
> Signed-off-by: Wei Huang <wei.huang2@amd.com>
> ---
>  arch/x86/kvm/x86.c | 63 +++++++++++++++++++++++++++++-----------------
>  arch/x86/kvm/x86.h |  2 ++
>  2 files changed, 42 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9a8969a6dd06..580883cee493 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7298,6 +7298,43 @@ static bool is_vmware_backdoor_opcode(struct x86_emulate_ctxt *ctxt)
>  	return false;
>  }
>
> +/*
> + * Decode and emulate instruction. Return EMULATION_OK if success.
> + */
> +int x86_emulate_decoded_instruction(struct kvm_vcpu *vcpu, int emulation_type,
> +				    void *insn, int insn_len)

Isn't the name of this function wrong? This function decodes the instruction.
So I would expect something like x86_decode_instruction.

> +{
> +	int r = EMULATION_OK;
> +	struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt;
> +
> +	init_emulate_ctxt(vcpu);
> +
> +	/*
> +	 * We will reenter on the same instruction since
> +	 * we do not set complete_userspace_io. This does not
> +	 * handle watchpoints yet, those would be handled in
> +	 * the emulate_ops.
> +	 */
> +	if (!(emulation_type & EMULTYPE_SKIP) &&
> +	    kvm_vcpu_check_breakpoint(vcpu, &r))
> +		return r;
> +
> +	ctxt->interruptibility = 0;
> +	ctxt->have_exception = false;
> +	ctxt->exception.vector = -1;
> +	ctxt->perm_ok = false;
> +
> +	ctxt->ud = emulation_type & EMULTYPE_TRAP_UD;
> +
> +	r = x86_decode_insn(ctxt, insn, insn_len);
> +
> +	trace_kvm_emulate_insn_start(vcpu);
> +	++vcpu->stat.insn_emulation;
> +
> +	return r;
> +}
> +EXPORT_SYMBOL_GPL(x86_emulate_decoded_instruction);
> +
>  int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  			    int emulation_type, void *insn, int insn_len)
>  {
> @@ -7317,32 +7354,12 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  	 */
>  	write_fault_to_spt = vcpu->arch.write_fault_to_shadow_pgtable;
>  	vcpu->arch.write_fault_to_shadow_pgtable = false;
> -	kvm_clear_exception_queue(vcpu);

I think that this change is OK, but I can't be 100% sure about this.

Best regards,
	Maxim Levitsky

>
>  	if (!(emulation_type & EMULTYPE_NO_DECODE)) {
> -		init_emulate_ctxt(vcpu);
> -
> -		/*
> -		 * We will reenter on the same instruction since
> -		 * we do not set complete_userspace_io. This does not
> -		 * handle watchpoints yet, those would be handled in
> -		 * the emulate_ops.
> -		 */
> -		if (!(emulation_type & EMULTYPE_SKIP) &&
> -		    kvm_vcpu_check_breakpoint(vcpu, &r))
> -			return r;
> -
> -		ctxt->interruptibility = 0;
> -		ctxt->have_exception = false;
> -		ctxt->exception.vector = -1;
> -		ctxt->perm_ok = false;
> -
> -		ctxt->ud = emulation_type & EMULTYPE_TRAP_UD;
> -
> -		r = x86_decode_insn(ctxt, insn, insn_len);
> +		kvm_clear_exception_queue(vcpu);
>
> -		trace_kvm_emulate_insn_start(vcpu);
> -		++vcpu->stat.insn_emulation;
> +		r = x86_emulate_decoded_instruction(vcpu, emulation_type,
> +						    insn, insn_len);
>  		if (r != EMULATION_OK) {
>  			if ((emulation_type & EMULTYPE_TRAP_UD) ||
>  			    (emulation_type & EMULTYPE_TRAP_UD_FORCED)) {
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index c5ee0f5ce0f1..fc42454a4c27 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -273,6 +273,8 @@ bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
>  				    int page_num);
>  bool kvm_vector_hashing_enabled(void);
>  void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_code);
> +int x86_emulate_decoded_instruction(struct kvm_vcpu *vcpu, int emulation_type,
> +				    void *insn, int insn_len);
>  int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  			    int emulation_type, void *insn, int insn_len);
>  fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
On 21/01/21 15:04, Maxim Levitsky wrote:
>> +int x86_emulate_decoded_instruction(struct kvm_vcpu *vcpu, int emulation_type,
>> +				    void *insn, int insn_len)
> Isn't the name of this function wrong? This function decodes the instruction.
> So I would expect something like x86_decode_instruction.

Yes, that or x86_decode_emulated_instruction.

Paolo
On 1/21/21 8:23 AM, Paolo Bonzini wrote:
> On 21/01/21 15:04, Maxim Levitsky wrote:
>>> +int x86_emulate_decoded_instruction(struct kvm_vcpu *vcpu, int emulation_type,
>>> +				    void *insn, int insn_len)
>> Isn't the name of this function wrong? This function decodes the
>> instruction.
>> So I would expect something like x86_decode_instruction.
>
> Yes, that or x86_decode_emulated_instruction.

I was debating about it while making the change. I will update it to the new name in v3.