Message ID: 1455236525-14866-2-git-send-email-tlengyel@novetta.com (mailing list archive)
State: New, archived
On 02/12/2016 02:22 AM, Tamas K Lengyel wrote:
> Currently the registers saved in the request depend on which type of event
> is filling in the registers. In this patch we consolidate the two versions
> of register filling function as to return a fix set of registers irrespective
> of the underlying event.
>
> Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> ---
> v2: get dr7 from hvm context
> ---
>  xen/arch/x86/hvm/event.c       | 35 ++-----------------------
>  xen/arch/x86/mm/p2m.c          | 57 +---------------------------------
>  xen/arch/x86/vm_event.c        | 59 ++++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/vm_event.h |  2 ++
>  4 files changed, 64 insertions(+), 89 deletions(-)

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>

Thanks,
Razvan
>>> On 12.02.16 at 01:22, <tlengyel@novetta.com> wrote:
> --- a/xen/arch/x86/hvm/event.c
> +++ b/xen/arch/x86/hvm/event.c
> @@ -23,40 +23,9 @@
>  #include <asm/hvm/event.h>
>  #include <asm/monitor.h>
>  #include <asm/altp2m.h>
> +#include <asm/vm_event.h>
>  #include <public/vm_event.h>
>
> -static void hvm_event_fill_regs(vm_event_request_t *req)
> -{
> -    const struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    const struct vcpu *curr = current;
> -
> -    req->data.regs.x86.rax = regs->eax;
> -    req->data.regs.x86.rcx = regs->ecx;
> -    req->data.regs.x86.rdx = regs->edx;
> -    req->data.regs.x86.rbx = regs->ebx;
> -    req->data.regs.x86.rsp = regs->esp;
> -    req->data.regs.x86.rbp = regs->ebp;
> -    req->data.regs.x86.rsi = regs->esi;
> -    req->data.regs.x86.rdi = regs->edi;
> -
> -    req->data.regs.x86.r8 = regs->r8;
> -    req->data.regs.x86.r9 = regs->r9;
> -    req->data.regs.x86.r10 = regs->r10;
> -    req->data.regs.x86.r11 = regs->r11;
> -    req->data.regs.x86.r12 = regs->r12;
> -    req->data.regs.x86.r13 = regs->r13;
> -    req->data.regs.x86.r14 = regs->r14;
> -    req->data.regs.x86.r15 = regs->r15;
> -
> -    req->data.regs.x86.rflags = regs->eflags;
> -    req->data.regs.x86.rip = regs->eip;
> -
> -    req->data.regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
> -    req->data.regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
> -    req->data.regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
> -    req->data.regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
> -}

With this diff I suppose the patch here is meant to replace "vm_event:
Record FS_BASE/GS_BASE during events"? Such should be made explicit,
either by adding a note here (after the first --- separator) or by
explicitly withdrawing the other patch.

Jan
>>> On 12.02.16 at 01:22, <tlengyel@novetta.com> wrote:
> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -122,6 +122,65 @@ void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
>      v->arch.user_regs.eip = rsp->data.regs.x86.rip;
>  }
>
> +void vm_event_fill_regs(vm_event_request_t *req)
> +{
> +    const struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    struct segment_register seg;
> +    struct hvm_hw_cpu ctxt;
> +    struct vcpu *curr = current;
> +
> +    req->data.regs.x86.rax = regs->eax;
> +    req->data.regs.x86.rcx = regs->ecx;
> +    req->data.regs.x86.rdx = regs->edx;
> +    req->data.regs.x86.rbx = regs->ebx;
> +    req->data.regs.x86.rsp = regs->esp;
> +    req->data.regs.x86.rbp = regs->ebp;
> +    req->data.regs.x86.rsi = regs->esi;
> +    req->data.regs.x86.rdi = regs->edi;
> +
> +    req->data.regs.x86.r8 = regs->r8;
> +    req->data.regs.x86.r9 = regs->r9;
> +    req->data.regs.x86.r10 = regs->r10;
> +    req->data.regs.x86.r11 = regs->r11;
> +    req->data.regs.x86.r12 = regs->r12;
> +    req->data.regs.x86.r13 = regs->r13;
> +    req->data.regs.x86.r14 = regs->r14;
> +    req->data.regs.x86.r15 = regs->r15;
> +
> +    req->data.regs.x86.rflags = regs->eflags;
> +    req->data.regs.x86.rip = regs->eip;
> +
> +    if ( !is_hvm_domain(curr->domain) )
> +        return;

No such check existed in either of the two original functions. Why is
it needed all of the sudden? And if it is needed, why do the other
fields not get filled (as far as possible at least) for PV guests?

Jan
On 02/12/2016 11:57 AM, Jan Beulich wrote:
>>>> On 12.02.16 at 01:22, <tlengyel@novetta.com> wrote:
>> --- a/xen/arch/x86/vm_event.c
>> +++ b/xen/arch/x86/vm_event.c
>> @@ -122,6 +122,65 @@ void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
>>      v->arch.user_regs.eip = rsp->data.regs.x86.rip;
>>  }
>>
>> +void vm_event_fill_regs(vm_event_request_t *req)
>> +{
>> +    const struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +    struct segment_register seg;
>> +    struct hvm_hw_cpu ctxt;
>> +    struct vcpu *curr = current;
>> +
>> +    req->data.regs.x86.rax = regs->eax;
>> +    req->data.regs.x86.rcx = regs->ecx;
>> +    req->data.regs.x86.rdx = regs->edx;
>> +    req->data.regs.x86.rbx = regs->ebx;
>> +    req->data.regs.x86.rsp = regs->esp;
>> +    req->data.regs.x86.rbp = regs->ebp;
>> +    req->data.regs.x86.rsi = regs->esi;
>> +    req->data.regs.x86.rdi = regs->edi;
>> +
>> +    req->data.regs.x86.r8 = regs->r8;
>> +    req->data.regs.x86.r9 = regs->r9;
>> +    req->data.regs.x86.r10 = regs->r10;
>> +    req->data.regs.x86.r11 = regs->r11;
>> +    req->data.regs.x86.r12 = regs->r12;
>> +    req->data.regs.x86.r13 = regs->r13;
>> +    req->data.regs.x86.r14 = regs->r14;
>> +    req->data.regs.x86.r15 = regs->r15;
>> +
>> +    req->data.regs.x86.rflags = regs->eflags;
>> +    req->data.regs.x86.rip = regs->eip;
>> +
>> +    if ( !is_hvm_domain(curr->domain) )
>> +        return;
>
> No such check existed in either of the two original functions. Why is
> it needed all of the sudden? And if it is needed, why do the other
> fields not get filled (as far as possible at least) for PV guests?

I can't speak for Tamas, but I suspect the check has been placed there
because calls to hvm_funcs.save_cpu_ctxt(curr, &ctxt) and
hvm_get_segment_register(curr, x86_seg_fs, &seg) follow, and he's put
vm_event_fill_regs() in xen/arch/x86/vm_event.c (a previous function was
called hvm_event_fill_regs(), in arch/x86/hvm/event.c, so no checking
for HVM was needed).

I don't think the check is needed for the current codepaths, but since
the code has been moved to xen/arch/x86/ the question about future PV
events is fair.

Thanks,
Razvan
>>> On 12.02.16 at 11:19, <rcojocaru@bitdefender.com> wrote:
> On 02/12/2016 11:57 AM, Jan Beulich wrote:
>>>>> On 12.02.16 at 01:22, <tlengyel@novetta.com> wrote:
>>> --- a/xen/arch/x86/vm_event.c
>>> +++ b/xen/arch/x86/vm_event.c
>>> @@ -122,6 +122,65 @@ void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
>>>      v->arch.user_regs.eip = rsp->data.regs.x86.rip;
>>>  }
>>>
>>> +void vm_event_fill_regs(vm_event_request_t *req)
>>> +{
>>> +    const struct cpu_user_regs *regs = guest_cpu_user_regs();
>>> +    struct segment_register seg;
>>> +    struct hvm_hw_cpu ctxt;
>>> +    struct vcpu *curr = current;
>>> +
>>> +    req->data.regs.x86.rax = regs->eax;
>>> +    req->data.regs.x86.rcx = regs->ecx;
>>> +    req->data.regs.x86.rdx = regs->edx;
>>> +    req->data.regs.x86.rbx = regs->ebx;
>>> +    req->data.regs.x86.rsp = regs->esp;
>>> +    req->data.regs.x86.rbp = regs->ebp;
>>> +    req->data.regs.x86.rsi = regs->esi;
>>> +    req->data.regs.x86.rdi = regs->edi;
>>> +
>>> +    req->data.regs.x86.r8 = regs->r8;
>>> +    req->data.regs.x86.r9 = regs->r9;
>>> +    req->data.regs.x86.r10 = regs->r10;
>>> +    req->data.regs.x86.r11 = regs->r11;
>>> +    req->data.regs.x86.r12 = regs->r12;
>>> +    req->data.regs.x86.r13 = regs->r13;
>>> +    req->data.regs.x86.r14 = regs->r14;
>>> +    req->data.regs.x86.r15 = regs->r15;
>>> +
>>> +    req->data.regs.x86.rflags = regs->eflags;
>>> +    req->data.regs.x86.rip = regs->eip;
>>> +
>>> +    if ( !is_hvm_domain(curr->domain) )
>>> +        return;
>>
>> No such check existed in either of the two original functions. Why is
>> it needed all of the sudden? And if it is needed, why do the other
>> fields not get filled (as far as possible at least) for PV guests?
>
> I can't speak for Tamas, but I suspect the check has been placed there
> because calls to hvm_funcs.save_cpu_ctxt(curr, &ctxt) and
> hvm_get_segment_register(curr, x86_seg_fs, &seg) follow, and he's put
> vm_event_fill_regs() in xen/arch/x86/vm_event.c (a previous function was
> called hvm_event_fill_regs(), in arch/x86/hvm/event.c, so no checking
> for HVM was needed).
>
> I don't think the check is needed for the current codepaths, but since
> the code has been moved to xen/arch/x86/ the question about future PV
> events is fair.

In which case ASSERT(is_hvm_vcpu(curr)) would be the common
way to document this (at once avoiding the open coding of
is_hvm_vcpu()).

Jan
On Feb 12, 2016 03:41, "Jan Beulich" <JBeulich@suse.com> wrote:
>
> >>> On 12.02.16 at 11:19, <rcojocaru@bitdefender.com> wrote:
> > On 02/12/2016 11:57 AM, Jan Beulich wrote:
> >>>>> On 12.02.16 at 01:22, <tlengyel@novetta.com> wrote:
> >>> --- a/xen/arch/x86/vm_event.c
> >>> +++ b/xen/arch/x86/vm_event.c
> >>> @@ -122,6 +122,65 @@ void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
> >>>      v->arch.user_regs.eip = rsp->data.regs.x86.rip;
> >>>  }
> >>>
> >>> +void vm_event_fill_regs(vm_event_request_t *req)
> >>> +{
> >>> +    const struct cpu_user_regs *regs = guest_cpu_user_regs();
> >>> +    struct segment_register seg;
> >>> +    struct hvm_hw_cpu ctxt;
> >>> +    struct vcpu *curr = current;
> >>> +
> >>> +    req->data.regs.x86.rax = regs->eax;
> >>> +    req->data.regs.x86.rcx = regs->ecx;
> >>> +    req->data.regs.x86.rdx = regs->edx;
> >>> +    req->data.regs.x86.rbx = regs->ebx;
> >>> +    req->data.regs.x86.rsp = regs->esp;
> >>> +    req->data.regs.x86.rbp = regs->ebp;
> >>> +    req->data.regs.x86.rsi = regs->esi;
> >>> +    req->data.regs.x86.rdi = regs->edi;
> >>> +
> >>> +    req->data.regs.x86.r8 = regs->r8;
> >>> +    req->data.regs.x86.r9 = regs->r9;
> >>> +    req->data.regs.x86.r10 = regs->r10;
> >>> +    req->data.regs.x86.r11 = regs->r11;
> >>> +    req->data.regs.x86.r12 = regs->r12;
> >>> +    req->data.regs.x86.r13 = regs->r13;
> >>> +    req->data.regs.x86.r14 = regs->r14;
> >>> +    req->data.regs.x86.r15 = regs->r15;
> >>> +
> >>> +    req->data.regs.x86.rflags = regs->eflags;
> >>> +    req->data.regs.x86.rip = regs->eip;
> >>> +
> >>> +    if ( !is_hvm_domain(curr->domain) )
> >>> +        return;
> >>
> >> No such check existed in either of the two original functions. Why is
> >> it needed all of the sudden? And if it is needed, why do the other
> >> fields not get filled (as far as possible at least) for PV guests?
> >
> > I can't speak for Tamas, but I suspect the check has been placed there
> > because calls to hvm_funcs.save_cpu_ctxt(curr, &ctxt) and
> > hvm_get_segment_register(curr, x86_seg_fs, &seg) follow, and he's put
> > vm_event_fill_regs() in xen/arch/x86/vm_event.c (a previous function was
> > called hvm_event_fill_regs(), in arch/x86/hvm/event.c, so no checking
> > for HVM was needed).
> >
> > I don't think the check is needed for the current codepaths, but since
> > the code has been moved to xen/arch/x86/ the question about future PV
> > events is fair.

That was the idea.

> In which case ASSERT(is_hvm_vcpu(curr)) would be the common
> way to document this (at once avoiding the open coding of
> is_hvm_vcpu()).
>

I don't think we need an assert here. The function is fine for pv guests as
well up to that point. Filling in the rest of the registers for pv guests
can be done when pv events are implemented. Maybe a comment saying so is
warranted.

Tamas
>>> On 12.02.16 at 13:50, <tlengyel@novetta.com> wrote:
> On Feb 12, 2016 03:41, "Jan Beulich" <JBeulich@suse.com> wrote:
>> In which case ASSERT(is_hvm_vcpu(curr)) would be the common
>> way to document this (at once avoiding the open coding of
>> is_hvm_vcpu()).
>
> I don't think we need an assert here. The function is fine for pv guests as
> well up to that point. Filling in the rest of the registers for pv guests
> can be done when pv events are implemented. Maybe a comment saying so is
> warranted.

I disagree: Either you mean to support PV in the function, in which
case all fields should be filled, or you don't, in which case the
ASSERT() would at once document that PV is intended to not be
supported right now. With the conditional as in your patch any
future reader may either think "bug" or get confused as to the
actual intentions here.

Jan
On 12/02/16 00:22, Tamas K Lengyel wrote:
> Currently the registers saved in the request depend on which type of event
> is filling in the registers. In this patch we consolidate the two versions
> of register filling function as to return a fix set of registers irrespective
> of the underlying event.
>
> Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>

This isn't p2m code per se; but insofar as it's necessary, this patch or
any future version of the patch which is just removing
p2m_vm_event_fill_regs() elsewhere and renaming it can have:

Acked-by: George Dunlap <george.dunlap@citrix.com>

for the p2m.c change
On Fri, Feb 12, 2016 at 7:57 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 12.02.16 at 13:50, <tlengyel@novetta.com> wrote:
> > On Feb 12, 2016 03:41, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> In which case ASSERT(is_hvm_vcpu(curr)) would be the common
> >> way to document this (at once avoiding the open coding of
> >> is_hvm_vcpu()).
> >
> > I don't think we need an assert here. The function is fine for pv guests as
> > well up to that point. Filling in the rest of the registers for pv guests
> > can be done when pv events are implemented. Maybe a comment saying so is
> > warranted.
>
> I disagree: Either you mean to support PV in the function, in which
> case all fields should be filled, or you don't, in which case the
> ASSERT() would at once document that PV is intended to not be
> supported right now. With the conditional as in your patch any
> future reader may either think "bug" or get confused as to the
> actual intentions here.

Alright, sounds good to me.

Tamas
diff --git a/xen/arch/x86/hvm/event.c b/xen/arch/x86/hvm/event.c
index a3d4892..e84d44b 100644
--- a/xen/arch/x86/hvm/event.c
+++ b/xen/arch/x86/hvm/event.c
@@ -23,40 +23,9 @@
 #include <asm/hvm/event.h>
 #include <asm/monitor.h>
 #include <asm/altp2m.h>
+#include <asm/vm_event.h>
 #include <public/vm_event.h>
 
-static void hvm_event_fill_regs(vm_event_request_t *req)
-{
-    const struct cpu_user_regs *regs = guest_cpu_user_regs();
-    const struct vcpu *curr = current;
-
-    req->data.regs.x86.rax = regs->eax;
-    req->data.regs.x86.rcx = regs->ecx;
-    req->data.regs.x86.rdx = regs->edx;
-    req->data.regs.x86.rbx = regs->ebx;
-    req->data.regs.x86.rsp = regs->esp;
-    req->data.regs.x86.rbp = regs->ebp;
-    req->data.regs.x86.rsi = regs->esi;
-    req->data.regs.x86.rdi = regs->edi;
-
-    req->data.regs.x86.r8 = regs->r8;
-    req->data.regs.x86.r9 = regs->r9;
-    req->data.regs.x86.r10 = regs->r10;
-    req->data.regs.x86.r11 = regs->r11;
-    req->data.regs.x86.r12 = regs->r12;
-    req->data.regs.x86.r13 = regs->r13;
-    req->data.regs.x86.r14 = regs->r14;
-    req->data.regs.x86.r15 = regs->r15;
-
-    req->data.regs.x86.rflags = regs->eflags;
-    req->data.regs.x86.rip = regs->eip;
-
-    req->data.regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
-    req->data.regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
-    req->data.regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
-    req->data.regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
-}
-
 static int hvm_event_traps(uint8_t sync, vm_event_request_t *req)
 {
     int rc;
@@ -90,7 +59,7 @@ static int hvm_event_traps(uint8_t sync, vm_event_request_t *req)
         req->altp2m_idx = vcpu_altp2m(curr).p2midx;
     }
 
-    hvm_event_fill_regs(req);
+    vm_event_fill_regs(req);
     vm_event_put_request(currd, &currd->vm_event->monitor, req);
 
     return 1;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a45ee35..45c6caa 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1507,61 +1507,6 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
     }
 }
 
-static void p2m_vm_event_fill_regs(vm_event_request_t *req)
-{
-    const struct cpu_user_regs *regs = guest_cpu_user_regs();
-    struct segment_register seg;
-    struct hvm_hw_cpu ctxt;
-    struct vcpu *curr = current;
-
-    /* Architecture-specific vmcs/vmcb bits */
-    hvm_funcs.save_cpu_ctxt(curr, &ctxt);
-
-    req->data.regs.x86.rax = regs->eax;
-    req->data.regs.x86.rcx = regs->ecx;
-    req->data.regs.x86.rdx = regs->edx;
-    req->data.regs.x86.rbx = regs->ebx;
-    req->data.regs.x86.rsp = regs->esp;
-    req->data.regs.x86.rbp = regs->ebp;
-    req->data.regs.x86.rsi = regs->esi;
-    req->data.regs.x86.rdi = regs->edi;
-
-    req->data.regs.x86.r8 = regs->r8;
-    req->data.regs.x86.r9 = regs->r9;
-    req->data.regs.x86.r10 = regs->r10;
-    req->data.regs.x86.r11 = regs->r11;
-    req->data.regs.x86.r12 = regs->r12;
-    req->data.regs.x86.r13 = regs->r13;
-    req->data.regs.x86.r14 = regs->r14;
-    req->data.regs.x86.r15 = regs->r15;
-
-    req->data.regs.x86.rflags = regs->eflags;
-    req->data.regs.x86.rip = regs->eip;
-
-    req->data.regs.x86.dr7 = curr->arch.debugreg[7];
-    req->data.regs.x86.cr0 = ctxt.cr0;
-    req->data.regs.x86.cr2 = ctxt.cr2;
-    req->data.regs.x86.cr3 = ctxt.cr3;
-    req->data.regs.x86.cr4 = ctxt.cr4;
-
-    req->data.regs.x86.sysenter_cs = ctxt.sysenter_cs;
-    req->data.regs.x86.sysenter_esp = ctxt.sysenter_esp;
-    req->data.regs.x86.sysenter_eip = ctxt.sysenter_eip;
-
-    req->data.regs.x86.msr_efer = ctxt.msr_efer;
-    req->data.regs.x86.msr_star = ctxt.msr_star;
-    req->data.regs.x86.msr_lstar = ctxt.msr_lstar;
-
-    hvm_get_segment_register(curr, x86_seg_fs, &seg);
-    req->data.regs.x86.fs_base = seg.base;
-
-    hvm_get_segment_register(curr, x86_seg_gs, &seg);
-    req->data.regs.x86.gs_base = seg.base;
-
-    hvm_get_segment_register(curr, x86_seg_cs, &seg);
-    req->data.regs.x86.cs_arbytes = seg.attr.bytes;
-}
-
 void p2m_mem_access_emulate_check(struct vcpu *v,
                                   const vm_event_response_t *rsp)
 {
@@ -1760,7 +1705,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
         req->u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
         req->vcpu_id = v->vcpu_id;
 
-        p2m_vm_event_fill_regs(req);
+        vm_event_fill_regs(req);
 
         if ( altp2m_active(v->domain) )
         {
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 08d678a..4e71948 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -122,6 +122,65 @@ void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
     v->arch.user_regs.eip = rsp->data.regs.x86.rip;
 }
 
+void vm_event_fill_regs(vm_event_request_t *req)
+{
+    const struct cpu_user_regs *regs = guest_cpu_user_regs();
+    struct segment_register seg;
+    struct hvm_hw_cpu ctxt;
+    struct vcpu *curr = current;
+
+    req->data.regs.x86.rax = regs->eax;
+    req->data.regs.x86.rcx = regs->ecx;
+    req->data.regs.x86.rdx = regs->edx;
+    req->data.regs.x86.rbx = regs->ebx;
+    req->data.regs.x86.rsp = regs->esp;
+    req->data.regs.x86.rbp = regs->ebp;
+    req->data.regs.x86.rsi = regs->esi;
+    req->data.regs.x86.rdi = regs->edi;
+
+    req->data.regs.x86.r8 = regs->r8;
+    req->data.regs.x86.r9 = regs->r9;
+    req->data.regs.x86.r10 = regs->r10;
+    req->data.regs.x86.r11 = regs->r11;
+    req->data.regs.x86.r12 = regs->r12;
+    req->data.regs.x86.r13 = regs->r13;
+    req->data.regs.x86.r14 = regs->r14;
+    req->data.regs.x86.r15 = regs->r15;
+
+    req->data.regs.x86.rflags = regs->eflags;
+    req->data.regs.x86.rip = regs->eip;
+
+    if ( !is_hvm_domain(curr->domain) )
+        return;
+
+    /* Architecture-specific vmcs/vmcb bits */
+    hvm_funcs.save_cpu_ctxt(curr, &ctxt);
+
+    req->data.regs.x86.cr0 = ctxt.cr0;
+    req->data.regs.x86.cr2 = ctxt.cr2;
+    req->data.regs.x86.cr3 = ctxt.cr3;
+    req->data.regs.x86.cr4 = ctxt.cr4;
+
+    req->data.regs.x86.sysenter_cs = ctxt.sysenter_cs;
+    req->data.regs.x86.sysenter_esp = ctxt.sysenter_esp;
+    req->data.regs.x86.sysenter_eip = ctxt.sysenter_eip;
+
+    req->data.regs.x86.msr_efer = ctxt.msr_efer;
+    req->data.regs.x86.msr_star = ctxt.msr_star;
+    req->data.regs.x86.msr_lstar = ctxt.msr_lstar;
+
+    req->data.regs.x86.dr7 = ctxt.dr7;
+
+    hvm_get_segment_register(curr, x86_seg_fs, &seg);
+    req->data.regs.x86.fs_base = seg.base;
+
+    hvm_get_segment_register(curr, x86_seg_gs, &seg);
+    req->data.regs.x86.gs_base = seg.base;
+
+    hvm_get_segment_register(curr, x86_seg_cs, &seg);
+    req->data.regs.x86.cs_arbytes = seg.attr.bytes;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 5aff834..e81148d 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -44,4 +44,6 @@ void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp);
 
 void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp);
 
+void vm_event_fill_regs(vm_event_request_t *req);
+
 #endif /* __ASM_X86_VM_EVENT_H__ */
Currently the registers saved in the request depend on which type of event
is filling in the registers. In this patch we consolidate the two versions
of the register filling function so as to return a fixed set of registers
irrespective of the underlying event.

Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
v2: get dr7 from hvm context
---
 xen/arch/x86/hvm/event.c       | 35 ++-----------------------
 xen/arch/x86/mm/p2m.c          | 57 +---------------------------------
 xen/arch/x86/vm_event.c        | 59 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/vm_event.h |  2 ++
 4 files changed, 64 insertions(+), 89 deletions(-)