
x86/hvm: Allow guest_request vm_events coming from userspace

Message ID 1470380899-11382-1-git-send-email-rcojocaru@bitdefender.com (mailing list archive)
State New, archived

Commit Message

Razvan Cojocaru Aug. 5, 2016, 7:08 a.m. UTC
Allow guest userspace code to request that a vm_event be sent out
via VMCALL. This functionality seems to be handy for a number of
Xen developers, as stated on the mailing list (thread "[Xen-devel]
HVMOP_guest_request_vm_event only works from guest in ring0").

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
---
 xen/arch/x86/hvm/hvm.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

Jan Beulich Aug. 5, 2016, 7:46 a.m. UTC | #1
>>> On 05.08.16 at 09:08, <rcojocaru@bitdefender.com> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5349,8 +5349,14 @@ int hvm_do_hypercall(struct cpu_user_regs *regs)
>      switch ( mode )
>      {
>      case 8:        
> +        if ( eax == __HYPERVISOR_hvm_op &&
> +             regs->rdi == HVMOP_guest_request_vm_event )
> +            break;
>      case 4:
>      case 2:
> +        if ( eax == __HYPERVISOR_hvm_op &&
> +             regs->_ebx == HVMOP_guest_request_vm_event )
> +            break;

For one, Coverity will choke on there not being a fall-through
annotation. And then the resulting fall-through behavior is now
wrong: You don't want the 64-bit case to also do the 16-/32-bit
check.

Finally, as indicated before, such a special-casing model doesn't
scale well: as soon as further exceptions get suggested, the code
would quickly become hard to understand and maintain (as is already
demonstrated by the issue pointed out above, the addressing of which
will likely further convolute this code).

Jan

Patch

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 202866a..2d38936 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5349,8 +5349,14 @@  int hvm_do_hypercall(struct cpu_user_regs *regs)
     switch ( mode )
     {
     case 8:        
+        if ( eax == __HYPERVISOR_hvm_op &&
+             regs->rdi == HVMOP_guest_request_vm_event )
+            break;
     case 4:
     case 2:
+        if ( eax == __HYPERVISOR_hvm_op &&
+             regs->_ebx == HVMOP_guest_request_vm_event )
+            break;
         hvm_get_segment_register(curr, x86_seg_ss, &sreg);
         if ( unlikely(sreg.attr.fields.dpl) )
         {