
[2/3] KVM: x86: KVM_HC_RT_PRIO hypercall (host-side)

Message ID 20170921114039.364395490@redhat.com (mailing list archive)
State New, archived

Commit Message

Marcelo Tosatti Sept. 21, 2017, 11:38 a.m. UTC
Guest vcpu-0 must be executed with FIFO:1 priority to deal with the
following situation:

VCPU-0 (housekeeping VCPU)		VCPU-1 (realtime VCPU)

raw_spin_lock(A)
interrupted, schedule task T-1		raw_spin_lock(A) (spin)

raw_spin_unlock(A)

However, certain operations must be able to interrupt guest vcpu-0
(see trace below), which a constant FIFO:1 priority prevents.

To fix this issue, raise guest vcpu-0 to FIFO priority only during
spinlock critical sections (see patch).

Comments

Konrad Rzeszutek Wilk Sept. 21, 2017, 1:32 p.m. UTC | #1
On Thu, Sep 21, 2017 at 08:38:37AM -0300, Marcelo Tosatti wrote:
> When executing guest vcpu-0 with FIFO:1 priority, which is necessary to 
> deal with the following situation:
> 
> VCPU-0 (housekeeping VCPU)		VCPU-1 (realtime VCPU)
> 
> raw_spin_lock(A)
> interrupted, schedule task T-1		raw_spin_lock(A) (spin)
> 
> raw_spin_unlock(A)
> 
> Certain operations must interrupt guest vcpu-0 (see trace below).
> 
> To fix this issue, only change guest vcpu-0 to FIFO priority
> on spinlock critical sections (see patch).
> 
> Hang trace
> ==========
> 
> Without FIFO priority:
> 
> qemu-kvm-6705  [002] ....1.. 767785.648964: kvm_exit: reason IO_INSTRUCTION rip 0xe8fe info 1f00039 0
> qemu-kvm-6705  [002] ....1.. 767785.648965: kvm_exit: reason IO_INSTRUCTION rip 0xe911 info 3f60008 0
> qemu-kvm-6705  [002] ....1.. 767785.648968: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> qemu-kvm-6705  [002] ....1.. 767785.648971: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> qemu-kvm-6705  [002] ....1.. 767785.648974: kvm_exit: reason IO_INSTRUCTION rip 0xb514 info 3f60000 0
> qemu-kvm-6705  [002] ....1.. 767785.648977: kvm_exit: reason PENDING_INTERRUPT rip 0x8052 info 0 0
> qemu-kvm-6705  [002] ....1.. 767785.648980: kvm_exit: reason IO_INSTRUCTION rip 0xeee6 info 200040 0
> qemu-kvm-6705  [002] ....1.. 767785.648999: kvm_exit: reason EPT_MISCONFIG rip 0x2120 info 0 0
> 
> With FIFO priority:
> 
> qemu-kvm-7636  [002] ....1.. 768218.205065: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> qemu-kvm-7636  [002] ....1.. 768218.205068: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> qemu-kvm-7636  [002] ....1.. 768218.205071: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> qemu-kvm-7636  [002] ....1.. 768218.205074: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> qemu-kvm-7636  [002] ....1.. 768218.205077: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> ..
> 
> Performance numbers (kernel compilation with make -j2)
> ======================================================
> 
> With hypercall: 4:40.  (make -j2)
> Without hypercall: 3:38.  (make -j2)
> 
> Note for NFV workloads spinlock performance is not relevant
> since DPDK should not enter the kernel (and housekeeping vcpu
> performance is far from a key factor).
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> ---
>  Documentation/virtual/kvm/hypercalls.txt |   22 +++++++++++++++
>  arch/x86/kvm/x86.c                       |   43 +++++++++++++++++++++++++++++++
>  include/uapi/linux/kvm_para.h            |    2 +
>  3 files changed, 67 insertions(+)
> 
> Index: kvm.fifopriohc-submit/Documentation/virtual/kvm/hypercalls.txt
> ===================================================================
> --- kvm.fifopriohc-submit.orig/Documentation/virtual/kvm/hypercalls.txt
> +++ kvm.fifopriohc-submit/Documentation/virtual/kvm/hypercalls.txt
> @@ -121,3 +121,25 @@ compute the CLOCK_REALTIME for its clock
>  
>  Returns KVM_EOPNOTSUPP if the host does not use TSC clocksource,
>  or if clock type is different than KVM_CLOCK_PAIRING_WALLCLOCK.
> +
> +6. KVM_HC_RT_PRIO
> +------------------------
> +Architecture: x86
> +Status: active
> +Purpose: Hypercall used to change the qemu vcpu process RT priority.

So the guest can change the scheduling decisions at the host level?
And the host HAS to follow it? There is no policy override for the
host to say - nah, not going to do it?

Also wouldn't the guest want to always be at SCHED_FIFO? [I am thinking
of a guest admin who wants all the CPU resources he can get]


> +
> +Usage: Having a pCPU shared by a FIFO:1 vcpu and a QEMU emulator thread
> +can be problematic: especially if the vcpu busy-spins on memory it
> +expects the QEMU emulator thread to write, which leads to a hang

.. Is the QEMU emulator writing to the spinlock memory?

> +(because the FIFO:1 vcpu is never scheduled in favor of QEMU emulator
> +thread).
> +So this hypercall is supposed to be called by the guest when
> +the OS knows it is not going to busy-spin on memory that is
> +written by the emulator thread, as above.
> +
> +a0: bit 0 is the enable bit: 0 indicates that SCHED_OTHER
> +priority should be set for the vcpu, 1 indicates SCHED_FIFO
> +priority (the actual value of the FIFO priority is decided
> +by the host).
> +
> +
> Index: kvm.fifopriohc-submit/include/uapi/linux/kvm_para.h
> ===================================================================
> --- kvm.fifopriohc-submit.orig/include/uapi/linux/kvm_para.h
> +++ kvm.fifopriohc-submit/include/uapi/linux/kvm_para.h
> @@ -15,6 +15,7 @@
>  #define KVM_E2BIG		E2BIG
>  #define KVM_EPERM		EPERM
>  #define KVM_EOPNOTSUPP		95
> +#define KVM_EINVAL		EINVAL
>  
>  #define KVM_HC_VAPIC_POLL_IRQ		1
>  #define KVM_HC_MMU_OP			2
> @@ -25,6 +26,7 @@
>  #define KVM_HC_MIPS_EXIT_VM		7
>  #define KVM_HC_MIPS_CONSOLE_OUTPUT	8
>  #define KVM_HC_CLOCK_PAIRING		9
> +#define KVM_HC_RT_PRIO			10
>  
>  /*
>   * hypercalls use architecture specific
> Index: kvm.fifopriohc-submit/arch/x86/kvm/x86.c
> ===================================================================
> --- kvm.fifopriohc-submit.orig/arch/x86/kvm/x86.c
> +++ kvm.fifopriohc-submit/arch/x86/kvm/x86.c
> @@ -66,6 +66,8 @@
>  #include <asm/pvclock.h>
>  #include <asm/div64.h>
>  #include <asm/irq_remapping.h>
> +#include <uapi/linux/sched/types.h>
> +#include <uapi/linux/sched.h>
>  
>  #define CREATE_TRACE_POINTS
>  #include "trace.h"
> @@ -6261,6 +6263,44 @@ void kvm_vcpu_deactivate_apicv(struct kv
>  	kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
>  }
>  
> +static int convert_to_kvm_errcode(int error)
> +{
> +	switch (error) {
> +	case -EPERM:
> +		return -KVM_EPERM;
> +	case -EINVAL:
> +	default:
> +		return -KVM_EINVAL;
> +	}
> +}
> +
> +int kvm_pv_rt_prio(struct kvm_vcpu *vcpu, unsigned long a0)
> +{
> +	int ret;
> +	bool enable;
> +	struct sched_param param;
> +
> +	memset(&param, 0, sizeof(struct sched_param));
> +	param.sched_priority = vcpu->arch.rt_sched_priority;
> +
> +	enable = a0 & 0x1;
> +
> +	if (vcpu->arch.enable_rt_prio_hc == false)
> +		return -KVM_EPERM;
> +
> +	if (enable) {
> +		ret = sched_setscheduler(current, SCHED_FIFO, &param);
> +	} else {
> +		param.sched_priority = 0;
> +		ret = sched_setscheduler(current, SCHED_NORMAL, &param);
> +	}
> +
> +	if (ret)
> +		ret = convert_to_kvm_errcode(ret);
> +
> +	return ret;
> +}
> +
>  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>  {
>  	unsigned long nr, a0, a1, a2, a3, ret;
> @@ -6306,6 +6346,9 @@ int kvm_emulate_hypercall(struct kvm_vcp
>  		ret = kvm_pv_clock_pairing(vcpu, a0, a1);
>  		break;
>  #endif
> +	case KVM_HC_RT_PRIO:
> +		ret = kvm_pv_rt_prio(vcpu, a0);
> +		break;
>  	default:
>  		ret = -KVM_ENOSYS;
>  		break;
> 
>
Paolo Bonzini Sept. 21, 2017, 1:49 p.m. UTC | #2
On 21/09/2017 15:32, Konrad Rzeszutek Wilk wrote:
> So the guest can change the scheduling decisions at the host level?
> And the host HAS to follow it? There is no policy override for the
> host to say - nah, not going to do it?
> 
> Also wouldn't the guest want to always be at SCHED_FIFO? [I am thinking
> of a guest admin who wants all the CPU resources he can get]

Yeah, I do not understand why there should be a housekeeping VCPU that
is running at SCHED_NORMAL.  If it hurts, don't do it...

Paolo
Marcelo Tosatti Sept. 22, 2017, 1:08 a.m. UTC | #3
On Thu, Sep 21, 2017 at 03:49:33PM +0200, Paolo Bonzini wrote:
> On 21/09/2017 15:32, Konrad Rzeszutek Wilk wrote:
> > So the guest can change the scheduling decisions at the host level?
> > And the host HAS to follow it? There is no policy override for the
> > host to say - nah, not going to do it?

In that case the host should not even configure the guest with this
option (this is QEMU's 'enable-rt-fifo-hc' option).

> > Also wouldn't the guest want to always be at SCHED_FIFO? [I am thinking
> > of a guest admin who wants all the CPU resources he can get]

No. Because in the following code, executed by the housekeeping vCPU
running at constant SCHED_FIFO priority:

	1. Start disk I/O.
	2. busy spin

With the emulator thread sharing the same pCPU as the housekeeping
vCPU, the emulator thread (which runs at SCHED_NORMAL) will never
be scheduled in place of the vcpu thread running at SCHED_FIFO.

This causes a hang.

> Yeah, I do not understand why there should be a housekeeping VCPU that
> is running at SCHED_NORMAL.  If it hurts, don't do it...

Hope the explanation above makes sense (in fact, it was you who pointed
out that SCHED_FIFO should not be constant on the housekeeping vCPU
when it shares a pCPU with the emulator thread at SCHED_NORMAL).
Paolo Bonzini Sept. 22, 2017, 7:23 a.m. UTC | #4
On 22/09/2017 03:08, Marcelo Tosatti wrote:
> On Thu, Sep 21, 2017 at 03:49:33PM +0200, Paolo Bonzini wrote:
>> On 21/09/2017 15:32, Konrad Rzeszutek Wilk wrote:
>>> So the guest can change the scheduling decisions at the host level?
>>> And the host HAS to follow it? There is no policy override for the
>>> host to say - nah, not going to do it?
> 
> In that case the host should not even configure the guest with this
> option (this is QEMU's 'enable-rt-fifo-hc' option).
> 
>>> Also wouldn't the guest want to always be at SCHED_FIFO? [I am thinking
>>> of a guest admin who wants all the CPU resources he can get]
> 
> No. Because in the following code, executed by the housekeeping vCPU
> running at constant SCHED_FIFO priority:
> 
> 	1. Start disk I/O.
> 	2. busy spin
> 
> With the emulator thread sharing the same pCPU with the housekeeping
> vCPU, the emulator thread (which runs at SCHED_NORMAL), will never
> be scheduled in in place of the vcpu thread at SCHED_FIFO.
> 
> This causes a hang.

But if the emulator thread can interrupt the housekeeping thread, the
emulator thread should also be SCHED_FIFO at higher priority; IIRC this
was in Jan's talk from a few years ago.

QEMU would also have to use PI mutexes (which is the main reason why
it's using QemuMutex instead of e.g. GMutex).

>> Yeah, I do not understand why there should be a housekeeping VCPU that
>> is running at SCHED_NORMAL.  If it hurts, don't do it...
> 
> Hope explanation above makes sense (in fact, it was you who pointed 
> out SCHED_FIFO should not be constant on the housekeeping vCPU,
> when sharing pCPU with emulator thread at SCHED_NORMAL).

The two are not exclusive... As you point out, it depends on the
workload.  For DPDK you can put both of them at SCHED_NORMAL.  For
kernel-intensive uses you must use SCHED_FIFO.

Perhaps we could consider running these threads at SCHED_RR instead.
Unlike SCHED_NORMAL, I am not against a hypercall that temporarily
bumps SCHED_RR to SCHED_FIFO, but perhaps that's not even necessary.

Paolo
Marcelo Tosatti Sept. 22, 2017, 12:24 p.m. UTC | #5
On Fri, Sep 22, 2017 at 09:23:47AM +0200, Paolo Bonzini wrote:
> On 22/09/2017 03:08, Marcelo Tosatti wrote:
> > On Thu, Sep 21, 2017 at 03:49:33PM +0200, Paolo Bonzini wrote:
> >> On 21/09/2017 15:32, Konrad Rzeszutek Wilk wrote:
> >>> So the guest can change the scheduling decisions at the host level?
> >>> And the host HAS to follow it? There is no policy override for the
> >>> host to say - nah, not going to do it?
> > 
> > In that case the host should not even configure the guest with this
> > option (this is QEMU's 'enable-rt-fifo-hc' option).
> > 
> >>> Also wouldn't the guest want to always be at SCHED_FIFO? [I am thinking
> >>> of a guest admin who wants all the CPU resources he can get]
> > 
> > No. Because in the following code, executed by the housekeeping vCPU
> > running at constant SCHED_FIFO priority:
> > 
> > 	1. Start disk I/O.
> > 	2. busy spin
> > 
> > With the emulator thread sharing the same pCPU with the housekeeping
> > vCPU, the emulator thread (which runs at SCHED_NORMAL), will never
> > be scheduled in in place of the vcpu thread at SCHED_FIFO.
> > 
> > This causes a hang.
> 
> But if the emulator thread can interrupt the housekeeping thread, the
> emulator thread should also be SCHED_FIFO at higher priority; IIRC this
> was in Jan's talk from a few years ago.

The point is we do not want the emulator thread to interrupt the
housekeeping thread at all times: we only want it to interrupt the
housekeeping thread when the latter is not in a spinlock-protected
section (because preemption there has an effect on realtime vcpus
attempting to grab that particular spinlock).

Outside such sections, the emulator thread can interrupt the
housekeeping thread.

> QEMU would also have to use PI mutexes (which is the main reason why
> it's using QemuMutex instead of e.g. GMutex).
> 
> >> Yeah, I do not understand why there should be a housekeeping VCPU that
> >> is running at SCHED_NORMAL.  If it hurts, don't do it...
> > 
> > Hope explanation above makes sense (in fact, it was you who pointed 
> > out SCHED_FIFO should not be constant on the housekeeping vCPU,
> > when sharing pCPU with emulator thread at SCHED_NORMAL).
> 
> The two are not exclusive... As you point out, it depends on the
> workload.  For DPDK you can put both of them at SCHED_NORMAL.  For
> kernel-intensive uses you must use SCHED_FIFO.
> 
> Perhaps we could consider running these threads at SCHED_RR instead.
> Unlike SCHED_NORMAL, I am not against a hypercall that bumps temporarily
> SCHED_RR to SCHED_FIFO, but perhaps that's not even necessary.

Sorry Paolo, I don't see how SCHED_RR is going to help here:

"   SCHED_RR: Round-robin scheduling
       SCHED_RR is a simple enhancement of SCHED_FIFO.  Everything
       described above for SCHED_FIFO also applies to SCHED_RR, except
       that each thread is allowed to run only for a maximum time
       quantum."

What must happen is that vcpu0 should run _until it is finished with
the spinlock-protected section_ (that is, any job the emulator thread
has, in the period where vcpu0 has work to do, is of lower priority
and must not execute). Otherwise vcpu1, running a realtime workload,
will attempt to grab the spinlock vcpu0 holds, and busy-spin
waiting on the emulator thread to finish.

If you have the emulator thread at a higher priority than vcpu0, as you 
suggested above, the same problem will happen. So that option is not
viable.

We tried to have vcpu0 at SCHED_FIFO at all times, to avoid this
hypercall, but unfortunately that causes the hang described in the
trace.

So I fail to see how SCHED_RR would help here?

Thanks