Message ID: 20170217151227.GA27770@potion (mailing list archive)
State: New, archived
On 02/17/2017 04:12 PM, Radim Krčmář wrote:
> 2017-02-17 14:10+0100, Christian Borntraeger:
>> needed by KVM common code.
>>
>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>> ---
>>  arch/s390/kernel/smp.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
>> index 4c9ebb3..a4d7124 100644
>> --- a/arch/s390/kernel/smp.c
>> +++ b/arch/s390/kernel/smp.c
>> @@ -513,6 +513,7 @@ void smp_send_reschedule(int cpu)
>>  {
>>  	pcpu_ec_call(pcpu_devices + cpu, ec_schedule);
>>  }
>> +EXPORT_SYMBOL_GPL(smp_send_reschedule);
>
> s390 doesn't want to use smp_send_reschedule(), so I think the patch at
> the bottom is going in a better direction.
>
> Btw. I think that we shouldn't be using smp_send_reschedule() for
> forcing VCPUs out of guest mode anyway -- the task we want to run is
> already scheduled and we don't want the scheduler to do anything.
>
> We could use smp_call_function() with an empty function instead, but
> that also has high overhead ... allocating a new IPI seems best.
>
> But optimizing the current code is premature. :)
>
> ---8<---
> s390 has a different mechanism for bringing the VCPU out of guest mode.
> Make the kick arch-specific.

Looks good. The kick does not have to be synchronous, and it's OK if we
reenter the guest as long as we execute the request in a timely manner,
correct?

e.g.
- kick vcpu
- vcpu enters SIE
- vcpu exits SIE immediately
- vcpu handles request
- vcpu enters SIE

would be perfectly fine? In that case the s390 implementation could be
pretty simple.

> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
> ---
>  arch/arm/kvm/arm.c         | 5 +++++
>  arch/mips/kvm/mips.c       | 5 +++++
>  arch/powerpc/kvm/powerpc.c | 5 +++++
>  arch/s390/kvm/kvm-s390.c   | 6 ++++++
>  arch/x86/kvm/x86.c         | 5 +++++
>  include/linux/kvm_host.h   | 1 +
>  virt/kvm/kvm_main.c        | 2 +-
>  7 files changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 21c493a9e5c9..a52b0399fa43 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -97,6 +97,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
>  }
>
> +void kvm_arch_cpu_kick(int cpu)
> +{
> +	smp_send_reschedule(cpu);
> +}
> +
>  int kvm_arch_hardware_setup(void)
>  {
>  	return 0;
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 31ee5ee0010b..8ed510d7f8c5 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -78,6 +78,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  	return 1;
>  }
>
> +void kvm_arch_cpu_kick(int cpu)
> +{
> +	smp_send_reschedule(cpu);
> +}
> +
>  int kvm_arch_hardware_enable(void)
>  {
>  	return 0;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index b094d9c1e1ef..b31149e63817 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -60,6 +60,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  	return 1;
>  }
>
> +void kvm_arch_cpu_kick(int cpu)
> +{
> +	smp_send_reschedule(cpu);
> +}
> +
>  /*
>   * Common checks before entering the guest world. Call with interrupts
>   * disabled.
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 8e36734fa5b6..f67ec9796f5e 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2131,6 +2131,12 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  	return 0;
>  }
>
> +void kvm_arch_cpu_kick(int cpu)
> +{
> +	/* TODO: implement kicking with SIE */
> +	BUG();
> +}
> +
>  static int kvm_arch_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu,
>  					   struct kvm_one_reg *reg)
>  {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 19c853f87dd4..d6f40f20c0b5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8405,6 +8405,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
>  }
>
> +void kvm_arch_cpu_kick(int cpu)
> +{
> +	smp_send_reschedule(cpu);
> +}
> +
>  int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
>  {
>  	return kvm_x86_ops->interrupt_allowed(vcpu);
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 5438548fec10..9872bd7713f1 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -766,6 +766,7 @@ void kvm_arch_hardware_unsetup(void);
>  void kvm_arch_check_processor_compat(void *rtn);
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
> +void kvm_arch_cpu_kick(int cpu);
>
>  void *kvm_kvzalloc(unsigned long size);
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 8d2ef0661ea8..4f4d250e1f53 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2240,7 +2240,7 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
>  	me = get_cpu();
>  	if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
>  		if (kvm_arch_vcpu_should_kick(vcpu))
> -			smp_send_reschedule(cpu);
> +			kvm_arch_cpu_kick(cpu);
>  	put_cpu();
>  }
>  EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
On 17/02/2017 16:46, Christian Borntraeger wrote:
> Looks good. The kick does not have to be synchronous, and it's OK if we
> reenter the guest as long as we execute the request in a timely manner,
> correct?
>
> e.g.
> - kick vcpu
> - vcpu enters SIE
> - vcpu exits SIE immediately
> - vcpu handles request
> - vcpu enters SIE
>
> would be perfectly fine?

Yes, it would. There's some parallel with QEMU's qemu_cpu_kick, where
the signal would be processed immediately after entering KVM_RUN.

Paolo
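For reference, the QEMU parallel Paolo mentions boils down to signaling
the vCPU thread. A simplified sketch, modeled on qemu_cpu_kick_thread()
in QEMU's cpus.c of that period -- field names and error handling may
differ between QEMU versions. The pending SIG_IPI either interrupts a
running KVM_RUN or makes the next one return to userspace immediately:

---snip-----
/* Simplified sketch of QEMU's vCPU kick on POSIX hosts. */
static void qemu_cpu_kick_thread(CPUState *cpu)
{
    int err;

    if (cpu->thread_kicked) {   /* coalesce concurrent kicks */
        return;
    }
    cpu->thread_kicked = true;
    err = pthread_kill(cpu->thread->thread, SIG_IPI);
    if (err) {
        fprintf(stderr, "qemu:%s: %s", __func__, strerror(err));
        exit(1);
    }
}
---snip-----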
On 02/17/2017 05:23 PM, Paolo Bonzini wrote:
>
> On 17/02/2017 16:46, Christian Borntraeger wrote:
>> Looks good. The kick does not have to be synchronous, and it's OK if we
>> reenter the guest as long as we execute the request in a timely manner,
>> correct?
>>
>> e.g.
>> - kick vcpu
>> - vcpu enters SIE
>> - vcpu exits SIE immediately
>> - vcpu handles request
>> - vcpu enters SIE
>>
>> would be perfectly fine?
>
> Yes, it would. There's some parallel with QEMU's qemu_cpu_kick, where
> the signal would be processed immediately after entering KVM_RUN.

Something like

---snip-----
	struct kvm_s390_sie_block *scb = READ_ONCE(vcpu->arch.vsie_block);

	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
	if (scb)
		atomic_or(CPUSTAT_STOP_INT, &scb->cpuflags);
---snip-----

or

---snip-----
	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
	kvm_s390_vsie_kick(vcpu);
---snip-----

should be enough then. The code will either delete that stop request when
processing interrupts, but then the requests will be handled afterwards,
or we enter the guest once, exit, and then process the requests in the
next loop iteration.

As I am on my way into the weekend, this needs double checking.
On 17.02.2017 at 16:12, Radim Krčmář wrote:
> 2017-02-17 14:10+0100, Christian Borntraeger:
>> needed by KVM common code.
>>
>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>> ---
>>  arch/s390/kernel/smp.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
>> index 4c9ebb3..a4d7124 100644
>> --- a/arch/s390/kernel/smp.c
>> +++ b/arch/s390/kernel/smp.c
>> @@ -513,6 +513,7 @@ void smp_send_reschedule(int cpu)
>>  {
>>  	pcpu_ec_call(pcpu_devices + cpu, ec_schedule);
>>  }
>> +EXPORT_SYMBOL_GPL(smp_send_reschedule);
>
> s390 doesn't want to use smp_send_reschedule(), so I think the patch at
> the bottom is going in a better direction.
>
> Btw. I think that we shouldn't be using smp_send_reschedule() for
> forcing VCPUs out of guest mode anyway -- the task we want to run is
> already scheduled and we don't want the scheduler to do anything.
>
> We could use smp_call_function() with an empty function instead, but
> that also has high overhead ... allocating a new IPI seems best.
>
> But optimizing the current code is premature. :)
>
> ---8<---
> s390 has a different mechanism for bringing the VCPU out of guest mode.
> Make the kick arch-specific.
>
> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
> ---
>  arch/arm/kvm/arm.c         | 5 +++++
>  arch/mips/kvm/mips.c       | 5 +++++
>  arch/powerpc/kvm/powerpc.c | 5 +++++
>  arch/s390/kvm/kvm-s390.c   | 6 ++++++
>  arch/x86/kvm/x86.c         | 5 +++++
>  include/linux/kvm_host.h   | 1 +
>  virt/kvm/kvm_main.c        | 2 +-
>  7 files changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 21c493a9e5c9..a52b0399fa43 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -97,6 +97,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
>  }
>

That's what I had in mind. Looks good.
>> Yes, it would. There's some parallel with QEMU's qemu_cpu_kick, where
>> the signal would be processed immediately after entering KVM_RUN.
>
> Something like
>
> ---snip-----
> 	struct kvm_s390_sie_block *scb = READ_ONCE(vcpu->arch.vsie_block);
>
> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
> 	if (scb)
> 		atomic_or(CPUSTAT_STOP_INT, &scb->cpuflags);
> ---snip-----
>
> or
> ---snip-----
> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
> 	kvm_s390_vsie_kick(vcpu);
> ---snip-----

I'd go for the latter one. Keep the vsie stuff isolated. Please note
that the vsie loop currently doesn't look at vcpu->requests yet. (I left
it out because it's just racy either way, and all current requests don't
need to be processed immediately -- and it doesn't harm.)

> should be enough then. The code will either delete that stop request when
> processing interrupts, but then the requests will be handled afterwards,
> or we enter the guest once, exit, and then process the requests in the
> next loop iteration.
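If the vsie race David mentions ever needed closing, one option would be
a check before re-entering the nested SIE. A minimal sketch, assuming it
sits in the vsie loop in arch/s390/kvm/vsie.c -- the placement and the
surrounding loop structure are assumptions here:

---snip-----
	/*
	 * Assumed placement inside the vsie loop: drop back to the main
	 * run loop if requests are pending. vcpu->requests is a bitmap,
	 * so a plain truthiness test suffices.
	 */
	if (READ_ONCE(vcpu->requests))
		break;
---snip-----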
On 02/17/2017 06:10 PM, David Hildenbrand wrote:
>
>>> Yes, it would. There's some parallel with QEMU's qemu_cpu_kick, where
>>> the signal would be processed immediately after entering KVM_RUN.
>>
>> Something like
>>
>> ---snip-----
>> 	struct kvm_s390_sie_block *scb = READ_ONCE(vcpu->arch.vsie_block);
>>
>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>> 	if (scb)
>> 		atomic_or(CPUSTAT_STOP_INT, &scb->cpuflags);
>> ---snip-----
>>
>> or
>> ---snip-----
>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>> 	kvm_s390_vsie_kick(vcpu);
>> ---snip-----
>
> I'd go for the latter one. Keep the vsie stuff isolated. Please note

Yes, makes sense.

Radim, if you go with this patch, something like this can be used as the
s390 variant of kvm_arch_cpu_kick:

---snip---
	/*
	 * The stop indication is reset in the interrupt code. As the CPU
	 * loop handles requests after interrupts, we will
	 * a: miss the request handler and enter the guest, but then the
	 * stop request will exit the CPU and handle the request in the next
	 * round, or
	 * b: handle the request directly before entering the guest
	 */
	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
	kvm_s390_vsie_kick(vcpu);
---snip---

Feel free to add that to your patch. I can also send a fixup patch later
on if you prefer that.

Christian
On 20.02.2017 at 12:12, Christian Borntraeger wrote:
> On 02/17/2017 06:10 PM, David Hildenbrand wrote:
>>
>>>> Yes, it would. There's some parallel with QEMU's qemu_cpu_kick, where
>>>> the signal would be processed immediately after entering KVM_RUN.
>>>
>>> Something like
>>>
>>> ---snip-----
>>> 	struct kvm_s390_sie_block *scb = READ_ONCE(vcpu->arch.vsie_block);
>>>
>>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>>> 	if (scb)
>>> 		atomic_or(CPUSTAT_STOP_INT, &scb->cpuflags);
>>> ---snip-----
>>>
>>> or
>>> ---snip-----
>>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>>> 	kvm_s390_vsie_kick(vcpu);
>>> ---snip-----
>>
>> I'd go for the latter one. Keep the vsie stuff isolated. Please note
>
> Yes, makes sense.
>
> Radim, if you go with this patch, something like this can be used as the
> s390 variant of kvm_arch_cpu_kick:
>
> ---snip---
> 	/*
> 	 * The stop indication is reset in the interrupt code. As the CPU
> 	 * loop handles requests after interrupts, we will
> 	 * a: miss the request handler and enter the guest, but then the
> 	 * stop request will exit the CPU and handle the request in the next
> 	 * round, or
> 	 * b: handle the request directly before entering the guest
> 	 */
> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
> 	kvm_s390_vsie_kick(vcpu);
> ---snip---
>
> Feel free to add that to your patch. I can also send a fixup patch later
> on if you prefer that.

kvm_arch_vcpu_should_kick() then also has to be changed to return 1 for
now.

An interesting thing to note is how vcpu->cpu is used.

Again, as s390x can preempt just before entering the guest, vcpu_kick()
might see vcpu->cpu = -1. Therefore, kvm_arch_vcpu_should_kick() won't
even be called. So our cpu might go into guest mode and stay there
longer than expected (as we won't kick it).

On x86, it works the following way:

If vcpu->cpu is -1, there is no need to kick the VCPU. It will check for
requests when preemption is disabled, therefore when rescheduled.

If vcpu->cpu is set, kvm_arch_vcpu_should_kick() remembers if the VCPU
has already been kicked while in the critical section. It will get
kicked by smp resched as soon as it enters guest mode.

So here, disabled preemption + checks in the section with disabled
preemption (for requests and EXITING_GUEST_MODE) make sure that the
guest will leave guest mode and process requests in a timely fashion.

On s390x, this is not 100% true. vcpu->cpu cannot be used as an
indicator whether a kick is necessary. Either that is OK for now, or the
vcpu->cpu != -1 check has to be disabled for s390x, e.g. by moving the
check into kvm_arch_vcpu_should_kick().
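For reference, the x86 scheme described above hinges on vcpu->mode. A
sketch of the two relevant pieces, modeled on include/linux/kvm_host.h
and arch/x86/kvm/x86.c of this period (exact code may differ by kernel
version):

---snip-----
/*
 * The cmpxchg atomically reads vcpu->mode and latches the transition to
 * EXITING_GUEST_MODE, so of several concurrent kickers only the first
 * one sees IN_GUEST_MODE and sends the IPI. The entry path rechecks
 * vcpu->mode and vcpu->requests with preemption disabled, which closes
 * the window David describes.
 */
static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
{
	return cmpxchg(&vcpu->mode, IN_GUEST_MODE, EXITING_GUEST_MODE);
}

int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
{
	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
}
---snip-----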
2017-02-20 12:12+0100, Christian Borntraeger:
> On 02/17/2017 06:10 PM, David Hildenbrand wrote:
>>
>>>> Yes, it would. There's some parallel with QEMU's qemu_cpu_kick, where
>>>> the signal would be processed immediately after entering KVM_RUN.
>>>
>>> Something like
>>>
>>> ---snip-----
>>> 	struct kvm_s390_sie_block *scb = READ_ONCE(vcpu->arch.vsie_block);
>>>
>>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>>> 	if (scb)
>>> 		atomic_or(CPUSTAT_STOP_INT, &scb->cpuflags);
>>> ---snip-----
>>>
>>> or
>>> ---snip-----
>>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>>> 	kvm_s390_vsie_kick(vcpu);
>>> ---snip-----
>>
>> I'd go for the latter one. Keep the vsie stuff isolated. Please note
>
> Yes, makes sense.
>
> Radim, if you go with this patch, something like this can be used as the
> s390 variant of kvm_arch_cpu_kick:
>
> ---snip---
> 	/*
> 	 * The stop indication is reset in the interrupt code. As the CPU
> 	 * loop handles requests after interrupts, we will
> 	 * a: miss the request handler and enter the guest, but then the
> 	 * stop request will exit the CPU and handle the request in the next
> 	 * round, or
> 	 * b: handle the request directly before entering the guest
> 	 */
> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
> 	kvm_s390_vsie_kick(vcpu);
> ---snip---
>
> Feel free to add that to your patch. I can also send a fixup patch later
> on if you prefer that.

I'll probably use it. Just looking at the hunk made me realize that the
interface will need to pass the vcpu as well, so it'd be best to have
the whole picture.

Thanks.
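A sketch of what passing the vcpu could look like -- kvm_arch_vcpu_kick()
is an assumed name here, not an existing symbol; the s390 body is
Christian's snippet from above:

---snip-----
/* generic side, in kvm_vcpu_kick() */
	if (kvm_arch_vcpu_should_kick(vcpu))
		kvm_arch_vcpu_kick(vcpu);

/* x86/arm/mips/power: same behavior as before, different signature */
void kvm_arch_vcpu_kick(struct kvm_vcpu *vcpu)
{
	smp_send_reschedule(vcpu->cpu);
}

/* s390: Christian's proposed variant */
void kvm_arch_vcpu_kick(struct kvm_vcpu *vcpu)
{
	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
	kvm_s390_vsie_kick(vcpu);
}
---snip-----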
2017-02-20 12:35+0100, David Hildenbrand:
> On 20.02.2017 at 12:12, Christian Borntraeger wrote:
>> On 02/17/2017 06:10 PM, David Hildenbrand wrote:
>>>
>>>>> Yes, it would. There's some parallel with QEMU's qemu_cpu_kick, where
>>>>> the signal would be processed immediately after entering KVM_RUN.
>>>>
>>>> Something like
>>>>
>>>> ---snip-----
>>>> 	struct kvm_s390_sie_block *scb = READ_ONCE(vcpu->arch.vsie_block);
>>>>
>>>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>>>> 	if (scb)
>>>> 		atomic_or(CPUSTAT_STOP_INT, &scb->cpuflags);
>>>> ---snip-----
>>>>
>>>> or
>>>> ---snip-----
>>>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>>>> 	kvm_s390_vsie_kick(vcpu);
>>>> ---snip-----
>>>
>>> I'd go for the latter one. Keep the vsie stuff isolated. Please note
>>
>> Yes, makes sense.
>>
>> Radim, if you go with this patch, something like this can be used as the
>> s390 variant of kvm_arch_cpu_kick:
>>
>> ---snip---
>> 	/*
>> 	 * The stop indication is reset in the interrupt code. As the CPU
>> 	 * loop handles requests after interrupts, we will
>> 	 * a: miss the request handler and enter the guest, but then the
>> 	 * stop request will exit the CPU and handle the request in the next
>> 	 * round, or
>> 	 * b: handle the request directly before entering the guest
>> 	 */
>> 	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
>> 	kvm_s390_vsie_kick(vcpu);
>> ---snip---
>>
>> Feel free to add that to your patch. I can also send a fixup patch later
>> on if you prefer that.
>
> kvm_arch_vcpu_should_kick() then also has to be changed to return 1 for
> now.
>
> An interesting thing to note is how vcpu->cpu is used.
>
> Again, as s390x can preempt just before entering the guest, vcpu_kick()
> might see vcpu->cpu = -1. Therefore, kvm_arch_vcpu_should_kick() won't
> even be called. So our cpu might go into guest mode and stay there
> longer than expected (as we won't kick it).
>
> On x86, it works the following way:
>
> If vcpu->cpu is -1, there is no need to kick the VCPU. It will check for
> requests when preemption is disabled, therefore when rescheduled.
>
> If vcpu->cpu is set, kvm_arch_vcpu_should_kick() remembers if the VCPU
> has already been kicked while in the critical section. It will get
> kicked by smp resched as soon as it enters guest mode.
>
> So here, disabled preemption + checks in the section with disabled
> preemption (for requests and EXITING_GUEST_MODE) make sure that the
> guest will leave guest mode and process requests in a timely fashion.
>
> On s390x, this is not 100% true. vcpu->cpu cannot be used as an
> indicator whether a kick is necessary. Either that is OK for now, or the
> vcpu->cpu != -1 check has to be disabled for s390x, e.g. by moving the
> check into kvm_arch_vcpu_should_kick().

Good point. So s390 doesn't need vcpu->cpu and only sets it because
other arches do?

And do I understand it correctly that the s390 SIE block operations have
no side effect, apart from changed memory, when outside of guest mode?
(We have cpu->mode mostly because interrupts are expensive. :])

In the end, I'd like to use kvm_vcpu_kick() for kvm_s390_vcpu_wakeup().
s390 sets vcpu->preempted to get a performance boost, which makes
touching it less than desirable ...

On s390, vcpu->preempted is only used in __diag_time_slice_end(), which
seems to be a type of spinning-on-a-taken-lock hypercall -- any reason
why that optimization shouldn't work on other architectures?

Thanks.
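For context, the hypercall Radim refers to is s390's handler for
DIAGNOSE 0x44 (voluntary time-slice end), which funnels into KVM's
generic directed-yield code, where vcpu->preempted guides the choice of
a yield target. A sketch from arch/s390/kvm/diag.c of this era (details
may vary by kernel version):

---snip-----
static int __diag_time_slice_end(struct kvm_vcpu *vcpu)
{
	VCPU_EVENT(vcpu, 5, "%s", "diag time slice end");
	vcpu->stat.diagnose_44++;
	kvm_vcpu_on_spin(vcpu);	/* consults vcpu->preempted when picking
				 * a candidate to yield to */
	return 0;
}
---snip-----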
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 21c493a9e5c9..a52b0399fa43 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -97,6 +97,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
 }
 
+void kvm_arch_cpu_kick(int cpu)
+{
+	smp_send_reschedule(cpu);
+}
+
 int kvm_arch_hardware_setup(void)
 {
 	return 0;
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 31ee5ee0010b..8ed510d7f8c5 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -78,6 +78,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+void kvm_arch_cpu_kick(int cpu)
+{
+	smp_send_reschedule(cpu);
+}
+
 int kvm_arch_hardware_enable(void)
 {
 	return 0;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index b094d9c1e1ef..b31149e63817 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -60,6 +60,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+void kvm_arch_cpu_kick(int cpu)
+{
+	smp_send_reschedule(cpu);
+}
+
 /*
  * Common checks before entering the guest world. Call with interrupts
  * disabled.
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 8e36734fa5b6..f67ec9796f5e 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2131,6 +2131,12 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+void kvm_arch_cpu_kick(int cpu)
+{
+	/* TODO: implement kicking with SIE */
+	BUG();
+}
+
 static int kvm_arch_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu,
 					   struct kvm_one_reg *reg)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 19c853f87dd4..d6f40f20c0b5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8405,6 +8405,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
 }
 
+void kvm_arch_cpu_kick(int cpu)
+{
+	smp_send_reschedule(cpu);
+}
+
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
 	return kvm_x86_ops->interrupt_allowed(vcpu);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 5438548fec10..9872bd7713f1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -766,6 +766,7 @@ void kvm_arch_hardware_unsetup(void);
 void kvm_arch_check_processor_compat(void *rtn);
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
+void kvm_arch_cpu_kick(int cpu);
 
 void *kvm_kvzalloc(unsigned long size);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8d2ef0661ea8..4f4d250e1f53 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2240,7 +2240,7 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 	me = get_cpu();
 	if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
 		if (kvm_arch_vcpu_should_kick(vcpu))
-			smp_send_reschedule(cpu);
+			kvm_arch_cpu_kick(cpu);
 	put_cpu();
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
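For orientation, a typical caller of the path this patch touches raises
a request and then kicks; the request constant below is just an x86
example, not something this patch adds:

---snip-----
	/* e.g. from another thread that wants the VCPU to notice state: */
	kvm_make_request(KVM_REQ_EVENT, vcpu);
	kvm_vcpu_kick(vcpu);	/* now routed through kvm_arch_cpu_kick() */
---snip-----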