
[05/17] Stop/start cpus before/after devices

Message ID 1242574999-20887-6-git-send-email-aliguori@us.ibm.com (mailing list archive)
State New, archived

Commit Message

Anthony Liguori May 17, 2009, 3:43 p.m. UTC
From: Yaniv Kamay <yaniv@qumranet.com>

Stop cpus before devices when stopping the VM, start cpus after devices
when starting the VM.  Otherwise a vcpu could access a stopped device.

Acked-by: Dor Laor <dlaor@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
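
The net effect of the patch (full diff under "Patch" below) is the following
ordering in vl.c; this is a condensed view of the two vl.c hunks, with the
tick and alarm-timer handling kept only as context:

void vm_start(void)
{
    if (!vm_running) {
        cpu_enable_ticks();
        vm_running = 1;
        vm_state_notify(1, 0);              /* start devices first ...      */
        if (kvm_enabled())
            qemu_kvm_resume_all_threads();  /* ... then let the vcpus run   */
        qemu_rearm_alarm_timer(alarm_timer);
    }
}

void vm_stop(int reason)
{
    if (vm_running) {
        cpu_disable_ticks();
        vm_running = 0;
        if (kvm_enabled())
            qemu_kvm_pause_all_threads();   /* park the vcpus first ...     */
        vm_state_notify(0, reason);         /* ... then stop the devices    */
    }
}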

Comments

Avi Kivity May 17, 2009, 4:59 p.m. UTC | #1
Anthony Liguori wrote:
> From: Yaniv Kamay <yaniv@qumranet.com>
>
> Stop cpus before devices when stopping the VM, start cpus after devices
> when starting VM.  Otherwise a vcpu could access a stopped device.
>
>   

IIRC we decided this was unnecessary since everything is under 
qemu_mutex.  However we might want to be on the safe side here and be 
similar to master.

Marcelo?
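
A minimal sketch of the qemu_mutex argument, for readers without the qemu-kvm
threading model in their head. This is hypothetical, simplified code, not the
actual qemu-kvm source; only the lock-ordering point matters:

#include <pthread.h>

static pthread_mutex_t qemu_mutex = PTHREAD_MUTEX_INITIALIZER;

/* vcpu thread: device emulation (MMIO/PIO exits) runs only with
 * qemu_mutex held. */
static void vcpu_handle_io_exit(void)
{
    pthread_mutex_lock(&qemu_mutex);
    /* ... dispatch to the emulated device ... */
    pthread_mutex_unlock(&qemu_mutex);
}

/* io/monitor thread: vm_stop()/vm_start() are likewise called with
 * qemu_mutex held, so a device state change can never interleave with a
 * vcpu's device access, whichever order the cpus are paused in. */
static void monitor_stop_vm(void)
{
    pthread_mutex_lock(&qemu_mutex);
    /* vm_stop(0);  stop devices and pause vcpus, in either order */
    pthread_mutex_unlock(&qemu_mutex);
}
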
Marcelo Tosatti May 18, 2009, 1:58 p.m. UTC | #2
On Sun, May 17, 2009 at 07:59:51PM +0300, Avi Kivity wrote:
> Anthony Liguori wrote:
>> From: Yaniv Kamay <yaniv@qumranet.com>
>>
>> Stop cpus before devices when stopping the VM, start cpus after devices
>> when starting VM.  Otherwise a vcpu could access a stopped device.
>>
>>   
>
> IIRC we decided this was unnecessary since everything is under  
> qemu_mutex.  However we might want to be on the safe side here and be  
> similar to master.

Yes, it is not necessary. Since upstream is already on the safe side
(and the plan is to merge with it), IMO better leave this patch out.

Anthony Liguori May 18, 2009, 2:26 p.m. UTC | #3
Marcelo Tosatti wrote:
> On Sun, May 17, 2009 at 07:59:51PM +0300, Avi Kivity wrote:
>   
>> Anthony Liguori wrote:
>>     
>>> From: Yaniv Kamay <yaniv@qumranet.com>
>>>
>>> Stop cpus before devices when stopping the VM, start cpus after devices
>>> when starting VM.  Otherwise a vcpu could access a stopped device.
>>>
>>>   
>>>       
>> IIRC we decided this was unnecessary since everything is under  
>> qemu_mutex.  However we might want to be on the safe side here and be  
>> similar to master.
>>     
>
> Yes, it is not necessary. Since upstream is already on the safe side
> (and the plan is to merge with it), IMO better leave this patch out.
>   

Maybe it ought to be reverted from master then?
Avi Kivity May 18, 2009, 2:46 p.m. UTC | #4
Anthony Liguori wrote:
> Marcelo Tosatti wrote:
>> On Sun, May 17, 2009 at 07:59:51PM +0300, Avi Kivity wrote:
>>  
>>> Anthony Liguori wrote:
>>>    
>>>> From: Yaniv Kamay <yaniv@qumranet.com>
>>>>
>>>> Stop cpus before devices when stopping the VM, start cpus after 
>>>> devices
>>>> when starting VM.  Otherwise a vcpu could access a stopped device.
>>>>
>>>>         
>>> IIRC we decided this was unnecessary since everything is under  
>>> qemu_mutex.  However we might want to be on the safe side here and 
>>> be  similar to master.
>>>     
>>
>> Yes, it is not necessary. Since upstream is already on the safe side
>> (and the plan is to merge with it), IMO better leave this patch out.
>>   
>
> Maybe it ought to be reverted from master then?
>


Yes.  I'll do that.
Marcelo Tosatti May 18, 2009, 2:47 p.m. UTC | #5
On Mon, May 18, 2009 at 09:26:31AM -0500, Anthony Liguori wrote:
> Marcelo Tosatti wrote:
>> On Sun, May 17, 2009 at 07:59:51PM +0300, Avi Kivity wrote:
>>   
>>> Anthony Liguori wrote:
>>>     
>>>> From: Yaniv Kamay <yaniv@qumranet.com>
>>>>
>>>> Stop cpus before devices when stopping the VM, start cpus after devices
>>>> when starting VM.  Otherwise a vcpu could access a stopped device.
>>>>
>>>>         
>>> IIRC we decided this was unnecessary since everything is under   
>>> qemu_mutex.  However we might want to be on the safe side here and be 
>>>  similar to master.
>>>     
>>
>> Yes, it is not necessary. Since upstream is already on the safe side
>> (and the plan is to merge with it), IMO better leave this patch out.
>>   
>
> Maybe it ought to be reverted from master then?

Nope, it's likely to be useful in the future.

Avi Kivity May 18, 2009, 2:55 p.m. UTC | #6
Marcelo Tosatti wrote:
> On Mon, May 18, 2009 at 09:26:31AM -0500, Anthony Liguori wrote:
>   
>> Marcelo Tosatti wrote:
>>     
>>> On Sun, May 17, 2009 at 07:59:51PM +0300, Avi Kivity wrote:
>>>   
>>>       
>>>> Anthony Liguori wrote:
>>>>     
>>>>         
>>>>> From: Yaniv Kamay <yaniv@qumranet.com>
>>>>>
>>>>> Stop cpus before devices when stopping the VM, start cpus after devices
>>>>> when starting VM.  Otherwise a vcpu could access a stopped device.
>>>>>
>>>>>         
>>>>>           
>>>> IIRC we decided this was unnecessary since everything is under   
>>>> qemu_mutex.  However we might want to be on the safe side here and be 
>>>>  similar to master.
>>>>     
>>>>         
>>> Yes, it is not necessary. Since upstream is already on the safe side
>>> (and the plan is to merge with it), IMO better leave this patch out.
>>>   
>>>       
>> Maybe it ought to be reverted from master then?
>>     
>
> Nope, it's likely to be useful in the future.
>   

If we change the locking, we'll likely need to change this.  In any case 
I don't like start/stop; I prefer explicit locking.
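
For context, the pause/resume machinery the start/stop path relies on looks
roughly like the sketch below. It is reconstructed from the qemu-kvm.c hunks
in the patch; the per-cpu stop/stopped fields and the SIG_IPI kick are
assumptions drawn from the surrounding code, not quoted verbatim:

void qemu_kvm_pause_all_threads(void)
{
    CPUState *penv = first_cpu;

    /* Ask every vcpu thread to stop and kick it out of KVM_RUN
     * (field and kick names are approximate). */
    while (penv) {
        penv->stop = 1;
        pthread_kill(penv->thread, SIG_IPI);
        penv = (CPUState *)penv->next_cpu;
    }

    /* Wait until all vcpu threads have parked themselves. */
    while (!all_threads_paused())
        qemu_cond_wait(&qemu_pause_cond);
}

void qemu_kvm_resume_all_threads(void)
{
    CPUState *penv = first_cpu;

    while (penv) {
        penv->stop = 0;
        penv->stopped = 0;
        pthread_kill(penv->thread, SIG_IPI);
        penv = (CPUState *)penv->next_cpu;
    }
}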

Patch

diff --git a/qemu-kvm.c b/qemu-kvm.c
index 6893cfe..fab00ac 100644
--- a/qemu-kvm.c
+++ b/qemu-kvm.c
@@ -285,7 +285,7 @@  static int all_threads_paused(void)
     return 1;
 }
 
-static void pause_all_threads(void)
+void qemu_kvm_pause_all_threads(void)
 {
     CPUState *penv = first_cpu;
 
@@ -305,7 +305,7 @@  static void pause_all_threads(void)
 	qemu_cond_wait(&qemu_pause_cond);
 }
 
-static void resume_all_threads(void)
+void qemu_kvm_resume_all_threads(void)
 {
     CPUState *penv = first_cpu;
 
@@ -319,14 +319,6 @@  static void resume_all_threads(void)
     }
 }
 
-static void kvm_vm_state_change_handler(void *context, int running, int reason)
-{
-    if (running)
-	resume_all_threads();
-    else
-	pause_all_threads();
-}
-
 static void update_regs_for_sipi(CPUState *env)
 {
     kvm_arch_update_regs_for_sipi(env);
@@ -371,7 +363,7 @@  static void qemu_kvm_system_reset(void)
 {
     CPUState *penv = first_cpu;
 
-    pause_all_threads();
+    qemu_kvm_pause_all_threads();
 
     qemu_system_reset();
 
@@ -380,7 +372,7 @@  static void qemu_kvm_system_reset(void)
         penv = (CPUState *)penv->next_cpu;
     }
 
-    resume_all_threads();
+    qemu_kvm_resume_all_threads();
 }
 
 static int kvm_main_loop_cpu(CPUState *env)
@@ -465,7 +457,6 @@  int kvm_init_ap(void)
 #ifdef TARGET_I386
     kvm_tpr_opt_setup();
 #endif
-    qemu_add_vm_change_state_handler(kvm_vm_state_change_handler, NULL);
 
     signal(SIG_IPI, sig_ipi_handler);
     return 0;
@@ -609,7 +600,7 @@  int kvm_main_loop(void)
 #endif
     }
 
-    pause_all_threads();
+    qemu_kvm_pause_all_threads();
     pthread_mutex_unlock(&qemu_mutex);
 
     return 0;
diff --git a/qemu-kvm.h b/qemu-kvm.h
index 85f8668..6dd9448 100644
--- a/qemu-kvm.h
+++ b/qemu-kvm.h
@@ -119,6 +119,9 @@  int qemu_kvm_register_coalesced_mmio(target_phys_addr_t addr,
 int qemu_kvm_unregister_coalesced_mmio(target_phys_addr_t addr,
 				       unsigned int size);
 
+void qemu_kvm_pause_all_threads(void);
+void qemu_kvm_resume_all_threads(void);
+
 int kvm_coalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 
diff --git a/vl.c b/vl.c
index 3cba8ed..0437159 100644
--- a/vl.c
+++ b/vl.c
@@ -3556,6 +3556,8 @@  void vm_start(void)
         cpu_enable_ticks();
         vm_running = 1;
         vm_state_notify(1, 0);
+        if (kvm_enabled())
+            qemu_kvm_resume_all_threads();
         qemu_rearm_alarm_timer(alarm_timer);
     }
 }
@@ -3565,6 +3567,8 @@  void vm_stop(int reason)
     if (vm_running) {
         cpu_disable_ticks();
         vm_running = 0;
+        if (kvm_enabled())
+            qemu_kvm_pause_all_threads();
         vm_state_notify(0, reason);
     }
 }