diff mbox

[18/21] Remove host_alarm_timer hacks.

Message ID 1241040038-17183-19-git-send-email-aliguori@us.ibm.com (mailing list archive)
State New, archived

Commit Message

Anthony Liguori April 29, 2009, 9:20 p.m. UTC
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
---
 vl.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

Comments

Avi Kivity April 30, 2009, 9:44 a.m. UTC | #1
Anthony Liguori wrote:
> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
> ---
>  vl.c |    3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/vl.c b/vl.c
> index 3b0e3dc..848a8f8 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -1367,8 +1367,7 @@ static void host_alarm_handler(int host_signum)
>          last_clock = ti;
>      }
>  #endif
> -    if (1 ||
> -        alarm_has_dynticks(alarm_timer) ||
> +    if (alarm_has_dynticks(alarm_timer) ||
>          (!use_icount &&
>              qemu_timer_expired(active_timers[QEMU_TIMER_VIRTUAL],
>                                 qemu_get_clock(vm_clock))) ||
>   

This was added to fix a problem.  Have you tested it?
Anthony Liguori April 30, 2009, 1:19 p.m. UTC | #2
Avi Kivity wrote:
> Anthony Liguori wrote:
>> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
>> ---
>>  vl.c |    3 +--
>>  1 files changed, 1 insertions(+), 2 deletions(-)
>>
>> diff --git a/vl.c b/vl.c
>> index 3b0e3dc..848a8f8 100644
>> --- a/vl.c
>> +++ b/vl.c
>> @@ -1367,8 +1367,7 @@ static void host_alarm_handler(int host_signum)
>>          last_clock = ti;
>>      }
>>  #endif
>> -    if (1 ||
>> -        alarm_has_dynticks(alarm_timer) ||
>> +    if (alarm_has_dynticks(alarm_timer) ||
>>          (!use_icount &&
>>              qemu_timer_expired(active_timers[QEMU_TIMER_VIRTUAL],
>>                                 qemu_get_clock(vm_clock))) ||
>>   
>
> This was added to fix a problem.  Have you tested it?

Do you know what problem it fixes?

This goes back a very long time.  IIUC, this was added prior to the IO 
thread as an "optimization".  This ensures that any time there's a 
timer, the vcpu is interrupted to allow IO to run.  With non-dynticks, 
there can be spurious timer signals because we program the timer with a 
fixed frequency.  It's necessary to take this path with dynticks because 
we need to rearm the timer, which happens in the IO path.  It's not 
necessary to take this path with a non-dynticks timer unless there's 
been an expiration.

In modern KVM, the IO thread is capable of interrupting the CPU whenever 
it needs to process IO.  Therefore this "problem" no longer exists.

Regards,

Anthony Liguori
Avi Kivity April 30, 2009, 1:25 p.m. UTC | #3
Anthony Liguori wrote:
> Avi Kivity wrote:
>> Anthony Liguori wrote:
>>> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
>>> ---
>>>  vl.c |    3 +--
>>>  1 files changed, 1 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/vl.c b/vl.c
>>> index 3b0e3dc..848a8f8 100644
>>> --- a/vl.c
>>> +++ b/vl.c
>>> @@ -1367,8 +1367,7 @@ static void host_alarm_handler(int host_signum)
>>>          last_clock = ti;
>>>      }
>>>  #endif
>>> -    if (1 ||
>>> -        alarm_has_dynticks(alarm_timer) ||
>>> +    if (alarm_has_dynticks(alarm_timer) ||
>>>          (!use_icount &&
>>>              qemu_timer_expired(active_timers[QEMU_TIMER_VIRTUAL],
>>>                                 qemu_get_clock(vm_clock))) ||
>>>   
>>
>> This was added to fix a problem.  Have you tested it?
>
> Do you know what problem it fixes?
>
> This goes back a very long time.  IIUC, this was added prior to the IO 
> thread as an "optimization".  This ensures that any time there's a 
> timer, the vcpu is interrupted to allow IO to run.  With non-dynticks, 
> there can be spurious timer signals because we program the timer with 
> a fixed frequency.  It's necessary to take this path with dynticks 
> because we need to rearm the timer, which happens in the IO path.  It's 
> not necessary to take this path with a non-dynticks timer unless 
> there's been an expiration.
>
> In modern KVM, the IO thread is capable of interrupting the CPU 
> whenever it needs to process IO.  Therefore this "problem" no longer 
> exists.
>

It would still be good to verify that the problem no longer exists.  
This is not a cosmetic change; some testing is needed to verify it 
doesn't introduce new latencies.
Anthony Liguori April 30, 2009, 1:37 p.m. UTC | #4
Avi Kivity wrote:
> Anthony Liguori wrote:
>> In modern KVM, the IO thread is capable of interrupting the CPU 
>> whenever it needs to process IO.  Therefore this "problem" no longer 
>> exists.
>>
>
> It would still be good to verify that the problem no longer exists.  
> This is not a cosmetic change; some testing is needed to verify it 
> doesn't introduce new latencies.
>

N.B. dynticks is the preferred timer in QEMU on Linux.  To even hit this 
code path, you'd have to use an explicit -clock hpet or -clock rtc.  I 
don't have an hpet on my laptop and -clock rtc boots just as fast as it 
did before.

Do we really care about optimizing latency with -clock rtc though?
Avi Kivity April 30, 2009, 3:46 p.m. UTC | #5
Anthony Liguori wrote:
> Avi Kivity wrote:
>> Anthony Liguori wrote:
>>> In modern KVM, the IO thread is capable of interrupting the CPU 
>>> whenever it needs to process IO.  Therefore this "problem" no longer 
>>> exists.
>>>
>>
>> It would still be good to verify that the problem no longer exists.  
>> This is not a cosmetic change; some testing is needed to verify it 
>> doesn't introduce new latencies.
>>
>
> N.B. dynticks is the preferred timer in QEMU on Linux.  To even hit 
> this code path, you'd have to use an explicit -clock hpet or -clock 
> rtc.  I don't have an hpet on my laptop and -clock rtc boots just as 
> fast as it did before.

I'll apply this and see what happens.

>
> Do we really care about optimizing latency with -clock rtc though?
>

People still run kvm on RHEL 5 (or cheap clones thereof); aren't they 
affected?
Anthony Liguori April 30, 2009, 3:49 p.m. UTC | #6
Avi Kivity wrote:
>>
>> Do we really care about optimizing latency with -clock rtc though?
>>
>
> People still run kvm on RHEL 5 (or cheap clones thereof), aren't they 
> affected?

Do they use -clock rtc?  -clock dynticks should still work on RHEL 5; 
it's just that you won't get very accurate timer events.

You can only use -clock rtc with a single guest at a time so I doubt 
people use it seriously.  The other option would be -clock unix but I 
can't see why you'd use -clock unix instead of -clock dynticks.

The only reason to keep -clock unix around is for non-Linux unices.
Avi Kivity April 30, 2009, 3:53 p.m. UTC | #7
Anthony Liguori wrote:
> Avi Kivity wrote:
>>>
>>> Do we really care about optimizing latency with -clock rtc though?
>>>
>>
>> People still run kvm on RHEL 5 (or cheap clones thereof), aren't they 
>> affected?
>
> Do they use -clock rtc?  -clock dynticks should still work on RHEL 5 
> it's just that you won't get very accurate timer events.
>
> You can only use -clock rtc with a single guest at a time so I doubt 
> people use it seriously.  The other option would be -clock unix but I 
> can't see why you'd use -clock unix instead of -clock dynticks.
>
> The only reason to keep -clock unix around is for non Linux unices.
>

Oh, okay then.

Patch

diff --git a/vl.c b/vl.c
index 3b0e3dc..848a8f8 100644
--- a/vl.c
+++ b/vl.c
@@ -1367,8 +1367,7 @@  static void host_alarm_handler(int host_signum)
         last_clock = ti;
     }
 #endif
-    if (1 ||
-        alarm_has_dynticks(alarm_timer) ||
+    if (alarm_has_dynticks(alarm_timer) ||
         (!use_icount &&
             qemu_timer_expired(active_timers[QEMU_TIMER_VIRTUAL],
                                qemu_get_clock(vm_clock))) ||