
[01/15] xen: in do_softirq() sample smp_processor_id() once and for all.

Message ID 149633842141.12814.20908893053492021.stgit@Solace.fritz.box (mailing list archive)
State New, archived

Commit Message

Dario Faggioli June 1, 2017, 5:33 p.m. UTC
In fact, right now, we read it at every iteration of the loop.
The reason it's done like this is how context switch was handled
on IA64 (see commit ae9bfcdc, "[XEN] Various softirq cleanups" [1]).

However:
1) we don't have IA64 any longer, and all the architectures that
   we do support are ok with sampling once and for all;
2) sampling at every iteration (slightly) affects performance;
3) sampling at every iteration is misleading, as it makes people
   believe that it is currently possible for SCHEDULE_SOFTIRQ to
   move the execution flow to another CPU (and the comment,
   by reinforcing this belief, makes things even worse!).

Therefore, let's:
- do the sampling once and for all, and remove the comment;
- leave an ASSERT() around, so that, if context switching
  logic changes (in current or new arches), we will notice.

[1] Some more (historical) information here:
    http://old-list-archives.xenproject.org/archives/html/xen-devel/2006-06/msg01262.html

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Tim Deegan <tim@xen.org>
---
 xen/common/softirq.c |    8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

Comments

Jan Beulich June 7, 2017, 2:38 p.m. UTC | #1
>>> On 01.06.17 at 19:33, <dario.faggioli@citrix.com> wrote:
> In fact, right now, we read it at every iteration of the loop.
> The reason it's done like this is how context switch was handled
> on IA64 (see commit ae9bfcdc, "[XEN] Various softirq cleanups" [1]).
> 
> However:
> 1) we don't have IA64 any longer, and all the architectures that
>    we do support are ok with sampling once and for all;
> 2) sampling at every iteration (slightly) affects performance;
> 3) sampling at every iteration is misleading, as it makes people
>    believe that it is currently possible for SCHEDULE_SOFTIRQ to
>    move the execution flow to another CPU (and the comment,
>    by reinforcing this belief, makes things even worse!).
> 
> Therefore, let's:
> - do the sampling once and for all, and remove the comment;
> - leave an ASSERT() around, so that, if context switching
>   logic changes (in current or new arches), we will notice.

I'm not convinced. The comment clearly says "may", and I don't
think the performance overhead of smp_processor_id() is that
high. Keeping the code as is allows some future port to do
context switches like ia64 did, if that's more efficient there.

Jan
George Dunlap June 8, 2017, 2:12 p.m. UTC | #2
On 07/06/17 15:38, Jan Beulich wrote:
>>>> On 01.06.17 at 19:33, <dario.faggioli@citrix.com> wrote:
>> In fact, right now, we read it at every iteration of the loop.
>> The reason it's done like this is how context switch was handled
>> on IA64 (see commit ae9bfcdc, "[XEN] Various softirq cleanups" [1]).
>>
>> However:
>> 1) we don't have IA64 any longer, and all the architectures that
>>    we do support are ok with sampling once and for all;
>> 2) sampling at every iteration (slightly) affects performance;
>> 3) sampling at every iteration is misleading, as it makes people
>>    believe that it is currently possible for SCHEDULE_SOFTIRQ to
>>    move the execution flow to another CPU (and the comment,
>>    by reinforcing this belief, makes things even worse!).
>>
>> Therefore, let's:
>> - do the sampling once and for all, and remove the comment;
>> - leave an ASSERT() around, so that, if context switching
>>   logic changes (in current or new arches), we will notice.
> 
> I'm not convinced. The comment clearly says "may", and I don't
> think the performance overhead of smp_processor_id() is that
> high. Keeping the code as is allows some future port to do
> context switches like ia64 did, if that's more efficient there.

In English, "may" not only means "has a possibility of not happening",
but also means "has a possibility of happening".  That's false in the
hypervisor code the way it's currently written.  If I said, "Bring a
water pistol to the XenSummit because Jan may burst into flames" I think
you'd say I was implying something false. :-)

I think it would be better to remove it.  There are doubtless *lots* of
places in the code now that implicitly rely on context_switch() never
returning.  The ASSERT() will catch this issue quickly enough if we ever
re-introduce that behavior.

On the other hand, unless there's a noticeable performance overhead,
this isn't a big issue.

 -George
George Dunlap June 8, 2017, 2:20 p.m. UTC | #3
On 01/06/17 18:33, Dario Faggioli wrote:
> In fact, right now, we read it at every iteration of the loop.
> The reason it's done like this is how context switch was handled
> on IA64 (see commit ae9bfcdc, "[XEN] Various softirq cleanups" [1]).
> 
> However:
> 1) we don't have IA64 any longer, and all the architectures that
>    we do support are ok with sampling once and for all;
> 2) sampling at every iteration (slightly) affects performance;
> 3) sampling at every iteration is misleading, as it makes people
>    believe that it is currently possible for SCHEDULE_SOFTIRQ to
>    move the execution flow to another CPU (and the comment,
>    by reinforcing this belief, makes things even worse!).
> 
> Therefore, let's:
> - do the sampling once and for all, and remove the comment;

"Once and for all" has a much stronger sense of finality than you're
using here: reading smp_processor_id() in that function "once and for
all" would mean you would never have to read it in that function, on
that particular core, ever again, until the end of the universe. :-)

I think I'd say, "only once".  The scope of the "only" in that case is
"this function call", not "all calls forever".

With that changed:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

(Although we still need Jan's acquiescence.)

> - leave an ASSERT() around, so that, if context switching
>   logic changes (in current or new arches), we will notice.
> 
> [1] Some more (historical) information here:
>     http://old-list-archives.xenproject.org/archives/html/xen-devel/2006-06/msg01262.html
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>


> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Tim Deegan <tim@xen.org>
> ---
>  xen/common/softirq.c |    8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/common/softirq.c b/xen/common/softirq.c
> index ac12cf8..67c84ba 100644
> --- a/xen/common/softirq.c
> +++ b/xen/common/softirq.c
> @@ -27,16 +27,12 @@ static DEFINE_PER_CPU(unsigned int, batching);
>  
>  static void __do_softirq(unsigned long ignore_mask)
>  {
> -    unsigned int i, cpu;
> +    unsigned int i, cpu = smp_processor_id();
>      unsigned long pending;
>  
>      for ( ; ; )
>      {
> -        /*
> -         * Initialise @cpu on every iteration: SCHEDULE_SOFTIRQ may move
> -         * us to another processor.
> -         */
> -        cpu = smp_processor_id();
> +        ASSERT(cpu == smp_processor_id());
>  
>          if ( rcu_pending(cpu) )
>              rcu_check_callbacks(cpu);
>
Jan Beulich June 8, 2017, 2:42 p.m. UTC | #4
>>> On 08.06.17 at 16:20, <george.dunlap@citrix.com> wrote:
> On 01/06/17 18:33, Dario Faggioli wrote:
>> In fact, right now, we read it at every iteration of the loop.
>> The reason it's done like this is how context switch was handled
>> on IA64 (see commit ae9bfcdc, "[XEN] Various softirq cleanups" [1]).
>> 
>> However:
>> 1) we don't have IA64 any longer, and all the architectures that
>>    we do support are ok with sampling once and for all;
>> 2) sampling at every iteration (slightly) affects performance;
>> 3) sampling at every iteration is misleading, as it makes people
>>    believe that it is currently possible for SCHEDULE_SOFTIRQ to
>>    move the execution flow to another CPU (and the comment,
>>    by reinforcing this belief, makes things even worse!).
>> 
>> Therefore, let's:
>> - do the sampling once and for all, and remove the comment;
> 
> "Once and for all" has a much stronger sense of finality than you're
> using here: reading smp_processor_id() in that function "once and for
> all" would mean you would never have to read it in that function, on
> that particular core, ever again, until the end of the universe. :-)
> 
> I think I'd say, "only once".  The scope of the "only" in that case is
> "this function call", not "all calls forever".
> 
> With that changed:
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> 
> (Although we still need Jan's acquiescence.)

I've voiced my opinion, but I don't mean to block the patch. After
all there's no active issue the change introduces.

Jan

Patch

diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index ac12cf8..67c84ba 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -27,16 +27,12 @@ static DEFINE_PER_CPU(unsigned int, batching);
 
 static void __do_softirq(unsigned long ignore_mask)
 {
-    unsigned int i, cpu;
+    unsigned int i, cpu = smp_processor_id();
     unsigned long pending;
 
     for ( ; ; )
     {
-        /*
-         * Initialise @cpu on every iteration: SCHEDULE_SOFTIRQ may move
-         * us to another processor.
-         */
-        cpu = smp_processor_id();
+        ASSERT(cpu == smp_processor_id());
 
         if ( rcu_pending(cpu) )
             rcu_check_callbacks(cpu);