[Bugfix,for-4.15] xen: credit2: fix per-entity load tracking when continuing running

Message ID 161615605709.5036.4052641880659992679.stgit@Wayrath (mailing list archive)
State New, archived
Series [Bugfix,for-4.15] xen: credit2: fix per-entity load tracking when continuing running

Commit Message

Dario Faggioli March 19, 2021, 12:14 p.m. UTC
If we schedule and the current vCPU continues to run, its statistical
load is not properly updated. This results in output like the following,
even when all 8 vCPUs are 100% busy:

(XEN) Runqueue 0:
(XEN) [...]
(XEN)   aveload            = 2097152 (~800%)
(XEN) [...]
(XEN)   Domain: 0 w 256 c 0 v 8
(XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
(XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
(XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
(XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
(XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
(XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
(XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
(XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)

As we can see, the average load of the runqueue as a whole is, by
contrast, computed properly.

In theory, this issue could affect the Credit2 load balancing logic. In
practice, however, the problem only manifests (at least with these
characteristics) when there is only 1 runqueue active in the cpupool,
in which case there is no load balancing to do anyway.

Hence, its real impact is pretty much limited to wrong per-vCPU load
percentages in the output of the 'r' debug-key.
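
Note on reading these numbers: the raw load values are fixed point, and
the output is consistent with 2^18 == 262144 representing 100% (so the
runqueue aveload of 2097152 is 8 * 2^18, i.e. ~800%). Below is a minimal
conversion sketch; the 18-bit precision is inferred from the output
(Credit2's precision shift is configurable), so treat the constant as an
assumption.

#include <stdint.h>

/* Assumed precision: 1 << 18 == 262144 == 100%, per the output above. */
#define LOAD_PRECISION_SHIFT 18

static unsigned int load_to_percent(uint64_t raw_load)
{
    return (raw_load * 100) >> LOAD_PRECISION_SHIFT;
}

/* e.g. load_to_percent(262144) == 100, load_to_percent(2097152) == 800 */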

With this patch, the load is updated and displayed correctly:

(XEN) Runqueue 0:
(XEN) [...]
(XEN)   aveload            = 2097152 (~800%)
(XEN) [...]
(XEN) Domain info:
(XEN)   Domain: 0 w 256 c 0 v 8
(XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
(XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
(XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
(XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
(XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
(XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
(XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
(XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
---
Despite the limited effect, it's a bug. So:
- it should be backported;
- I think it should be included in 4.15. The risk is pretty low, for
  the same reasons already explained when describing its limited impact.
---
 xen/common/sched/credit2.c |    2 ++
 1 file changed, 2 insertions(+)

Comments

Jan Beulich April 27, 2021, 8:35 a.m. UTC | #1
On 19.03.2021 13:14, Dario Faggioli wrote:
> [...]
> Despite the limited effect, it's a bug. So:
> - it should be backported;
> - I think it should be included in 4.15. The risk is pretty low, for
>   the same reasons already explained when describing its limited impact.

I'm a little puzzled to find this is still in my waiting-to-go-in
folder, for not having had an ack (or otherwise). George?

Jan

Dario Faggioli May 28, 2021, 3:18 p.m. UTC | #2
On Tue, 2021-04-27 at 10:35 +0200, Jan Beulich wrote:
> On 19.03.2021 13:14, Dario Faggioli wrote:
> > 
> > ---
> > Cc: George Dunlap <george.dunlap@citrix.com>
> > Cc: Ian Jackson <iwj@xenproject.org>
> > ---
> > Despite the limited effect, it's a bug. So:
> > - it should be backported;
> > - I think it should be included in 4.15. The risk is pretty low,
> > for
> >   the same reasons already explained when describing its limited
> > impact.
> 
> I'm a little puzzled to find this is still in my waiting-to-go-in
> folder, for not having had an ack (or otherwise). George?
> 
Yeah, and it probably still is, so... George? :-D

BTW, I'm dropping IanJ as, quite obviously, this won't go in 4.15. :-P
It should, however, be backported once it goes in (to 4.15 and probably
even earlier... I can have a look myself if it helps).

Thanks and Regards
George Dunlap June 7, 2021, 11:44 a.m. UTC | #3
> On Mar 19, 2021, at 12:14 PM, Dario Faggioli <dfaggioli@suse.com> wrote:
> 
> [...]
> diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
> index eb5e5a78c5..b3b5de94cf 100644
> --- a/xen/common/sched/credit2.c
> +++ b/xen/common/sched/credit2.c
> @@ -3646,6 +3646,8 @@ static void csched2_schedule(
>             runq_remove(snext);
>             __set_bit(__CSFLAG_scheduled, &snext->flags);
>         }
> +        else
> +            update_load(ops, rqd, snext, 0, now);

I feel like there must be a better way to do this than brute-force remembering everywhere we could possibly need to update the load. But at any rate:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Patch

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index eb5e5a78c5..b3b5de94cf 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -3646,6 +3646,8 @@ static void csched2_schedule(
             runq_remove(snext);
             __set_bit(__CSFLAG_scheduled, &snext->flags);
         }
+        else
+            update_load(ops, rqd, snext, 0, now);
 
         /* Clear the idle mask if necessary */
         if ( cpumask_test_cpu(sched_cpu, &rqd->idle) )
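
For context on why a call like this is needed at each scheduling event:
Credit2 keeps, for every vCPU and runqueue, a decaying time-average of
load that is only correct if it is refreshed whenever the entity's
running state changes or needs sampling. Below is a hypothetical,
simplified sketch of that kind of bookkeeping; it is not the actual Xen
implementation, and the names, the 18-bit precision, and the decay
window are all assumptions.

#include <stdbool.h>
#include <stdint.h>

#define LOAD_PRECISION_SHIFT  18    /* 1 << 18 == 262144 == 100% (assumed) */
#define LOAD_WINDOW_SHIFT     30    /* ~1s decay window, in ns (assumed) */

struct entity_load {
    int64_t aveload;                /* fixed-point decaying average */
    int64_t last_update;            /* when aveload was last refreshed, ns */
};

/*
 * Refresh the average at time @now, blending in the instantaneous load
 * (100% while running, 0% otherwise) for the interval since the last
 * refresh.  Arithmetic right shift of a negative value is assumed.
 */
static void refresh_load(struct entity_load *e, bool running, int64_t now)
{
    int64_t delta = now - e->last_update;
    int64_t inst = running ? (1LL << LOAD_PRECISION_SHIFT) : 0;

    if ( delta >= (1LL << LOAD_WINDOW_SHIFT) )
        e->aveload = inst;          /* a full window elapsed: converge */
    else if ( delta > 0 )
        e->aveload += ((inst - e->aveload) * delta) >> LOAD_WINDOW_SHIFT;

    e->last_update = now;
}

In a model like this, every scheduler path that leaves an entity running
(or stops it) must perform the refresh, or the average goes stale; the
patch adds exactly the refresh that was missing for the "current vCPU
keeps running" case, which is why those vCPUs previously showed
near-zero loads.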