
[01/24] xen: credit1: small optimization in Credit1's tickling logic.

Message ID 147145425981.25877.11614393181483419672.stgit@Solace.fritz.box (mailing list archive)
State New, archived

Commit Message

Dario Faggioli Aug. 17, 2016, 5:17 p.m. UTC
If, when vcpu x wakes up, there are no idle pcpus in x's
soft-affinity, we just go ahead and look at its hard
affinity. This basically means that if, in __runq_tickle(),
new_idlers_empty is true, then balance_step is equal to
CSCHED_BALANCE_HARD_AFFINITY, and calling
csched_balance_cpumask() for any vcpu would just return
that vcpu's cpu_hard_affinity.

Therefore, don't bother calling it (it's just pure
overhead) and use cpu_hard_affinity directly.

For this very reason, this patch should be nothing
more than a (slight) optimization, and entail no
functional change.
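
For reference, csched_balance_cpumask() boils down to
something like the following (a simplified sketch, not
a verbatim copy of the helper; in particular, the exact
soft-affinity handling is only approximated here):

    /*
     * In the hard-affinity step this is just a copy of the
     * vcpu's hard affinity, which is why calling it on that
     * path is pure overhead.
     */
    static inline void
    csched_balance_cpumask(const struct vcpu *vc, int step,
                           cpumask_t *mask)
    {
        if ( step == CSCHED_BALANCE_SOFT_AFFINITY )
        {
            cpumask_and(mask, vc->cpu_soft_affinity,
                        vc->cpu_hard_affinity);

            if ( unlikely(cpumask_empty(mask)) )
                cpumask_copy(mask, vc->cpu_hard_affinity);
        }
        else /* step == CSCHED_BALANCE_HARD_AFFINITY */
            cpumask_copy(mask, vc->cpu_hard_affinity);
    }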

As a side note, what the patch does would make sense
even if the [[ new_idlers_empty && new->pri > cur->pri ]]
if could be reached with balance_step equal to
CSCHED_BALANCE_SOFT_AFFINITY.
In fact, what is actually happening is:
 - vcpu x is waking up and, since there are no suitable
   idlers and it is entitled to do so, it is preempting
   vcpu y;
 - vcpu y's hard-affinity is a superset of its
   soft-affinity mask.

Therefore, it makes sense to use the widest possible
mask, as by doing so we maximize the probability of
finding an idle pcpu to which we can send vcpu y,
which will then be able to run.
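
For concreteness, here is a tiny standalone illustration
of the superset argument (the masks below are made up
for the example, they are not taken from the patch):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Hypothetical 8-pcpu box, one bit per pcpu. */
        uint8_t soft_affinity = 0x0c; /* pcpus {2,3} */
        uint8_t hard_affinity = 0xff; /* pcpus {0..7}, superset of soft */
        uint8_t idle_mask     = 0x20; /* only pcpu 5 is idle */

        /* The wider (hard) mask does intersect the idlers... */
        printf("hard & idle: %s\n",
               (hard_affinity & idle_mask) ? "hit" : "miss");

        /*
         * ...while the narrower (soft) mask does not, and vcpu y
         * would have nowhere to be kicked away to.
         */
        printf("soft & idle: %s\n",
               (soft_affinity & idle_mask) ? "hit" : "miss");

        return 0;
    }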

While there, also fix the comment, which had some
awkwardly nested parentheses.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Anshul Makkar <anshul.makkar@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 xen/common/sched_credit.c |    8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

Comments

George Dunlap Sept. 12, 2016, 3:01 p.m. UTC | #1
On 17/08/16 18:17, Dario Faggioli wrote:
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>

Patch

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 220ff0d..6eccf09 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -424,9 +424,9 @@  static inline void __runq_tickle(struct csched_vcpu *new)
             /*
              * If there are no suitable idlers for new, and it's higher
              * priority than cur, check whether we can migrate cur away.
-             * (We have to do it indirectly, via _VPF_migrating, instead
+             * We have to do it indirectly, via _VPF_migrating (instead
              * of just tickling any idler suitable for cur) because cur
-             * is running.)
+             * is running.
              *
              * If there are suitable idlers for new, no matter priorities,
              * leave cur alone (as it is running and is, likely, cache-hot)
@@ -435,9 +435,7 @@  static inline void __runq_tickle(struct csched_vcpu *new)
              */
             if ( new_idlers_empty && new->pri > cur->pri )
             {
-                csched_balance_cpumask(cur->vcpu, balance_step,
-                                       cpumask_scratch_cpu(cpu));
-                if ( cpumask_intersects(cpumask_scratch_cpu(cpu),
+                if ( cpumask_intersects(cur->vcpu->cpu_hard_affinity,
                                         &idle_mask) )
                 {
                     SCHED_VCPU_STAT_CRANK(cur, kicked_away);