[1/7] xen: sched: factor affinity helpers out of sched_credit.c

Message ID 149762242283.11899.2127034224965071833.stgit@Solace.fritz.box
State New, archived

Commit Message

Dario Faggioli June 16, 2017, 2:13 p.m. UTC
In fact, we want to be able to use them from any scheduler.

While there, make the moved code use 'v' for the struct vcpu *
variable, as should be done everywhere.

No functional change intended.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Signed-off-by: Justin T. Weaver <jtweaver@hawaii.edu>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
Cc: Anshul Makkar <anshul.makkar@citrix.com>
---
 xen/common/sched_credit.c  |   97 +++++++-------------------------------------
 xen/include/xen/sched-if.h |   64 +++++++++++++++++++++++++++++
 2 files changed, 79 insertions(+), 82 deletions(-)

Comments

Anshul Makkar June 23, 2017, 10:02 a.m. UTC | #1
On 16/06/2017 15:13, Dario Faggioli wrote:
> In fact, we want to be able to use them from any scheduler.
>
> While there, make the moved code use 'v' for the struct vcpu *
> variable, as should be done everywhere.
>
> No functional change intended.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Signed-off-by: Justin T. Weaver <jtweaver@hawaii.edu>
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> ---
> Cc: Anshul Makkar <anshul.makkar@citrix.com>
> ---
>  xen/common/sched_credit.c  |   97 +++++++-------------------------------------
>  xen/include/xen/sched-if.h |   64 +++++++++++++++++++++++++++++
>  2 files changed, 79 insertions(+), 82 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index efdf6bf..53773df 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -136,27 +136,6 @@
*
> + * Hard affinity balancing is always necessary and must never be skipped.
> + * But soft affinity need only be considered when it has a functionally
> + * different effect than other constraints (such as hard affinity, cpus
> + * online, or cpupools).
> + *
> + * Soft affinity only needs to be considered if:
> + * * The cpus in the cpupool are not a subset of soft affinity
do you mean the cpus in a cpupool are not a subset of the soft affinity
of a vcpu which is in a runnable state?
> + * * The hard affinity is not a subset of soft affinity
> + * * There is an overlap between the soft affinity and the mask which is
> + *   currently being considered.
> + */
>
Reviewed !! Looks good.

Anshul
Dario Faggioli June 23, 2017, 1:33 p.m. UTC | #2
On Fri, 2017-06-23 at 11:02 +0100, Anshul Makkar wrote:
> On 16/06/2017 15:13, Dario Faggioli wrote:
> > diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> > index efdf6bf..53773df 100644
> > --- a/xen/common/sched_credit.c
> > +++ b/xen/common/sched_credit.c
> > @@ -136,27 +136,6 @@
> 
> *
> > + * Hard affinity balancing is always necessary and must never be
> > skipped.
> > + * But soft affinity need only be considered when it has a
> > functionally
> > + * different effect than other constraints (such as hard affinity,
> > cpus
> > + * online, or cpupools).
> > + *
> > + * Soft affinity only needs to be considered if:
> > + * * The cpus in the cpupool are not a subset of soft affinity
> 
> do you mean the cpus in a cpupool are not a subset of the soft
> affinity
> of a vcpu which is in a runnable state?
>
Err... not sure. It's probably the same. IAC, what this means is that,
if a vcpu is in cpupool C, and the cpus of cpupool C are 1,2,3 _and_
the soft affinity of the vcpu is 0,1,2,3,4,5, then it does not make
sense to check soft-affinity.

In fact, the vcpu will only execute on either 1,2 or 3, which are all
part of its soft-affinity anyway.

And the condition is exactly that the cpupool is *a* *subset* of the
soft-affinity. In fact, if the cpupool is 1,2,3 and the soft-affinity
is 0,1,3,4,5, then it _does_ make sense to check soft affinity, to try
not to run the vcpu on 2.
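
To make this concrete, here is a tiny standalone model of the subset
check (plain C bitmasks standing in for Xen's cpumask_t; the names are
illustrative only, not Xen's API):

  #include <stdio.h>

  /* Toy model: each bit of an unsigned long is one pcpu. */
  typedef unsigned long toymask_t;

  /* 1 iff every cpu in a is also in b (cf. cpumask_subset()). */
  static int toy_subset(toymask_t a, toymask_t b)
  {
      return (a & ~b) == 0;
  }

  int main(void)
  {
      toymask_t pool  = 0x0eUL; /* cpupool C: cpus 1,2,3     */
      toymask_t soft1 = 0x3fUL; /* soft affinity 0,1,2,3,4,5 */
      toymask_t soft2 = 0x3bUL; /* soft affinity 0,1,3,4,5   */

      /* pool is a subset of soft1: the soft step can be skipped. */
      printf("subset (skip soft step)? %d\n", toy_subset(pool, soft1));

      /* pool is _not_ a subset of soft2 (cpu 2 is not preferred),
       * so checking soft affinity is worthwhile, to avoid cpu 2. */
      printf("subset (skip soft step)? %d\n", toy_subset(pool, soft2));

      return 0;
  }

This prints 1 for the first case and 0 for the second, matching the
first bullet of the comment being discussed.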

Hope this clarifies things.

Regards,
Dario

Patch

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index efdf6bf..53773df 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -136,27 +136,6 @@ 
 #define TRC_CSCHED_RATELIMIT     TRC_SCHED_CLASS_EVT(CSCHED, 10)
 #define TRC_CSCHED_STEAL_CHECK   TRC_SCHED_CLASS_EVT(CSCHED, 11)
 
-
-/*
- * Hard and soft affinity load balancing.
- *
- * Idea is each vcpu has some pcpus that it prefers, some that it does not
- * prefer but is OK with, and some that it cannot run on at all. The first
- * set of pcpus are the ones that are both in the soft affinity *and* in the
- * hard affinity; the second set of pcpus are the ones that are in the hard
- * affinity but *not* in the soft affinity; the third set of pcpus are the
- * ones that are not in the hard affinity.
- *
- * We implement a two step balancing logic. Basically, every time there is
- * the need to decide where to run a vcpu, we first check the soft affinity
- * (well, actually, the && between soft and hard affinity), to see if we can
- * send it where it prefers to (and can) run on. However, if the first step
- * does not find any suitable and free pcpu, we fall back checking the hard
- * affinity.
- */
-#define CSCHED_BALANCE_SOFT_AFFINITY    0
-#define CSCHED_BALANCE_HARD_AFFINITY    1
-
 /*
  * Boot parameters
  */
@@ -331,52 +310,6 @@  runq_remove(struct csched_vcpu *svc)
     __runq_remove(svc);
 }
 
-#define for_each_csched_balance_step(step) \
-    for ( (step) = 0; (step) <= CSCHED_BALANCE_HARD_AFFINITY; (step)++ )
-
-
-/*
- * Hard affinity balancing is always necessary and must never be skipped.
- * But soft affinity need only be considered when it has a functionally
- * different effect than other constraints (such as hard affinity, cpus
- * online, or cpupools).
- *
- * Soft affinity only needs to be considered if:
- * * The cpus in the cpupool are not a subset of soft affinity
- * * The hard affinity is not a subset of soft affinity
- * * There is an overlap between the soft affinity and the mask which is
- *   currently being considered.
- */
-static inline int __vcpu_has_soft_affinity(const struct vcpu *vc,
-                                           const cpumask_t *mask)
-{
-    return !cpumask_subset(cpupool_domain_cpumask(vc->domain),
-                           vc->cpu_soft_affinity) &&
-           !cpumask_subset(vc->cpu_hard_affinity, vc->cpu_soft_affinity) &&
-           cpumask_intersects(vc->cpu_soft_affinity, mask);
-}
-
-/*
- * Each csched-balance step uses its own cpumask. This function determines
- * which one (given the step) and copies it in mask. For the soft affinity
- * balancing step, the pcpus that are not part of vc's hard affinity are
- * filtered out from the result, to avoid running a vcpu where it would
- * like, but is not allowed to!
- */
-static void
-csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
-{
-    if ( step == CSCHED_BALANCE_SOFT_AFFINITY )
-    {
-        cpumask_and(mask, vc->cpu_soft_affinity, vc->cpu_hard_affinity);
-
-        if ( unlikely(cpumask_empty(mask)) )
-            cpumask_copy(mask, vc->cpu_hard_affinity);
-    }
-    else /* step == CSCHED_BALANCE_HARD_AFFINITY */
-        cpumask_copy(mask, vc->cpu_hard_affinity);
-}
-
 static void burn_credits(struct csched_vcpu *svc, s_time_t now)
 {
     s_time_t delta;
@@ -441,18 +374,18 @@  static inline void __runq_tickle(struct csched_vcpu *new)
          * Soft and hard affinity balancing loop. For vcpus without
          * a useful soft affinity, consider hard affinity only.
          */
-        for_each_csched_balance_step( balance_step )
+        for_each_affinity_balance_step( balance_step )
         {
             int new_idlers_empty;
 
-            if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-                 && !__vcpu_has_soft_affinity(new->vcpu,
-                                              new->vcpu->cpu_hard_affinity) )
+            if ( balance_step == BALANCE_SOFT_AFFINITY
+                 && !has_soft_affinity(new->vcpu,
+                                       new->vcpu->cpu_hard_affinity) )
                 continue;
 
             /* Are there idlers suitable for new (for this balance step)? */
-            csched_balance_cpumask(new->vcpu, balance_step,
-                                   cpumask_scratch_cpu(cpu));
+            affinity_balance_cpumask(new->vcpu, balance_step,
+                                     cpumask_scratch_cpu(cpu));
             cpumask_and(cpumask_scratch_cpu(cpu),
                         cpumask_scratch_cpu(cpu), &idle_mask);
             new_idlers_empty = cpumask_empty(cpumask_scratch_cpu(cpu));
@@ -463,7 +396,7 @@  static inline void __runq_tickle(struct csched_vcpu *new)
              * hard affinity as well, before taking final decisions.
              */
             if ( new_idlers_empty
-                 && balance_step == CSCHED_BALANCE_SOFT_AFFINITY )
+                 && balance_step == BALANCE_SOFT_AFFINITY )
                 continue;
 
             /*
@@ -789,7 +722,7 @@  _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
     online = cpupool_domain_cpumask(vc->domain);
     cpumask_and(&cpus, vc->cpu_hard_affinity, online);
 
-    for_each_csched_balance_step( balance_step )
+    for_each_affinity_balance_step( balance_step )
     {
         /*
          * We want to pick up a pcpu among the ones that are online and
@@ -809,12 +742,12 @@  _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
          * cpus and, if the result is empty, we just skip the soft affinity
          * balancing step all together.
          */
-        if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-             && !__vcpu_has_soft_affinity(vc, &cpus) )
+        if ( balance_step == BALANCE_SOFT_AFFINITY
+             && !has_soft_affinity(vc, &cpus) )
             continue;
 
         /* Pick an online CPU from the proper affinity mask */
-        csched_balance_cpumask(vc, balance_step, &cpus);
+        affinity_balance_cpumask(vc, balance_step, &cpus);
         cpumask_and(&cpus, &cpus, online);
 
         /* If present, prefer vc's current processor */
@@ -1710,11 +1643,11 @@  csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
          * or counter.
          */
         if ( vc->is_running ||
-             (balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-              && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity)) )
+             (balance_step == BALANCE_SOFT_AFFINITY
+              && !has_soft_affinity(vc, vc->cpu_hard_affinity)) )
             continue;
 
-        csched_balance_cpumask(vc, balance_step, cpumask_scratch);
+        affinity_balance_cpumask(vc, balance_step, cpumask_scratch);
         if ( __csched_vcpu_is_migrateable(vc, cpu, cpumask_scratch) )
         {
             /* We got a candidate. Grab it! */
@@ -1774,7 +1707,7 @@  csched_load_balance(struct csched_private *prv, int cpu,
      *  1. any "soft-affine work" to steal first,
      *  2. if not finding anything, any "hard-affine work" to steal.
      */
-    for_each_csched_balance_step( bstep )
+    for_each_affinity_balance_step( bstep )
     {
         /*
          * We peek at the non-idling CPUs in a node-wise fashion. In fact,
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index c32ee7a..c4a4935 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -208,4 +208,68 @@  static inline cpumask_t* cpupool_domain_cpumask(struct domain *d)
     return d->cpupool->cpu_valid;
 }
 
+/*
+ * Hard and soft affinity load balancing.
+ *
+ * Idea is each vcpu has some pcpus that it prefers, some that it does not
+ * prefer but is OK with, and some that it cannot run on at all. The first
+ * set of pcpus are the ones that are both in the soft affinity *and* in the
+ * hard affinity; the second set of pcpus are the ones that are in the hard
+ * affinity but *not* in the soft affinity; the third set of pcpus are the
+ * ones that are not in the hard affinity.
+ *
+ * We implement a two step balancing logic. Basically, every time there is
+ * the need to decide where to run a vcpu, we first check the soft affinity
+ * (well, actually, the && between soft and hard affinity), to see if we can
+ * send it where it prefers to (and can) run on. However, if the first step
+ * does not find any suitable and free pcpu, we fall back checking the hard
+ * affinity.
+ */
+#define BALANCE_SOFT_AFFINITY    0
+#define BALANCE_HARD_AFFINITY    1
+
+#define for_each_affinity_balance_step(step) \
+    for ( (step) = 0; (step) <= BALANCE_HARD_AFFINITY; (step)++ )
+
+/*
+ * Hard affinity balancing is always necessary and must never be skipped.
+ * But soft affinity need only be considered when it has a functionally
+ * different effect than other constraints (such as hard affinity, cpus
+ * online, or cpupools).
+ *
+ * Soft affinity only needs to be considered if:
+ * * The cpus in the cpupool are not a subset of soft affinity
+ * * The hard affinity is not a subset of soft affinity
+ * * There is an overlap between the soft affinity and the mask which is
+ *   currently being considered.
+ */
+static inline int has_soft_affinity(const struct vcpu *v,
+                                    const cpumask_t *mask)
+{
+    return !cpumask_subset(cpupool_domain_cpumask(v->domain),
+                           v->cpu_soft_affinity) &&
+           !cpumask_subset(v->cpu_hard_affinity, v->cpu_soft_affinity) &&
+           cpumask_intersects(v->cpu_soft_affinity, mask);
+}
+
+/*
+ * This function copies in mask the cpumask that should be used for a
+ * particular affinity balancing step. For the soft affinity one, the pcpus
+ * that are not part of v's hard affinity are filtered out from the result,
+ * to avoid running a vcpu where it would like, but is not allowed to!
+ */
+static inline void
+affinity_balance_cpumask(const struct vcpu *v, int step, cpumask_t *mask)
+{
+    if ( step == BALANCE_SOFT_AFFINITY )
+    {
+        cpumask_and(mask, v->cpu_soft_affinity, v->cpu_hard_affinity);
+
+        if ( unlikely(cpumask_empty(mask)) )
+            cpumask_copy(mask, v->cpu_hard_affinity);
+    }
+    else /* step == BALANCE_HARD_AFFINITY */
+        cpumask_copy(mask, v->cpu_hard_affinity);
+}
+
 #endif /* __XEN_SCHED_IF_H__ */
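
For reference, the consumer-side pattern these helpers enable (visible
in the __runq_tickle() and _csched_cpu_pick() hunks above) looks
roughly as follows; this is a minimal sketch, where sched_foo_place()
and its surrounding logic are hypothetical placeholder names, not part
of the patch:

  /*
   * Sketch of a scheduler using the factored-out helpers: first try
   * the soft-affinity step, then fall back to plain hard affinity.
   */
  static void sched_foo_place(const struct vcpu *v, cpumask_t *mask)
  {
      int step;

      for_each_affinity_balance_step( step )
      {
          /* Skip the soft step when soft affinity adds no information. */
          if ( step == BALANCE_SOFT_AFFINITY &&
               !has_soft_affinity(v, v->cpu_hard_affinity) )
              continue;

          /*
           * Soft step: mask = soft && hard (falling back to hard if
           * the intersection is empty); hard step: mask = hard only.
           */
          affinity_balance_cpumask(v, step, mask);

          /*
           * ... intersect mask with online cpus / idlers and try to
           * place v; if the soft step finds nothing suitable, just
           * fall through to the hard step.
           */
      }
  }

Note how affinity_balance_cpumask() already guarantees a non-empty mask
for the soft step, so callers need not special-case an empty soft &&
hard intersection themselves.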