
[5/7] xen: credit2: kick away vcpus not running within their soft-affinity

Message ID 149762245143.11899.458751530098326746.stgit@Solace.fritz.box (mailing list archive)
State New, archived

Commit Message

Dario Faggioli June 16, 2017, 2:14 p.m. UTC
If, during scheduling, we realize that the current vcpu
is running outside of its own soft-affinity, it would be
preferable to send it somewhere else.

Of course, that may not be possible, and if we're too
strict, we risk having vcpus sit in runqueues even while
there are idle pcpus (violating work-conservingness).
In fact, what if there are no pcpus, in the soft-affinity
mask of the vcpu in question, where it can run?

To make sure we don't fall into the trap described above,
only actually de-schedule the vcpu if there are idle and
not already tickled cpus from its soft-affinity where it
can run immediately.

If there is at least one such cpu, we let current
be preempted, so that csched2_context_saved() will put
it back in the runq, and runq_tickle() will wake up (one
of) those cpus.

If there is not even one, we let current run where it is,
as running outside its soft-affinity is still better than
not running at all.
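
The decision described above boils down to a few mask
operations. A minimal sketch, using plain 64-bit words as a
simplified stand-in for Xen's cpumask_* API (the names
soft_aff, idle, tickled and online here are illustrative, not
the actual scheduler structures):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified stand-in for the new check in runq_candidate():
 * preempt scurr only if some online pcpu in its soft-affinity
 * is idle and not already tickled, i.e., can pick scurr up
 * immediately. Bit i of each mask represents pcpu i.
 */
static bool soft_aff_preempt(uint64_t soft_aff, unsigned int cpu,
                             uint64_t idle, uint64_t tickled,
                             uint64_t online)
{
    /* cpu is already inside the soft-affinity: nothing to do. */
    if ( soft_aff & (1ULL << cpu) )
        return false;

    /* cpumask_and + cpumask_andnot + cpumask_intersects, on words. */
    return (soft_aff & idle & ~tickled & online) != 0;
}
```

For instance, a vcpu with soft-affinity {2,3} running on cpu 0
is preempted only if cpu 2 or 3 is online, idle and not already
tickled; otherwise it keeps running where it is.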

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Anshul Makkar <anshul.makkar@citrix.com>
---
 xen/common/sched_credit2.c |   40 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 38 insertions(+), 2 deletions(-)

Comments

George Dunlap July 25, 2017, 11:06 a.m. UTC | #1
On 06/16/2017 03:14 PM, Dario Faggioli wrote:
> If, during scheduling, we realize that the current vcpu
> is running outside of its own soft-affinity, it would be
> preferable to send it somewhere else.
> 
> Of course, that may not be possible, and if we're too
> strict, we risk having vcpus sit in runqueues even while
> there are idle pcpus (violating work-conservingness).
> In fact, what if there are no pcpus, in the soft-affinity
> mask of the vcpu in question, where it can run?
> 
> To make sure we don't fall into the trap described above,
> only actually de-schedule the vcpu if there are idle and
> not already tickled cpus from its soft-affinity where it
> can run immediately.
> 
> If there is at least one such cpu, we let current
> be preempted, so that csched2_context_saved() will put
> it back in the runq, and runq_tickle() will wake up (one
> of) those cpus.
> 
> If there is not even one, we let current run where it is,
> as running outside its soft-affinity is still better than
> not running at all.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

*This* one looks good:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

> ---
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Anshul Makkar <anshul.makkar@citrix.com>
> ---
>  xen/common/sched_credit2.c |   40 ++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 38 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index fb97ff7..5d8f25c 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -2637,6 +2637,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
>      struct csched2_vcpu *snext = NULL;
>      struct csched2_private *prv = csched2_priv(per_cpu(scheduler, cpu));
>      bool yield = __test_and_clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
> +    bool soft_aff_preempt = false;
>  
>      *skipped = 0;
>  
> @@ -2670,8 +2671,43 @@ runq_candidate(struct csched2_runqueue_data *rqd,
>          return scurr;
>      }
>  
> -    /* Default to current if runnable, idle otherwise */
> -    if ( vcpu_runnable(scurr->vcpu) )
> +    /* If scurr has a soft-affinity, let's check whether cpu is part of it */
> +    if ( !is_idle_vcpu(scurr->vcpu) &&
> +         has_soft_affinity(scurr->vcpu, scurr->vcpu->cpu_hard_affinity) )
> +    {
> +        affinity_balance_cpumask(scurr->vcpu, BALANCE_SOFT_AFFINITY,
> +                                 cpumask_scratch);
> +        if ( unlikely(!cpumask_test_cpu(cpu, cpumask_scratch)) )
> +        {
> +            cpumask_t *online = cpupool_domain_cpumask(scurr->vcpu->domain);
> +
> +            /* Ok, is any of the pcpus in scurr's soft-affinity idle? */
> +            cpumask_and(cpumask_scratch, cpumask_scratch, &rqd->idle);
> +            cpumask_andnot(cpumask_scratch, cpumask_scratch, &rqd->tickled);
> +            soft_aff_preempt = cpumask_intersects(cpumask_scratch, online);
> +        }
> +    }
> +
> +    /*
> +     * If scurr is runnable, and this cpu is in its soft-affinity, default to
> +     * it. We also default to it, even if cpu is not in its soft-affinity, if
> +     * there aren't any idle and not tickled cpus in its soft-affinity. In
> +     * fact, we don't want to risk leaving scurr in the runq and this cpu idle
> +     * only because scurr is running outside of its soft-affinity.
> +     *
> +     * On the other hand, if cpu is not in scurr's soft-affinity, and there
> +     * seem to be better options, go for them. That happens by defaulting to
> +     * idle here, which means scurr will be preempted, put back in runq, and
> +     * one of those idle and not tickled cpus from its soft-affinity will be
> +     * tickled to pick it up.
> +     *
> +     * Finally, if scurr does not have a valid soft-affinity, we also let it
> +     * continue to run here (in fact, soft_aff_preempt will still be false,
> +     * in this case).
> +     *
> +     * Of course, we also default to idle if scurr is not runnable.
> +     */
> +    if ( vcpu_runnable(scurr->vcpu) && !soft_aff_preempt )
>          snext = scurr;
>      else
>          snext = csched2_vcpu(idle_vcpu[cpu]);
>

Patch

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index fb97ff7..5d8f25c 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2637,6 +2637,7 @@  runq_candidate(struct csched2_runqueue_data *rqd,
     struct csched2_vcpu *snext = NULL;
     struct csched2_private *prv = csched2_priv(per_cpu(scheduler, cpu));
     bool yield = __test_and_clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
+    bool soft_aff_preempt = false;
 
     *skipped = 0;
 
@@ -2670,8 +2671,43 @@  runq_candidate(struct csched2_runqueue_data *rqd,
         return scurr;
     }
 
-    /* Default to current if runnable, idle otherwise */
-    if ( vcpu_runnable(scurr->vcpu) )
+    /* If scurr has a soft-affinity, let's check whether cpu is part of it */
+    if ( !is_idle_vcpu(scurr->vcpu) &&
+         has_soft_affinity(scurr->vcpu, scurr->vcpu->cpu_hard_affinity) )
+    {
+        affinity_balance_cpumask(scurr->vcpu, BALANCE_SOFT_AFFINITY,
+                                 cpumask_scratch);
+        if ( unlikely(!cpumask_test_cpu(cpu, cpumask_scratch)) )
+        {
+            cpumask_t *online = cpupool_domain_cpumask(scurr->vcpu->domain);
+
+            /* Ok, is any of the pcpus in scurr's soft-affinity idle? */
+            cpumask_and(cpumask_scratch, cpumask_scratch, &rqd->idle);
+            cpumask_andnot(cpumask_scratch, cpumask_scratch, &rqd->tickled);
+            soft_aff_preempt = cpumask_intersects(cpumask_scratch, online);
+        }
+    }
+
+    /*
+     * If scurr is runnable, and this cpu is in its soft-affinity, default to
+     * it. We also default to it, even if cpu is not in its soft-affinity, if
+     * there aren't any idle and not tickled cpus in its soft-affinity. In
+     * fact, we don't want to risk leaving scurr in the runq and this cpu idle
+     * only because scurr is running outside of its soft-affinity.
+     *
+     * On the other hand, if cpu is not in scurr's soft-affinity, and there
+     * seem to be better options, go for them. That happens by defaulting to
+     * idle here, which means scurr will be preempted, put back in runq, and
+     * one of those idle and not tickled cpus from its soft-affinity will be
+     * tickled to pick it up.
+     *
+     * Finally, if scurr does not have a valid soft-affinity, we also let it
+     * continue to run here (in fact, soft_aff_preempt will still be false,
+     * in this case).
+     *
+     * Of course, we also default to idle if scurr is not runnable.
+     */
+    if ( vcpu_runnable(scurr->vcpu) && !soft_aff_preempt )
         snext = scurr;
     else
         snext = csched2_vcpu(idle_vcpu[cpu]);