[v3,26/47] xen/sched: Change vcpu_migrate_*() to operate on schedule unit

Message ID 20190914085251.18816-27-jgross@suse.com (mailing list archive)
State Superseded
Series xen: add core scheduling support

Commit Message

Jürgen Groß Sept. 14, 2019, 8:52 a.m. UTC
Now that vcpu_migrate_start() and vcpu_migrate_finish() are used only
to ensure a vcpu is running on a suitable processor, they can be
switched to operate on schedule units instead of vcpus.

While doing that rename them accordingly and make the _start() variant
static. As it is needed anyway, call sync_vcpu_execstate() for each
vcpu of the unit when changing processors.

vcpu_move_locked() is switched to schedule unit, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- move tested invariant condition out of loop (Jan Beulich)
- add comment regarding call of vcpu_sync_execstate() (Jan Beulich)
---
 xen/common/schedule.c | 106 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 65 insertions(+), 41 deletions(-)
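
At a glance, the heart of the patch is the rename plus the switch of the
argument type, taken from the diff below:

    /* Before: the migration helpers operated on a single vcpu. */
    static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu);
    static void vcpu_migrate_start(struct vcpu *v);
    static void vcpu_migrate_finish(struct vcpu *v);

    /* After: they operate on a schedule unit and loop over its vcpus. */
    static void sched_unit_move_locked(struct sched_unit *unit,
                                       unsigned int new_cpu);
    static void sched_unit_migrate_start(struct sched_unit *unit);
    static void sched_unit_migrate_finish(struct sched_unit *unit);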

Comments

Dario Faggioli Sept. 24, 2019, 10:33 p.m. UTC | #1
On Sat, 2019-09-14 at 10:52 +0200, Juergen Gross wrote:
> Now that vcpu_migrate_start() and vcpu_migrate_finish() are used only
> to ensure a vcpu is running on a suitable processor 
>
Is this sentence like this (I mean with that "Now" at the beginning)
because it was --in previous versions of the series-- right after the
patch that changed vcpu_force_reschedule()?

If yes, then that is not the case any longer, so we may want to
rephrase.

> they can be
> switched to operate on schedule units instead of vcpus.
> 
> While doing that rename them accordingly and make the _start()
> variant
> static. 
>
What does this mean/refer to?

Wasn't vcpu_migrate_start() static already?

> As it is needed anyway, call sync_vcpu_execstate() for each
> vcpu of the unit when changing processors.
> 
Again, what do you mean with "As it is needed anyway"?

> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index 0bd9f0d278..70271cdea2 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> 
>  /*
>   * Initiating migration
>   *
> - * In order to migrate, we need the vcpu in question to have stopped
> + * In order to migrate, we need the unit in question to have stopped
>   * running and had sched_sleep() called (to take it off any
>
Might be me (not a native speaker), but this reads weird. "and have
called sched_sleep()" or "and sched_sleep() to have been called" would
sound better.

It's, of course, not your fault, but I guess we can consider adjusting
it, since we are touching the line above it.

Or maybe you also want to queue it up for the cleanup series?

Regards
Jürgen Groß Sept. 25, 2019, 12:04 p.m. UTC | #2
On 25.09.19 00:33, Dario Faggioli wrote:
> On Sat, 2019-09-14 at 10:52 +0200, Juergen Gross wrote:
>> Now that vcpu_migrate_start() and vcpu_migrate_finish() are used only
>> to ensure a vcpu is running on a suitable processor
>>
> Is this sentence like this (I mean with that "Now" at the beginning)
> because it was --in previous versions of the series-- right after the
> patch that changed vcpu_force_reschedule()?
> 
> If yes, then that is not the case any longer, so we may want to
> rephrase.

Indeed.

> 
>> they can be
>> switched to operate on schedule units instead of vcpus.
>>
>> While doing that rename them accordingly and make the _start()
>> variant
>> static.
>>
> What does this mean/refer to?
> 
> Wasn't vcpu_migrate_start() static already?

Wei cheated. He modified it with commit 9f8d606b4384408. :-)

> 
>> As it is needed anyway, call sync_vcpu_execstate() for each
>> vcpu of the unit when changing processors.
>>
> Again, what do you mean with "As it is needed anyway"?

When moving from one cpu to another one the state must be saved in
struct vcpu (in contrast to being held partially in registers or on the
stack in case only the idle vcpu was scheduled afterwards on the old
cpu). This is done by sync_vcpu_execstate(). Without the explicit call
it would be done either when the vcpu is being scheduled on the new cpu
or if another non-idle vcpu is becoming active on the old cpu.
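
The explicit call is the one added near the end of
sched_unit_migrate_finish() in the patch below; note that it is done
only if the unit really changed pcpus:

    if ( old_cpu != new_cpu )
    {
        /* Vcpus are moved to other pcpus, commit their states to memory. */
        for_each_sched_unit_vcpu ( unit, v )
            sync_vcpu_execstate(v);
        sched_move_irqs(unit);
    }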

> 
>> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
>> index 0bd9f0d278..70271cdea2 100644
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>>
>>   /*
>>    * Initiating migration
>>    *
>> - * In order to migrate, we need the vcpu in question to have stopped
>> + * In order to migrate, we need the unit in question to have stopped
>>    * running and had sched_sleep() called (to take it off any
>>
> Might be me (not a native speaker), but this reads weird. "and have
> called sched_sleep()" or "and sched_sleep() to have been called" would
> sound better.
> 
> It's, of course, not your fault, but I guess we can consider adjusting
> it, since we are touching the line above it.
> 
> Or maybe you also want to queue it up for the cleanup series?

No, I'll change it.


Juergen
Dario Faggioli Sept. 25, 2019, 4:37 p.m. UTC | #3
On Wed, 2019-09-25 at 14:04 +0200, Jürgen Groß wrote:
> On 25.09.19 00:33, Dario Faggioli wrote:
> > 
> > > As it is needed anyway, call sync_vcpu_execstate() for each
> > > vcpu of the unit when changing processors.
> > > 
> > Again, what do you mean with "As it is needed anyway"?
> 
> When moving from one cpu to another one the state must be saved in
> struct vcpu (in contrast to being held partially in registers or on
> the
> stack in case only the idle vcpu was scheduled afterwards on the old
> cpu). 
>
Sure.

> This is done by sync_vcpu_execstate(). Without the explicit call
> it would be done either when the vcpu is being scheduled on the new
> cpu
> or if another non-idle vcpu is becoming active on the old cpu.
> 
Right. And does this then mean that we're now doing it twice (i.e.,
here and in either of the other places you mentioned)?

Regards
Jürgen Groß Sept. 26, 2019, 4:51 a.m. UTC | #4
On 25.09.19 18:37, Dario Faggioli wrote:
> On Wed, 2019-09-25 at 14:04 +0200, Jürgen Groß wrote:
>> On 25.09.19 00:33, Dario Faggioli wrote:
>>>
>>>> As it is needed anyway, call sync_vcpu_execstate() for each
>>>> vcpu of the unit when changing processors.
>>>>
>>> Again, what do you mean with "As it is needed anyway"?
>>
>> When moving from one cpu to another one the state must be saved in
>> struct vcpu (in contrast to being held partially in registers or on
>> the
>> stack in case only the idle vcpu was scheduled afterwards on the old
>> cpu).
>>
> Sure.
> 
>> This is done by sync_vcpu_execstate(). Without the explicit call
>> it would be done either when the vcpu is being scheduled on the new
>> cpu
>> or if another non-idle vcpu is becoming active on the old cpu.
>>
> Right. And does this then mean that we're now doing it twice (i.e.,
> here and in either of the other places you mentioned)?

No, it checks whether it has to do anything.
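
Roughly, the x86 side looks like this (a simplified sketch only; see
sync_vcpu_execstate() in xen/arch/x86/domain.c for the real code):

    void sync_vcpu_execstate(struct vcpu *v)
    {
        if ( v->dirty_cpu == smp_processor_id() )
            /* State is still live on the local cpu: flush it directly. */
            sync_local_execstate();
        else if ( is_vcpu_dirty_cpu(v->dirty_cpu) )
            /* State is live on a remote cpu: have it flushed via IPI. */
            flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
        /* Otherwise the state has been committed already: nothing to do. */
    }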


Juergen
Dario Faggioli Sept. 26, 2019, 1:29 p.m. UTC | #5
On Thu, 2019-09-26 at 06:51 +0200, Jürgen Groß wrote:
> On 25.09.19 18:37, Dario Faggioli wrote:
> > > This is done by sync_vcpu_execstate(). Without the explicit call
> > > it would be done either when the vcpu is being scheduled on the
> > > new
> > > cpu
> > > or if another non-idle vcpu is becoming active on the old cpu.
> > > 
> > Right. And does this then mean that we're now doing it twice (i.e.,
> > here and in either of the other places you mentioned)?
> 
> No, it checks whether it has to do anything.
> 
Which I knew, but keep forgetting. :-(

Thanks, and sorry for the noise.

Regards

Patch

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0bd9f0d278..70271cdea2 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -744,35 +744,40 @@  void vcpu_unblock(struct vcpu *v)
 }
 
 /*
- * Do the actual movement of a vcpu from old to new CPU. Locks for *both*
+ * Do the actual movement of an unit from old to new CPU. Locks for *both*
  * CPUs needs to have been taken already when calling this!
  */
-static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
+static void sched_unit_move_locked(struct sched_unit *unit,
+                                   unsigned int new_cpu)
 {
-    unsigned int old_cpu = v->processor;
+    unsigned int old_cpu = unit->res->master_cpu;
+    struct vcpu *v;
 
     /*
      * Transfer urgency status to new CPU before switching CPUs, as
      * once the switch occurs, v->is_urgent is no longer protected by
      * the per-CPU scheduler lock we are holding.
      */
-    if ( unlikely(v->is_urgent) && (old_cpu != new_cpu) )
+    for_each_sched_unit_vcpu ( unit, v )
     {
-        atomic_inc(&per_cpu(sched_urgent_count, new_cpu));
-        atomic_dec(&per_cpu(sched_urgent_count, old_cpu));
+        if ( unlikely(v->is_urgent) && (old_cpu != new_cpu) )
+        {
+            atomic_inc(&per_cpu(sched_urgent_count, new_cpu));
+            atomic_dec(&per_cpu(sched_urgent_count, old_cpu));
+        }
     }
 
     /*
      * Actual CPU switch to new CPU.  This is safe because the lock
      * pointer can't change while the current lock is held.
      */
-    sched_migrate(vcpu_scheduler(v), v->sched_unit, new_cpu);
+    sched_migrate(unit_scheduler(unit), unit, new_cpu);
 }
 
 /*
  * Initiating migration
  *
- * In order to migrate, we need the vcpu in question to have stopped
+ * In order to migrate, we need the unit in question to have stopped
  * running and had sched_sleep() called (to take it off any
  * runqueues, for instance); and if it is currently running, it needs
  * to be scheduled out.  Finally, we need to hold the scheduling locks
@@ -788,37 +793,45 @@  static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
  * should be called like this:
  *
  *     lock = unit_schedule_lock_irq(unit);
- *     vcpu_migrate_start(v);
+ *     sched_unit_migrate_start(unit);
  *     unit_schedule_unlock_irq(lock, unit)
- *     vcpu_migrate_finish(v);
+ *     sched_unit_migrate_finish(unit);
  *
- * vcpu_migrate_finish() will do the work now if it can, or simply
- * return if it can't (because v is still running); in that case
- * vcpu_migrate_finish() will be called by context_saved().
+ * sched_unit_migrate_finish() will do the work now if it can, or simply
+ * return if it can't (because unit is still running); in that case
+ * sched_unit_migrate_finish() will be called by context_saved().
  */
-static void vcpu_migrate_start(struct vcpu *v)
+static void sched_unit_migrate_start(struct sched_unit *unit)
 {
-    set_bit(_VPF_migrating, &v->pause_flags);
-    vcpu_sleep_nosync_locked(v);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+    {
+        set_bit(_VPF_migrating, &v->pause_flags);
+        vcpu_sleep_nosync_locked(v);
+    }
 }
 
-static void vcpu_migrate_finish(struct vcpu *v)
+static void sched_unit_migrate_finish(struct sched_unit *unit)
 {
     unsigned long flags;
     unsigned int old_cpu, new_cpu;
     spinlock_t *old_lock, *new_lock;
     bool_t pick_called = 0;
+    struct vcpu *v;
 
     /*
-     * If the vcpu is currently running, this will be handled by
+     * If the unit is currently running, this will be handled by
      * context_saved(); and in any case, if the bit is cleared, then
      * someone else has already done the work so we don't need to.
      */
-    if ( v->sched_unit->is_running ||
-         !test_bit(_VPF_migrating, &v->pause_flags) )
+    if ( unit->is_running )
         return;
+    for_each_sched_unit_vcpu ( unit, v )
+        if ( !test_bit(_VPF_migrating, &v->pause_flags) )
+            return;
 
-    old_cpu = new_cpu = v->processor;
+    old_cpu = new_cpu = unit->res->master_cpu;
     for ( ; ; )
     {
         /*
@@ -831,7 +844,7 @@  static void vcpu_migrate_finish(struct vcpu *v)
 
         sched_spin_lock_double(old_lock, new_lock, &flags);
 
-        old_cpu = v->processor;
+        old_cpu = unit->res->master_cpu;
         if ( old_lock == get_sched_res(old_cpu)->schedule_lock )
         {
             /*
@@ -840,15 +853,15 @@  static void vcpu_migrate_finish(struct vcpu *v)
              */
             if ( pick_called &&
                  (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
-                 cpumask_test_cpu(new_cpu, v->sched_unit->cpu_hard_affinity) &&
-                 cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
+                 cpumask_test_cpu(new_cpu, unit->cpu_hard_affinity) &&
+                 cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) )
                 break;
 
             /* Select a new CPU. */
-            new_cpu = sched_pick_resource(vcpu_scheduler(v),
-                                          v->sched_unit)->master_cpu;
+            new_cpu = sched_pick_resource(unit_scheduler(unit),
+                                          unit)->master_cpu;
             if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
-                 cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
+                 cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) )
                 break;
             pick_called = 1;
         }
@@ -869,22 +882,35 @@  static void vcpu_migrate_finish(struct vcpu *v)
      * because they both happen in (different) spinlock regions, and those
      * regions are strictly serialised.
      */
-    if ( v->sched_unit->is_running ||
-         !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
+    if ( unit->is_running )
     {
         sched_spin_unlock_double(old_lock, new_lock, flags);
         return;
     }
+    for_each_sched_unit_vcpu ( unit, v )
+    {
+        if ( !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
+        {
+            sched_spin_unlock_double(old_lock, new_lock, flags);
+            return;
+        }
+    }
 
-    vcpu_move_locked(v, new_cpu);
+    sched_unit_move_locked(unit, new_cpu);
 
     sched_spin_unlock_double(old_lock, new_lock, flags);
 
     if ( old_cpu != new_cpu )
-        sched_move_irqs(v->sched_unit);
+    {
+        /* Vcpus are moved to other pcpus, commit their states to memory. */
+        for_each_sched_unit_vcpu ( unit, v )
+            sync_vcpu_execstate(v);
+        sched_move_irqs(unit);
+    }
 
     /* Wake on new CPU. */
-    vcpu_wake(v);
+    for_each_sched_unit_vcpu ( unit, v )
+        vcpu_wake(v);
 }
 
 static bool sched_check_affinity_broken(const struct sched_unit *unit)
@@ -1033,11 +1059,9 @@  int cpu_disable_scheduler(unsigned int cpu)
              *  * the scheduler will always find a suitable solution, or
              *    things would have failed before getting in here.
              */
-            /* TODO: multiple vcpus per unit. */
-            vcpu_migrate_start(unit->vcpu_list);
+            sched_unit_migrate_start(unit);
             unit_schedule_unlock_irqrestore(lock, flags, unit);
-
-            vcpu_migrate_finish(unit->vcpu_list);
+            sched_unit_migrate_finish(unit);
 
             /*
              * The only caveat, in this case, is that if a vcpu active in
@@ -1121,14 +1145,14 @@  static int vcpu_set_affinity(
             ASSERT(which == unit->cpu_soft_affinity);
             sched_set_affinity(v, NULL, affinity);
         }
-        vcpu_migrate_start(v);
+        sched_unit_migrate_start(unit);
     }
 
     unit_schedule_unlock_irq(lock, unit);
 
     domain_update_node_affinity(v->domain);
 
-    vcpu_migrate_finish(v);
+    sched_unit_migrate_finish(unit);
 
     return ret;
 }
@@ -1389,12 +1413,12 @@  int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
 
     migrate = !ret && !cpumask_test_cpu(v->processor, unit->cpu_hard_affinity);
     if ( migrate )
-        vcpu_migrate_start(v);
+        sched_unit_migrate_start(unit);
 
     unit_schedule_unlock_irq(lock, unit);
 
     if ( migrate )
-        vcpu_migrate_finish(v);
+        sched_unit_migrate_finish(unit);
 
     return ret;
 }
@@ -1811,7 +1835,7 @@  void context_saved(struct vcpu *prev)
 
     sched_context_saved(vcpu_scheduler(prev), prev->sched_unit);
 
-    vcpu_migrate_finish(prev);
+    sched_unit_migrate_finish(prev->sched_unit);
 }
 
 /* The scheduler timer: force a run through the scheduler */