[07/60] xen/sched: build a linked list of struct sched_unit

Message ID 20190528103313.1343-8-jgross@suse.com
State New, archived
Series
  • xen: add core scheduling support

Commit Message

Jürgen Groß May 28, 2019, 10:32 a.m. UTC
In order to make it easy to iterate over sched_unit elements of a
domain build a single linked list and add an iterator for it. The new
list is guarded by the same mechanisms as the vcpu linked list as it
is modified only via vcpu_create() or vcpu_destroy().

For completeness add another iterator for_each_sched_unit_vcpu() which
will iterate over all vcpus if a sched_unit (right now only one). This
will be needed later for larger scheduling granularity (e.g. cores).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/schedule.c   | 56 ++++++++++++++++++++++++++++++++++++++++++-------
 xen/include/xen/sched.h |  9 ++++++++
 2 files changed, 58 insertions(+), 7 deletions(-)

Comments

Dario Faggioli July 19, 2019, 12:01 a.m. UTC | #1
On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
> In order to make it easy to iterate over sched_unit elements of a
> domain build a single linked list and add an iterator for it.
>
How about a ',' between domain and build?

> For completeness add another iterator for_each_sched_unit_vcpu() which
> will iterate over all vcpus if a sched_unit (right now only one). This

"over all vcpus of a sched_unit" ?

> will be needed later for larger scheduling granularity (e.g. cores).
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
One question:

> @@ -279,8 +279,16 @@ struct vcpu
>  struct sched_unit {
>      struct vcpu           *vcpu;
>      void                  *priv;      /* scheduler private data */
> +    struct sched_unit     *next_in_list;
>  };
>  
> +#define for_each_sched_unit(d, e)                                         \
> +    for ( (e) = (d)->sched_unit_list; (e) != NULL; (e) = (e)->next_in_list )
> +
> +#define for_each_sched_unit_vcpu(i, v)                                    \
> +    for ( (v) = (i)->vcpu; (v) != NULL && (v)->sched_unit == (i);         \
> +          (v) = (v)->next_in_list )
> +
>
So, here... sorry if it's me not seeing it, but why the 
(v)->sched_unit == (i) check is necessary?

Do we expect to put in the list of vcpus of a particular unit, vcpus
that are in another unit?

Regards
Jürgen Groß July 19, 2019, 5:07 a.m. UTC | #2
On 19.07.19 02:01, Dario Faggioli wrote:
> On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
>> In order to make it easy to iterate over sched_unit elements of a
>> domain build a single linked list and add an iterator for it.
>>
> How about a ',' between domain and build?

Okay.

> 
>> For completeness add another iterator for_each_sched_unit_vcpu() which
>> will iterate over all vcpus if a sched_unit (right now only one). This
> 
> "over all vcpus of a sched_unit" ?

Oh, of course!

> 
>> will be needed later for larger scheduling granularity (e.g. cores).
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> One question:
> 
>> @@ -279,8 +279,16 @@ struct vcpu
>>   struct sched_unit {
>>       struct vcpu           *vcpu;
>>       void                  *priv;      /* scheduler private data */
>> +    struct sched_unit     *next_in_list;
>>   };
>>   
>> +#define for_each_sched_unit(d, e)                                         \
>> +    for ( (e) = (d)->sched_unit_list; (e) != NULL; (e) = (e)->next_in_list )
>> +
>> +#define for_each_sched_unit_vcpu(i, v)                                    \
>> +    for ( (v) = (i)->vcpu; (v) != NULL && (v)->sched_unit == (i);         \
>> +          (v) = (v)->next_in_list )
>> +
>>
> So, here... sorry if it's me not seeing it, but why the
> (v)->sched_unit == (i) check is necessary?
> 
> Do we expect to put in the list of vcpus of a particular unit, vcpus
> that are in another unit?

Yes. I'm making use of the fact that all vcpus in a unit are consecutive
as I'm re-using the already existing list of vcpus in a domain:

dom->vcpu0->vcpu1->vcpu2->vcpu3
       ^             ^
       !             !
unit0-+             !
                     !
unit2---------------+


Juergen
Dario Faggioli July 19, 2019, 5:16 p.m. UTC | #3
On Fri, 2019-07-19 at 07:07 +0200, Juergen Gross wrote:
> On 19.07.19 02:01, Dario Faggioli wrote:
> > On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
> > > 
> > How about a ',' between domain and build?
> 
> Okay.

> > "over all vcpus of a sched_unit" ?
> 
> Oh, of course!
> 
Thanks.

> > One question:
> > 
> > > @@ -279,8 +279,16 @@ struct vcpu
> > >   struct sched_unit {
> > >       struct vcpu           *vcpu;
> > >       void                  *priv;      /* scheduler private data */
> > > +    struct sched_unit     *next_in_list;
> > >   };
> > >   
> > > +#define for_each_sched_unit(d, e)                                         \
> > > +    for ( (e) = (d)->sched_unit_list; (e) != NULL; (e) = (e)->next_in_list )
> > > +
> > > +#define for_each_sched_unit_vcpu(i, v)                                    \
> > > +    for ( (v) = (i)->vcpu; (v) != NULL && (v)->sched_unit == (i);         \
> > > +          (v) = (v)->next_in_list )
> > > +
> > > 
> > So, here... sorry if it's me not seeing it, but why the
> > (v)->sched_unit == (i) check is necessary?
> > 
> > Do we expect to put in the list of vcpus of a particular unit,
> > vcpus
> > that are in another unit?
> 
> Yes. 
>
Ah!

> I'm making use of the fact that all vcpus in a unit are consecutive
> as I'm re-using the already existing list of vcpus in a domain:
> 
> dom->vcpu0->vcpu1->vcpu2->vcpu3
>        ^             ^
>        !             !
> unit0-+             !
>                      !
> unit2---------------+
> 
Ok, I see. Can you add a short comment, above the for_each_xxx
construct, about that?

"All vcpus from all sched units are kept in the same list. Only iterate
over the ones from a particular unit"

Or something like this.

Regards

Patch

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 86a43f7192..49d25489ef 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -249,6 +249,52 @@  static void sched_spin_unlock_double(spinlock_t *lock1, spinlock_t *lock2,
     spin_unlock_irqrestore(lock1, flags);
 }
 
+static void sched_free_unit(struct sched_unit *unit)
+{
+    struct sched_unit *prev_unit;
+    struct domain *d = unit->vcpu->domain;
+
+    if ( d->sched_unit_list == unit )
+        d->sched_unit_list = unit->next_in_list;
+    else
+    {
+        for_each_sched_unit ( d, prev_unit )
+        {
+            if ( prev_unit->next_in_list == unit )
+            {
+                prev_unit->next_in_list = unit->next_in_list;
+                break;
+            }
+        }
+    }
+
+    unit->vcpu->sched_unit = NULL;
+    xfree(unit);
+}
+
+static struct sched_unit *sched_alloc_unit(struct vcpu *v)
+{
+    struct sched_unit *unit, **prev_unit;
+    struct domain *d = v->domain;
+
+    if ( (unit = xzalloc(struct sched_unit)) == NULL )
+        return NULL;
+
+    v->sched_unit = unit;
+    unit->vcpu = v;
+
+    for ( prev_unit = &d->sched_unit_list; *prev_unit;
+          prev_unit = &(*prev_unit)->next_in_list )
+        if ( (*prev_unit)->next_in_list &&
+             (*prev_unit)->next_in_list->vcpu->vcpu_id > v->vcpu_id )
+            break;
+
+    unit->next_in_list = *prev_unit;
+    *prev_unit = unit;
+
+    return unit;
+}
+
 int sched_init_vcpu(struct vcpu *v, unsigned int processor)
 {
     struct domain *d = v->domain;
@@ -256,10 +302,8 @@  int sched_init_vcpu(struct vcpu *v, unsigned int processor)
 
     v->processor = processor;
 
-    if ( (unit = xzalloc(struct sched_unit)) == NULL )
+    if ( (unit = sched_alloc_unit(v)) == NULL )
         return 1;
-    v->sched_unit = unit;
-    unit->vcpu = v;
 
     /* Initialise the per-vcpu timers. */
     init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
@@ -272,8 +316,7 @@  int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     unit->priv = sched_alloc_vdata(dom_scheduler(d), unit, d->sched_priv);
     if ( unit->priv == NULL )
     {
-        v->sched_unit = NULL;
-        xfree(unit);
+        sched_free_unit(unit);
         return 1;
     }
 
@@ -416,8 +459,7 @@  void sched_destroy_vcpu(struct vcpu *v)
         atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
     sched_remove_unit(vcpu_scheduler(v), unit);
     sched_free_vdata(vcpu_scheduler(v), unit->priv);
-    xfree(unit);
-    v->sched_unit = NULL;
+    sched_free_unit(unit);
 }
 
 int sched_init_domain(struct domain *d, int poolid)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index f549ad60d1..4da1ab201d 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -279,8 +279,16 @@  struct vcpu
 struct sched_unit {
     struct vcpu           *vcpu;
     void                  *priv;      /* scheduler private data */
+    struct sched_unit     *next_in_list;
 };
 
+#define for_each_sched_unit(d, e)                                         \
+    for ( (e) = (d)->sched_unit_list; (e) != NULL; (e) = (e)->next_in_list )
+
+#define for_each_sched_unit_vcpu(i, v)                                    \
+    for ( (v) = (i)->vcpu; (v) != NULL && (v)->sched_unit == (i);         \
+          (v) = (v)->next_in_list )
+
 /* Per-domain lock can be recursively acquired in fault handlers. */
 #define domain_lock(d) spin_lock_recursive(&(d)->domain_lock)
 #define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock)
@@ -339,6 +347,7 @@  struct domain
 
     /* Scheduling. */
     void            *sched_priv;    /* scheduler-specific data */
+    struct sched_unit *sched_unit_list;
     struct cpupool  *cpupool;
 
     struct domain   *next_in_list;