
[06/60] xen/sched: move per-vcpu scheduler private data pointer to sched_unit

Message ID 20190528103313.1343-7-jgross@suse.com (mailing list archive)
State New, archived
Series xen: add core scheduling support

Commit Message

Jürgen Groß May 28, 2019, 10:32 a.m. UTC
This prepares making the different schedulers vcpu agnostic.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched_arinc653.c |  4 ++--
 xen/common/sched_credit.c   |  6 +++---
 xen/common/sched_credit2.c  | 10 +++++-----
 xen/common/sched_null.c     |  4 ++--
 xen/common/sched_rt.c       |  4 ++--
 xen/common/schedule.c       | 24 ++++++++++++------------
 xen/include/xen/sched.h     |  2 +-
 7 files changed, 27 insertions(+), 27 deletions(-)

Comments

Dario Faggioli July 18, 2019, 10:52 p.m. UTC | #1
On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
> This prepares making the different schedulers vcpu agnostic.
> 
Ok, but the scheduler private data is, for all the schedulers,
per-vcpu scheduler data such as struct csched2_vcpu.

After this patch we have (sticking to Credit2 as an example)
csched2_vcpu allocated by a function called csched2_alloc_vdata() but
stored on a per-sched_unit basis.

Similarly, we have an accessor method called csched2_vcpu() that
returns the struct csched2_vcpu stored in the per-unit private
space.
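
To make the temporary mismatch concrete, here is a minimal sketch of
the Credit2 side after this patch (only fields visible in the hunks
below are shown; the accessor is verbatim from the sched_credit2.c
hunk):

    /* Per-vcpu data, still carrying its vcpu-era name. */
    struct csched2_vcpu {
        struct csched2_dom *sdom;              /* per-domain data */
        struct csched2_runqueue_data *rqd;     /* assigned runqueue */
        /* ... */
    };

    /* The accessor keeps its old name and signature, but now reaches
     * the data through the unit's private pointer: */
    static inline struct csched2_vcpu *csched2_vcpu(const struct vcpu *v)
    {
        return v->sched_unit->priv;            /* was: v->sched_priv */
    }

And csched2_alloc_vdata() still allocates one of these per vcpu; it is
just that sched_init_vcpu() now stashes the result in unit->priv rather
than in v->sched_priv (see the schedule.c hunk).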

But shouldn't then the struct be called csched2_unit, and cited
functions be called csched2_alloc_udata() and csched2_unit()?
Now, I know that these transformations happen later in the series.
And, this time, I'm not asking to consolidate the patches.

However:
- can the changelog of this patch be a little more explicit, not only
  informing that this is a preparatory patch, but also explaining
  briefly the temporary inconsistency?
- could the patches that deal with this be grouped together, so that 
  they are close to each other in the series (e.g., this patch, the 
  renaming hunks of patch 10 and patches that are currently 20 to 24)?
  Or are there dependencies that I am not considering?

Regards
Jürgen Groß July 19, 2019, 5:03 a.m. UTC | #2
On 19.07.19 00:52, Dario Faggioli wrote:
> On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
>> This prepares making the different schedulers vcpu agnostic.
>>
> Ok, but the scheduler private data is, for all the schedulers,
> per-vcpu scheduler data such as struct csched2_vcpu.
> 
> After this patch we have (sticking to Credit2 as an example)
> csched2_vcpu allocated by a function called csched2_alloc_vdata() but
> stored on a per-sched_unit basis.
> 
> Similarly, we have an accessor method called csched2_vcpu() that
> returns the struct csched2_vcpu stored in the per-unit private
> space.
> 
> But shouldn't then the struct be called csched2_unit, and cited
> functions be called csched2_alloc_udata() and csched2_unit()?
> Now, I know that these transformations happen later in the series.
> And, this time, I'm not asking to consolidate the patches.
> 
> However:
> - can the changelog of this patch be a little more explicit, not only
>    informing that this is a preparatory patch, but also explaining
>    briefly the temporary inconsistency?

Sure.

> - could the patches that deal with this be grouped together, so that
>    they are close to each other in the series (e.g., this patch, the
>    renaming hunks of patch 10 and patches that are currently 20 to 24)?
>    Or are there dependencies that I am not considering?

There are some patches which could be reordered, but I'm not sure the
needed work is worth it. Moving the patches you named closer to each
other would rip other closely related patches apart.

In the end it will make no sense to apply only some patches of the main
series. The large number of patches is meant only to make review easier.


Juergen
Dario Faggioli July 19, 2019, 5:10 p.m. UTC | #3
On Fri, 2019-07-19 at 07:03 +0200, Juergen Gross wrote:
> On 19.07.19 00:52, Dario Faggioli wrote:
> > On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
> > > This prepares making the different schedulers vcpu agnostic.
>
> > But shouldn't then the struct be called csched2_unit, and cited
> > functions be called csched2_alloc_udata() and csched2_unit()?
> > Now, I know that these transformations happen later in the series.
> > And, this time, I'm not asking to consolidate the patches.
> > 
> > However:
> > - can the changelog of this patch be a little more explicit, not only
> >    informing that this is a preparatory patch, but also explaining
> >    briefly the temporary inconsistency?
> 
> Sure.
> 
Ok, thanks.

> > - could the patches that deal with this be grouped together, so that
> >    they are close to each other in the series [...]
> 
> There are some patches which could be reordered, but I'm not sure the
> needed work is worth it. Moving the patches you named closer to each
> other would rip other closely related patches apart.
>
Yes, I appreciate that.
> 
> In the end it will make no sense to apply only some patches of the main
> series. The large number of patches is meant only to make review easier.
>
And yet, this sometimes happens, especially for big series, and it is
exactly what I was thinking of when making this comment.

Anyway, let's go with an updated changelog for now... And I'll form a
more solid opinion after having looked at the rest of the series.

Regards

Patch

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 283730b3f8..840891318e 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -53,7 +53,7 @@ 
  * Return a pointer to the ARINC 653-specific scheduler data information
  * associated with the given VCPU (vc)
  */
-#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_priv)
+#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_unit->priv)
 
 /**
  * Return the global scheduler private data given the scheduler ops pointer
@@ -647,7 +647,7 @@  a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 
     ASSERT(!pdata && svc && is_idle_vcpu(svc->vc));
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     return &sd->_lock;
 }
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index eba4db38bb..4cfef189aa 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -83,7 +83,7 @@ 
     ((struct csched_private *)((_ops)->sched_data))
 #define CSCHED_PCPU(_c)     \
     ((struct csched_pcpu *)per_cpu(schedule_data, _c).sched_priv)
-#define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
+#define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_unit->priv)
 #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
 #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
 
@@ -641,7 +641,7 @@  csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 
     ASSERT(svc && is_idle_vcpu(svc->vcpu));
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     /*
      * We are holding the runqueue lock already (it's been taken in
@@ -1022,7 +1022,7 @@  static void
 csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu;
-    struct csched_vcpu *svc = vc->sched_priv;
+    struct csched_vcpu *svc = unit->priv;
     spinlock_t *lock;
 
     BUG_ON( is_idle_vcpu(vc) );
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index def8d87484..86b44dc6cf 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -572,7 +572,7 @@  static inline struct csched2_pcpu *csched2_pcpu(unsigned int cpu)
 
 static inline struct csched2_vcpu *csched2_vcpu(const struct vcpu *v)
 {
-    return v->sched_priv;
+    return v->sched_unit->priv;
 }
 
 static inline struct csched2_dom *csched2_dom(const struct domain *d)
@@ -970,7 +970,7 @@  _runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
 static void
 runq_assign(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu *svc = vc->sched_priv;
+    struct csched2_vcpu *svc = vc->sched_unit->priv;
 
     ASSERT(svc->rqd == NULL);
 
@@ -997,7 +997,7 @@  _runq_deassign(struct csched2_vcpu *svc)
 static void
 runq_deassign(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu *svc = vc->sched_priv;
+    struct csched2_vcpu *svc = vc->sched_unit->priv;
 
     ASSERT(svc->rqd == c2rqd(ops, vc->processor));
 
@@ -3108,7 +3108,7 @@  static void
 csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu;
-    struct csched2_vcpu *svc = vc->sched_priv;
+    struct csched2_vcpu *svc = unit->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
@@ -3887,7 +3887,7 @@  csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     ASSERT(!local_irq_is_enabled());
     write_lock(&prv->lock);
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     rqi = init_pdata(prv, pdata, cpu);
 
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index bf52c84b0f..b622c4d7dc 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -117,7 +117,7 @@  static inline struct null_private *null_priv(const struct scheduler *ops)
 
 static inline struct null_vcpu *null_vcpu(const struct vcpu *v)
 {
-    return v->sched_priv;
+    return v->sched_unit->priv;
 }
 
 static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu,
@@ -392,7 +392,7 @@  static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 
     ASSERT(nvc && is_idle_vcpu(nvc->vcpu));
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     /*
      * We are holding the runqueue lock already (it's been taken in
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 63ec732157..700ee9353e 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -235,7 +235,7 @@  static inline struct rt_private *rt_priv(const struct scheduler *ops)
 
 static inline struct rt_vcpu *rt_vcpu(const struct vcpu *vcpu)
 {
-    return vcpu->sched_priv;
+    return vcpu->sched_unit->priv;
 }
 
 static inline struct list_head *rt_runq(const struct scheduler *ops)
@@ -761,7 +761,7 @@  rt_switch_sched(struct scheduler *new_ops, unsigned int cpu,
         dprintk(XENLOG_DEBUG, "RTDS: timer initialized on cpu %u\n", cpu);
     }
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     return &prv->lock;
 }
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 3ff88096d8..86a43f7192 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -269,8 +269,8 @@  int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     init_timer(&v->poll_timer, poll_timer_fn,
                v, v->processor);
 
-    v->sched_priv = sched_alloc_vdata(dom_scheduler(d), unit, d->sched_priv);
-    if ( v->sched_priv == NULL )
+    unit->priv = sched_alloc_vdata(dom_scheduler(d), unit, d->sched_priv);
+    if ( unit->priv == NULL )
     {
         v->sched_unit = NULL;
         xfree(unit);
@@ -365,7 +365,7 @@  int sched_move_domain(struct domain *d, struct cpupool *c)
     {
         spinlock_t *lock;
 
-        vcpudata = v->sched_priv;
+        vcpudata = v->sched_unit->priv;
 
         migrate_timer(&v->periodic_timer, new_p);
         migrate_timer(&v->singleshot_timer, new_p);
@@ -383,7 +383,7 @@  int sched_move_domain(struct domain *d, struct cpupool *c)
          */
         spin_unlock_irq(lock);
 
-        v->sched_priv = vcpu_priv[v->vcpu_id];
+        v->sched_unit->priv = vcpu_priv[v->vcpu_id];
         if ( !d->is_dying )
             sched_move_irqs(v);
 
@@ -415,7 +415,7 @@  void sched_destroy_vcpu(struct vcpu *v)
     if ( test_and_clear_bool(v->is_urgent) )
         atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
     sched_remove_unit(vcpu_scheduler(v), unit);
-    sched_free_vdata(vcpu_scheduler(v), v->sched_priv);
+    sched_free_vdata(vcpu_scheduler(v), unit->priv);
     xfree(unit);
     v->sched_unit = NULL;
 }
@@ -1593,6 +1593,7 @@  static int cpu_schedule_up(unsigned int cpu)
     else
     {
         struct vcpu *idle = idle_vcpu[cpu];
+        struct sched_unit *unit = idle->sched_unit;
 
         /*
          * During (ACPI?) suspend the idle vCPU for this pCPU is not freed,
@@ -1604,11 +1605,10 @@  static int cpu_schedule_up(unsigned int cpu)
          * with a different scheduler, it is schedule_cpu_switch(), invoked
          * later, that will set things up as appropriate.
          */
-        ASSERT(idle->sched_priv == NULL);
+        ASSERT(unit->priv == NULL);
 
-        idle->sched_priv = sched_alloc_vdata(&ops, idle->sched_unit,
-                                             idle->domain->sched_priv);
-        if ( idle->sched_priv == NULL )
+        unit->priv = sched_alloc_vdata(&ops, unit, idle->domain->sched_priv);
+        if ( unit->priv == NULL )
             return -ENOMEM;
     }
     if ( idle_vcpu[cpu] == NULL )
@@ -1634,9 +1634,9 @@  static void cpu_schedule_down(unsigned int cpu)
     struct scheduler *sched = per_cpu(scheduler, cpu);
 
     sched_free_pdata(sched, sd->sched_priv, cpu);
-    sched_free_vdata(sched, idle_vcpu[cpu]->sched_priv);
+    sched_free_vdata(sched, idle_vcpu[cpu]->sched_unit->priv);
 
-    idle_vcpu[cpu]->sched_priv = NULL;
+    idle_vcpu[cpu]->sched_unit->priv = NULL;
     sd->sched_priv = NULL;
 
     kill_timer(&sd->s_timer);
@@ -1886,7 +1886,7 @@  int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
      */
     old_lock = pcpu_schedule_lock_irq(cpu);
 
-    vpriv_old = idle->sched_priv;
+    vpriv_old = idle->sched_unit->priv;
     ppriv_old = sd->sched_priv;
     new_lock = sched_switch_sched(new_ops, cpu, ppriv, vpriv);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index e8ea42e867..f549ad60d1 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -162,7 +162,6 @@  struct vcpu
     struct timer     poll_timer;    /* timeout for SCHEDOP_poll */
 
     struct sched_unit *sched_unit;
-    void            *sched_priv;    /* scheduler-specific data */
 
     struct vcpu_runstate_info runstate;
 #ifndef CONFIG_COMPAT
@@ -279,6 +278,7 @@  struct vcpu
 
 struct sched_unit {
     struct vcpu           *vcpu;
+    void                  *priv;      /* scheduler private data */
 };
 
 /* Per-domain lock can be recursively acquired in fault handlers. */