[3/3] xen/sched: fix cpu hotplug

Message ID 20220802133619.22965-1-jgross@suse.com (mailing list archive)
State Superseded
Series xen/sched: fix cpu hotplug

Commit Message

Jürgen Groß Aug. 2, 2022, 1:36 p.m. UTC
Cpu unplugging calls schedule_cpu_rm() via stop_machine_run() with
interrupts disabled, so any memory allocation or freeing must be
avoided.

Since commit 5047cd1d5dea ("xen/common: Use enhanced
ASSERT_ALLOC_CONTEXT in xmalloc()") this restriction is being enforced
via an assertion, which will now fail.

Before that commit, cpu unplugging in normal configurations was working
just by chance, as only the cpu performing schedule_cpu_rm() was doing
active work. With core scheduling enabled, however, failures could
result from memory allocations not being properly propagated to other
cpus' TLBs.

Fix this mess by allocating needed memory before entering
stop_machine_run() and freeing any memory only after having finished
stop_machine_run().

Fixes: 1ec410112cdd ("xen/sched: support differing granularity in schedule_cpu_[add/rm]()")
Reported-by: Gao Ruifeng <ruifeng.gao@intel.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c    | 14 ++++---
 xen/common/sched/cpupool.c | 77 +++++++++++++++++++++++++++++---------
 xen/common/sched/private.h |  5 ++-
 3 files changed, 72 insertions(+), 24 deletions(-)
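
For orientation, a condensed sketch of the hotplug-callback flow this patch
introduces; it is simplified from the full diff at the bottom of this page
(the system_state checks, the CPU_DOWN_FAILED path and most error handling
are trimmed):

    /* All allocation happens before, all freeing after stop_machine_run(). */
    static struct cpu_rm_data *mem;

    static int cf_check cpu_callback(
        struct notifier_block *nfb, unsigned long action, void *hcpu)
    {
        unsigned int cpu = (unsigned long)hcpu;
        int rc = 0;

        switch ( action )
        {
        case CPU_DOWN_PREPARE:  /* IRQs still enabled: do all allocations */
            mem = schedule_cpu_rm_alloc(cpu);
            rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
            break;

        case CPU_DYING:         /* stop_machine_run(), IRQs off: use only */
            cpupool_cpu_remove(cpu, mem);
            break;

        case CPU_DEAD:          /* IRQs enabled again: freeing is safe now */
            cpupool_free_affin_masks(&mem->affinity);
            schedule_cpu_rm_free(mem, cpu);
            mem = NULL;
            break;
        }

        return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
    }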

Comments

Jan Beulich Aug. 3, 2022, 9:53 a.m. UTC | #1
On 02.08.2022 15:36, Juergen Gross wrote:
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -419,6 +419,8 @@ static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
>          return 0;
>  
>      free_cpumask_var(masks->hard);
> +    memset(masks, 0, sizeof(*masks));

FREE_CPUMASK_VAR()?
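
For context: FREE_CPUMASK_VAR() is the free-and-clear counterpart of
free_cpumask_var(). Assuming those semantics, the error path of the
allocation helper could shrink to roughly the following sketch:

    static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
    {
        if ( alloc_cpumask_var(&masks->hard) && alloc_cpumask_var(&masks->soft) )
            return 0;

        /*
         * Free ->hard (if it was allocated) and clear the variable in one
         * go; ->soft is not touched here, as it was never successfully
         * allocated on this path.
         */
        FREE_CPUMASK_VAR(masks->hard);

        return -ENOMEM;
    }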

> @@ -1031,10 +1041,23 @@ static int cf_check cpu_callback(
>  {
>      unsigned int cpu = (unsigned long)hcpu;
>      int rc = 0;
> +    static struct cpu_rm_data *mem;

When you mentioned your plan, I was actually envisioning a slightly
different model: Instead of doing the allocation at CPU_DOWN_PREPARE,
allocate a single instance during boot, which would never be freed.
Did you consider such, and it turned out worse? I guess the main
obstacle would be figuring an upper bound for sr->granularity, but
of course schedule_cpu_rm_alloc(), besides the allocations, also
does quite a bit of filling in values, which can't be done up front.

>      switch ( action )
>      {
>      case CPU_DOWN_FAILED:
> +        if ( system_state <= SYS_STATE_active )
> +        {
> +            if ( mem )
> +            {
> +                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
> +                    cpupool_free_affin_masks(&mem->affinity);

I don't think the conditional is really needed - it merely avoids two
xfree(NULL) invocations at the expense of readability here. Plus -
wouldn't this better be part of ...

> +                schedule_cpu_rm_free(mem, cpu);

... this anyway?
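
In other words, a sketch of the first simplification being suggested
(dropping the conditional; folding the affinity freeing into
schedule_cpu_rm_free() would additionally remove the explicit call),
assuming the free routines tolerate not-yet-allocated masks:

    case CPU_DOWN_FAILED:
        if ( system_state <= SYS_STATE_active )
        {
            if ( mem )
            {
                /* Freeing a NULL mask is a no-op, so no memchr_inv() check. */
                cpupool_free_affin_masks(&mem->affinity);
                schedule_cpu_rm_free(mem, cpu);
                mem = NULL;
            }
            rc = cpupool_cpu_add(cpu);
        }
        break;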

> @@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
>      case CPU_DOWN_PREPARE:
>          /* Suspend/Resume don't change assignments of cpus to cpupools. */
>          if ( system_state <= SYS_STATE_active )
> +        {
>              rc = cpupool_cpu_remove_prologue(cpu);
> +            if ( !rc )
> +            {
> +                ASSERT(!mem);
> +                mem = schedule_cpu_rm_alloc(cpu);
> +                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;

Ah - here you actually want a non-boolean return value. No need to
change that then in the earlier patch (albeit of course a change
there could be easily accommodated here).

Along the lines of the earlier comment this 2nd allocation may also
want to move into schedule_cpu_rm_alloc(). If other users of the
function don't need the extra allocations, perhaps by adding a bool
parameter.
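
One hypothetical shape of that suggestion, purely illustrative (the
parameter name and the caller-side reduction are assumptions, not part of
the posted patch):

    /*
     * Sketch only: a bool tells the scheduler-side allocator whether to
     * also allocate the affinity masks, keeping all allocation/freeing
     * for CPU removal in one place.
     */
    struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu, bool aff_alloc);

    /* The cpupool.c caller would then reduce to: */
    mem = schedule_cpu_rm_alloc(cpu, true);
    rc = mem ? 0 : -ENOMEM;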

Jan
Jürgen Groß Aug. 8, 2022, 10:21 a.m. UTC | #2
On 03.08.22 11:53, Jan Beulich wrote:
> On 02.08.2022 15:36, Juergen Gross wrote:
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -419,6 +419,8 @@ static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
>>           return 0;
>>   
>>       free_cpumask_var(masks->hard);
>> +    memset(masks, 0, sizeof(*masks));
> 
> FREE_CPUMASK_VAR()?

Oh, yes.

> 
>> @@ -1031,10 +1041,23 @@ static int cf_check cpu_callback(
>>   {
>>       unsigned int cpu = (unsigned long)hcpu;
>>       int rc = 0;
>> +    static struct cpu_rm_data *mem;
> 
> When you mentioned your plan, I was actually envisioning a slightly
> different model: Instead of doing the allocation at CPU_DOWN_PREPARE,
> allocate a single instance during boot, which would never be freed.
> Did you consider such, and it turned out worse? I guess the main
> obstacle would be figuring an upper bound for sr->granularity, but
> of course schedule_cpu_rm_alloc(), besides the allocations, also
> does quite a bit of filling in values, which can't be done up front.

With sched-gran=socket sr->granularity can grow to above 100, so I'm
not sure we'd want to do that.

> 
>>       switch ( action )
>>       {
>>       case CPU_DOWN_FAILED:
>> +        if ( system_state <= SYS_STATE_active )
>> +        {
>> +            if ( mem )
>> +            {
>> +                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
>> +                    cpupool_free_affin_masks(&mem->affinity);
> 
> I don't think the conditional is really needed - it merely avoids two
> xfree(NULL) invocations at the expense of readability here. Plus -

Okay.

> wouldn't this better be part of ...
> 
>> +                schedule_cpu_rm_free(mem, cpu);
> 
> ... this anyway?

This would add a layering violation IMHO.

> 
>> @@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
>>       case CPU_DOWN_PREPARE:
>>           /* Suspend/Resume don't change assignments of cpus to cpupools. */
>>           if ( system_state <= SYS_STATE_active )
>> +        {
>>               rc = cpupool_cpu_remove_prologue(cpu);
>> +            if ( !rc )
>> +            {
>> +                ASSERT(!mem);
>> +                mem = schedule_cpu_rm_alloc(cpu);
>> +                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
> 
> Ah - here you actually want a non-boolean return value. No need to
> change that then in the earlier patch (albeit of course a change
> there could be easily accommodated here).
> 
> Along the lines of the earlier comment this 2nd allocation may also
> want to move into schedule_cpu_rm_alloc(). If other users of the
> function don't need the extra allocations, perhaps by adding a bool
> parameter.

I could do that, but I still think this would pull cpupool specific needs
into sched/core.c.


Juergen
Jan Beulich Aug. 9, 2022, 6:15 a.m. UTC | #3
On 08.08.2022 12:21, Juergen Gross wrote:
> On 03.08.22 11:53, Jan Beulich wrote:
>> On 02.08.2022 15:36, Juergen Gross wrote:
>>>       switch ( action )
>>>       {
>>>       case CPU_DOWN_FAILED:
>>> +        if ( system_state <= SYS_STATE_active )
>>> +        {
>>> +            if ( mem )
>>> +            {
>>> +                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
>>> +                    cpupool_free_affin_masks(&mem->affinity);
>>
>> I don't think the conditional is really needed - it merely avoids two
>> xfree(NULL) invocations at the expense of readability here. Plus -
> 
> Okay.
> 
>> wouldn't this better be part of ...
>>
>>> +                schedule_cpu_rm_free(mem, cpu);
>>
>> ... this anyway?
> 
> This would add a layering violation IMHO.
> 
>>
>>> @@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
>>>       case CPU_DOWN_PREPARE:
>>>           /* Suspend/Resume don't change assignments of cpus to cpupools. */
>>>           if ( system_state <= SYS_STATE_active )
>>> +        {
>>>               rc = cpupool_cpu_remove_prologue(cpu);
>>> +            if ( !rc )
>>> +            {
>>> +                ASSERT(!mem);
>>> +                mem = schedule_cpu_rm_alloc(cpu);
>>> +                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
>>
>> Ah - here you actually want a non-boolean return value. No need to
>> change that then in the earlier patch (albeit of course a change
>> there could be easily accommodated here).
>>
>> Along the lines of the earlier comment this 2nd allocation may also
>> want to move into schedule_cpu_rm_alloc(). If other users of the
>> function don't need the extra allocations, perhaps by adding a bool
>> parameter.
> 
> I could do that, but I still think this would pull cpupool specific needs
> into sched/core.c.

But the struct isn't cpupool specific, and hence controlling the setting up
of the field via a function parameter doesn't really look like a layering
violation to me. While imo the end result would be more clean (as in - all
allocations / freeing in one place), I'm not going to insist (not the least
because I'm not maintainer of that code anyway).

Jan

Patch

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d6ff4f4921..1473cef372 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3190,7 +3190,7 @@  out:
     return ret;
 }
 
-static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
+struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
 {
     struct cpu_rm_data *data;
     struct sched_resource *sr;
@@ -3242,7 +3242,7 @@  static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
     return data;
 }
 
-static void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
+void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
 {
     sched_free_udata(mem->old_ops, mem->vpriv_old);
     sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
@@ -3256,17 +3256,18 @@  static void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
  * The cpu is already marked as "free" and not valid any longer for its
  * cpupool.
  */
-int schedule_cpu_rm(unsigned int cpu)
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *data)
 {
     struct sched_resource *sr;
-    struct cpu_rm_data *data;
     struct sched_unit *unit;
     spinlock_t *old_lock;
     unsigned long flags;
     int idx = 0;
     unsigned int cpu_iter;
+    bool freemem = !data;
 
-    data = schedule_cpu_rm_alloc(cpu);
+    if ( !data )
+        data = schedule_cpu_rm_alloc(cpu);
     if ( !data )
         return -ENOMEM;
 
@@ -3333,7 +3334,8 @@  int schedule_cpu_rm(unsigned int cpu)
     sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-    schedule_cpu_rm_free(data, cpu);
+    if ( freemem )
+        schedule_cpu_rm_free(data, cpu);
 
     return 0;
 }
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 1463dcd767..d9dadedea3 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -419,6 +419,8 @@  static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
         return 0;
 
     free_cpumask_var(masks->hard);
+    memset(masks, 0, sizeof(*masks));
+
     return -ENOMEM;
 }
 
@@ -428,28 +430,34 @@  static void cpupool_free_affin_masks(struct affinity_masks *masks)
     free_cpumask_var(masks->hard);
 }
 
-static void cpupool_update_node_affinity(const struct cpupool *c)
+static void cpupool_update_node_affinity(const struct cpupool *c,
+                                         struct affinity_masks *masks)
 {
     const cpumask_t *online = c->res_valid;
-    struct affinity_masks masks;
+    struct affinity_masks local_masks;
     struct domain *d;
 
-    if ( cpupool_alloc_affin_masks(&masks) )
-        return;
+    if ( !masks )
+    {
+        if ( cpupool_alloc_affin_masks(&local_masks) )
+            return;
+        masks = &local_masks;
+    }
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain_in_cpupool(d, c)
     {
         if ( d->vcpu && d->vcpu[0] )
         {
-            cpumask_clear(masks.hard);
-            cpumask_clear(masks.soft);
-            domain_update_node_affinity_noalloc(d, online, &masks);
+            cpumask_clear(masks->hard);
+            cpumask_clear(masks->soft);
+            domain_update_node_affinity_noalloc(d, online, masks);
         }
     }
     rcu_read_unlock(&domlist_read_lock);
 
-    cpupool_free_affin_masks(&masks);
+    if ( masks == &local_masks )
+        cpupool_free_affin_masks(&local_masks);
 }
 
 /*
@@ -483,15 +491,17 @@  static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, NULL);
 
     return 0;
 }
 
-static int cpupool_unassign_cpu_finish(struct cpupool *c)
+static int cpupool_unassign_cpu_finish(struct cpupool *c,
+                                       struct cpu_rm_data *mem)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
+    struct affinity_masks *masks = mem ? &mem->affinity : NULL;
     int ret;
 
     if ( c != cpupool_cpu_moving )
@@ -514,7 +524,7 @@  static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_rm(cpu);
+        ret = schedule_cpu_rm(cpu, mem);
         if ( ret )
             cpumask_andnot(&cpupool_free_cpus, &cpupool_free_cpus, cpus);
         else
@@ -526,7 +536,7 @@  static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, masks);
 
     return ret;
 }
@@ -590,7 +600,7 @@  static long cf_check cpupool_unassign_cpu_helper(void *info)
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
-    ret = cpupool_unassign_cpu_finish(c);
+    ret = cpupool_unassign_cpu_finish(c, NULL);
 
     spin_unlock(&cpupool_lock);
     debugtrace_printk("cpupool_unassign_cpu ret=%ld\n", ret);
@@ -737,7 +747,7 @@  static int cpupool_cpu_add(unsigned int cpu)
  * This function is called in stop_machine context, so we can be sure no
  * non-idle vcpu is active on the system.
  */
-static void cpupool_cpu_remove(unsigned int cpu)
+static void cpupool_cpu_remove(unsigned int cpu, struct cpu_rm_data *mem)
 {
     int ret;
 
@@ -745,7 +755,7 @@  static void cpupool_cpu_remove(unsigned int cpu)
 
     if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
     {
-        ret = cpupool_unassign_cpu_finish(cpupool0);
+        ret = cpupool_unassign_cpu_finish(cpupool0, mem);
         BUG_ON(ret);
     }
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
@@ -811,7 +821,7 @@  static void cpupool_cpu_remove_forced(unsigned int cpu)
         {
             ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(c);
+            ret = cpupool_unassign_cpu_finish(c, NULL);
             BUG_ON(ret);
         }
     }
@@ -1031,10 +1041,23 @@  static int cf_check cpu_callback(
 {
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;
+    static struct cpu_rm_data *mem;
 
     switch ( action )
     {
     case CPU_DOWN_FAILED:
+        if ( system_state <= SYS_STATE_active )
+        {
+            if ( mem )
+            {
+                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
+                    cpupool_free_affin_masks(&mem->affinity);
+                schedule_cpu_rm_free(mem, cpu);
+                mem = NULL;
+            }
+            rc = cpupool_cpu_add(cpu);
+        }
+        break;
     case CPU_ONLINE:
         if ( system_state <= SYS_STATE_active )
             rc = cpupool_cpu_add(cpu);
@@ -1042,12 +1065,32 @@  static int cf_check cpu_callback(
     case CPU_DOWN_PREPARE:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
+        {
             rc = cpupool_cpu_remove_prologue(cpu);
+            if ( !rc )
+            {
+                ASSERT(!mem);
+                mem = schedule_cpu_rm_alloc(cpu);
+                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
+            }
+        }
         break;
     case CPU_DYING:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
-            cpupool_cpu_remove(cpu);
+        {
+            ASSERT(mem);
+            cpupool_cpu_remove(cpu, mem);
+        }
+        break;
+    case CPU_DEAD:
+        if ( system_state <= SYS_STATE_active )
+        {
+            ASSERT(mem);
+            cpupool_free_affin_masks(&mem->affinity);
+            schedule_cpu_rm_free(mem, cpu);
+            mem = NULL;
+        }
         break;
     case CPU_RESUME_FAILED:
         cpupool_cpu_remove_forced(cpu);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index c626ad4907..f5bf41226c 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -600,6 +600,7 @@  struct affinity_masks {
 
 /* Memory allocation related data for schedule_cpu_rm(). */
 struct cpu_rm_data {
+    struct affinity_masks affinity;
     struct scheduler *old_ops;
     void *ppriv_old;
     void *vpriv_old;
@@ -617,7 +618,9 @@  struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-int schedule_cpu_rm(unsigned int cpu);
+struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu);
+void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu);
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *mem);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);