[RFC] spapr: disintricate core-id from DT semantics

Message ID 146918576946.26293.7740314135850382852.stgit@bahia.lan (mailing list archive)
State New, archived

Commit Message

Greg Kurz July 22, 2016, 11:10 a.m. UTC
The goal of this patch is to have a stable core-id which does not depend
on any DT related semantics, which involve non-obvious computations on
modern PowerPC server cpus.

With this patch, the DT core id is computed on-demand as:

       (core-id / smp_threads) * smt

where smt is the number of threads per core in the host.

This formula should be consolidated in a helper since it is needed in
several places.

Other uses for core-id include computing a stable cpu_index (which
allows random-order hotplug/unplug without breaking migration) and
NUMA.

Signed-off-by: Greg Kurz <groug@kaod.org>
---

It was first suggested here:

https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg01727.html

and as option 1) in the following discussion on IRC:

<dwg> imammedo, basically the options are: 1) change core-ids to be
 0, 1, .. n and compute cpu_index as core_id * threads + thread#, or
 2) leave core-ids as they are and calculate cpu_index as
 core-id / smt * threads + thread#
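
For illustration only (not part of the patch), a tiny standalone C sketch
with made-up values for smp_threads and the host smt, showing that both
options end up with the same cpu_index for a given core and thread:

    #include <assert.h>

    int main(void)
    {
        int smp_threads = 4; /* guest threads per core, e.g. -smp threads=4 */
        int smt = 8;         /* host threads per core, kvmppc_smt_threads() */
        int core = 1;        /* second core */
        int thread = 1;      /* second thread of that core */

        /* option 1: core-ids are 0, 1, .. n */
        int cpu_index1 = core * smp_threads + thread;

        /* option 2: core-ids keep their DT semantics (multiples of smt) */
        int cpu_index2 = (core * smt) / smt * smp_threads + thread;

        assert(cpu_index1 == cpu_index2); /* both evaluate to 5 */
        return 0;
    }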

It is based on David's ppc-for-2.7 branch (commit bb6268f35f457).

It is lightly tested but I could at least do in-order hotplug/unplug.
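
The helper mentioned in the commit message could look roughly like this
(a minimal sketch; the name spapr_core_dt_id() is hypothetical, and it
assumes access to smp_threads and kvmppc_smt_threads() as in
hw/ppc/spapr.c):

    /* Map a stable core-id (0, smp_threads, 2 * smp_threads, ...) to the
     * DT core id used by the DR connector and device tree code. */
    static int spapr_core_dt_id(int core_id)
    {
        int smt = kvmppc_smt_threads(); /* threads per core on the host */

        return (core_id / smp_threads) * smt;
    }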


 hw/ppc/spapr.c          |   10 +++++-----
 hw/ppc/spapr_cpu_core.c |   24 +++++++++++-------------
 2 files changed, 16 insertions(+), 18 deletions(-)

Comments

David Gibson July 25, 2016, 6:01 a.m. UTC | #1
On Fri, Jul 22, 2016 at 01:10:36PM +0200, Greg Kurz wrote:
> The goal of this patch is to have a stable core-id which does not depend
> on any DT related semantics, which involve non-obvious computations on
> modern PowerPC server cpus.
> 
> With this patch, the DT core id is computed on-demand as:
> 
>        (core-id / smp_threads) * smt
> 
> where smt is the number of threads per core in the host.
> 
> This formula should be consolidated in a helper since it is needed in
> several places.

It's a little odd you note this but don't do so.

> Other uses for core-id include computing a stable cpu_index (which
> allows random-order hotplug/unplug without breaking migration) and
> NUMA.
> 
> Signed-off-by: Greg Kurz <groug@kaod.org>
> ---
> 
> It was first suggested here:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg01727.html
> 
> and as option 1) in the following discussion on IRC:
> 
> <dwg> imammedo, basically the options are: 1) change core-ids to be
>  0, 1, .. n and compute cpu_index as core_id * threads + thread#, or
>  2) leave core-ids as they are and calculate cpu_index as
>  core-id / smt * threads + thread#
> 
> It is based on David's ppc-for-2.7 branch (commit bb6268f35f457).
> 
> It is lightly tested but I could at least do in-order
> hotplug/unplug.

I think this is basically the right approach, and I've applied it to
ppc-for-2.7.  Here's my plan for what to do about all this id stuff:

  1. Merge this to ppc-for-2.7 (done)
  2. Assuming there are no show-stoppers in testing, send a pull
     request tomorrow
  3. Once this is merged, try to get Igor's series (or a respin of it)
     in ASAP.

I'm hoping that will give us good-enough hotplug for 2.7.

In the 2.8 timeframe, I want to:
  4. Disconnect KVM vcpu ID from dt_id, instead calculate it from (now
     stable) cpu_index
  5. Remove dt_id as a cpu field - instead just compute DT ids from
     the (now stable) cpu_index when we build the DT
  6. (for new machine type versions) Change DT ID assignment, so it
      no longer depends on kvmppc_smt_threads().  The current scheme
      means that migration between hosts with different native SMT
      values won't work, which is unfortunate.  I suspect there may be
      other problems with any real situation where that's the case,
       but nonetheless it's a silly restriction.

Nice to have but bigger scope things for 2.8:
  7. Update archs so they *all* call cpu_exec_init() / cpu_exec_exit()
     at realize / unrealize time instead of init / finalize time.
  8. Update all archs and machines to use stable cpu_index
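
A rough sketch of what step 5 could look like, assuming the current DT
numbering is kept (the helper name is hypothetical, not something from
this patch):

    /* Derive a vcpu's device tree id from its (now stable) cpu_index,
     * rather than carrying a separate dt_id field around. */
    static int spapr_dt_id_from_cpu_index(int cpu_index)
    {
        int smt = kvmppc_smt_threads(); /* threads per core on the host */

        return (cpu_index / smp_threads) * smt + cpu_index % smp_threads;
    }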
Greg Kurz July 27, 2016, 10:06 a.m. UTC | #2
On Mon, 25 Jul 2016 16:01:31 +1000,
David Gibson <david@gibson.dropbear.id.au> wrote:

> On Fri, Jul 22, 2016 at 01:10:36PM +0200, Greg Kurz wrote:
> > The goal of this patch is to have a stable core-id which does not depend
> > on any DT related semantics, which involve non-obvious computations on
> > modern PowerPC server cpus.
> > 
> > With this patch, the DT core id is computed on-demand as:
> > 
> >        (core-id / smp_threads) * smt
> > 
> > where smt is the number of threads per core in the host.
> > 
> > This formula should be consolidated in a helper since it is needed in
> > several places.  
> 
> It's a little odd you note this but don't do so.
> 

I was too busy packing for holidays already :) and I think the consolidation
work can be addressed in followup patches.

Cheers from Brittany, under the rain.

--
Greg

> > Other uses for core-id include computing a stable cpu_index (which
> > allows random-order hotplug/unplug without breaking migration) and
> > NUMA.
> > 
> > Signed-off-by: Greg Kurz <groug@kaod.org>
> > ---
> > 
> > It was first suggested here:
> > 
> > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg01727.html
> > 
> > and as option 1) in the following discussion on IRC:
> > 
> > <dwg> imammedo, basically the options are: 1) change core-ids to be
> >  0, 1, .. n and compute cpu_index as core_id * threads + thread#, or
> >  2) leave core-ids as they are and calculate cpu_index as
> >  core-id / smt * threads + thread#
> > 
> > It is based on David's ppc-for-2.7 branch (commit bb6268f35f457).
> > 
> > It is lightly tested but I could at least do in-order
> > hotplug/unplug.  
> 
> I think this is basically the right approach, and I've applied it to
> ppc-for-2.7.  Here's my plan for what to do about all this id stuff:
> 
>   1. Merge this to ppc-for-2.7 (done)
>   2. Assuming there are no show-stoppers in testing, send a pull
>      request tomorrow
>   3. Once this is merged, try to get Igor's series (or a respin of it)
>      in ASAP.
> 
> I'm hoping that will give us good-enough hotplug for 2.7.
> 
> In the 2.8 timeframe, I want to:
>   4. Disconnect KVM vcpu ID from dt_id, instead calculate it from (now
>      stable) cpu_index
>   5. Remove dt_id as a cpu field - instead just compute DT ids from
>      the (now stable) cpu_index when we build the DT
>   6. (for new machine type versions) Change DT ID assignment, so it
>       no longer depends on kvmppc_smt_threads().  The current scheme
>       means that migration between hosts with different native SMT
>       values won't work, which is unfortunate.  I suspect there may be
>       other problems with any real situation where that's the case,
>       but nonetheless it's a silly restriction.
> 
> Nice to have but bigger scope things for 2.8:
>   7. Update archs so they *all* call cpu_exec_init() / cpu_exec_exit()
>      at realize / unrealize time instead of init / finalize time.
>   8. Update all archs and machines to use stable cpu_index
>
David Gibson July 27, 2016, 11:33 p.m. UTC | #3
On Wed, Jul 27, 2016 at 12:06:17PM +0200, Greg Kurz wrote:
> On Mon, 25 Jul 2016 16:01:31 +1000,
> David Gibson <david@gibson.dropbear.id.au> wrote:
> 
> > On Fri, Jul 22, 2016 at 01:10:36PM +0200, Greg Kurz wrote:
> > > The goal of this patch is to have a stable core-id which does not depend
> > > on any DT related semantics, which involve non-obvious computations on
> > > modern PowerPC server cpus.
> > > 
> > > With this patch, the DT core id is computed on-demand as:
> > > 
> > >        (core-id / smp_threads) * smt
> > > 
> > > where smt is the number of threads per core in the host.
> > > 
> > > This formula should be consolidated in a helper since it is needed in
> > > several places.  
> > 
> > It's a little odd you note this but don't do so.
> > 
> 
> I was too busy packing for holidays already :) and I think the consolidation
> work can be addressed in followup patches.

Heh, fair enough.

> Cheers from Brittany, under the rain.

Have fun.  Eat some chocolate pancakes for me.

Patch

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 9193ac2c122b..fbbd0518edd4 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1815,10 +1815,11 @@  static void ppc_spapr_init(MachineState *machine)
 
         spapr->cores = g_new0(Object *, spapr_max_cores);
         for (i = 0; i < spapr_max_cores; i++) {
-            int core_dt_id = i * smt;
+            int core_id = i * smp_threads;
             sPAPRDRConnector *drc =
                 spapr_dr_connector_new(OBJECT(spapr),
-                                       SPAPR_DR_CONNECTOR_TYPE_CPU, core_dt_id);
+                                       SPAPR_DR_CONNECTOR_TYPE_CPU,
+                                       (core_id / smp_threads) * smt);
 
             qemu_register_reset(spapr_drc_reset, drc);
 
@@ -1834,7 +1835,7 @@  static void ppc_spapr_init(MachineState *machine)
                 core  = object_new(type);
                 object_property_set_int(core, smp_threads, "nr-threads",
                                         &error_fatal);
-                object_property_set_int(core, core_dt_id, CPU_CORE_PROP_CORE_ID,
+                object_property_set_int(core, core_id, CPU_CORE_PROP_CORE_ID,
                                         &error_fatal);
                 object_property_set_bool(core, true, "realized", &error_fatal);
             }
@@ -2376,7 +2377,6 @@  static HotpluggableCPUList *spapr_query_hotpluggable_cpus(MachineState *machine)
     HotpluggableCPUList *head = NULL;
     sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
     int spapr_max_cores = max_cpus / smp_threads;
-    int smt = kvmppc_smt_threads();
 
     for (i = 0; i < spapr_max_cores; i++) {
         HotpluggableCPUList *list_item = g_new0(typeof(*list_item), 1);
@@ -2386,7 +2386,7 @@  static HotpluggableCPUList *spapr_query_hotpluggable_cpus(MachineState *machine)
         cpu_item->type = spapr_get_cpu_core_type(machine->cpu_model);
         cpu_item->vcpus_count = smp_threads;
         cpu_props->has_core_id = true;
-        cpu_props->core_id = i * smt;
+        cpu_props->core_id = i * smp_threads;
         /* TODO: add 'has_node/node' here to describe
            to which node core belongs */
 
diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index 4bfc96bd5a67..c04aaa47d7cd 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -103,7 +103,6 @@  static void spapr_core_release(DeviceState *dev, void *opaque)
     size_t size = object_type_get_instance_size(typename);
     sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
     CPUCore *cc = CPU_CORE(dev);
-    int smt = kvmppc_smt_threads();
     int i;
 
     for (i = 0; i < cc->nr_threads; i++) {
@@ -117,7 +116,7 @@  static void spapr_core_release(DeviceState *dev, void *opaque)
         object_unparent(obj);
     }
 
-    spapr->cores[cc->core_id / smt] = NULL;
+    spapr->cores[cc->core_id / smp_threads] = NULL;
 
     g_free(sc->threads);
     object_unparent(OBJECT(dev));
@@ -128,18 +127,19 @@  void spapr_core_unplug(HotplugHandler *hotplug_dev, DeviceState *dev,
 {
     sPAPRMachineState *spapr = SPAPR_MACHINE(OBJECT(hotplug_dev));
     CPUCore *cc = CPU_CORE(dev);
+    int smt = kvmppc_smt_threads();
+    int index = cc->core_id / smp_threads;
     sPAPRDRConnector *drc =
-        spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, cc->core_id);
+        spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, index * smt);
     sPAPRDRConnectorClass *drck;
     Error *local_err = NULL;
-    int smt = kvmppc_smt_threads();
-    int index = cc->core_id / smt;
     int spapr_max_cores = max_cpus / smp_threads;
     int i;
 
     for (i = spapr_max_cores - 1; i > index; i--) {
         if (spapr->cores[i]) {
-            error_setg(errp, "core-id %d should be removed first", i * smt);
+            error_setg(errp, "core-id %d should be removed first",
+                       i * smp_threads);
             return;
         }
     }
@@ -168,11 +168,10 @@  void spapr_core_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
     Error *local_err = NULL;
     void *fdt = NULL;
     int fdt_offset = 0;
-    int index;
+    int index = cc->core_id / smp_threads;
     int smt = kvmppc_smt_threads();
 
-    drc = spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, cc->core_id);
-    index = cc->core_id / smt;
+    drc = spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, index * smt);
     spapr->cores[index] = OBJECT(dev);
 
     if (!smc->dr_cpu_enabled) {
@@ -226,7 +225,6 @@  void spapr_core_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
     sPAPRMachineState *spapr = SPAPR_MACHINE(OBJECT(hotplug_dev));
     int spapr_max_cores = max_cpus / smp_threads;
     int index, i;
-    int smt = kvmppc_smt_threads();
     Error *local_err = NULL;
     CPUCore *cc = CPU_CORE(dev);
     char *base_core_type = spapr_get_cpu_core_type(machine->cpu_model);
@@ -247,12 +245,12 @@  void spapr_core_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
         goto out;
     }
 
-    if (cc->core_id % smt) {
+    if (cc->core_id % smp_threads) {
         error_setg(&local_err, "invalid core id %d\n", cc->core_id);
         goto out;
     }
 
-    index = cc->core_id / smt;
+    index = cc->core_id / smp_threads;
     if (index < 0 || index >= spapr_max_cores) {
         error_setg(&local_err, "core id %d out of range", cc->core_id);
         goto out;
@@ -266,7 +264,7 @@  void spapr_core_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
     for (i = 0; i < index; i++) {
         if (!spapr->cores[i]) {
             error_setg(&local_err, "core-id %d should be added first",
-                       i * smt);
+                       i * smp_threads);
             goto out;
         }
     }