Message ID | 20181001115704.701-16-luc.michel@greensocs.com
---|---
State | New, archived
Series | gdbstub: support for the multiprocess extension
Cc'ing more QOM involved people.

On 01/10/2018 13:57, Luc Michel wrote:
> Create two separate QOM containers for APUs and RPUs to indicate to the
> GDB stub that those CPUs should be put in different processes.
>
> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> ---
>  hw/arm/xlnx-zynqmp.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
> index c195040350..5e92adbc71 100644
> --- a/hw/arm/xlnx-zynqmp.c
> +++ b/hw/arm/xlnx-zynqmp.c
> @@ -22,10 +22,11 @@
>  #include "hw/arm/xlnx-zynqmp.h"
>  #include "hw/intc/arm_gic_common.h"
>  #include "exec/address-spaces.h"
>  #include "sysemu/kvm.h"
>  #include "kvm_arm.h"
> +#include "exec/gdbstub.h"
>
>  #define GIC_NUM_SPI_INTR 160
>
>  #define ARM_PHYS_TIMER_PPI 30
>  #define ARM_VIRT_TIMER_PPI 27
> @@ -175,17 +176,18 @@ static void xlnx_zynqmp_create_rpu(XlnxZynqMPState *s, const char *boot_cpu,
>                                     Error **errp)
>  {
>      Error *err = NULL;
>      int i;
>      int num_rpus = MIN(smp_cpus - XLNX_ZYNQMP_NUM_APU_CPUS, XLNX_ZYNQMP_NUM_RPU_CPUS);
> +    Object *rpu_group = gdb_cpu_group_container_get(OBJECT(s));

I'd rather keep this generic: not involve 'gdb' container name.

(qemu) info qom-tree /machine
/machine (xlnx-zcu102-machine)
  /soc (xlnx,zynqmp)
    /gdb-group[0] (container)
      /apu-cpu[3] (cortex-a53-arm-cpu)
        /unnamed-gpio-in[0] (irq)
        /unnamed-gpio-in[2] (irq)
        /unnamed-gpio-in[1] (irq)
        /unnamed-gpio-in[3] (irq)
      ...
    /gdb-group[1] (container)

Maybe your create_processes() from patch [1] should enumerate all CPUs with object_resolve_path_type("", TYPE_CPU, NULL) then sort by cpu?

[1] https://lists.gnu.org/archive/html/qemu-devel/2018-10/msg00089.html

>
>      for (i = 0; i < num_rpus; i++) {
>          char *name;
>
>          object_initialize(&s->rpu_cpu[i], sizeof(s->rpu_cpu[i]),
>                            "cortex-r5f-" TYPE_ARM_CPU);
> -        object_property_add_child(OBJECT(s), "rpu-cpu[*]",
> +        object_property_add_child(rpu_group, "rpu-cpu[*]",
>                                    OBJECT(&s->rpu_cpu[i]), &error_abort);
>
>          name = object_get_canonical_path_component(OBJECT(&s->rpu_cpu[i]));
>          if (strcmp(name, boot_cpu)) {
>              /* Secondary CPUs start in PSCI powered-down state */
> @@ -210,13 +212,14 @@ static void xlnx_zynqmp_create_rpu(XlnxZynqMPState *s, const char *boot_cpu,
>  static void xlnx_zynqmp_init(Object *obj)
>  {
>      XlnxZynqMPState *s = XLNX_ZYNQMP(obj);
>      int i;
>      int num_apus = MIN(smp_cpus, XLNX_ZYNQMP_NUM_APU_CPUS);
> +    Object *apu_group = gdb_cpu_group_container_get(obj);
>
>      for (i = 0; i < num_apus; i++) {
> -        object_initialize_child(obj, "apu-cpu[*]", &s->apu_cpu[i],
> +        object_initialize_child(apu_group, "apu-cpu[*]", &s->apu_cpu[i],
>                                  sizeof(s->apu_cpu[i]),
>                                  "cortex-a53-" TYPE_ARM_CPU, &error_abort, NULL);
>      }
>
>      sysbus_init_child_obj(obj, "gic", &s->gic, sizeof(s->gic),
On 2 October 2018 at 12:33, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> Cc'ing more QOM involved people.
>
> On 01/10/2018 13:57, Luc Michel wrote:
>> Create two separate QOM containers for APUs and RPUs to indicate to the
>> GDB stub that those CPUs should be put in different processes.
>>
>> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
>> [...]
>> +    Object *rpu_group = gdb_cpu_group_container_get(OBJECT(s));
>
> I'd rather keep this generic: not involve 'gdb' container name.

Yes, I agree. We should structure how we construct our model to follow what the hardware has (two CPU clusters with 4 cores each), and then the gdb code should introspect the system later to decide how it exposes things to the gdb user. GDB specifics should (as far as possible) be kept out of the board code.

The fact that there are two clusters here also affects other things, like whether they have the same view of memory, whether they can share translated code (they shouldn't but do at the moment), and so on -- it's not just a GDB-relevant distinction. So we should be modelling it somehow, definitely. I don't have a clear view how just yet.

This probably ties into the stuff I have somewhere on my todo list about supporting multiple heterogeneous systems. The problem with this xilinx board is that it is trying to model that kind of system but we don't actually properly support that in QEMU yet.

thanks
-- PMM
On 10/2/18 1:58 PM, Peter Maydell wrote:
> On 2 October 2018 at 12:33, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>> [...]
>> I'd rather keep this generic: not involve 'gdb' container name.
>
> Yes, I agree. We should structure how we construct our model to follow what the hardware has (two CPU clusters with 4 cores each), and then the gdb code should introspect the system later to decide how it exposes things to the gdb user. GDB specifics should (as far as possible) be kept out of the board code.
>
> The fact that there are two clusters here also affects other things, like whether they have the same view of memory, whether they can share translated code (they shouldn't but do at the moment), and so on -- it's not just a GDB-relevant distinction. So we should be modelling it somehow, definitely. I don't have a clear view how just yet.

So for now, maybe it's better to rely on a heuristic such as the one suggested by Philippe in the gdb code to group the CPUs. Once QEMU gains more support for such heterogeneous architectures, we can remove the heuristic and put the proper QOM introspection code instead.

Luc

> This probably ties into the stuff I have somewhere on my todo list about supporting multiple heterogeneous systems. The problem with this xilinx board is that it is trying to model that kind of system but we don't actually properly support that in QEMU yet.
>
> thanks
> -- PMM
On 03/10/2018 13:44, Luc Michel wrote:
> On 10/2/18 1:58 PM, Peter Maydell wrote:
>> [...]
>> The fact that there are two clusters here also affects other things, like whether they have the same view of memory, whether they can share translated code (they shouldn't but do at the moment), and so on -- it's not just a GDB-relevant distinction. So we should be modelling it somehow, definitely. I don't have a clear view how just yet.
>
> So for now, maybe it's better to rely on a heuristic such as the one suggested by Philippe in the gdb code to group the CPUs. Once QEMU gains more support for such heterogeneous architectures, we can remove the heuristic and put the proper QOM introspection code instead.

I'm not sure this is the best approach, just suggested because using object_resolve_path_type("", TYPE_CPU, NULL) seemed to me the quicker/easiest approach.

Eduardo: Do you have other thoughts on how to resolve those generic containers, without involving any gdb-specific tag?

>> This probably ties into the stuff I have somewhere on my todo list about supporting multiple heterogeneous systems. The problem with this xilinx board is that it is trying to model that kind of system but we don't actually properly support that in QEMU yet.
>>
>> thanks
>> -- PMM
On Thu, Oct 04, 2018 at 06:07:45PM +0200, Philippe Mathieu-Daudé wrote:
> On 03/10/2018 13:44, Luc Michel wrote:
>> On 10/2/18 1:58 PM, Peter Maydell wrote:
>>> [...]
>>> Yes, I agree. We should structure how we construct our model to follow what the hardware has (two CPU clusters with 4 cores each), and then the gdb code should introspect the system later to decide how it exposes things to the gdb user. GDB specifics should (as far as possible) be kept out of the board code.

Agreed.

>>> The fact that there are two clusters here also affects other things, like whether they have the same view of memory, whether they can share translated code (they shouldn't but do at the moment), and so on -- it's not just a GDB-relevant distinction. So we should be modelling it somehow, definitely. I don't have a clear view how just yet.
>>
>> So for now, maybe it's better to rely on a heuristic such as the one suggested by Philippe in the gdb code to group the CPUs. Once QEMU gains more support for such heterogeneous architectures, we can remove the heuristic and put the proper QOM introspection code instead.
>
> I'm not sure this is the best approach, just suggested because using object_resolve_path_type("", TYPE_CPU, NULL) seemed to me the quicker/easiest approach.
>
> Eduardo: Do you have other thoughts on how to resolve those generic containers, without involving any gdb-specific tag?

Changing the object hierarchy based on GDB groups doesn't seem right, but I don't think it would be a big deal if we have the board code explicitly telling the GDB code how to group the CPUs.

If you really want to do it implicitly, would it work if you simply group the CPUs based on object_get_canonical_path()?

If a more explicit GDB grouping API is acceptable, what about just adding an INTERFACE_GDB_GROUP interface name to (existing) container objects that we expect to become GDB groups?

I'm not sure which way is better. I'm a bit worried that making things too implicit could easily break (e.g. if somebody changes the CPU QOM hierarchy in the future for unrelated reasons).
On 4 October 2018 at 20:52, Eduardo Habkost <ehabkost@redhat.com> wrote:
> Changing the object hierarchy based on GDB groups doesn't seem right, but I don't think it would be a big deal if we have the board code explicitly telling the GDB code how to group the CPUs.
>
> If you really want to do it implicitly, would it work if you simply group the CPUs based on object_get_canonical_path()?
>
> If a more explicit GDB grouping API is acceptable, what about just adding an INTERFACE_GDB_GROUP interface name to (existing) container objects that we expect to become GDB groups?
>
> I'm not sure which way is better. I'm a bit worried that making things too implicit could easily break (e.g. if somebody changes the CPU QOM hierarchy in the future for unrelated reasons).

I don't want things implicit. I just don't want the explicitness to be "this is all about GDB", because it isn't. I want us to explicitly say "these 4 CPUs are in one cluster" (or whatever term we use), because that affects more than merely GDB.

thanks
-- PMM
On Thu, Oct 04, 2018 at 09:01:09PM +0100, Peter Maydell wrote:
> On 4 October 2018 at 20:52, Eduardo Habkost <ehabkost@redhat.com> wrote:
>> [...]
>> I'm not sure which way is better. I'm a bit worried that making things too implicit could easily break (e.g. if somebody changes the CPU QOM hierarchy in the future for unrelated reasons).
>
> I don't want things implicit. I just don't want the explicitness to be "this is all about GDB", because it isn't. I want us to explicitly say "these 4 CPUs are in one cluster" (or whatever term we use), because that affects more than merely GDB.

We already have a way to say "these 4 CPUs are in one cluster", don't we? That's the QOM hierarchy.

My question is if "the CPUs are in one cluster" should implicitly mean "the CPUs are in one GDB group".
On 04/10/2018 23:53, Eduardo Habkost wrote:
> On Thu, Oct 04, 2018 at 09:01:09PM +0100, Peter Maydell wrote:
>> [...]
>> I don't want things implicit. I just don't want the explicitness to be "this is all about GDB", because it isn't. I want us to explicitly say "these 4 CPUs are in one cluster" (or whatever term we use), because that affects more than merely GDB.
>
> We already have a way to say "these 4 CPUs are in one cluster", don't we? That's the QOM hierarchy.
>
> My question is if "the CPUs are in one cluster" should implicitly mean "the CPUs are in one GDB group".

What about having the container implement INTERFACE_CPU_CLUSTER?

Or even cleaner, add a TYPE_CPU_CLUSTER which is just a container for TYPE_CPU[*]?
On Fri, Oct 05, 2018 at 03:50:01PM +0200, Philippe Mathieu-Daudé wrote:
> On 04/10/2018 23:53, Eduardo Habkost wrote:
>> [...]
>> My question is if "the CPUs are in one cluster" should implicitly mean "the CPUs are in one GDB group".
>
> What about having the container implement INTERFACE_CPU_CLUSTER?
>
> Or even cleaner, add a TYPE_CPU_CLUSTER which is just a container for TYPE_CPU[*]?

Sounds good to me. An interface sounds more flexible, to avoid clashing with existing type hierarchies for nodes/sockets/cores/etc.

But first of all, I think we need a good definition of what exactly a cluster is, and what the purpose of this abstraction is.

If we end up with a new abstraction that is only going to be used by GDB code and nothing else, I don't see the point of pretending it is not a GDB-specific abstraction.
On 10/5/18 8:49 PM, Eduardo Habkost wrote:
> On Fri, Oct 05, 2018 at 03:50:01PM +0200, Philippe Mathieu-Daudé wrote:
>> [...]
>> What about having the container implement INTERFACE_CPU_CLUSTER?
>>
>> Or even cleaner, add a TYPE_CPU_CLUSTER which is just a container for TYPE_CPU[*]?
>
> Sounds good to me. An interface sounds more flexible, to avoid clashing with existing type hierarchies for nodes/sockets/cores/etc.

But we still need a container sub-class specialized for that matter, right? Or are we going to have the generic container class implement this not-so-generic interface?

> But first of all, I think we need a good definition of what exactly a cluster is, and what the purpose of this abstraction is.

I think it has implications that go way beyond this patch set.

Here we want to put the APUs (cortex-a53) and the RPUs (cortex-r5) in different groups mainly because they have different architectures (I think the address space is more or less the same for all the CPUs in this SoC). The current configuration is wrong since the A53 and the R5 probably don't have the same features, hence for the same piece of code, translations can differ from one another (e.g. one could have VFPv4 and not the other). So the translation cache should not be shared.

We could imagine modelling more complex heterogeneous architectures. One that comes to mind is a many-core chip from Kalray, which is organised in 16 clusters of 16 cores each. In a cluster, the cores are SMP, accessing the same SRAM. But inter-cluster communication is done through an explicit NoC, using DMAs. In that case, a "cluster" QEMU abstraction would make sense since cores between clusters must not share the same address space, nor the same translation cache.

Regarding GDB, two CPUs should be put in different groups if:
  - their architectures are different,
  - or the extra XML descriptions we send to GDB for those CPUs differ (extra registers).

So for now I think we can introduce this new "cpu cluster" abstraction as it makes sense for the kind of system we (could) want to model in QEMU. For now it will only be used by the GDB stub, but it definitely has deeper implications.

> If we end up with a new abstraction that is only going to be used by GDB code and nothing else, I don't see the point of pretending it is not a GDB-specific abstraction.
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
index c195040350..5e92adbc71 100644
--- a/hw/arm/xlnx-zynqmp.c
+++ b/hw/arm/xlnx-zynqmp.c
@@ -22,10 +22,11 @@
 #include "hw/arm/xlnx-zynqmp.h"
 #include "hw/intc/arm_gic_common.h"
 #include "exec/address-spaces.h"
 #include "sysemu/kvm.h"
 #include "kvm_arm.h"
+#include "exec/gdbstub.h"

 #define GIC_NUM_SPI_INTR 160

 #define ARM_PHYS_TIMER_PPI 30
 #define ARM_VIRT_TIMER_PPI 27
@@ -175,17 +176,18 @@ static void xlnx_zynqmp_create_rpu(XlnxZynqMPState *s, const char *boot_cpu,
                                    Error **errp)
 {
     Error *err = NULL;
     int i;
     int num_rpus = MIN(smp_cpus - XLNX_ZYNQMP_NUM_APU_CPUS, XLNX_ZYNQMP_NUM_RPU_CPUS);
+    Object *rpu_group = gdb_cpu_group_container_get(OBJECT(s));

     for (i = 0; i < num_rpus; i++) {
         char *name;

         object_initialize(&s->rpu_cpu[i], sizeof(s->rpu_cpu[i]),
                           "cortex-r5f-" TYPE_ARM_CPU);
-        object_property_add_child(OBJECT(s), "rpu-cpu[*]",
+        object_property_add_child(rpu_group, "rpu-cpu[*]",
                                   OBJECT(&s->rpu_cpu[i]), &error_abort);

         name = object_get_canonical_path_component(OBJECT(&s->rpu_cpu[i]));
         if (strcmp(name, boot_cpu)) {
             /* Secondary CPUs start in PSCI powered-down state */
@@ -210,13 +212,14 @@ static void xlnx_zynqmp_create_rpu(XlnxZynqMPState *s, const char *boot_cpu,
 static void xlnx_zynqmp_init(Object *obj)
 {
     XlnxZynqMPState *s = XLNX_ZYNQMP(obj);
     int i;
     int num_apus = MIN(smp_cpus, XLNX_ZYNQMP_NUM_APU_CPUS);
+    Object *apu_group = gdb_cpu_group_container_get(obj);

     for (i = 0; i < num_apus; i++) {
-        object_initialize_child(obj, "apu-cpu[*]", &s->apu_cpu[i],
+        object_initialize_child(apu_group, "apu-cpu[*]", &s->apu_cpu[i],
                                 sizeof(s->apu_cpu[i]),
                                 "cortex-a53-" TYPE_ARM_CPU, &error_abort, NULL);
     }

     sysbus_init_child_obj(obj, "gic", &s->gic, sizeof(s->gic),
Create two separate QOM containers for APUs and RPUs to indicate to the GDB stub that those CPUs should be put in different processes.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
---
 hw/arm/xlnx-zynqmp.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)