
[2/3] Fix: Reverse CPUs presence check logic for x86 backward compatibility

Message ID 20241106130331.205020-3-salil.mehta@huawei.com (mailing list archive)
State New
Series Fixes CPUs AML & acpi-bios-tables to be x86 backward compatible

Commit Message

Salil Mehta Nov. 6, 2024, 1:03 p.m. UTC
Checking `is_present` first can break x86 migration from a new QEMU
version to an old QEMU. This is because the CPRS bit is not defined in the
older QEMU register block and will always read as 0, so the check always
fails. Reversing the logic to first check `is_enabled` alleviates the
problem shown below:

-                If ((\_SB.PCI0.PRES.CPEN == One))
-                {
-                    Local0 = 0x0F
+                If ((\_SB.PCI0.PRES.CPRS == One))
+                {
+                    If ((\_SB.PCI0.PRES.CPEN == One))
+                    {
+                        Local0 = 0x0F
+                    }
+                    Else
+                    {
+                        Local0 = 0x0D
+                    }
                 }

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Message-ID: <20241106100047.18901c9d@imammedo.users.ipa.redhat.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
---
 hw/acpi/cpu.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

Comments

Igor Mammedov Nov. 6, 2024, 1:56 p.m. UTC | #1
On Wed, 6 Nov 2024 13:03:30 +0000
Salil Mehta <salil.mehta@huawei.com> wrote:

> Checking `is_present` first can break x86 migration from new Qemu
> version to old Qemu. This is because CPRS Bit is not defined in the
> older Qemu register block and will always be 0 resulting in check always
> failing. Reversing the logic to first check `is_enabled` can alleviate
> below problem:
> 
> -                If ((\_SB.PCI0.PRES.CPEN == One))
> -                {
> -                    Local0 = 0x0F
> +                If ((\_SB.PCI0.PRES.CPRS == One))
> +                {
> +                    If ((\_SB.PCI0.PRES.CPEN == One))
> +                    {
> +                        Local0 = 0x0F
> +                    }
> +                    Else
> +                    {
> +                        Local0 = 0x0D
> +                    }
>                  }
> 
> Suggested-by: Igor Mammedov <imammedo@redhat.com>
'Reported-by' maybe, but certainly not suggested.

After more thinking, and given that presence is system-wide and doesn't change
at runtime, I don't see any reason for introducing the presence bit as ABI
(and undocumented on top of that).

Instead, changing the AML code to account for it would be better,
something like this:

diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h
index 32654dc274..4a3e591120 100644
--- a/include/hw/acpi/cpu.h
+++ b/include/hw/acpi/cpu.h
@@ -55,6 +55,7 @@ void cpu_hotplug_hw_init(MemoryRegion *as, Object *owner,
 typedef struct CPUHotplugFeatures {
     bool acpi_1_compatible;
     bool has_legacy_cphp;
+    bool always_present_cpus;
     bool fw_unplugs_cpu;
     const char *smi_path;
 } CPUHotplugFeatures;
diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index 5cb60ca8bc..2bcce2b31c 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -452,15 +452,16 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
 
         method = aml_method(CPU_STS_METHOD, 1, AML_SERIALIZED);
         {
+            uint8_t default_sta = opts.always_present_cpus?0xd:0;
             Aml *idx = aml_arg(0);
-            Aml *sta = aml_local(0);
+            Aml *sta = aml_local(default_sta);
 
             aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
             aml_append(method, aml_store(idx, cpu_selector));
             aml_append(method, aml_store(zero, sta));
             ifctx = aml_if(aml_equal(is_enabled, one));
             {
-                aml_append(ifctx, aml_store(aml_int(0xF), sta));
+                aml_append(ifctx, aml_or(aml_int(0xF), sta, sta));
             }
             aml_append(method, ifctx);
             aml_append(method, aml_release(ctrl_lock))

then for ARM, set
 CPUHotplugFeatures::always_present_cpus = true
to get the present flag always enabled

After that revert _all_ other presence bit related changes
that were just merged.
(I did ask to get rid of that in previous reviews but it
came back again for no good reason).

> Message-ID: <20241106100047.18901c9d@imammedo.users.ipa.redhat.com>
> Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> ---
>  hw/acpi/cpu.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 23443f09a5..b2f7a2b27e 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -490,22 +490,22 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
>              aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
>              aml_append(method, aml_store(idx, cpu_selector));
>              aml_append(method, aml_store(zero, sta));
> -            ifctx = aml_if(aml_equal(is_present, one));
> +            ifctx = aml_if(aml_equal(is_enabled, one));
>              {
> -                ifctx2 = aml_if(aml_equal(is_enabled, one));
> -                {
> -                    /* cpu is present and enabled */
> -                    aml_append(ifctx2, aml_store(aml_int(0xF), sta));
> -                }
> -                aml_append(ifctx, ifctx2);
> -                else_ctx = aml_else();
> +                /* cpu is present and enabled */
> +                aml_append(ifctx, aml_store(aml_int(0xF), sta));
> +            }
> +            aml_append(method, ifctx);
> +            else_ctx = aml_else();
> +            {
> +                ifctx2 = aml_if(aml_equal(is_present, one));
>                  {
>                      /* cpu is present but disabled */
> -                    aml_append(else_ctx, aml_store(aml_int(0xD), sta));
> +                    aml_append(ifctx2, aml_store(aml_int(0xD), sta));
>                  }
> -                aml_append(ifctx, else_ctx);
> +                aml_append(else_ctx, ifctx2);
>              }
> -            aml_append(method, ifctx);
> +            aml_append(method, else_ctx);
>              aml_append(method, aml_release(ctrl_lock));
>              aml_append(method, aml_return(sta));
>          }
Salil Mehta Nov. 6, 2024, 2:45 p.m. UTC | #2
Hi Igor,

>  From: qemu-arm-bounces+salil.mehta=huawei.com@nongnu.org <qemu-
>  arm-bounces+salil.mehta=huawei.com@nongnu.org> On Behalf Of Igor
>  Mammedov
>  Sent: Wednesday, November 6, 2024 1:57 PM
>  To: Salil Mehta <salil.mehta@huawei.com>
>  
>  On Wed, 6 Nov 2024 13:03:30 +0000
>  Salil Mehta <salil.mehta@huawei.com> wrote:
>  
>  > Checking `is_present` first can break x86 migration from new Qemu
>  > version to old Qemu. This is because CPRS Bit is not defined in the
>  > older Qemu register block and will always be 0 resulting in check
>  > always failing. Reversing the logic to first check `is_enabled` can
>  > alleviate below problem:
>  >
>  > -                If ((\_SB.PCI0.PRES.CPEN == One))
>  > -                {
>  > -                    Local0 = 0x0F
>  > +                If ((\_SB.PCI0.PRES.CPRS == One))
>  > +                {
>  > +                    If ((\_SB.PCI0.PRES.CPEN == One))
>  > +                    {
>  > +                        Local0 = 0x0F
>  > +                    }
>  > +                    Else
>  > +                    {
>  > +                        Local0 = 0x0D
>  > +                    }
>  >                  }
>  >
>  > Suggested-by: Igor Mammedov <imammedo@redhat.com>
>  'Reported-by' maybe, but certainly not suggested.


No issues. I can change.


>  
>  After more thinking and given presence is system wide that doesn't change
>  at runtime, I don't see any reason for introducing presence bit as ABI (and
>  undocumented on top of that).


This is a wrong assumption. The presence bit can change in the future. We have
taken this aspect into account by design in the kernel code as well. Both virtual
and physical CPU hotplug can co-exist, or exist entirely as sole features. This
is a requirement.


>  
>  Instead changing AML code to account for it would be better, something like
>  this:
>  
>  diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h index
>  32654dc274..4a3e591120 100644
>  --- a/include/hw/acpi/cpu.h
>  +++ b/include/hw/acpi/cpu.h
>  @@ -55,6 +55,7 @@ void cpu_hotplug_hw_init(MemoryRegion *as, Object
>  *owner,  typedef struct CPUHotplugFeatures {
>       bool acpi_1_compatible;
>       bool has_legacy_cphp;
>  +    bool always_present_cpus;


This has to be fetched from architecture code. Please see other changes in the V3
patch-set.


>       bool fw_unplugs_cpu;
>       const char *smi_path;
>   } CPUHotplugFeatures;
>  diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c index 5cb60ca8bc..2bcce2b31c
>  100644
>  --- a/hw/acpi/cpu.c
>  +++ b/hw/acpi/cpu.c
>  @@ -452,15 +452,16 @@ void build_cpus_aml(Aml *table, MachineState
>  *machine, CPUHotplugFeatures opts,
>  
>           method = aml_method(CPU_STS_METHOD, 1, AML_SERIALIZED);
>           {
>  +            uint8_t default_sta = opts.always_present_cpus?0xd:0;
>               Aml *idx = aml_arg(0);
>  -            Aml *sta = aml_local(0);
>  +            Aml *sta = aml_local(default_sta);
>  
>               aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
>               aml_append(method, aml_store(idx, cpu_selector));
>               aml_append(method, aml_store(zero, sta));
>               ifctx = aml_if(aml_equal(is_enabled, one));
>               {
>  -                aml_append(ifctx, aml_store(aml_int(0xF), sta));
>  +                aml_append(ifctx, aml_or(aml_int(0xF), sta, sta));
>               }
>               aml_append(method, ifctx);
>               aml_append(method, aml_release(ctrl_lock))
>  
>  then for ARM set
>   CPUHotplugFeatures::always_present_cpus = true to get present flag
>  always enabled


We MUST fetch this from architecture code, as this can dynamically change in
the future, and hence we need to keep that flexibility.

>  
>  After that revert _all_ other presence bit related changes that were just
>  merged.
>  (I did ask to get rid of that in previous reviews but it came back again for no
>  good reason).


The CPUs AML in the V2 patch-set would have broken the x86 functionality.


Thanks
Salil.
Igor Mammedov Nov. 6, 2024, 4:07 p.m. UTC | #3
On Wed, 6 Nov 2024 14:45:42 +0000
Salil Mehta <salil.mehta@huawei.com> wrote:

> Hi Igor,
> 
> >  From: qemu-arm-bounces+salil.mehta=huawei.com@nongnu.org <qemu-
> >  arm-bounces+salil.mehta=huawei.com@nongnu.org> On Behalf Of Igor
> >  Mammedov
> >  Sent: Wednesday, November 6, 2024 1:57 PM
> >  To: Salil Mehta <salil.mehta@huawei.com>
> >  
> >  On Wed, 6 Nov 2024 13:03:30 +0000
> >  Salil Mehta <salil.mehta@huawei.com> wrote:
> >    
> >  > Checking `is_present` first can break x86 migration from new Qemu
> >  > version to old Qemu. This is because CPRS Bit is not defined in the
> >  > older Qemu register block and will always be 0 resulting in check
> >  > always failing. Reversing the logic to first check `is_enabled` can
> >  > alleviate below problem:
> >  >
> >  > -                If ((\_SB.PCI0.PRES.CPEN == One))
> >  > -                {
> >  > -                    Local0 = 0x0F
> >  > +                If ((\_SB.PCI0.PRES.CPRS == One))
> >  > +                {
> >  > +                    If ((\_SB.PCI0.PRES.CPEN == One))
> >  > +                    {
> >  > +                        Local0 = 0x0F
> >  > +                    }
> >  > +                    Else
> >  > +                    {
> >  > +                        Local0 = 0x0D
> >  > +                    }
> >  >                  }
> >  >
> >  > Suggested-by: Igor Mammedov <imammedo@redhat.com>  
> >  'Reported-by' maybe, but certainly not suggested.  
> 
> 
> No issues. I can change.
> 
> 
> >  
> >  After more thinking and given presence is system wide that doesn't change
> >  at runtime, I don't see any reason for introducing presence bit as ABI (and
> >  undocumented on top of that).  
> 
> 
> This is a wrong assumption. Presence bit can change in future. We have taken
> into account this aspect by design in the kernel code as well. Both virtual

The above does imply that support for really hotpluggable CPUs might be
implemented in the future.
Can you point to relevant kernel code/patches?

> and physical CPU hot plug can co-exists or entirely as sole features.  This is
> a requirement.


I don't see any _must_ requirements here whatsoever. If/when ARM reaches the
point where standby and hotplug CPUs are mixed within a VM instance, we might
have to expose the presence bit to the guest dynamically, but that is totally not
needed within the observable future and premature.

Your CPU class presence check also works target-wise, just with more boilerplate
code plus a not-needed presence-bit ABI for the guest side.
It is the same as my suggestion, only from the other side.

But regardless of that, as long as the machine doesn't mix always-present CPUs
with hotpluggable ones within the same instance, changing the AML side
as I've just suggested works.

If ARM ever gets real CPU hotplug, as your comment above hints, my suggestion
also works fine, with the only difference that board code would turn off
always_present_cpus if hotpluggable CPUs are used instead of standby ones.

> 
> >  
> >  Instead changing AML code to account for it would be better, something like
> >  this:
> >  
> >  diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h index
> >  32654dc274..4a3e591120 100644
> >  --- a/include/hw/acpi/cpu.h
> >  +++ b/include/hw/acpi/cpu.h
> >  @@ -55,6 +55,7 @@ void cpu_hotplug_hw_init(MemoryRegion *as, Object
> >  *owner,  typedef struct CPUHotplugFeatures {
> >       bool acpi_1_compatible;
> >       bool has_legacy_cphp;
> >  +    bool always_present_cpus;  
> 
> 
> This has to be fetched from architecture code. Please see other changes in the V3
> patch-set.

I've seen that, and it doesn't have to be fetched dynamically.
In my opinion the series was not ready to be merged
(Michael probably picked it up by mistake).

We don't really need this in 9.2, as it is only ARM CPU 'hotplug'
related, and the latter is not ready for 9.2.
I'd prefer the series being reverted so we continue improving it,
instead of rushing it in and fixing the broken thing up.


> 
> 
> >       bool fw_unplugs_cpu;
> >       const char *smi_path;
> >   } CPUHotplugFeatures;
> >  diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c index 5cb60ca8bc..2bcce2b31c
> >  100644
> >  --- a/hw/acpi/cpu.c
> >  +++ b/hw/acpi/cpu.c
> >  @@ -452,15 +452,16 @@ void build_cpus_aml(Aml *table, MachineState
> >  *machine, CPUHotplugFeatures opts,
> >  
> >           method = aml_method(CPU_STS_METHOD, 1, AML_SERIALIZED);
> >           {
> >  +            uint8_t default_sta = opts.always_present_cpus?0xd:0;
> >               Aml *idx = aml_arg(0);
> >  -            Aml *sta = aml_local(0);
> >  +            Aml *sta = aml_local(default_sta);
> >  
> >               aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
> >               aml_append(method, aml_store(idx, cpu_selector));
> >               aml_append(method, aml_store(zero, sta));
> >               ifctx = aml_if(aml_equal(is_enabled, one));
> >               {
> >  -                aml_append(ifctx, aml_store(aml_int(0xF), sta));
> >  +                aml_append(ifctx, aml_or(aml_int(0xF), sta, sta));
> >               }
> >               aml_append(method, ifctx);
> >               aml_append(method, aml_release(ctrl_lock))
> >  
> >  then for ARM set
> >   CPUHotplugFeatures::always_present_cpus = true to get present flag
> >  always enabled  
> 
> 
> We MUST fetch this from architecture code as this can dynamically change in
> future and hence, we need to keep that flexibility

CPUHotplugFeatures::always_present_cpus can be set dynamically per VM instance
or per Machine type.

> >  
> >  After that revert _all_ other presence bit related changes that were just
> >  merged.
> >  (I did ask to get rid of that in previous reviews but it came back again for no
> >  good reason).  
> 
> 
> The CPUs AML in the V2 patch-set would have broken the x86 functionality.

Frankly speaking, I don't see much progress. All that happens on respins
is flipping from what I asked to remove before back to some earlier concept.
And all of that for the purpose of working around/accommodating fake CPU hotplug hacks.

> Thanks
> Salil.
>
Salil Mehta Nov. 6, 2024, 7:05 p.m. UTC | #4
Hi Igor,

Thanks for replying back and the reviews. Please find my replies
inline.

>  From: Igor Mammedov <imammedo@redhat.com>
>  Sent: Wednesday, November 6, 2024 4:08 PM
>  To: Salil Mehta <salil.mehta@huawei.com>
>  
>  On Wed, 6 Nov 2024 14:45:42 +0000
>  Salil Mehta <salil.mehta@huawei.com> wrote:
>  
>  > Hi Igor,
>  >
>  > >  From: qemu-arm-bounces+salil.mehta=huawei.com@nongnu.org <qemu-
>  > >  arm-bounces+salil.mehta=huawei.com@nongnu.org> On Behalf Of Igor
>  > >  Mammedov
>  > >  Sent: Wednesday, November 6, 2024 1:57 PM
>  > >  To: Salil Mehta <salil.mehta@huawei.com>
>  > >
>  > >  On Wed, 6 Nov 2024 13:03:30 +0000
>  > >  Salil Mehta <salil.mehta@huawei.com> wrote:
>  > >
>  > >  > Checking `is_present` first can break x86 migration from new Qemu
>  > >  > version to old Qemu. This is because CPRS Bit is not defined in the
>  > >  > older Qemu register block and will always be 0 resulting in check
>  > >  > always failing. Reversing the logic to first check `is_enabled` can
>  > >  > alleviate below problem:
>  > >  >
>  > >  > -                If ((\_SB.PCI0.PRES.CPEN == One))
>  > >  > -                {
>  > >  > -                    Local0 = 0x0F
>  > >  > +                If ((\_SB.PCI0.PRES.CPRS == One))
>  > >  > +                {
>  > >  > +                    If ((\_SB.PCI0.PRES.CPEN == One))
>  > >  > +                    {
>  > >  > +                        Local0 = 0x0F
>  > >  > +                    }
>  > >  > +                    Else
>  > >  > +                    {
>  > >  > +                        Local0 = 0x0D
>  > >  > +                    }
>  > >  >                  }
>  > >  >
>  > >  > Suggested-by: Igor Mammedov <imammedo@redhat.com>
>  > >  'Reported-by' maybe, but certainly not suggested.
>  >
>  >
>  > No issues. I can change.
>  >
>  >
>  > >
>  > >  After more thinking and given presence is system wide that doesn't change
>  > >  at runtime, I don't see any reason for introducing presence bit as ABI
>  > >  (and undocumented on top of that).
>  >
>  >
>  > This is a wrong assumption. Presence bit can change in future. We have
>  > taken into account this aspect by design in the kernel code as well.
>  > Both virtual
>  
>  Above does imply that support for really hotpluggable CPUs might be
>  implemented in the future.
>  Can you point to relevant kernel code/patches?


Let me make it clear so that there is no confusion: there is no support for
physical "CPU" hot-plug on ARM platforms right now, nor will there be
any in the future, as it does not make sense to have. The point I made in the
patches is that the hot-plug is at a different granularity, which has not
been denied by ARM.

>  
>  > and physical CPU hot plug can co-exists or entirely as sole features.
>  > This is a requirement.
>  
>  
>  I don't see any _must_ requirements here whatsoever. If/when ARM
>  reaches point where standby and hotplug cpus are mixed within VM
>  instance, we might have to expose presence bit to guest dynamically but it
>  is totally not needed within observable future and premature.


>  Your cpu class presence check also works target-wise just with more boiler
>  code
>  + not needed presence bit ABI for guest side,
>  The same as my suggestion only from other side.
>  
>  But regardless of that as long as machine has doesn't mix always present
>  CPUs with hotpluggable ones within the same instance, changing AML side
>  as I've just suggested works.


Sure, I'm not disagreeing. It will work by adding the flag you've suggested,
but it will also work without adding any flag and without defining a `persistent`
hook for any architecture.


>  
>  If ARM ever gets real cpu hotplug as your comment above hints, my
>  suggestion also works fine. With only difference that board code would turn
>  off always_present_cpus if hotpluggable CPUs are used instead of standby.


Sure, but is it necessary to define a new flag when you can do without it?
Having a few lines of extra code (literally 2-3 only) should not be a major cause
of worry, especially if it makes the design clearer and easier to understand. This
is not performance code either.

Again, I appreciate your compact logic. I'm not disagreeing with it.


>  > >  Instead changing AML code to account for it would be better,
>  > > something like
>  > >  this:
>  > >
>  > >  diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h index
>  > >  32654dc274..4a3e591120 100644
>  > >  --- a/include/hw/acpi/cpu.h
>  > >  +++ b/include/hw/acpi/cpu.h
>  > >  @@ -55,6 +55,7 @@ void cpu_hotplug_hw_init(MemoryRegion *as, Object
>  > >  *owner,  typedef struct CPUHotplugFeatures {
>  > >       bool acpi_1_compatible;
>  > >       bool has_legacy_cphp;
>  > >  +    bool always_present_cpus;
>  >
>  >
>  > This has to be fetched from architecture code. Please see other
>  > changes in the V3 patch-set.
>  
>  I've seen, that and it doesn't have to be fetched dynamically.


Agreed, it does not necessarily have to be fetched; hence, you simply do not
define the hook. In that case, it assumes the existing logic, which x86 has been
following. It will work.


>  In my opinion the series was not ready to be merged (Michael probably
>  picked it by mistake).
>  
>  We don't really need this in 9.2 as it is only ARM cpu 'hotplug'
>  related, and the later is not ready for 9.2.
>  I'd prefer the series being reverted and we continue improving series,
>  instead of rushing it and fixing broken thing up.
>  
>  
>  >
>  >
>  > >       bool fw_unplugs_cpu;
>  > >       const char *smi_path;
>  > >   } CPUHotplugFeatures;
>  > >  diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c index
>  > > 5cb60ca8bc..2bcce2b31c
>  > >  100644
>  > >  --- a/hw/acpi/cpu.c
>  > >  +++ b/hw/acpi/cpu.c
>  > >  @@ -452,15 +452,16 @@ void build_cpus_aml(Aml *table, MachineState
>  > >  *machine, CPUHotplugFeatures opts,
>  > >
>  > >           method = aml_method(CPU_STS_METHOD, 1, AML_SERIALIZED);
>  > >           {
>  > >  +            uint8_t default_sta = opts.always_present_cpus?0xd:0;
>  > >               Aml *idx = aml_arg(0);
>  > >  -            Aml *sta = aml_local(0);
>  > >  +            Aml *sta = aml_local(default_sta);
>  > >
>  > >               aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
>  > >               aml_append(method, aml_store(idx, cpu_selector));
>  > >               aml_append(method, aml_store(zero, sta));
>  > >               ifctx = aml_if(aml_equal(is_enabled, one));
>  > >               {
>  > >  -                aml_append(ifctx, aml_store(aml_int(0xF), sta));
>  > >  +                aml_append(ifctx, aml_or(aml_int(0xF), sta, sta));
>  > >               }
>  > >               aml_append(method, ifctx);
>  > >               aml_append(method, aml_release(ctrl_lock))
>  > >
>  > >  then for ARM set
>  > >   CPUHotplugFeatures::always_present_cpus = true to get present flag
>  > > always enabled
>  >
>  >
>  > We MUST fetch this from architecture code as this can dynamically
>  > change in future and hence, we need to keep that flexibility
>  
>  CPUHotplugFeatures::always_present_cpus can be set dynamically per VM
>  instance or per Machine type.


Yes, but you need code for that, and I'm not sure what is being saved by
introducing an extra flag then.


>  > >  After that revert _all_ other presence bit related changes that
>  > > were just  merged.
>  > >  (I did ask to get rid of that in previous reviews but it came back
>  > > again for no  good reason).
>  >
>  >
>  > The CPUs AML in the V2 patch-set would have broken the x86
>  functionality.
>  
>  Frankly speaking, I don't see much progress. All that happens on respins is
>  flipping between what I asked to remove before to some earlier concept.


I think you've then missed the code which addressed one of your key concerns
with the V1 patch-set, one that would have broken x86 migration: the presence of
the `is_enabled` state in the CPU hot-plug VMSD. That has been removed. Maybe
you would like to go through the change log of the V3 patch-set:

https://lore.kernel.org/qemu-devel/20241018163118.0ae01a84@imammedo.users.ipa.redhat.com/

Below is the relevant excerpt from your many comments:

[...]
>      .fields = (const VMStateField[]) {
>          VMSTATE_BOOL(is_inserting, AcpiCpuStatus),
>          VMSTATE_BOOL(is_removing, AcpiCpuStatus),
> +        VMSTATE_BOOL(is_present, AcpiCpuStatus),
> +        VMSTATE_BOOL(is_enabled, AcpiCpuStatus),

that's likely will break x86 migration,
but before bothering peterx, maybe we won't need this hunk if
is_enabled is migrated as part of vCPU state.

[...]


>  And all of that for the purpose to workaround/accommodate fake cpu
>  hotplug hacks.


I have to respectfully disagree on this. This is your assumption, and it is far
from reality. The main series accompanying this will never have vCPUs which are
*not* present. BTW, these changes have been tested by Oracle folks with that
series and are known to work.

https://lore.kernel.org/qemu-devel/C4FEC9E7-69DB-428A-A85F-30170F98814B@oracle.com/



Thanks!

Best regards
Salil.

Patch

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index 23443f09a5..b2f7a2b27e 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -490,22 +490,22 @@  void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
             aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
             aml_append(method, aml_store(idx, cpu_selector));
             aml_append(method, aml_store(zero, sta));
-            ifctx = aml_if(aml_equal(is_present, one));
+            ifctx = aml_if(aml_equal(is_enabled, one));
             {
-                ifctx2 = aml_if(aml_equal(is_enabled, one));
-                {
-                    /* cpu is present and enabled */
-                    aml_append(ifctx2, aml_store(aml_int(0xF), sta));
-                }
-                aml_append(ifctx, ifctx2);
-                else_ctx = aml_else();
+                /* cpu is present and enabled */
+                aml_append(ifctx, aml_store(aml_int(0xF), sta));
+            }
+            aml_append(method, ifctx);
+            else_ctx = aml_else();
+            {
+                ifctx2 = aml_if(aml_equal(is_present, one));
                 {
                     /* cpu is present but disabled */
-                    aml_append(else_ctx, aml_store(aml_int(0xD), sta));
+                    aml_append(ifctx2, aml_store(aml_int(0xD), sta));
                 }
-                aml_append(ifctx, else_ctx);
+                aml_append(else_ctx, ifctx2);
             }
-            aml_append(method, ifctx);
+            aml_append(method, else_ctx);
             aml_append(method, aml_release(ctrl_lock));
             aml_append(method, aml_return(sta));
         }