diff mbox series

[RFC,XEN,v2] x86/cpuid: Expose max_vcpus field in HVM hypervisor leaf

Message ID fa24cd3b232e8865eb6451e5f7af9cd203ce52ab.1721224079.git.matthew.barnes@cloud.com (mailing list archive)
State New
Series [RFC,XEN,v2] x86/cpuid: Expose max_vcpus field in HVM hypervisor leaf

Commit Message

Matthew Barnes July 19, 2024, 2:21 p.m. UTC
Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
startup.

There are efforts to support a maximum of 128 vCPUs, which would involve
bumping the OVMF constant from 64 to 128.

However, it would be more future-proof for OVMF to access the maximum
number of vCPUs for a domain and set itself up appropriately at
run-time.

GitLab ticket: https://gitlab.com/xen-project/xen/-/issues/191

For OVMF to derive the maximum vCPU count, this patch has Xen expose
the maximum vCPU ID via CPUID, in EDX of the HVM hypervisor leaf.

Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
---
Changes in v2:
- Tweak value from "maximum vcpu count" to "maximum vcpu id"
- Reword commit message to avoid "have to" wording
- Fix vpcus -> vcpus typo
---
 xen/arch/x86/traps.c                | 4 ++++
 xen/include/public/arch-x86/cpuid.h | 3 +++
 2 files changed, 7 insertions(+)

Comments

Jan Beulich July 22, 2024, 11:37 a.m. UTC | #1
On 19.07.2024 16:21, Matthew Barnes wrote:
> Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
> startup.
> 
> There are efforts to support a maximum of 128 vCPUs, which would involve
> bumping the OVMF constant from 64 to 128.
> 
> However, it would be more future-proof for OVMF to access the maximum
> number of vCPUs for a domain and set itself up appropriately at
> run-time.
> 
> GitLab ticket: https://gitlab.com/xen-project/xen/-/issues/191
> 
> For OVMF to access the maximum vCPU count, this patch has Xen expose
> the maximum vCPU ID via cpuid on the HVM hypervisor leaf in edx.
> 
> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
> ---
> Changes in v2:
> - Tweak value from "maximum vcpu count" to "maximum vcpu id"
> - Reword commit message to avoid "have to" wording
> - Fix vpcus -> vcpus typo
> ---

Yet still HVM-only?

Jan
Jan Beulich July 24, 2024, 5:42 a.m. UTC | #2
(re-adding xen-devel@)

On 23.07.2024 14:57, Matthew Barnes wrote:
> On Mon, Jul 22, 2024 at 01:37:11PM +0200, Jan Beulich wrote:
>> On 19.07.2024 16:21, Matthew Barnes wrote:
>>> Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
>>> startup.
>>>
>>> There are efforts to support a maximum of 128 vCPUs, which would involve
>>> bumping the OVMF constant from 64 to 128.
>>>
>>> However, it would be more future-proof for OVMF to access the maximum
>>> number of vCPUs for a domain and set itself up appropriately at
>>> run-time.
>>>
>>> GitLab ticket: https://gitlab.com/xen-project/xen/-/issues/191
>>>
>>> For OVMF to access the maximum vCPU count, this patch has Xen expose
>>> the maximum vCPU ID via cpuid on the HVM hypervisor leaf in edx.
>>>
>>> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
>>> ---
>>> Changes in v2:
>>> - Tweak value from "maximum vcpu count" to "maximum vcpu id"
>>> - Reword commit message to avoid "have to" wording
>>> - Fix vpcus -> vcpus typo
>>> ---
>>
>> Yet still HVM-only?
> 
> This field is only used when the guest is HVM, so I decided it should
> only be exposed to HVM guests.
> 
> If not, where else would you suggest to put this field?

In a presently unused leaf? Or one of the unused registers of leaf x01
(with the gating flag in leaf x02 ECX)?

Jan
Matthew Barnes July 24, 2024, 12:51 p.m. UTC | #3
On Wed, Jul 24, 2024 at 07:42:19AM +0200, Jan Beulich wrote:
> (re-adding xen-devel@)
> 
> On 23.07.2024 14:57, Matthew Barnes wrote:
> > On Mon, Jul 22, 2024 at 01:37:11PM +0200, Jan Beulich wrote:
> >> On 19.07.2024 16:21, Matthew Barnes wrote:
> >>> Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
> >>> startup.
> >>>
> >>> There are efforts to support a maximum of 128 vCPUs, which would involve
> >>> bumping the OVMF constant from 64 to 128.
> >>>
> >>> However, it would be more future-proof for OVMF to access the maximum
> >>> number of vCPUs for a domain and set itself up appropriately at
> >>> run-time.
> >>>
> >>> GitLab ticket: https://gitlab.com/xen-project/xen/-/issues/191
> >>>
> >>> For OVMF to access the maximum vCPU count, this patch has Xen expose
> >>> the maximum vCPU ID via cpuid on the HVM hypervisor leaf in edx.
> >>>
> >>> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
> >>> ---
> >>> Changes in v2:
> >>> - Tweak value from "maximum vcpu count" to "maximum vcpu id"
> >>> - Reword commit message to avoid "have to" wording
> >>> - Fix vpcus -> vcpus typo
> >>> ---
> >>
> >> Yet still HVM-only?
> > 
> > This field is only used when the guest is HVM, so I decided it should
> > only be present to HVM guests.
> > 
> > If not, where else would you suggest to put this field?
> 
> In a presently unused leaf? Or one of the unused registers of leaf x01
> (with the gating flag in leaf x02 ECX)?

I could establish leaf x06 as a 'domain info' leaf for both HVM and PV,
have EAX as a features bitmap, and EBX as the max_vcpu_id field.

Is this satisfactory?

Matt
Jan Beulich July 24, 2024, 1:01 p.m. UTC | #4
On 24.07.2024 14:51, Matthew Barnes wrote:
> On Wed, Jul 24, 2024 at 07:42:19AM +0200, Jan Beulich wrote:
>> (re-adding xen-devel@)
>>
>> On 23.07.2024 14:57, Matthew Barnes wrote:
>>> On Mon, Jul 22, 2024 at 01:37:11PM +0200, Jan Beulich wrote:
>>>> On 19.07.2024 16:21, Matthew Barnes wrote:
>>>>> Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
>>>>> startup.
>>>>>
>>>>> There are efforts to support a maximum of 128 vCPUs, which would involve
>>>>> bumping the OVMF constant from 64 to 128.
>>>>>
>>>>> However, it would be more future-proof for OVMF to access the maximum
>>>>> number of vCPUs for a domain and set itself up appropriately at
>>>>> run-time.
>>>>>
>>>>> GitLab ticket: https://gitlab.com/xen-project/xen/-/issues/191
>>>>>
>>>>> For OVMF to access the maximum vCPU count, this patch has Xen expose
>>>>> the maximum vCPU ID via cpuid on the HVM hypervisor leaf in edx.
>>>>>
>>>>> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
>>>>> ---
>>>>> Changes in v2:
>>>>> - Tweak value from "maximum vcpu count" to "maximum vcpu id"
>>>>> - Reword commit message to avoid "have to" wording
>>>>> - Fix vpcus -> vcpus typo
>>>>> ---
>>>>
>>>> Yet still HVM-only?
>>>
>>> This field is only used when the guest is HVM, so I decided it should
>>> only be present to HVM guests.
>>>
>>> If not, where else would you suggest to put this field?
>>
>> In a presently unused leaf? Or one of the unused registers of leaf x01
>> (with the gating flag in leaf x02 ECX)?
> 
> I could establish leaf x06 as a 'domain info' leaf for both HVM and PV,
> have EAX as a features bitmap, and EBX as the max_vcpu_id field.
> 
> Is this satisfactory?

Hmm. Personally I think that all new leaves would better permit for multiple
sub-leaves. Hence EAX is already unavailable. Additionally I'm told that
there are internal discussions (supposed to be) going on at your end, which
makes me wonder whether the above is the outcome of those discussions (in
particular having at least tentative buy-off by Andrew).

For the particular data to expose here, I would prefer the indicated re-use
of an existing leaf. I haven't seen counter-arguments to that so far.

Jan
Alejandro Vallejo July 24, 2024, 2:14 p.m. UTC | #5
On Wed Jul 24, 2024 at 2:01 PM BST, Jan Beulich wrote:
> On 24.07.2024 14:51, Matthew Barnes wrote:
> > On Wed, Jul 24, 2024 at 07:42:19AM +0200, Jan Beulich wrote:
> >> (re-adding xen-devel@)
> >>
> >> On 23.07.2024 14:57, Matthew Barnes wrote:
> >>> On Mon, Jul 22, 2024 at 01:37:11PM +0200, Jan Beulich wrote:
> >>>> On 19.07.2024 16:21, Matthew Barnes wrote:
> >>>>> Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
> >>>>> startup.
> >>>>>
> >>>>> There are efforts to support a maximum of 128 vCPUs, which would involve
> >>>>> bumping the OVMF constant from 64 to 128.
> >>>>>
> >>>>> However, it would be more future-proof for OVMF to access the maximum
> >>>>> number of vCPUs for a domain and set itself up appropriately at
> >>>>> run-time.
> >>>>>
> >>>>> GitLab ticket: https://gitlab.com/xen-project/xen/-/issues/191
> >>>>>
> >>>>> For OVMF to access the maximum vCPU count, this patch has Xen expose
> >>>>> the maximum vCPU ID via cpuid on the HVM hypervisor leaf in edx.
> >>>>>
> >>>>> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
> >>>>> ---
> >>>>> Changes in v2:
> >>>>> - Tweak value from "maximum vcpu count" to "maximum vcpu id"
> >>>>> - Reword commit message to avoid "have to" wording
> >>>>> - Fix vpcus -> vcpus typo
> >>>>> ---
> >>>>
> >>>> Yet still HVM-only?
> >>>
> >>> This field is only used when the guest is HVM, so I decided it should
> >>> only be present to HVM guests.
> >>>
> >>> If not, where else would you suggest to put this field?
> >>
> >> In a presently unused leaf? Or one of the unused registers of leaf x01
> >> (with the gating flag in leaf x02 ECX)?
> > 
> > I could establish leaf x06 as a 'domain info' leaf for both HVM and PV,
> > have EAX as a features bitmap, and EBX as the max_vcpu_id field.
> > 
> > Is this satisfactory?
>
> Hmm. Personally I think that all new leaves would better permit for multiple
> sub-leaves. Hence EAX is already unavailable. Additionally I'm told that
> there are internal discussions (supposed to be) going on at your end, which
> makes me wonder whether the above is the outcome of those discussions (in
> particular having at least tentative buy-off by Andrew).
>
> For the particular data to expose here, I would prefer the indicated re-use
> of an existing leaf. I haven't seen counter-arguments to that so far.
>
> Jan

I originally recommended that Matt expose it on the HVM leaf, for semantic
cohesion with the other domain-related data and because it's strictly
needed only for HVM, at least for the time being.

It is true, though, that it's not HVM-specific and could go elsewhere. There's a
choice in theory, but not much of one in practice, I think. Re-using leaf 1 would
overload it semantically, as it's already used for version reporting (just like
other architectural CPUID groups). Leaf 2 could be an option, but it's somewhat
annoying because it leaves (pun intended) no room for expansion. A potential
new leaf 6 would indeed need to ensure only subleaf0 is implemented (as do
leaves 4 and 5), but otherwise should be pretty harmless.

Andrew might very well have wildly different views.

Cheers,
Alejandro

Patch

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index ee91fc56b125..f39b598e9bba 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1169,6 +1169,10 @@  void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
         res->a |= XEN_HVM_CPUID_DOMID_PRESENT;
         res->c = d->domain_id;
 
+        /* Indicate presence of max vcpu id and set it in edx */
+        res->a |= XEN_HVM_CPUID_MAX_VCPU_ID_PRESENT;
+        res->d = d->max_vcpus - 1;
+
         /*
          * Per-vCPU event channel upcalls are implemented and work
          * correctly with PIRQs routed over event channels.
diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index 3bb0dd249ff9..7673e285a9ec 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -87,6 +87,7 @@ 
  * Sub-leaf 0: EAX: Features
  * Sub-leaf 0: EBX: vcpu id (iff EAX has XEN_HVM_CPUID_VCPU_ID_PRESENT flag)
  * Sub-leaf 0: ECX: domain id (iff EAX has XEN_HVM_CPUID_DOMID_PRESENT flag)
+ * Sub-leaf 0: EDX: max vcpu id (iff EAX has XEN_HVM_CPUID_MAX_VCPU_ID_PRESENT flag)
  */
 #define XEN_HVM_CPUID_APIC_ACCESS_VIRT (1u << 0) /* Virtualized APIC registers */
 #define XEN_HVM_CPUID_X2APIC_VIRT      (1u << 1) /* Virtualized x2APIC accesses */
@@ -107,6 +108,8 @@ 
  */
 #define XEN_HVM_CPUID_UPCALL_VECTOR    (1u << 6)
 
+#define XEN_HVM_CPUID_MAX_VCPU_ID_PRESENT (1u << 7) /* max vcpu id is present in EDX */
+
 /*
  * Leaf 6 (0x40000x05)
  * PV-specific parameters