
[v5,07/11] xen/domctl: Introduce XEN_DOMCTL_CDF_vpci flag

Message ID: 20d5b9d6a0d01a7b90711d28cbefb5701a88b438.1633540842.git.rahul.singh@arm.com
State New, archived
Series: PCI devices passthrough on Arm

Commit Message

Rahul Singh Oct. 6, 2021, 5:40 p.m. UTC
Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
Reject the use of this new flag for x86 as VPCI is not supported for
DOMU guests for x86.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
---
Change in v5:
- Added Acked-by: Christian Lindig <christian.lindig@citrix.com>
- Added Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Change in v4: Added in this version
---
---
 tools/ocaml/libs/xc/xenctrl.ml  | 1 +
 tools/ocaml/libs/xc/xenctrl.mli | 1 +
 xen/arch/arm/domain.c           | 4 ++--
 xen/arch/x86/domain.c           | 6 ++++++
 xen/common/domain.c             | 2 +-
 xen/include/public/domctl.h     | 4 +++-
 6 files changed, 14 insertions(+), 4 deletions(-)
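
The x86 change here amounts to rejecting the new flag at domain creation.
A minimal sketch of the idea (illustrative, not the exact hunk; the check
would sit in arch_sanitise_domain_config() in xen/arch/x86/domain.c):

    /* Sketch: reject vPCI at x86 domain creation (illustrative only). */
    if ( config->flags & XEN_DOMCTL_CDF_vpci )
    {
        dprintk(XENLOG_INFO, "vPCI requested but not supported\n");
        return -EINVAL;
    }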

Comments

Jan Beulich Oct. 7, 2021, 1:08 p.m. UTC | #1
On 06.10.2021 19:40, Rahul Singh wrote:
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
>  #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
>  #define _XEN_DOMCTL_CDF_nested_virt   6
>  #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
> +#define _XEN_DOMCTL_CDF_vpci          7
> +#define XEN_DOMCTL_CDF_vpci           (1U << _XEN_DOMCTL_CDF_vpci)

As said in [1], here I similarly don't see the need for two constants.
Preferably with the former of the two dropped:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan

[1] https://lists.xen.org/archives/html/xen-devel/2021-10/msg00266.html
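
For reference, the single-constant form being asked for would simply be,
e.g.:

    #define XEN_DOMCTL_CDF_vpci           (1U << 7)

i.e. with the separate _XEN_DOMCTL_CDF_vpci bit-position constant dropped.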
Andrew Cooper Oct. 8, 2021, 6:06 p.m. UTC | #2
On 06/10/2021 18:40, Rahul Singh wrote:
> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
> Reject the use of this new flag for x86 as VPCI is not supported for
> DOMU guests for x86.
>
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> Acked-by: Christian Lindig <christian.lindig@citrix.com>

I'm afraid this logic is broken.

There's no matching feature to indicate to the toolstack whether
XEN_DOMCTL_CDF_vpci will be accepted or not.  The usual way of doing
this is with a physinfo_cap field.

This flag needs using in Patch 10 to reject attempts to create a VM with
devices when passthrough support is unavailable.

Ian, for the 4.16 release, this series either needs completing with the
additional flag implemented, or this patch needs reverting to avoid us
shipping a broken interface.

~Andrew
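
The physinfo_cap mechanism referred to above is the capabilities bitmap in
struct xen_sysctl_physinfo. A sketch of what wiring a vPCI bit through it
might look like (the bit position and the vpci_is_available flag are
illustrative, not actual code):

    /* xen/include/public/sysctl.h -- sketch, bit position illustrative */
    #define XEN_SYSCTL_PHYSCAP_vpci   (1u << 4)

    /* XEN_SYSCTL_physinfo handler -- sketch */
    if ( vpci_is_available )
        pi->capabilities |= XEN_SYSCTL_PHYSCAP_vpci;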
Julien Grall Oct. 8, 2021, 9:12 p.m. UTC | #3
Hi Andrew,

On Fri, 8 Oct 2021, 20:07 Andrew Cooper, <andrew.cooper3@citrix.com> wrote:

> On 06/10/2021 18:40, Rahul Singh wrote:
> > Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
> > Reject the use of this new flag for x86 as VPCI is not supported for
> > DOMU guests for x86.
> >
> > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > Acked-by: Christian Lindig <christian.lindig@citrix.com>
>
> I'm afraid this logic is broken.
>
> There's no matching feature to indicate to the toolstack whether
> XEN_DOMCTL_CDF_vpci will be accepted or not.  The usual way of doing
> this is with a physinfo_cap field.
>

I am slightly puzzled by this. I am assuming you are referring to
XENVER_get_features which AFAICT is a stable interface. So why should we
use it to describe the presence of an unstable interface?


> This flag needs using in Patch 10 to reject attempts to create a VM with
> devices when passthrough support is unavailable.
>

May I ask why we can't rely on the hypervisor to reject the flag when
suitable?


> Ian, for the 4.16 release, this series either needs completing with the
> additional flag implemented, or this patch needs reverting to avoid us
> shipping a broken interface.


I fail to see how the interface would be broken... Would you mind
describing a bit more what could go wrong with this interface?

Cheers,


> ~Andrew
>
>
Stefano Stabellini Oct. 8, 2021, 9:46 p.m. UTC | #4
On Fri, 8 Oct 2021, Julien Grall wrote:
> Hi Andrew,
> 
> On Fri, 8 Oct 2021, 20:07 Andrew Cooper, <andrew.cooper3@citrix.com> wrote:
>       On 06/10/2021 18:40, Rahul Singh wrote:
>       > Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>       > Reject the use of this new flag for x86 as VPCI is not supported for
>       > DOMU guests for x86.
>       >
>       > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>       > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>       > Acked-by: Christian Lindig <christian.lindig@citrix.com>
> 
>       I'm afraid this logic is broken.
> 
>       There's no matching feature to indicate to the toolstack whether
>       XEN_DOMCTL_CDF_vpci will be accepted or not.  The usual way of doing
>       this is with a physinfo_cap field.
> 
> 
> I am slightly puzzled by this. I am assuming you are referring to XENVER_get_features which AFAICT is a stable interface. So why should we
> use it to describe the presence of an unstable interface?
> 
> 
>       This flag needs using in Patch 10 to reject attempts to create a VM with
>       devices when passthrough support is unavailable.
> 
> 
> May I ask why we can't rely on the hypervisor to reject the flag when suitable?
> 
> 
>       Ian, for the 4.16 release, this series either needs completing with the
>       additional flag implemented, or this patch needs reverting to avoid us
>       shipping a broken interface.
> 
> 
> I fail to see how the interface would be broken... Would you mind describing a bit more what could go wrong with this interface?


After chatting with Andrew on IRC, this is my understanding.

Today if pci=[] is specified in the VM config file then
XEN_DOMCTL_CDF_vpci is added. If Xen doesn't support it, Xen returns
an error but libxl/xl won't be able to tell exactly why it fails. So xl
will end up printing a generic VM creation failure. Andrew would like to
see something like the following in libxl:

if ( PCI_devices && !cap.vpci )
    error("Sorry - PCI not supported")

So that the user gets a clear informative error back rather than a
generic VM creation failure. Also, this is a requirement for the stable
hypercall interface.


I think that's fine and we can implement this request easily by adding
XEN_SYSCTL_PHYSCAP_vpci. Rahul or Bertrand, are you guys happy with
doing that? Otherwise I could take it on.


As a side note, given that PCI passthrough support is actually not yet
complete on ARM, we could even just do the following in xl/libxl:

if ( PCI_devices )
    error("Sorry - PCI not supported")

or temporarily remove XEN_DOMCTL_CDF_vpci until PCI passthrough gets
finalized.
Julien Grall Oct. 11, 2021, 9:24 a.m. UTC | #5
Hi Stefano,

On 08/10/2021 22:46, Stefano Stabellini wrote:
> On Fri, 8 Oct 2021, Julien Grall wrote:
>> Hi Andrew,
>>
>> On Fri, 8 Oct 2021, 20:07 Andrew Cooper, <andrew.cooper3@citrix.com> wrote:
>>        On 06/10/2021 18:40, Rahul Singh wrote:
>>        > Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>>        > Reject the use of this new flag for x86 as VPCI is not supported for
>>        > DOMU guests for x86.
>>        >
>>        > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>        > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>        > Acked-by: Christian Lindig <christian.lindig@citrix.com>
>>
>>        I'm afraid this logic is broken.
>>
>>        There's no matching feature to indicate to the toolstack whether
>>        XEN_DOMCTL_CDF_vpci will be accepted or not.  The usual way of doing
>>        this is with a physinfo_cap field.
>>
>>
>> I am slightly puzzled by this. I am assuming you are referring to XENVER_get_features which AFAICT is a stable interface. So why should we
>> use it to describe the presence of an unstable interface?
>>
>>
>>        This flag needs using in Patch 10 to reject attempts to create a VM with
>>        devices when passthrough support is unavailable.
>>
>>
>> May I ask why we can't rely on the hypervisor to reject the flag when suitable?
>>
>>
>>        Ian, for the 4.16 release, this series either needs completing with the
>>        additional flag implemented, or this patch needs reverting to avoid us
>>        shipping a broken interface.
>>
>>
>> I fail to see how the interface would be broken... Would you mind describing a bit more what could go wrong with this interface?
> 
> 
> After chatting with Andrew on IRC, this is my understanding.
> 
> Today if pci=[] is specified in the VM config file then
> XEN_DOMCTL_CDF_vpci is added. If Xen doesn't support it, Xen returns
> an error but libxl/xl won't be able to tell exactly why it fails. So xl
> will end up printing a generic VM creation failure. Andrew would like to
> see something like the following in libxl:
> 
> if ( PCI_devices && !cap.vpci )
>      error("Sorry - PCI not supported")
> 
> So that the user gets a clear informative error back rather than a
> generic VM creation failure. 

I can understand this request. However...

> Also, this is a requirement for the stable
> hypercall interface.

... leaving aside the fact that domctl is clearly not stable today, the
flag would be rejected by a hypervisor not supporting vPCI. So I don't
really see how this is a requirement for the stable hypercall interface.

Would you mind providing more details?

> 
> 
> I think that's fine and we can implement this request easily by adding
> XEN_SYSCTL_PHYSCAP_vpci. Rahul or Bertrand, are you guys happy with
> doing that? Otherwise I could take it on.
> 
> 
> As a side note, given that PCI passthrough support is actually not yet
> complete on ARM, we could even just do the following in xl/libxl:
> 
> if ( PCI_devices )
>      error("Sorry - PCI not supported")
> 
> or temporarily remove XEN_DOMCTL_CDF_vpci until PCI passthrough gets
> finalized.

Cheers,


>
Roger Pau Monné Oct. 11, 2021, 9:27 a.m. UTC | #6
On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
> Reject the use of this new flag for x86 as VPCI is not supported for
> DOMU guests for x86.

I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
PVH dom0, like we do for any other CDF flags when Xen builds dom0.

Things like PVH vs PV get translated into CDF flags by create_dom0,
and processed normally by the sanitise_domain_config logic, vPCI
should be handled that way.

Do you think you could see about fixing this?

Thanks, Roger.
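
For context, create_dom0() already turns boot-time choices into CDF flags.
A sketch of that pattern with the suggested vPCI addition (illustrative,
not the actual code):

    /* xen/arch/x86/setup.c:create_dom0() -- sketch */
    if ( opt_dom0_pvh )
    {
        dom0_cfg.flags |= XEN_DOMCTL_CDF_hvm |
                          (hvm_hap_supported() ? XEN_DOMCTL_CDF_hap : 0);
        dom0_cfg.flags |= XEN_DOMCTL_CDF_vpci;  /* suggested addition */
    }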
Ian Jackson Oct. 11, 2021, 9:48 a.m. UTC | #7
Andrew Cooper writes ("Re: [PATCH v5 07/11] xen/domctl: Introduce XEN_DOMCTL_CDF_vpci flag"):
> Ian, for the 4.16 release, this series either needs completing with the
> additional flag implemented, or this patch needs reverting to avoid us
> shipping a broken interface.

I have caught up on this thread.  I think (hope?) it's converging.
If not please let me know and maybe I can help.

Can I ask to please be CC'd on the whole series for the patch(es) to
sort this out.  Please also make sure that those who commented are
CC'd.  I want the fixes that ultimately get committed to be the final
fixes (probably that means they should have consensus).

FTAOD, from a formal release management point of view: I regard those
putative fixes as bugfixes so they can go in after the feature freeze
(which is this Friday).  But if suitable fixes don't make it in within
the first few weeks of the freeze (and, as I expect, the maintainers
or I still regard this as an RC bug) then a revert of the new feature
will be the only option.

Ian.
Michal Orzel Oct. 11, 2021, 11:29 a.m. UTC | #8
Hi Stefano,

On 08.10.2021 23:46, Stefano Stabellini wrote:
> On Fri, 8 Oct 2021, Julien Grall wrote:
>> Hi Andrew,
>>
>> On Fri, 8 Oct 2021, 20:07 Andrew Cooper, <andrew.cooper3@citrix.com> wrote:
>>       On 06/10/2021 18:40, Rahul Singh wrote:
>>       > Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>>       > Reject the use of this new flag for x86 as VPCI is not supported for
>>       > DOMU guests for x86.
>>       >
>>       > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>       > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>       > Acked-by: Christian Lindig <christian.lindig@citrix.com>
>>
>>       I'm afraid this logic is broken.
>>
>>       There's no matching feature to indicate to the toolstack whether
>>       XEN_DOMCTL_CDF_vpci will be accepted or not.  The usual way of doing
>>       this is with a physinfo_cap field.
>>
>>
>> I am slightly puzzled by this. I am assuming you are referring to XENVER_get_features which AFAICT is a stable interface. So why should we
>> use it to describe the presence of an unstable interface?
>>
>>
>>       This flag needs using in Patch 10 to reject attempts to create a VM with
>>       devices when passthrough support is unavailable.
>>
>>
>> May I ask why we can't rely on the hypervisor to reject the flag when suitable?
>>
>>
>>       Ian, for the 4.16 release, this series either needs completing with the
>>       additional flag implemented, or this patch needs reverting to avoid us
>>       shipping a broken interface.
>>
>>
>> I fail to see how the interface would be broken... Would you mind describing a bit more what could go wrong with this interface?
> 
> 
> After chatting with Andrew on IRC, this is my understanding.
> 
> Today if pci=[] is specified in the VM config file then
> XEN_DOMCTL_CDF_vpci is added. If Xen doesn't support it, Xen returns
> an error but libxl/xl won't be able to tell exactly why it fails. So xl
> will end up printing a generic VM creation failure. Andrew would like to
> see something like the following in libxl:
> 
> if ( PCI_devices && !cap.vpci )
>     error("Sorry - PCI not supported")
> 
> So that the user gets a clear informative error back rather than a
> generic VM creation failure. Also, this is a requirement for the stable
> hypercall interface.
> 
> 
> I think that's fine and we can implement this request easily by adding
> XEN_SYSCTL_PHYSCAP_vpci. Rahul or Bertrand, are you guys happy with
> doing that? Otherwise I could take it on.
> 
> 
> As a side note, given that PCI passthrough support is actually not yet
> complete on ARM, we could even just do the following in xl/libxl:
> 
> if ( PCI_devices )
>     error("Sorry - PCI not supported")
> 
> or temporarily remove XEN_DOMCTL_CDF_vpci until PCI passthrough gets
> finalized.
> 
As Rahul is on leave:
I'm OK to introduce XEN_SYSCTL_PHYSCAP_vpci; I did the same for vpmu, so that part is fine.
However, the problem I have is about setting this cap.
On Arm it is easy: as we are not supporting vPCI at the moment, the cap can be set to false.
But how should this cap be handled in x86 code? I'm not familiar with x86, so I'm asking for advice.

Cheers,
Michal
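
On the toolstack side, consuming such a cap would then look roughly like
the following (a sketch; the cap_vpci field in libxl_physinfo is
hypothetical, the rest follows the existing libxl physinfo pattern):

    libxl_physinfo info;

    libxl_physinfo_init(&info);
    if ( libxl_get_physinfo(ctx, &info) == 0 &&
         d_config->num_pcidevs && !info.cap_vpci )
    {
        LOG(ERROR, "PCI passthrough requested but vPCI is unsupported");
        return ERROR_FAIL;
    }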
Jan Beulich Oct. 11, 2021, 11:35 a.m. UTC | #9
On 11.10.2021 13:29, Michal Orzel wrote:
> On 08.10.2021 23:46, Stefano Stabellini wrote:
>> On Fri, 8 Oct 2021, Julien Grall wrote:
>>> On Fri, 8 Oct 2021, 20:07 Andrew Cooper, <andrew.cooper3@citrix.com> wrote:
>>>       On 06/10/2021 18:40, Rahul Singh wrote:
>>>       > Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>>>       > Reject the use of this new flag for x86 as VPCI is not supported for
>>>       > DOMU guests for x86.
>>>       >
>>>       > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>>       > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>>       > Acked-by: Christian Lindig <christian.lindig@citrix.com>
>>>
>>>       I'm afraid this logic is broken.
>>>
>>>       There's no matching feature to indicate to the toolstack whether
>>>       XEN_DOMCTL_CDF_vpci will be accepted or not.  The usual way of doing
>>>       this is with a physinfo_cap field.
>>>
>>>
>>> I am slightly puzzled by this. I am assuming you are referring to XENVER_get_features which AFAICT is a stable interface. So why should we
>>> use it to describe the presence of an unstable interface?
>>>
>>>
>>>       This flag needs using in Patch 10 to reject attempts to create a VM with
>>>       devices when passthrough support is unavailable.
>>>
>>>
>>> May I ask why we can't rely on the hypervisor to reject the flag when suitable?
>>>
>>>
>>>       Ian, for the 4.16 release, this series either needs completing with the
>>>       additional flag implemented, or this patch needs reverting to avoid us
>>>       shipping a broken interface.
>>>
>>>
>>> I fail to see how the interface would be broken... Would you mind describing a bit more what could go wrong with this interface?
>>
>>
>> After chatting with Andrew on IRC, this is my understanding.
>>
>> Today if pci=[] is specified in the VM config file then
>> XEN_DOMCTL_CDF_vpci is added. If Xen doesn't support it, Xen returns
>> an error but libxl/xl won't be able to tell exactly why it fails. So xl
>> will end up printing a generic VM creation failure. Andrew would like to
>> see something like the following in libxl:
>>
>> if ( PCI_devices && !cap.vpci )
>>     error("Sorry - PCI not supported")
>>
>> So that the user gets a clear informative error back rather than a
>> generic VM creation failure. Also, this is a requirement for the stable
>> hypercall interface.
>>
>>
>> I think that's fine and we can implement this request easily by adding
>> XEN_SYSCTL_PHYSCAP_vpci. Rahul or Bertrand, are you guys happy with
>> doing that? Otherwise I could take it on.
>>
>>
>> As a side note, given that PCI passthrough support is actually not yet
>> complete on ARM, we could even just do the following in xl/libxl:
>>
>> if ( PCI_devices )
>>     error("Sorry - PCI not supported")
>>
>> or temporarily remove XEN_DOMCTL_CDF_vpci until PCI passthrough gets
>> finalized.
>>
> As Rahul is on leave:
> I'm OK to introduce XEN_SYSCTL_PHYSCAP_vpci; I did the same for vpmu, so that part is fine.
> However, the problem I have is about setting this cap.
> On Arm it is easy: as we are not supporting vPCI at the moment, the cap can be set to false.
> But how should this cap be handled in x86 code? I'm not familiar with x86, so I'm asking for advice.

As the sysctl is mainly from tool stacks to drive guests (DomU-s), I'd
set it to false for x86 as well. Roger - do you see any reason this
could be needed to express anything Dom0-related?

Jan
Michal Orzel Oct. 11, 2021, 12:06 p.m. UTC | #10
Hi Roger,

On 11.10.2021 11:27, Roger Pau Monné wrote:
> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
>> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>> Reject the use of this new flag for x86 as VPCI is not supported for
>> DOMU guests for x86.
> 
> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
> 
> Things like PVH vs PV get translated into CDF flags by create_dom0,
> and processed normally by the sanitise_domain_config logic, vPCI
> should be handled that way.
> 
> Do you think you could see about fixing this?
> 
> Thanks, Roger.
> 

As Rahul is on leave:
I agree with you. XEN_DOMCTL_CDF_vpci should be set in dom0_cfg.flags in create_dom0 for PVH.
We can do such a fix in a new patch, together with adding the physcap for vPCI.

Cheers,
Michal
Roger Pau Monné Oct. 11, 2021, 1:17 p.m. UTC | #11
On Mon, Oct 11, 2021 at 01:35:18PM +0200, Jan Beulich wrote:
> On 11.10.2021 13:29, Michal Orzel wrote:
> > On 08.10.2021 23:46, Stefano Stabellini wrote:
> >> On Fri, 8 Oct 2021, Julien Grall wrote:
> >>> On Fri, 8 Oct 2021, 20:07 Andrew Cooper, <andrew.cooper3@citrix.com> wrote:
> >>>       On 06/10/2021 18:40, Rahul Singh wrote:
> >>>       > Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
> >>>       > Reject the use of this new flag for x86 as VPCI is not supported for
> >>>       > DOMU guests for x86.
> >>>       >
> >>>       > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> >>>       > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> >>>       > Acked-by: Christian Lindig <christian.lindig@citrix.com>
> >>>
> >>>       I'm afraid this logic is broken.
> >>>
> >>>       There's no matching feature to indicate to the toolstack whether
> >>>       XEN_DOMCTL_CDF_vpci will be accepted or not.  The usual way of doing
> >>>       this is with a physinfo_cap field.
> >>>
> >>>
> >>> I am slightly puzzled by this. I am assuming you are referring to XENVER_get_features which AFAICT is a stable interface. So why should we
> >>> use it to describe the presence of an unstable interface?
> >>>
> >>>
> >>>       This flag needs using in Patch 10 to reject attempts to create a VM with
> >>>       devices when passthrough support is unavailable.
> >>>
> >>>
> >>> May I ask why we can't rely on the hypervisor to reject the flag when suitable?
> >>>
> >>>
> >>>       Ian, for the 4.16 release, this series either needs completing with the
> >>>       additional flag implemented, or this patch needs reverting to avoid us
> >>>       shipping a broken interface.
> >>>
> >>>
> >>> I fail to see how the interface would be broken... Would you mind describing a bit more what could go wrong with this interface?
> >>
> >>
> >> After chatting with Andrew on IRC, this is my understanding.
> >>
> >> Today if pci=[] is specified in the VM config file then
> >> XEN_DOMCTL_CDF_vpci is added. If Xen doesn't support it, Xen returns
> >> an error but libxl/xl won't be able to tell exactly why it fails. So xl
> >> will end up printing a generic VM creation failure. Andrew would like to
> >> see something like the following in libxl:
> >>
> >> if ( PCI_devices && !cap.vpci )
> >>     error("Sorry - PCI not supported")
> >>
> >> So that the user gets a clear informative error back rather than a
> >> generic VM creation failure. Also, this is a requirement for the stable
> >> hypercall interface.
> >>
> >>
> >> I think that's fine and we can implement this request easily by adding
> >> XEN_SYSCTL_PHYSCAP_vpci. Rahul or Bertrand, are you guys happy with
> >> doing that? Otherwise I could take it on.
> >>
> >>
> >> As a side note, given that PCI passthrough support is actually not yet
> >> complete on ARM, we could even just do the following in xl/libxl:
> >>
> >> if ( PCI_devices )
> >>     error("Sorry - PCI not supported")
> >>
> >> or temporarily remove XEN_DOMCTL_CDF_vpci until PCI passthrough gets
> >> finalized.
> >>
> > As Rahul is on leave:
> > I'm OK to introduce XEN_SYSCTL_PHYSCAP_vpci; I did the same for vpmu, so that part is fine.
> > However, the problem I have is about setting this cap.
> > On Arm it is easy: as we are not supporting vPCI at the moment, the cap can be set to false.
> > But how should this cap be handled in x86 code? I'm not familiar with x86, so I'm asking for advice.
> 
> As the sysctl is mainly from tool stacks to drive guests (DomU-s), I'd
> set it to false for x86 as well. Roger - do you see any reason this
> could be needed to express anything Dom0-related?

I agree, I don't think we should set it to true unless we also support
vPCI for domUs on x86, or else it's just going to confuse the
toolstack.

Thanks, Roger.
Michal Orzel Oct. 12, 2021, 10:38 a.m. UTC | #12
Hi Roger,

On 11.10.2021 11:27, Roger Pau Monné wrote:
> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
>> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>> Reject the use of this new flag for x86 as VPCI is not supported for
>> DOMU guests for x86.
> 
> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
> 
> Things like PVH vs PV get translated into CDF flags by create_dom0,
> and processed normally by the sanitise_domain_config logic, vPCI
> should be handled that way.
> 
> Do you think you could see about fixing this?
> 
> Thanks, Roger.
> 

I have one question about this fix.
If I set XEN_DOMCTL_CDF_vpci for dom0 pvh in create_dom0, then in
sanitise_domain_config or arch_sanitise_domain_config I have no
knowledge on whether I am dom0 or not. I can check if I'm PVH but not if dom0.
This would be needed to add a warning if this flag is set but we are not dom0 pvh.

Any ideas?

Cheers,
Michal
Stefano Stabellini Oct. 12, 2021, 9:48 p.m. UTC | #13
On Mon, 11 Oct 2021, Roger Pau Monné wrote:
> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
> > Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
> > Reject the use of this new flag for x86 as VPCI is not supported for
> > DOMU guests for x86.
> 
> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
> 
> Things like PVH vs PV get translated into CDF flags by create_dom0,
> and processed normally by the sanitise_domain_config logic, vPCI
> should be handled that way.
> 
> Do you think you could see about fixing this?

Andrew suggested using XEN_SYSCTL_PHYSCAP_vpci to check whether we can
set XEN_DOMCTL_CDF_vpci in libxl, and it looks like we have consensus on
this approach. [1][2]

So it makes sense that XEN_DOMCTL_CDF_vpci is only set when
XEN_SYSCTL_PHYSCAP_vpci is also set, i.e. XEN_SYSCTL_PHYSCAP_vpci ==
XEN_DOMCTL_CDF_vpci.

From [2], XEN_SYSCTL_PHYSCAP_vpci is not going to be set on x86, so then
XEN_DOMCTL_CDF_vpci should also be left unset?

If you think XEN_DOMCTL_CDF_vpci should be set for x86 PVH Dom0, then
XEN_SYSCTL_PHYSCAP_vpci should also be set for x86 PVH Dom0.


[1] https://marc.info/?l=xen-devel&m=163372953907637
[2] https://marc.info/?l=xen-devel&m=163395821428850
Jan Beulich Oct. 13, 2021, 6:18 a.m. UTC | #14
On 12.10.2021 23:48, Stefano Stabellini wrote:
> On Mon, 11 Oct 2021, Roger Pau Monné wrote:
>> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
>>> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>>> Reject the use of this new flag for x86 as VPCI is not supported for
>>> DOMU guests for x86.
>>
>> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
>> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
>>
>> Things like PVH vs PV get translated into CDF flags by create_dom0,
>> and processed normally by the sanitise_domain_config logic, vPCI
>> should be handled that way.
>>
>> Do you think you could see about fixing this?
> 
> Andrew suggested using XEN_SYSCTL_PHYSCAP_vpci to check whether we can
> set XEN_DOMCTL_CDF_vpci in libxl, and it looks like we have consensus on
> this approach. [1][2]
> 
> So it makes sense that XEN_DOMCTL_CDF_vpci is only set when
> XEN_SYSCTL_PHYSCAP_vpci is also set, i.e. XEN_SYSCTL_PHYSCAP_vpci ==
> XEN_DOMCTL_CDF_vpci.
> 
> From [2], XEN_SYSCTL_PHYSCAP_vpci is not going to be set on x86, so then
> XEN_DOMCTL_CDF_vpci should also be left unset?
> 
> If you think XEN_DOMCTL_CDF_vpci should be set for x86 PVH Dom0, then
> XEN_SYSCTL_PHYSCAP_vpci should also be set for x86 PVH Dom0.

Except that XEN_SYSCTL_PHYSCAP_vpci is not a domain specific attribute,
but a host-wide one.

Jan
Michal Orzel Oct. 13, 2021, 7:11 a.m. UTC | #15
On 13.10.2021 08:18, Jan Beulich wrote:
> On 12.10.2021 23:48, Stefano Stabellini wrote:
>> On Mon, 11 Oct 2021, Roger Pau Monné wrote:
>>> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
>>>> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>>>> Reject the use of this new flag for x86 as VPCI is not supported for
>>>> DOMU guests for x86.
>>>
>>> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
>>> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
>>>
>>> Things like PVH vs PV get translated into CDF flags by create_dom0,
>>> and processed normally by the sanitise_domain_config logic, vPCI
>>> should be handled that way.
>>>
>>> Do you think you could see about fixing this?
>>
>> Andrew suggested using XEN_SYSCTL_PHYSCAP_vpci to check whether we can
>> set XEN_DOMCTL_CDF_vpci in libxl, and it looks like we have consensus on
>> this approach. [1][2]
>>
>> So it makes sense that XEN_DOMCTL_CDF_vpci is only set when
>> XEN_SYSCTL_PHYSCAP_vpci is also set, i.e. XEN_SYSCTL_PHYSCAP_vpci ==
>> XEN_DOMCTL_CDF_vpci.
>>
>> From [2], XEN_SYSCTL_PHYSCAP_vpci is not going to be set on x86, so then
>> XEN_DOMCTL_CDF_vpci should also be left unset?
>>
>> If you think XEN_DOMCTL_CDF_vpci should be set for x86 PVH Dom0, then
>> XEN_SYSCTL_PHYSCAP_vpci should also be set for x86 PVH Dom0.
> 
> Except that XEN_SYSCTL_PHYSCAP_vpci is not a domain specific attribute,
> but a host-wide one.
> 
> Jan
> 
> 
I already prepared a patch introducing XEN_SYSCTL_PHYSCAP_vpci. We agreed
that cap_vpci should not be set for either x86 or Arm. This means that
the flag vpci_is_available, which tells us about vPCI platform support (and is used
in the condition to set cap_vpci), is set to false by default. Later on it should be set by the vPCI driver.
For me it does not make sense to have XEN_SYSCTL_PHYSCAP_vpci say that the platform does not support vPCI
while still setting XEN_DOMCTL_CDF_vpci for a PVH dom0.
I would prefer not setting XEN_DOMCTL_CDF_vpci at all for now. This way we have a chance
to merge Rahul's series before Friday.

Cheers,
Michal
Roger Pau Monné Oct. 13, 2021, 8:30 a.m. UTC | #16
On Tue, Oct 12, 2021 at 12:38:35PM +0200, Michal Orzel wrote:
> Hi Roger,
> 
> On 11.10.2021 11:27, Roger Pau Monné wrote:
> > On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
> >> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
> >> Reject the use of this new flag for x86 as VPCI is not supported for
> >> DOMU guests for x86.
> > 
> > I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
> > PVH dom0, like we do for any other CDF flags when Xen builds dom0.
> > 
> > Things like PVH vs PV get translated into CDF flags by create_dom0,
> > and processed normally by the sanitise_domain_config logic, vPCI
> > should be handled that way.
> > 
> > Do you think you could see about fixing this?
> > 
> > Thanks, Roger.
> > 
> 
> I have one question about this fix.
> If I set XEN_DOMCTL_CDF_vpci for dom0 pvh in create_dom0, then in
> sanitise_domain_config or arch_sanitise_domain_config I have no
> knowledge on whether I am dom0 or not. I can check if I'm PVH but not if dom0.
> This would be needed to add a warning if this flag is set but we are not dom0 pvh.
> 
> Any ideas?

I've just realized this is more wrong than I thought. vPCI is
signaled on x86 in xen_arch_domainconfig.emulation_flags, so
introducing a top level option for it without removing the arch
specific one is wrong, as then on x86 we have a duplicated option.

Then I'm also not sure whether we want to move it from
emulation_flags, it seems like the more natural place for it to live
on x86.

If we really want to make vPCI a CDF option we must deal with the
removal of XEN_X86_EMU_VPCI, or else you could introduce an arch
specific flag for vPCI on Arm.

Thanks, Roger.
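
For reference, the x86 arch-specific signaling mentioned above lives in the
emulation_flags field of xen_arch_domainconfig. An abridged sketch (the bit
number shown is illustrative):

    /* xen/include/public/arch-x86/xen.h -- abridged sketch */
    struct xen_arch_domainconfig {
    #define XEN_X86_EMU_VPCI    (1U << 10)  /* bit number illustrative */
        uint32_t emulation_flags;
        /* ... */
    };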
Bertrand Marquis Oct. 13, 2021, 9:36 a.m. UTC | #17
Hi Roger,

> On 13 Oct 2021, at 09:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Tue, Oct 12, 2021 at 12:38:35PM +0200, Michal Orzel wrote:
>> Hi Roger,
>> 
>> On 11.10.2021 11:27, Roger Pau Monné wrote:
>>> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
>>>> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>>>> Reject the use of this new flag for x86 as VPCI is not supported for
>>>> DOMU guests for x86.
>>> 
>>> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
>>> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
>>> 
>>> Things like PVH vs PV get translated into CDF flags by create_dom0,
>>> and processed normally by the sanitise_domain_config logic, vPCI
>>> should be handled that way.
>>> 
>>> Do you think you could see about fixing this?
>>> 
>>> Thanks, Roger.
>>> 
>> 
>> I have one question about this fix.
>> If I set XEN_DOMCTL_CDF_vpci for dom0 pvh in create_dom0, then in
>> sanitise_domain_config or arch_sanitise_domain_config I have no
>> knowledge on whether I am dom0 or not. I can check if I'm PVH but not if dom0.
>> This would be needed to add a warning if this flag is set but we are not dom0 pvh.
>> 
>> Any ideas?
> 
> I've just realized this is more wrong than I thought. vPCI is
> signaled on x86 in xen_arch_domainconfig.emulation_flags, so
> introducing a top level option for it without removing the arch
> specific one is wrong, as then on x86 we have a duplicated option.
> 
> Then I'm also not sure whether we want to move it from
> emulation_flags, it seems like the more natural place for it to live
> on x86.
> 
> If we really want to make vPCI a CDF option we must deal with the
> removal of XEN_X86_EMU_VPCI, or else you could introduce an arch
> specific flag for vPCI on Arm.

The first issue we have here is that there is no emulation_flags right now on Arm.

So I think there are 2 solutions:

- introduce PHYSCAP. The problem here is that it is not a physical capability and
I think that will hit us in the future, for example if we want to use vPCI for VirtIO
even if there is no physical PCI on the platform.

- introduce an emulation flag on arm. The problem here is that there is no emulation
flag right now on arm so adding this feature will require a change of interface in
arch-arm.h and in xen domctl interface. But if we introduce one on Arm, it would allow
the tools to check if CDF_vpci can be set or not which would solve current issues.

I think the second solution is the right one but it cannot be done so close to the
feature freeze.

The CDF flag as introduced right now is not creating any issue and will be used once
the emulation flag is introduced. We will be able at this stage to set this properly
also on x86 (for dom0 pvh).
Moreover, keeping it will allow us to continue merging the remaining parts of PCI
passthrough, which otherwise cannot go in as they depend on this flag.

Can we agree on keeping the DOMCTL_CDF_vpci flag and introducing the emulation
flag on Arm after the 4.16 release?

Cheers
Bertrand
Roger Pau Monné Oct. 13, 2021, 10:56 a.m. UTC | #18
On Wed, Oct 13, 2021 at 09:36:01AM +0000, Bertrand Marquis wrote:
> Hi Roger,
> 
> > On 13 Oct 2021, at 09:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Tue, Oct 12, 2021 at 12:38:35PM +0200, Michal Orzel wrote:
> >> Hi Roger,
> >> 
> >> On 11.10.2021 11:27, Roger Pau Monné wrote:
> >>> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
> >>>> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
> >>>> Reject the use of this new flag for x86 as VPCI is not supported for
> >>>> DOMU guests for x86.
> >>> 
> >>> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
> >>> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
> >>> 
> >>> Things like PVH vs PV get translated into CDF flags by create_dom0,
> >>> and processed normally by the sanitise_domain_config logic, vPCI
> >>> should be handled that way.
> >>> 
> >>> Do you think you could see about fixing this?
> >>> 
> >>> Thanks, Roger.
> >>> 
> >> 
> >> I have one question about this fix.
> >> If I set XEN_DOMCTL_CDF_vpci for dom0 pvh in create_dom0, then in
> >> sanitise_domain_config or arch_sanitise_domain_config I have no
> >> knowledge on whether I am dom0 or not. I can check if I'm PVH but not if dom0.
> >> This would be needed to add a warning if this flag is set but we are not dom0 pvh.
> >> 
> >> Any ideas?
> > 
> >>> I've just realized this is more wrong than I thought. vPCI is
> > signaled on x86 in xen_arch_domainconfig.emulation_flags, so
> > introducing a top level option for it without removing the arch
> > specific one is wrong, as then on x86 we have a duplicated option.
> > 
> > Then I'm also not sure whether we want to move it from
> > emulation_flags, it seems like the more natural place for it to live
> > on x86.
> > 
> > If we really want to make vPCI a CDF option we must deal with the
> > removal of XEN_X86_EMU_VPCI, or else you could introduce an arch
> > specific flag for vPCI on Arm.
> 
> The first issue we have here is that there is no emulation_flags right now on Arm.

You don't explicitly need an emulation_flags field, you could add a
uint8_t vpci or some such to xen_arch_domainconfig for Arm if you
don't think there's a need to select more emulation. That's up to Arm
folks.

> So I think there are 2 solutions:
> 
> - introduce PHYSCAP. The problem here is that it is not a physical capability and
> I think that will hit us in the future, for example if we want to use vPCI for VirtIO
> even if there is no physical PCI on the platform.

Hm, PHYSCAP is a bit weird, for example Xen signals shadow paging
support using PHYSCAP which doesn't depend on any hardware feature.

I think you could signal vPCI regardless of whether the underlying
platform has PCI devices or not, as vPCI is purely a software
component.

Regarding VirtIO, won't it be implemented using ioreqs in user-space,
and thus won't depend on vPCI?

> - introduce an emulation flag on arm. The problem here is that there is no emulation
> flag right now on arm so adding this feature will require a change of interface in
> arch-arm.h and in xen domctl interface. But if we introduce one on Arm, it would allow
> the tools to check if CDF_vpci can be set or not which would solve current issues.

I'm not sure I follow. If we go the emulation flags route this will all
be set in xen_arch_domainconfig, and CDF_vpci will completely go away.

> I think the second solution is the right one but it cannot be done so close to the
> feature freeze.
> 
> The CDF flag as introduced right now is not creating any issue and will be used once
> the emulation flag is introduced. We will be able at this stage to set this properly
> also on x86 (for dom0 pvh).
> Moreover, keeping it will allow us to continue merging the remaining parts of PCI
> passthrough, which otherwise cannot go in as they depend on this flag.
> 
> Can we agree on keeping the DOMCTL_CDF_vpci flag and introducing the emulation
> flag on Arm after the 4.16 release?

If vPCI for Arm on 4.16 is not going to be functional, why so much
pressure in pushing those patches so fast? I understand the need to
remove stuff from the queue, but I don't think it's worth the cost of
introducing a broken interface deliberately on a release.

I think we need to at least settle on whether we want to keep
CDF_vpci or use an arch specific signal mechanism in order to decide
what to do regarding the release.

Thanks, Roger.
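
The Arm-side alternative sketched above, i.e. a field in
xen_arch_domainconfig rather than a CDF flag, might look like this (purely
hypothetical):

    /* xen/include/public/arch-arm.h -- hypothetical sketch */
    struct xen_arch_domainconfig {
        /* ... existing fields ... */
        uint8_t vpci;  /* non-zero: emulate a PCI host bridge (vPCI) */
    };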
Bertrand Marquis Oct. 13, 2021, 12:11 p.m. UTC | #19
Hi Roger,

> On 13 Oct 2021, at 11:56, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Wed, Oct 13, 2021 at 09:36:01AM +0000, Bertrand Marquis wrote:
>> Hi Roger,
>> 
>>> On 13 Oct 2021, at 09:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>> 
>>> On Tue, Oct 12, 2021 at 12:38:35PM +0200, Michal Orzel wrote:
>>>> Hi Roger,
>>>> 
>>>> On 11.10.2021 11:27, Roger Pau Monné wrote:
>>>>> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
>>>>>> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
>>>>>> Reject the use of this new flag for x86 as VPCI is not supported for
>>>>>> DOMU guests for x86.
>>>>> 
>>>>> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
>>>>> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
>>>>> 
>>>>> Things like PVH vs PV get translated into CDF flags by create_dom0,
>>>>> and processed normally by the sanitise_domain_config logic, vPCI
>>>>> should be handled that way.
>>>>> 
>>>>> Do you think you could see about fixing this?
>>>>> 
>>>>> Thanks, Roger.
>>>>> 
>>>> 
>>>> I have one question about this fix.
>>>> If I set XEN_DOMCTL_CDF_vpci for dom0 pvh in create_dom0, then in
>>>> sanitise_domain_config or arch_sanitise_domain_config I have no
>>>> knowledge on whether I am dom0 or not. I can check if I'm PVH but not if dom0.
>>>> This would be needed to add a warning if this flag is set but we are not dom0 pvh.
>>>> 
>>>> Any ideas?
>>> 
>>> I've just realized this is more wrong than I thought. vPCI is
>>> signaled on x86 in xen_arch_domainconfig.emulation_flags, so
>>> introducing a top level option for it without removing the arch
>>> specific one is wrong, as then on x86 we have a duplicated option.
>>> 
>>> Then I'm also not sure whether we want to move it from
>>> emulation_flags, it seems like the more natural place for it to live
>>> on x86.
>>> 
>>> If we really want to make vPCI a CDF option we must deal with the
>>> removal of XEN_X86_EMU_VPCI, or else you could introduce an arch
>>> specific flag for vPCI on Arm.
>> 
>> The first issue we have here is that there is no emulation_flags right now on Arm.
> 
> You don't explicitly need an emulation_flags field, you could add a
> uint8_t vpci or some such to xen_arch_domainconfig for Arm if you
> don't think there's a need to select more emulation. That's up to Arm
> folks.

One way or another, it is still changing the interface.

> 
>> So I think there are 2 solutions:
>> 
>> - introduce PHYSCAP. The problem here is that it is not a physical capability and
>> I think that will hit us in the future, for example if we want to use vPCI for VirtIO
>> even if there is no physical PCI on the platform.
> 
> Hm, PHYSCAP is a bit weird, for example Xen signals shadow paging
> support using PHYSCAP which doesn't depend on any hardware feature.
> 
> I think you could signal vPCI regardless of whether the underlying
> platform has PCI devices or not, as vPCI is purely a software
> component.

But in the x86 case this is not supported in all cases, and actually only by a PVH dom0.
So having something like that defined as a PHYSCAP is a bit weird.

> 
> Regarding VirtIO, won't it be implemented using ioreqs in user-space,
> and thus won't depend on vPCI?

Yes, that's a solution, but you can list them through a PCI interface to let a guest discover them.
I am not saying that we will do that, but it was an example of a case where we could use vPCI without hardware PCI present.

> 
>> - introduce an emulation flag on arm. The problem here is that there is no emulation
>> flag right now on arm so adding this feature will require a change of interface in
>> arch-arm.h and in xen domctl interface. But if we introduce one on Arm, it would allow
>> the tools to check if CDF_vpci can be set or not which would solve current issues.
> 
> I'm not sure I follow. If we go the emulation flags route this will all
> be set in xen_arch_domainconfig, and CDF_vpci will completely go away.

This is a mistake on my side. You are right: using emulation flags would require removing CDF_vpci.

> 
>> I think the second solution is the right one but it cannot be done so close to the
>> feature freeze.
>> 
>> The CDF flag as introduced right now is not creating any issue and will be used once
>> the emulation flag is introduced. We will be able at this stage to set this properly
>> also on x86 (for dom0 pvh).
>> Moreover, keeping it will allow us to continue merging the remaining parts of PCI
>> passthrough, which otherwise cannot go in as they depend on this flag.
>> 
>> Can we agree on keeping the DOMCTL_CDF_vpci flag and introducing the emulation
>> flag on Arm after the 4.16 release?
> 
> If vPCI for Arm on 4.16 is not going to be functional, why so much
> pressure in pushing those patches so fast? I understand the need to
> remove stuff from the queue, but I don't think it's worth the cost of
> introducing a broken interface deliberately on a release.

We did not push that fast, those have been on the mailing list for a while (this is the 5th version of the series).
PCI passthrough is a big change requiring lots of patches and we decided to push it piece by piece to make
the review easier.

> 
> I think we need to at least settle on whether we want to keep
> CDF_vpci or use an arch specific signal mechanism in order to decide
> what to do regarding the release.

Agreed, and I have no definitive idea on this, but switching will require reverting the patch adding CDF_vpci
and changing the subsequent patches.

Regards
Bertrand

> 
> Thanks, Roger.
Jan Beulich Oct. 13, 2021, 12:57 p.m. UTC | #20
On 13.10.2021 14:11, Bertrand Marquis wrote:
>> On 13 Oct 2021, at 11:56, Roger Pau Monné <roger.pau@citrix.com> wrote:
>> If vPCI for Arm on 4.16 is not going to be functional, why so much
>> pressure in pushing those patches so fast? I understand the need to
>> remove stuff from the queue, but I don't think it's worth the cost of
>> introducing a broken interface deliberately on a release.

+1

> We did not push that fast, those have been on the mailing list for a while (this is the 5th version of the series).
> PCI passthrough is a big change requiring lots of patches and we decided to push it piece by piece to make
> the review easier.

5 versions for a series like this one was to be expected. Imo it has
been wrong in the past to rush in new features (even if experimental
ones) very close to the freeze, and it has misled people to think
they can delay work until very (too?) late a point in time.

In fact I'm getting somewhat angry seeing how much effort is put into
getting this work in (including myself doing reviews there), when at
the same time far older series of mine aren't given any chance to
make any progress. Not your fault, sure, but an observation.

Jan
Roger Pau Monné Oct. 13, 2021, 2:28 p.m. UTC | #21
On Wed, Oct 13, 2021 at 12:11:30PM +0000, Bertrand Marquis wrote:
> Hi Roger,
> 
> > On 13 Oct 2021, at 11:56, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Wed, Oct 13, 2021 at 09:36:01AM +0000, Bertrand Marquis wrote:
> >> Hi Roger,
> >> 
> >>> On 13 Oct 2021, at 09:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>> 
> >>> On Tue, Oct 12, 2021 at 12:38:35PM +0200, Michal Orzel wrote:
> >>>> Hi Roger,
> >>>> 
> >>>> On 11.10.2021 11:27, Roger Pau Monné wrote:
> >>>>> On Wed, Oct 06, 2021 at 06:40:33PM +0100, Rahul Singh wrote:
> >>>>>> Introduce XEN_DOMCTL_CDF_vpci flag to enable VPCI support in XEN.
> >>>>>> Reject the use of this new flag for x86 as VPCI is not supported for
> >>>>>> DOMU guests for x86.
> >>>>> 
> >>>>> I don't like this approach, XEN_DOMCTL_CDF_vpci should be set for x86
> >>>>> PVH dom0, like we do for any other CDF flags when Xen builds dom0.
> >>>>> 
> >>>>> Things like PVH vs PV get translated into CDF flags by create_dom0,
> >>>>> and processed normally by the sanitise_domain_config logic, vPCI
> >>>>> should be handled that way.
> >>>>> 
> >>>>> Do you think you could see about fixing this?
> >>>>> 
> >>>>> Thanks, Roger.
> >>>>> 
> >>>> 
> >>>> I have one question about this fix.
> >>>> If I set XEN_DOMCTL_CDF_vpci for dom0 pvh in create_dom0, then in
> >>>> sanitise_domain_config or arch_sanitise_domain_config I have no
> >>>> knowledge on whether I am dom0 or not. I can check if I'm PVH but not if dom0.
> >>>> This would be needed to add a warning if this flag is set but we are not dom0 pvh.
> >>>> 
> >>>> Any ideas?
> >>> 
> >>> I've just realized this is more wrong than I thought. vPCI is
> >>> signaled on x86 in xen_arch_domainconfig.emulation_flags, so
> >>> introducing a top level option for it without removing the arch
> >>> specific one is wrong, as then on x86 we have a duplicated option.
> >>> 
> >>> Then I'm also not sure whether we want to move it from
> >>> emulation_flags, it seems like the more natural place for it to live
> >>> on x86.
> >>> 
> >>> If we really want to make vPCI a CDF option we must deal with the
> >>> removal of XEN_X86_EMU_VPCI, or else you could introduce an arch
> >>> specific flag for vPCI on Arm.
> >> 
> >> The first issue we have here is that there is no emulation_flags right now on Arm.
> > 
> > You don't explicitly need an emulation_flags field, you could add a
> > uint8_t vpci or some such to xen_arch_domainconfig for Arm if you
> > don't think there's a need to select more emulation. That's up to Arm
> > folks.
> 
> One way or another, it is still changing the interface.

Well, I don't want to sound rude, but you already changed the
interface first and very recently by introducing CDF_vpci. Complaining
that you don't want to change it anymore just after that seems
slightly weird.

> > 
> >> So I think there are 2 solutions:
> >> 
> >> - introduce PHYSCAP. The problem here is that it is not a physical capability and
> >> I think that will hit us in the future, for example if we want to use vPCI for VirtIO
> >> even if there is no physical PCI on the platform.
> > 
> > Hm, PHYSCAP is a bit weird, for example Xen signals shadow paging
> > support using PHYSCAP which doesn't depend on any hardware feature.
> > 
> > I think you could signal vPCI regardless of whether the underlying
> > platform has PCI devices or not, as vPCI is purely a software
> > component.
> 
> But in the x86 case this is not supported in all cases, and actually only by a PVH dom0.
> So having something like that defined as a PHYSCAP is a bit weird.

Well, even if vPCI were fully supported on x86 for both domU and dom0,
it would only apply to PVH and maybe HVM guests; classic PV will
never get vPCI. It's kind of like trying to create a PV guest with HAP
support.

One option would be to defer the introduction of the CDF_vpci flag
until the vpci work has been finished - then we could decide if the
current code can also be used by x86 so maybe it could be enabled for
both arches, and there would be no problem in setting the PHYSCAP.

> >> I think the second solution is the right one but it cannot be done so close to the
> >> feature freeze.
> >> 
> >> The CDF flag as introduced right now is not creating any issue and will be used once
> >> the emulation flag is introduced. We will be able at this stage to set this properly
> >> also on x86 (for dom0 pvh).
> >> Moreover, keeping it will allow us to continue merging the remaining parts of PCI
> >> passthrough, which otherwise cannot go in as they depend on this flag.
> >> 
> >> Can we agree on keeping the DOMCTL_CDF_vpci flag and introducing the emulation
> >> flag on Arm after the 4.16 release?
> > 
> > If vPCI for Arm on 4.16 is not going to be functional, why so much
> > pressure in pushing those patches so fast? I understand the need to
> > remove stuff from the queue, but I don't think it's worth the cost of
> > introducing a broken interface deliberately on a release.
> 
> We did not push that fast, those have been on the mailing list for a while (this is the 5th version of the series).
> PCI passthrough is a big change requiring lots of patches and we decided to push it piece by piece to make
> the review easier.

The first version of this patch I have in my inbox is from the 23rd of
September; that's less than 3 weeks ago.

> > 
> > I think we need to at least settle on whether we want to keep
> > CDF_vpci or use an arch specific signal mechanism in order to decide
> > what to do regarding the release.
> 
> Agreed, and I have no definitive idea on this, but switching will require reverting the patch adding CDF_vpci
> and changing the subsequent patches.

I think it's fair to expect changes to the interface when things are
reviewed; that's part of the review process. Changes to subsequent
patches can be painful, but we have all been there and dealt with
them.

I'm not saying we must remove the CDF_vpci flag. I have to admit I was
fine using emulation_flags, but I could also be fine using CDF_vpci if
the x86 side is fixed and XEN_X86_EMU_VPCI is removed.

Thanks, Roger.
Stefano Stabellini Oct. 13, 2021, 8:41 p.m. UTC | #22
On Wed, 13 Oct 2021, Jan Beulich wrote:
> On 13.10.2021 14:11, Bertrand Marquis wrote:
> >> On 13 Oct 2021, at 11:56, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >> If vPCI for Arm on 4.16 is not going to be functional, why so much
> >> pressure in pushing those patches so fast? I understand the need to
> >> remove stuff from the queue, but I don't think it's worth the cost of
> >> introducing a broken interface deliberately on a release.
> 
> +1
> 
> > We did not push that fast, those have been on the mailing list for a while (this is the 5th version of the series).
> > PCI passthrough is a big change requiring lots of patches and we decided to push it piece by piece to make
> > the review easier.
> 
> 5 versions for a series like this one was to be expected. Imo it has
> been wrong in the past to rush in new features (even if experimental
> ones) very close to the freeze, and it has misled people to think
> they can delay work until very (too?) late a point in time.


Hi Roger, Jan,

Let me take a few minutes to clarify and provide context for this work.


I don't think anyone "pushed hard" any of the ARM series close to the
release. I sent "one" email in public as a reminder of things that need
reviewing:
https://marc.info/?l=xen-devel&m=163373776611154

I did send a couple of private emails to Jan but they were to synchronize
ourselves rather than push; Jan, I hope you didn't take them the wrong
way.


At the same time it is certainly true that all the people involved here
worked very hard to get these series ready for 4.16. Oct 15 is the Xen
Project feature freeze. It is also the deadline for many of us working
with upstream Xen Project to upstream features and report back to our
management. I know it doesn't affect you guys directly, but you can
imagine that as employees of our respective companies we feel pressure
to complete our objectives in the given timeframe. Of course we need to
make sure to do that without compromising quality and without
overextending upstream review capacity.


The ARM PCI series have been public since Dec 2020, pushed to a gitlab branch
for testing: https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
I helped creating, managing and testing the branch.

So, I can see why Bertrand feels like they have been around for a while:
we have been dealing with these patches for almost a year, even if from
a xen-devel perspective we are "only" at version 5.


In the past there have been contributors dropping a large half-broken
series at the last minute before the code freeze, asking for it to be
committed, promising it will be improved later, and then disappearing
right after the release, leaving the code half-broken forever.

It is easy to draw comparisons between that kind of behavior and what is
happening in this thread. We have all been burned by it at some point.

However, that is really *not* what is happening this week.

PCI Passthrough for ARM is a difficult feature to implement. We have
been discussing it for 5+ years, close to 10 years. I know it sounds
unbelievable but it is true. For one reason or the other it never
materialized. Until now.

Even with all the outstanding series committed, PCI Passthrough for ARM
is not fully functional, but it is a lot closer to the objective.
Rahul, Oleksandr and their teams have already been "sticking around" for
almost a year making improvements, and without their help going forward
it is not going to get finished anyway. We still don't have MSI support.

Given that Rahul, Oleksandr and the others are required to finish the
work anyway, and they are not at risk of disappearing, I think we can
deal with one or two TODO left by Oct 15 [1].

My suggestion is to accept the TODO for patch #8 [1] but I agree with
Roger that we shouldn't introduce bad interfaces, especially as they
impact x86 which is not "tech preview". So I think we should revert
patch #7 (this patch). I'll reply with more details in a separate, shorter
email.

[1] https://marc.info/?l=xen-devel&m=163412120531248


> In fact I'm getting somewhat angry seeing how much effort is put into
> getting this work in (including myself doing reviews there), when at
> the same time far older series of mine aren't given any chance to
> make any progress. Not your fault, sure, but an observation.


Jan,

Let me start by pointing out that without your help we would never have
managed to get so many ARM patches committed for 4.16. Not just PCI
related, but also EFI too. You should really get credit for that.

I can imagine it is frustrating helping somebody else so much but not
getting help for your own series. In a way it happened to me too, as the
only Xilinx series outstanding didn't make any progress:
https://marc.info/?l=xen-devel&m=163056545414965

The only way to solve this problem is by growing the review capacity of
the community. You'll be happy to know that (without naming names) I
have heard from multiple people, new members of the community, that they
are going to commit time to do reviews during the next months. We are
growing the community again and some of the new contributors will become
good reviewers and maintainers over time. I am positive that the next 12
months will be a lot better than the last 12 months from that
perspective.


Thanks Jan, Roger, the two Oleksandrs, Rahul, Michal, Bertrand, Luca and
everyone else who stayed up late these last few weeks to push Xen
forward with groundbreaking new features and move the project in new
directions. I do think 4.16 is going to be a great release, and both
CI loops are still fully working with staging very close to master,
further proof of the great work of this community.

Cheers,

Stefano
Stefano Stabellini Oct. 13, 2021, 8:53 p.m. UTC | #23
On Wed, 13 Oct 2021, Roger Pau Monné wrote:
> > I think the second solution is the right one, but it cannot be done so near to the
> > feature freeze.
> > 
> > The CDF flag as introduced right now is not creating any issue and will be used once
> > the emulation flag is introduced. At that stage we will also be able to set it properly
> > on x86 (for dom0 PVH).
> > Moreover, keeping it will allow us to continue merging the remaining parts of the PCI
> > passthrough series, which otherwise cannot go in as they depend on this flag.
> > 
> > Can we agree on keeping the DOMCTL_CDF_vpci flag and introducing the emulation
> > flag on Arm after the 4.16 release?
> 
> If vPCI for Arm on 4.16 is not going to be functional, why so much
> pressure in pushing those patches so fast? I understand the need to
> remove stuff from the queue, but I don't think it's worth the cost of
> introducing a broken interface deliberately on a release.
> 
> I think we need to at least settle on whether we want to keep
> CDF_vpci or use an arch specific signal mechanism in order to decide
> what to do regarding the release.

I wrote a longer separate email to provide more context about the
"pushing fast" comment.

I agree that we don't want to introduce a bad interface.

Regarding a way forward for 4.16, my suggestion is the following:

- revert this patch: do not change the interface in this series
    - do not change anything related to CDF_vpci for x86
    - for ARM, leave has_vpci(d) to { false } and do not set
      XEN_DOMCTL_CDF_vpci (see the sketch below)
    - we can discuss it in depth later on, no rush

- in patch #10, in libxl_arm.c:libxl__arch_domain_prepare_config
    - do not set XEN_DOMCTL_CDF_vpci
    - do not set b_info.arch_arm.vpci
    - do not define LIBXL_HAVE_BUILDINFO_ARM_VPCI in libxl.h
    - keep make_vpci_node and arch_arm.vpci
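
To make this concrete, a minimal sketch of the proposed stubbed-out
state; the header location is my assumption based on where similar
per-domain predicates live, and this is illustrative only, not part of
the series itself:

    /* xen/include/asm-arm/domain.h -- sketch only: keep vPCI reported
     * as unavailable until the interface discussion is settled. */
    #define has_vpci(d)    ({ (void)(d); false; })

    /* In libxl_arm.c:libxl__arch_domain_prepare_config (sketch), keep
     * generating the vPCI DT node via make_vpci_node, but simply never
     * set XEN_DOMCTL_CDF_vpci in config->flags. */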


The other patches (#1, #8, #10), which don't change any interfaces, can
still make it for 4.16 if the review feedback is addressed on time, with
one open TODO item [1].

This way, we get all the essential infrastructure we are trying to
introduce without making any compromises on the external interfaces.
Still, it is good to have patches #1, #8 and #10 so that with a trivial
one-liner patch on top of 4.16 we can enable PCI for ARM and do testing
in the community, in gitlab-ci, and in OSSTest too. (We have been
discussing special OSSTest flights to validate PCI passthrough as we
complete development.)


If we think we need a snap decision on this topic, I am available for a
quick sync-up call or IRC meeting between 8AM and 10AM tomorrow (Oct
14).

[1] https://marc.info/?l=xen-devel&m=163412120531248
Stefano Stabellini Oct. 13, 2021, 11:21 p.m. UTC | #24
On Wed, 13 Oct 2021, Stefano Stabellini wrote:
> On Wed, 13 Oct 2021, Roger Pau Monné wrote:
> > > I think the second solution is the right one, but it cannot be done so near to the
> > > feature freeze.
> > > 
> > > The CDF flag as introduced right now is not creating any issue and will be used once
> > > the emulation flag is introduced. At that stage we will also be able to set it properly
> > > on x86 (for dom0 PVH).
> > > Moreover, keeping it will allow us to continue merging the remaining parts of the PCI
> > > passthrough series, which otherwise cannot go in as they depend on this flag.
> > > 
> > > Can we agree on keeping the DOMCTL_CDF_vpci flag and introducing the emulation
> > > flag on Arm after the 4.16 release?
> > 
> > If vPCI for Arm on 4.16 is not going to be functional, why so much
> > pressure in pushing those patches so fast? I understand the need to
> > remove stuff from the queue, but I don't think it's worth the cost of
> > introducing a broken interface deliberately on a release.
> > 
> > I think we need to at least settle on whether we want to keep
> > CDF_vpci or use an arch specific signal mechanism in order to decide
> > what to do regarding the release.
> 
> I wrote a longer separate email to provide more context about the
> "pushing fast" comment.
> 
> I agree that we don't want to introduce a bad interface.
> 
> Regarding a way forward for 4.16, my suggestion is the following:
> 
> - revert this patch: do not change the interface in this series
>     - do not change anything related to CDF_vpci for x86
>     - for ARM, leave has_vpci(d) to { false } and do not set
>       XEN_DOMCTL_CDF_vpci
>     - we can discuss it in depth later on, no rush
> 
> - in patch #10, in libxl_arm.c:libxl__arch_domain_prepare_config
>     - do not set XEN_DOMCTL_CDF_vpci
>     - do not set b_info.arch_arm.vpci
>     - do not define LIBXL_HAVE_BUILDINFO_ARM_VPCI in libxl.h
>     - keep make_vpci_node and arch_arm.vpci
> 
> 
> The other patches (#1, #8, #10), which don't change any interfaces, can
> still make it for 4.16 if the review feedback is addressed on time, with
> one open TODO item [1].
> 
> This way, we get all the essential infrastructure we are trying to
> introduce without making any compromises on the external interfaces.
> Still, it is good to have patches #1, #8 and #10 so that with a trivial
> one-liner patch on top of 4.16 we can enable PCI for ARM and do testing
> in the community, in gitlab-ci, and in OSSTest too. (We have been
> discussing special OSSTest flights to validate PCI passthrough as we
> complete development.)

One more thing: I would really like to get at least patch #8 committed
because it would help make progress with parts 2 and 3 of the
PCI enablement series. My preference would be to get #1 and #10
in as well, but I feel less strongly about those.
Jan Beulich Oct. 14, 2021, 6:23 a.m. UTC | #25
On 13.10.2021 22:41, Stefano Stabellini wrote:
> On Wed, 13 Oct 2021, Jan Beulich wrote:
>> On 13.10.2021 14:11, Bertrand Marquis wrote:
>>>> On 13 Oct 2021, at 11:56, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>> If vPCI for Arm on 4.16 is not going to be functional, why so much
>>>> pressure in pushing those patches so fast? I understand the need to
>>>> remove stuff from the queue, but I don't think it's worth the cost of
>>>> introducing a broken interface deliberately on a release.
>>
>> +1
>>
>>> We did not push that fast; these have been on the mailing list for a while (this is the 5th version of the series).
>>> PCI passthrough is a big change requiring lots of patches, and we decided to push it piece by piece to make
>>> the review easier.
>>
>> 5 versions for a series like this one was to be expected. Imo it has
>> been wrong in the past to rush in new features (even if experimental
>> ones) very close to the freeze, and it has misled people into thinking
>> they can delay work until a very (too?) late point in time.
> 
> 
> Hi Roger, Jan,
> 
> Let me take a few minutes to clarify and provide context for this work.
> 
> 
> I don't think anyone "pushed hard" any of the ARM series close to the
> release. I sent "one" email in public as a reminder of things that need
> reviewing:
> https://marc.info/?l=xen-devel&m=163373776611154
> 
> I did send a couple of private emails to Jan but they were to synchronize
> ourselves rather than push; Jan, I hope you didn't take them the wrong
> way.

Definitely not, no worries.

> At the same time it is certainly true that all the people involved here
> worked very hard to get these series ready for 4.16. Oct 15 is the Xen
> Project feature freeze. It is also the deadline for many of us working
> with upstream Xen Project to upstream features and report back to our
> management. I know it doesn't affect you guys directly, but you can
> imagine that as employees of our respective companies we feel pressure
> to complete our objectives in the given timeframe. Of course we need to
> make sure to do that without compromising quality and without
> overextending upstream review capacity.
> 
> 
> The ARM PCI series have been public since Dec 2020, pushed to a gitlab branch
> for testing: https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
> I helped create, manage, and test the branch.
> 
> So, I can see why Bertrand feels like they have been around for a while:
> we have been dealing with these patches for almost a year, even if from
> a xen-devel perspective we are "only" at version 5.

I'm afraid that anything not on xen-devel doesn't really count; the list
is the only "official" channel. ISTR that for certain aspects George has
a plan to make this more pronounced / formal in some of the docs we have.

Making internally set deadlines is a fully understandable aspect. But
starting to post in September for a mid-October deadline is putting
oneself at risk, I would say, for a (set of) series this involved. I see
no problem with anyone doing so, as long as they accept that risk rather
than expecting to be rescued either by extraordinary efforts from others
(reviewers) or by a relaxation of what is permitted to go in.

Of course there's the opposite problem of feedback taking unusually (I'm
inclined to say unduly) long to arrive, like with the gpaddr_bits
addition still facing vague / unclear opposition from Andrew.

> My suggestion is to accept the TODO for patch #8 [1], but I agree with
> Roger that we shouldn't introduce bad interfaces, especially as they
> impact x86, which is not "tech preview". So I think we should revert
> patch #7 (this patch). I'll reply with more details in a separate,
> shorter email.
> 
> [1] https://marc.info/?l=xen-devel&m=163412120531248

FWIW I agree with both proposals (acceptance of TODO and revert).

Jan
Bertrand Marquis Oct. 14, 2021, 7:53 a.m. UTC | #26
Hi,

> On 14 Oct 2021, at 07:23, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 13.10.2021 22:41, Stefano Stabellini wrote:
>> On Wed, 13 Oct 2021, Jan Beulich wrote:
>>> On 13.10.2021 14:11, Bertrand Marquis wrote:
>>>>> On 13 Oct 2021, at 11:56, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>> If vPCI for Arm on 4.16 is not going to be functional, why so much
>>>>> pressure in pushing those patches so fast? I understand the need to
>>>>> remove stuff from the queue, but I don't think it's worth the cost of
>>>>> introducing a broken interface deliberately on a release.
>>> 
>>> +1
>>> 
>>>> We did not push that fast; these have been on the mailing list for a while (this is the 5th version of the series).
>>>> PCI passthrough is a big change requiring lots of patches, and we decided to push it piece by piece to make
>>>> the review easier.
>>> 
>>> 5 versions for a series like this one was to be expected. Imo it has
>>> been wrong in the past to rush in new features (even if experimental
>>> ones) very close to the freeze, and it has misled people into thinking
>>> they can delay work until a very (too?) late point in time.
>> 
>> 
>> Hi Roger, Jan,
>> 
>> Let me take a few minutes to clarify and provide context for this work.
>> 
>> 
>> I don't think anyone "pushed hard" any of the ARM series close to the
>> release. I sent "one" email in public as a reminder of things that need
>> reviewing:
>> https://marc.info/?l=xen-devel&m=163373776611154
>> 
>> I did send a couple of private emails to Jan but they were to synchronize
>> ourselves rather than push; Jan, I hope you didn't take them the wrong
>> way.
> 
> Definitely not, no worries.
> 
>> At the same time it is certainly true that all the people involved here
>> worked very hard to get these series ready for 4.16. Oct 15 is the Xen
>> Project feature freeze. It is also the deadline for many of us working
>> with upstream Xen Project to upstream features and report back to our
>> management. I know it doesn't affect you guys directly, but you can
>> imagine that as employees of our respective companies we feel pressure
>> to complete our objectives in the given timeframe. Of course we need to
>> make sure to do that without compromising quality and without
>> overextending upstream review capacity.
>> 
>> 
>> The ARM PCI series have been public since Dec 2020, pushed to a gitlab branch
>> for testing: https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
>> I helped create, manage, and test the branch.
>> 
>> So, I can see why Bertrand feels like they have been around for a while:
>> we have been dealing with these patches for almost a year, even if from
>> a xen-devel perspective we are "only" at version 5.
> 
> I'm afraid that anything not on xen-devel doesn't really count; the list
> is the only "official" channel. ISTR that for certain aspects George has
> a plan to make this more pronounced / formal in some of the docs we have.
> 
> Making internally set deadlines is a fully understandable aspect. But
> starting to post in September for a mid-October deadline is putting
> oneself at risk, I would say, for a (set of) series this involved. I see
> no problem with anyone doing so, as long as they accept that risk rather
> than expecting to be rescued either by extraordinary efforts from others
> (reviewers) or by a relaxation of what is permitted to go in.
> 
> Of course there's the opposite problem of feedback taking unusually (I'm
> inclined to say unduly) long to arrive, like with the gpaddr_bits
> addition still facing vague / unclear opposition from Andrew.
> 
>> My suggestion is to accept the TODO for patch #8 [1], but I agree with
>> Roger that we shouldn't introduce bad interfaces, especially as they
>> impact x86, which is not "tech preview". So I think we should revert
>> patch #7 (this patch). I'll reply with more details in a separate,
>> shorter email.
>> 
>> [1] https://marc.info/?l=xen-devel&m=163412120531248
> 
> FWIW I agree with both proposals (acceptance of TODO and revert).

I want to say again that nothing I said was meant as a criticism, and I am
very grateful for all the reviews and comments we have received (even if some
of them imply more work or push things into the future), as we want to do
this right.

We will do the following before the end of the day today:
- send a patch reverting patch #7
- send an updated version of patch #8 handling all pending comments

We will also try to (not sure, as some changes might be more complex):
- send an updated version of patch #10
- try to find an acceptable solution for patch #1

Thanks everyone for the support here and the huge amount of time
spent reviewing all the patches that were pushed (and there were a lot of them).

Over the next months Arm is committed to putting a lot more effort into
reviewing and testing patches pushed to xen-devel. We now have an internal
CI that will help us a lot, and everyone on the Arm Xen team will start to
have dedicated time to review upstream patches and help the community.
Xen is a big part of Arm's automotive strategy as the reference Type 1
hypervisor, and we will not disappear any time soon :-)

If there are any outstanding series or patches for which you would like us to
help with review, please do not hesitate to ping me and I will see how we can
help (even if it is only to do some testing).

Thanks again everyone for the help.

Cheers
Bertrand

PS: I will stay connected on IRC if someone needs to contact me.

> 
> Jan
diff mbox series

Patch

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index a5588c643f..7ed1c00e47 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -69,6 +69,7 @@  type domain_create_flag =
 	| CDF_XS_DOMAIN
 	| CDF_IOMMU
 	| CDF_NESTED_VIRT
+	| CDF_VPCI
 
 type domain_create_iommu_opts =
 	| IOMMU_NO_SHAREPT
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 6e94940a8a..391d4abdf8 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -62,6 +62,7 @@  type domain_create_flag =
   | CDF_XS_DOMAIN
   | CDF_IOMMU
   | CDF_NESTED_VIRT
+  | CDF_VPCI
 
 type domain_create_iommu_opts =
   | IOMMU_NO_SHAREPT
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 19c756ac3d..36138c1b2e 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -622,8 +622,8 @@  int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
 {
     unsigned int max_vcpus;
 
-    /* HVM and HAP must be set. IOMMU may or may not be */
-    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=
+    /* HVM and HAP must be set. IOMMU and VPCI may or may not be */
+    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu & ~XEN_DOMCTL_CDF_vpci) !=
          (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
     {
         dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ef1812dc14..79c2aa4636 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -662,6 +662,12 @@  int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    if ( config->flags & XEN_DOMCTL_CDF_vpci )
+    {
+        dprintk(XENLOG_INFO, "vPCI cannot be enabled yet\n");
+        return -EINVAL;
+    }
+
     if ( config->vmtrace_size )
     {
         unsigned int size = config->vmtrace_size;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 6ee5d033b0..40d67ec342 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -483,7 +483,7 @@  static int sanitise_domain_config(struct xen_domctl_createdomain *config)
          ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
            XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
            XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
-           XEN_DOMCTL_CDF_nested_virt) )
+           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpci) )
     {
         dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
         return -EINVAL;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 96696e3842..4245da3f45 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -70,9 +70,11 @@  struct xen_domctl_createdomain {
 #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
 #define _XEN_DOMCTL_CDF_nested_virt   6
 #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
+#define _XEN_DOMCTL_CDF_vpci          7
+#define XEN_DOMCTL_CDF_vpci           (1U << _XEN_DOMCTL_CDF_vpci)
 
 /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
-#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_nested_virt
+#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_vpci
 
     uint32_t flags;
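
For context, once an architecture actually accepts the new flag, the
expected consumption pattern would presumably be a per-domain predicate
over the stored creation flags. A minimal sketch, assuming d->options
holds the XEN_DOMCTL_CDF_* flags as on current staging; the helper name
is illustrative and not part of this patch:

    #include <xen/sched.h>
    #include <public/domctl.h>

    /* Sketch: report whether vPCI was requested at domain creation. */
    static inline bool domain_has_vpci(const struct domain *d)
    {
        return !!(d->options & XEN_DOMCTL_CDF_vpci);
    }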