
[RFC] Add SUPPORT.md

Message ID 20170831102719.30462-1-george.dunlap@citrix.com (mailing list archive)
State New, archived

Commit Message

George Dunlap Aug. 31, 2017, 10:27 a.m. UTC
Add a machine-readable file to describe what features are in what
state of being 'supported', as well as information about how long this
release will be supported, and so on.

The document should be formatted using "semantic newlines" [1], to make
changes easier.

Signed-off-by: Ian Jackson <ian.jackson@citrix.com>
Signed-off-by: George Dunlap <george.dunlap@citrix.com>

[1] http://rhodesmill.org/brandon/2012/one-sentence-per-line/
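
For example (illustrative only, reusing a sentence from the document
itself), rather than reflowing a paragraph at a fixed column width:

    Many hardware device and motherboard combinations are not possible
    to use safely.  The XenProject will support bugs in PCI passthrough
    for Xen.

the semantic-newlines version breaks at clause boundaries, so that a
later change touches only a single line:

    Many hardware device and motherboard combinations are not possible to use safely.
    The XenProject will support bugs in PCI passthrough for Xen.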
---

Definitely meant to be a draft; if you disagree with the status of one
of these features, now is the time to suggest something else.

I've made a number of stylistic decisions that people may have opinions on:

* When dealing with multiple implementations of the same feature (for
  instance, x86/PV x86/HVM and ARM guest types, or Linux / FreeBSD /
  QEMU backends), I decided in general to combine the feature itself
  into a single stanza, and break the 'Status' line up by specifying
  the implementation.

  For example, if a feature is supported on x86 but tech preview on
  ARM, there would be two status lines, thus:

    Status, x86: Supported
    Status, ARM: Tech preview

  If a feature is not implemented for a specific implementation,
  it will simply not be listed
  (a sketch of parsing this stanza format follows this list):

    Status, x86: Supported

* I've added common 'Support variations' to the bottom of the document

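As an aside, here is a minimal sketch of how a tool might pull the
machine-readable stanzas back out of the document (Python; a
hypothetical helper, not part of this patch, handling only the
Status/Limit lines shown above):

    import re

    # Matches indented stanza lines such as:
    #     Status: Supported
    #     Status, x86: Supported
    #     Limit, ARM32: 8
    STANZA_RE = re.compile(
        r'^ {4}(?P<key>Status|Limit)'
        r'(?:, (?P<impl>[^:]+))?: (?P<value>.+)$')

    def parse_support(path):
        """Return (section, key, implementation, value) tuples."""
        section, entries = None, []
        with open(path) as f:
            for line in f:
                if line.startswith('### '):
                    section = line[4:].strip()
                    continue
                m = STANZA_RE.match(line.rstrip())
                if m:
                    entries.append((section, m.group('key'),
                                    m.group('impl'), m.group('value')))
        return entries

So a "Status, x86: Supported" line under "### Some Feature" would come
back as ('Some Feature', 'Status', 'x86', 'Supported'), with None as
the implementation for plain "Status:" lines.
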
Thinking on support status of specific features:

gdbsx security support: Someone may want to debug an untrusted guest,
so I think we should say 'yes' here.

xentrace: Users may want to trace guests in production environments,
so I think we should say 'yes'.

gcov: No good reason to run a gcov hypervisor in a production
environment.  There may be ways for a rogue guest to DoS.

memory paging: Changed to experimental -- are we testing it at all?

alternative p2m: No security support until better testing in place

ARINC653 scheduler: Not sure we have the expertise to properly fix
bugs.  Can switch to 'supported' if we get commitment from
maintainers.

vMCE: Is MCE an x86-only thing, or could this conceivably be extended
to ARM?

PVHv2: Not sure why we'd downgrade guest support to 'experimental'.

ARM/Virtual RAM: Not sure what the note 'Limited by supported host
memory' was supposed to mean

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Dario Faggioli <dario.faggioli@citrix.com>
CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
CC: Roger Pau Monne <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Anthony Perard <anthony.perard@citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Konrad Wilk <konrad.wilk@oracle.com>
---
 SUPPORT.md | 770 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 770 insertions(+)
 create mode 100644 SUPPORT.md

Comments

Paul Durrant Aug. 31, 2017, 10:46 a.m. UTC | #1
> -----Original Message-----
> +
> +### Blkfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    Status, Windows: Supported [XXX]
> +
> +Guest-side driver capable of speaking the Xen PV block protocol
> +
> +### Netfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    States, Windows: Supported [XXX]
> +

The Windows PV drivers are a sub-project of Xen so I guess they should have the same level of support as Linux and FreeBSD frontends, but I'm unclear as to what 'Supported' means in context of guest-side code. E.g. if someone finds a way of crashing a network frontend using a specially crafted packet, does that mean that an XSA should be issued?

> +Guest-side driver capable of speaking the Xen PV networking protocol
> +
> +### Xen Framebuffer
> +
> +    Status, Linux (xen-fbfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
> +
> +[XXX FreeBSD? NetBSD?]
> +
> +### Xen Console
> +
> +    Status, Linux (hvc_xen): Supported
> +
> +Guest-side driver capable of speaking the Xen PV console protocol
> +
> +[XXX FreeBSD? NetBSD? Windows?]
> +

There is one for Windows too.

> +### Xen PV keyboard
> +
> +    Status, Linux (xen-kbdfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV keyboard protocol

There is one for Windows too. It's not been officially announced as it needed some fixes in QEMU to allow frontends running in HVM guests to function correctly.

  Paul
George Dunlap Aug. 31, 2017, 10:56 a.m. UTC | #2
On 08/31/2017 11:46 AM, Paul Durrant wrote:
>> -----Original Message-----
>> +
>> +### Blkfront
>> +
>> +    Status, Linux: Supported
>> +    Status, FreeBSD: Supported, Security support external
>> +    Status, Windows: Supported [XXX]
>> +
>> +Guest-side driver capable of speaking the Xen PV block protocol
>> +
>> +### Netfront
>> +
>> +    Status, Linux: Supported
>> +    Status, FreeBSD: Supported, Security support external
>> +    States, Windows: Supported [XXX]
>> +
> 
> The Windows PV drivers are a sub-project of Xen so I guess they should have the same level of support as Linux and FreeBSD frontends, but I'm unclear as to what 'Supported' means in context of guest-side code. E.g. if someone finds a way of crashing a network frontend using a specially crafted packet, does that mean that an XSA should be issued?

I would think so, yes.

>> +Guest-side driver capable of speaking the Xen PV networking protocol
>> +
>> +### Xen Framebuffer
>> +
>> +    Status, Linux (xen-fbfront): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
>> +
>> +[XXX FreeBSD? NetBSD?]
>> +
>> +### Xen Console
>> +
>> +    Status, Linux (hvc_xen): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV console protocol
>> +
>> +[XXX FreeBSD? NetBSD? Windows?]
>> +
> 
> There is one for Windows too.

OK, I'll add that in.

>> +### Xen PV keyboard
>> +
>> +    Status, Linux (xen-kbdfront): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV keyboard protocol
> 
> There is one for Windows too. It's not been officially announced as it needed some fixes in QEMU to allow frontends running in HVM guests to function correctly.

OK; would you describe its expected reliability in 4.10 as closer to
"Here be dragons", or "Quirky"?

 -George
Paul Durrant Aug. 31, 2017, 11:03 a.m. UTC | #3
> -----Original Message-----
> From: George Dunlap [mailto:george.dunlap@citrix.com]
> Sent: 31 August 2017 11:56
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xenproject.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Tim (Xen.org) <tim@xen.org>; Dario Faggioli
> <dario.faggioli@citrix.com>; Tamas K Lengyel <tamas.lengyel@zentific.com>;
> Roger Pau Monne <roger.pau@citrix.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Anthony Perard <anthony.perard@citrix.com>;
> Konrad Wilk <konrad.wilk@oracle.com>
> Subject: Re: [PATCH RFC] Add SUPPORT.md
>
> On 08/31/2017 11:46 AM, Paul Durrant wrote:
> >> -----Original Message-----
> >> +
> >> +### Blkfront
> >> +
> >> +    Status, Linux: Supported
> >> +    Status, FreeBSD: Supported, Security support external
> >> +    Status, Windows: Supported [XXX]
> >> +
> >> +Guest-side driver capable of speaking the Xen PV block protocol
> >> +
> >> +### Netfront
> >> +
> >> +    Status, Linux: Supported
> >> +    Status, FreeBSD: Supported, Security support external
> >> +    States, Windows: Supported [XXX]
> >> +
> >
> > The Windows PV drivers are a sub-project of Xen so I guess they should
> > have the same level of support as Linux and FreeBSD frontends, but I'm
> > unclear as to what 'Supported' means in context of guest-side code. E.g. if
> > someone finds a way of crashing a network frontend using a specially crafted
> > packet, does that mean that an XSA should be issued?
>
> I would think so, yes.
>
> >> +Guest-side driver capable of speaking the Xen PV networking protocol
> >> +
> >> +### Xen Framebuffer
> >> +
> >> +    Status, Linux (xen-fbfront): Supported
> >> +
> >> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
> >> +
> >> +[XXX FreeBSD? NetBSD?]
> >> +
> >> +### Xen Console
> >> +
> >> +    Status, Linux (hvc_xen): Supported
> >> +
> >> +Guest-side driver capable of speaking the Xen PV console protocol
> >> +
> >> +[XXX FreeBSD? NetBSD? Windows?]
> >> +
> >
> > There is one for Windows too.
>
> OK, I'll add that in.
>
> >> +### Xen PV keyboard
> >> +
> >> +    Status, Linux (xen-kbdfront): Supported
> >> +
> >> +Guest-side driver capable of speaking the Xen PV keyboard protocol
> >
> > There is one for Windows too. It's not been officially announced as it
> > needed some fixes in QEMU to allow frontends running in HVM guests to
> > function correctly.
>
> OK; would you describe its expected reliability in 4.10 as closer to
> "Here be dragons", or "Quirky"?

I've lost track of the state of the QEMU patches but, if they go in, then it should be completely reliable. If not then it will be non-functional... but the same would be true of the Linux frontend running in an HVM guest. (The patches fix a bug where xenvkbd and xenfb are interdependent... but the xenfb backend is only created in the xenpv machine type).

  Paul

>
>  -George
George Dunlap Aug. 31, 2017, 11:05 a.m. UTC | #4
On 08/31/2017 12:03 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: George Dunlap [mailto:george.dunlap@citrix.com]
>> Sent: 31 August 2017 11:56
>> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xenproject.org
>> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
>> Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan Beulich
>> <jbeulich@suse.com>; Tim (Xen.org) <tim@xen.org>; Dario Faggioli
>> <dario.faggioli@citrix.com>; Tamas K Lengyel <tamas.lengyel@zentific.com>;
>> Roger Pau Monne <roger.pau@citrix.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Anthony Perard <anthony.perard@citrix.com>;
>> Konrad Wilk <konrad.wilk@oracle.com>
>> Subject: Re: [PATCH RFC] Add SUPPORT.md
>>
>> On 08/31/2017 11:46 AM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> +
>>>> +### Blkfront
>>>> +
>>>> +    Status, Linux: Supported
>>>> +    Status, FreeBSD: Supported, Security support external
>>>> +    Status, Windows: Supported [XXX]
>>>> +
>>>> +Guest-side driver capable of speaking the Xen PV block protocol
>>>> +
>>>> +### Netfront
>>>> +
>>>> +    Status, Linux: Supported
>>>> +    Status, FreeBSD: Supported, Security support external
>>>> +    States, Windows: Supported [XXX]
>>>> +
>>>
>>> The Windows PV drivers are a sub-project of Xen so I guess they should
>>> have the same level of support as Linux and FreeBSD frontends, but I'm
>>> unclear as to what 'Supported' means in context of guest-side code. E.g. if
>>> someone finds a way of crashing a network frontend using a specially crafted
>>> packet, does that mean that an XSA should be issued?
>>
>> I would think so, yes.
>>
>>>> +Guest-side driver capable of speaking the Xen PV networking protocol
>>>> +
>>>> +### Xen Framebuffer
>>>> +
>>>> +    Status, Linux (xen-fbfront): Supported
>>>> +
>>>> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
>>>> +
>>>> +[XXX FreeBSD? NetBSD?]
>>>> +
>>>> +### Xen Console
>>>> +
>>>> +    Status, Linux (hvc_xen): Supported
>>>> +
>>>> +Guest-side driver capable of speaking the Xen PV console protocol
>>>> +
>>>> +[XXX FreeBSD? NetBSD? Windows?]
>>>> +
>>>
>>> There is one for Windows too.
>>
>> OK, I'll add that in.
>>
>>>> +### Xen PV keyboard
>>>> +
>>>> +    Status, Linux (xen-kbdfront): Supported
>>>> +
>>>> +Guest-side driver capable of speaking the Xen PV keyboard protocol
>>>
>>> There is one for Windows too. It's not been officially announced as it
>>> needed some fixes in QEMU to allow frontends running in HVM guests to
>>> function correctly.
>>
>> OK; would you describe its expected reliability in 4.10 as closer to
>> "Here be dragons", or "Quirky"?
> 
> I've lost track of the state of the QEMU patches but, if they go in, then it should be completely reliable. If not then it will be non-functional... but the same would be true of the Linux frontend running in an HVM guest. (The patches fix a bug where xenvkbd and xenfb are interdependent... but the xenfb backend is only created in the xenpv machine type).

OK -- well we should state here the status of the version(s) of QEMU
that will ship in the Xen release tarball; and if the Linux frontend
is broken under HVM, we should specify that as well.

 -George
Roger Pau Monne Aug. 31, 2017, 11:25 a.m. UTC | #5
On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
> Add a machine-readable file to describe what features are in what
> state of being 'supported', as well as information about how long this
> release will be supported, and so on.
> 
> The document should be formatted using "semantic newlines" [1], to make
> changes easier.
> 
> Signed-off-by: Ian Jackson <ian.jackson@citrix.com>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> 
> [1] http://rhodesmill.org/brandon/2012/one-sentence-per-line/
> ---
> 
> Definitely meant to be a draft; if you disagree with the status of one
> of these features, now is the time to suggest something else.
> 
> I've made a number of stylistic decisions that people may have opinions on:
> 
> * When dealing with multiple implementations of the same feature (for
>   instance, x86/PV x86/HVM and ARM guest types, or Linux / FreeBSD /
>   QEMU backends), I decided in general to combine the feature itself
>   into a single stanza, and break the 'Status' line up by specifying
>   the implementation.
> 
>   For example, if a feature is supported on x86 but tech preview on
>   ARM, there would be two status lines, thus:
> 
>     Status, x86: Supported
>     Status, ARM: Tech preview
> 
>   If a feature is not implemented for a specific implementation, it
>   will simply not be listed:
> 
>     Status, x86: Supported
> 
> * I've added common 'Support variations' to the bottom of the document
> 
> Thinking on support status of specific features:
> 
> gdbsx security support: Someone may want to debug an untrusted guest,
> so I think we should say 'yes' here.
> 
> xentrace: Users may want to trace guests in production environments,
> so I think we should say 'yes'.
> 
> gcov: No good reason to run a gcov hypervisor in a production
> environment.  There may be ways for a rogue guest to DoS.
> 
> memory paging: Changed to experimental -- are we testing it at all?
> 
> alternative p2m: No security support until better testing in place
> 
> ARINC653 scheduler: Not sure we have the expertise to properly fix
> bugs.  Can switch to 'supported' if we get commitment from
> maintainers.
> 
> vMCE: Is MCE an x86-only thing, or could this conceivably be extended
> to ARM?
> 
> PVHv2: Not sure why we'd downgrade guest support to 'experimental'.
> 
> ARM/Virtual RAM: Not sure what the note 'Limited by supported host
> memory' was supposed to mean
> 
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Dario Faggioli <dario.faggioli@citrix.com>
> CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
> CC: Roger Pau Monne <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony Perard <anthony.perard@citrix.com>
> CC: Paul Durrant <paul.durrant@citrix.com>
> CC: Konrad Wilk <konrad.wilk@oracle.com>
> ---
>  SUPPORT.md | 770 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 770 insertions(+)
>  create mode 100644 SUPPORT.md
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> new file mode 100644
> index 0000000000..283cbeb725
> --- /dev/null
> +++ b/SUPPORT.md
> @@ -0,0 +1,770 @@
> +# Support statement for this release
> +
> +This document describes the support status and in particular the
> +security support status of the Xen branch within which you find it.
> +
> +See the bottom of the file for the definitions of the support status
> +levels etc.
> +
> +# Release Support
> +
> +    Xen-Version: 4.10-unstable
> +    Initial-Release: n/a
> +    Supported-Until: TBD
> +    Security-Support-Until: Unreleased - not yet security-supported
> +
> +# Feature Support
> +
> +## Host Architecture
> +
> +### x86-64
> +
> +    Status: Supported
> +
> +### ARM v7 + Virtualization Extensions
> +
> +    Status: Supported
> +
> +### ARM v8
> +
> +    Status: Supported
> +
> +## Guest Type
> +
> +### x86/PV
> +
> +    Status: Supported
> +
> +Traditional Xen Project PV guest
> +
> +### x86/HVM
> +
> +    Status: Supported
> +
> +Fully virtualised guest using hardware virtualisation extensions
> +
> +Requires hardware virtualisation support
> +
> +### x86/PV-on-HVM

Do we really consider this a guest type? From both Xen and the
toolstack PoV this is just a HVM guest. What's more, I'm not really
sure xl/libxl has the right options to create a HVM guest _without_
exposing any PV interfaces.

Ie: can a HVM guest without PV timers and PV event channels
actually be created? Or even without having the MSR to initialize the
hypercall page.

> +
> +    Status: Supported
> +
> +Fully virtualised guest using PV extensions/drivers for improved performance
> +
> +Requires hardware virtualisation support
> +
> +### x86/PVH guest
> +
> +    Status: Preview
> +
> +PVHv2 guest support
> +
> +Requires hardware virtualisation support
> +
> +### x86/PVH dom0
              ^ v2
> +
> +    Status: Experimental

The status of this is just "not finished". We need at least the PCI
emulation series for having a half-functional PVHv2 Dom0.

> +
> +PVHv2 domain 0 support
> +
> +### ARM guest
> +
> +    Status: Supported
> +
> +ARM only has one guest type at the moment
> +
> +## Limits/Host
> +
> +### CPUs
> +
> +    Limit, x86: 4095
> +    Limit, ARM32: 8
> +    Limit, ARM64: 128
> +
> +Note that for x86, very large number of cpus may not work/boot,
> +but we will still provide security support
> +
> +### x86/RAM
> +
> +    Limit, x86: 16TiB
> +    Limit, ARM32: 16GiB
> +    Limit, ARM64: 5TiB
> +
> +[XXX: Andy to suggest what this should say for x86]
> +
> +## Limits/Guest
> +
> +### Virtual CPUs
> +
> +    Limit, x86 PV: 512
> +    Limit, x86 HVM: 128

There has already been some discussion about the HVM vCPU limit due to
other topics, is Xen really committed to providing security support
for this case?

I would very much like to have a host in osstest capable of creating
such a guest, plus maybe some XTF tests to stress it.

> +    Limit, ARM32: 8
> +    Limit, ARM64: 128
> +
> +### x86/PV/Virtual RAM
       ^ This seems wrong, "Guest RAM" maybe?
> +
> +    Limit, x86 PV: >1TB

 > 1TB? that seems kind of vague.

> +    Limit, x86 HVM: 1TB
> +    Limit, ARM32: 16GiB
> +    Limit, ARM64: 1TB
> +
> +### x86 PV/Event Channels
> +
> +    Limit: 131072
> +
> +## Toolstack
> +
> +### xl
> +
> +    Status: Supported
> +
> +### Direct-boot kernel image format
> +
> +    Supported, x86: bzImage

This should be:

Supported, x86: bzImage, ELF

FreeBSD kernel is just a plain ELF binary that's loaded using
libelf. It should also be suitable for ARM, but I have no idea whether
it has been tested on ARM at all.
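
For reference, "direct boot" here means an xl configuration that hands
a kernel image straight to the toolstack, along the lines of (paths
illustrative):

    kernel = "/path/to/kernel"
    ramdisk = "/path/to/initrd"
    extra = "console=hvc0"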

> +    Supported, ARM32: zImage
> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> +
> +Format which the toolstack accept for direct-boot kernels
> +
> +### Qemu based disk backend (qdisk) for xl
> +
> +    Status: Supported
> +
> +### Open vSwitch integration for xl
> +
> +    Status: Supported
> +
> +### systemd support for xl
> +
> +    Status: Supported
> +
> +### JSON support for xl
> +
> +    Status: Preview
> +
> +### AHCI support for xl
> +
> +    Status, x86: Supported
> +
> +### ACPI guest
> +
> +    Status, ARM: Preview
       Status: Supported

HVM guests have been using ACPI for a long time on x86.

> +
> +### PVUSB support for xl
> +
> +    Status: Supported
> +
> +### HVM USB passthrough for xl
> +
> +    Status, x86: Supported
> +
> +### QEMU backend hotplugging for xl
> +
> +    Status: Supported
> +
> +### Soft-reset for xl
> +
> +    Status: Supported
> +
> +### Virtual cpu hotplug
> +
> +    Status, ARM: Supported

Status: Supported

On x86 it is supported for both HVM and PV. HVM can use ACPI, PV uses
xenstore.

> +
> +## Toolstack/3rd party
> +
> +### libvirt driver for xl
> +
> +    Status: Supported, Security support external
> +
> +Security support for libvirt is provided by the libvirt project.
> +See https://libvirt.org/securityprocess.html
> +
> +## Tooling
> +
> +### gdbsx
> +
> +    Status, x86: Supported
> +
> +Debugger to debug ELF guests
> +
> +### vPMU
> +
> +    Status, x86: Supported, Not security supported
> +
> +Virtual Performance Management Unit for HVM guests
> +
> +Disabled by default (enable with hypervisor command line option).
> +This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
> +
> +### Guest serial sonsole
> +
> +    Status: Supported
> +
> +Logs key hypervisor and Dom0 kernel events to a file

What's "Guest serial console"? Is it xenconsoled? Does it log Dom0
kernel events?

> +
> +### xentrace
> +
> +    Status, x86: Supported
> +
> +Tool to capture Xen trace buffer data
> +
> +### gcov
> +
> +    Status: Supported, Not security supported
> +
> +## Memory Management
> +
> +### Memory Ballooning
> +
> +    Status: Supported
> +
> +### Memory Sharing
> +
> +    Status, x86 HVM: Preview
> +    Status, ARM: Preview
> +
> +Allow sharing of identical pages between guests
> +
> +### Memory Paging
> +
> +    Status, x86 HVM: Experimenal
> +
> +Allow pages belonging to guests to be paged to disk
> +
> +### Transcendent Memory
> +
> +    Status: Experimental

Some text here might be nice, although I don't even know myself what
the purpose of tmem is.

[...]
> +### x86/Deliver events to PVHVM guests using Xen event channels
> +
> +    Status: Supported

I'm not really sure of the usefulness of this item. As said above, I
don't think it's possible to create a HVM guest without event
channels, in which case this should be already covered by the HVM
guest type support.

> +
> +### Fair locks (ticket-locks)
> +
> +    Status: Supported
> +
> +[XXX Is this host ticket locks?  Or some sort of guest PV ticket locks?  If the former it doesn't make any sense to call it 'supported' because they're either there or not.]

Isn't that the spinlock implementation used by Xen internally? In any
case, I don't think this should be on the list at all.

> +
> +## High Availability and Fault Tolerance
> +
> +### Live Migration, Save & Restore
> +
> +    Status, x86: Supported
> +
> +### Remus Fault Tolerance
> +
> +    Status: Experimental
> +
> +### COLO Manager
> +
> +    Status: Experimental
> +
> +### vMCE
> +
> +    Status, x86: Supported
> +
> +Forward Machine Check Exceptions to Appropriate guests
> +
> +## Virtual driver support, guest side
> +
> +### Blkfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external

Status, NetBSD: Supported, Security support external

> +    Status, Windows: Supported [XXX]
> +
> +Guest-side driver capable of speaking the Xen PV block protocol

It feels kind of silly to list code that's not part of our project; I
understand this is done because Linux lacks a security process and we
are nice people, but IMHO this should be managed by the security team
of each external project (or live with the fact that there's none).

> +### Netfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external

Status, NetBSD: Supported, Security support external
Status, OpenBSD: Supported, Security support external

> +    States, Windows: Supported [XXX]
> +
> +Guest-side driver capable of speaking the Xen PV networking protocol

https://www.freebsd.org/security/
http://www.netbsd.org/support/security/
https://www.openbsd.org/security.html

> +
> +### Xen Framebuffer
> +
> +    Status, Linux (xen-fbfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
> +
> +[XXX FreeBSD? NetBSD?]

I don't think so.

> +
> +### Xen Console
> +
> +    Status, Linux (hvc_xen): Supported
> +
> +Guest-side driver capable of speaking the Xen PV console protocol
> +
> +[XXX FreeBSD? NetBSD? Windows?]

Status NetBSD, FreeBSD: Supported, Security support external

[...]
> +Host-side implementaiton of the Xen PV framebuffer protocol
> +
> +### Xen Console
> +
> +    Status, Linux: Supported

There's no Linux host side (backend) of the PV console, it's
xenconsoled. It should be:

Status: Supported

IMHO.

> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV console protocol
> +
> +### Xen PV keyboard
> +
> +    Status, Linux: Supported

Is there a Linux backend for this? I thought the only backend was in
QEMU.

> +    Status, QEMU: Supported
> +
> +Host-side implementation fo the Xen PV keyboard protocol
> +
> +### Xen PV USB
> +
> +    Status, Linux: Experimental
> +    Status, QEMU: Supported

Not sure about this either, do we consider both the PV backend and the
QEMU emulation? Is the USB PV backend inside of Linux?

> +
> +Host-side implementation of the Xen PV USB protocol
> +
> +### Xen PV SCSI protocol
> +
> +    Status, Linux: [XXX]
> +
> +### Xen PV TPM
> +
> +    Status, Linux: Supported

Again this backend runs in user-space IIRC, which means it's not Linux
specific.

> +
> +### Xen 9pfs
> +
> +    Status, QEMU: Preview
> +
> +### PVCalls
> +
> +    Status, Linux: Preview
> +
> +### Online resize of virtual disks
> +
> +    Status: Supported

That pretty much depends on where you are actually storing your disks
I guess. I'm not sure we want to make such compromises.

> +
> +## Security
> +
> +### Driver Domains
> +
> +    Status: Supported
> +
> +### Device Model Stub Domains
> +
> +    Status: Supported, with caveats
> +
> +Vulnerabilities of a device model stub domain to a hostile driver domain are excluded from security support.
> +
> +### KCONFIG Expert
> +
> +    Status: Experimental
> +
> +### Live Patching
> +
> +    Status: Supported, x86 only

Status, x86: Supported
Status, ARM: Preview | Experimental?

Not sure which one is best.

> +
> +Compile time disabled
> +
> +### Virtual Machine Introspection
> +
> +    Status: Supported, x86 only

Status, x86: Supported.

> +
> +### XSM & FLASK
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +### XSM & FLASK support for IS_PRIV
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +### vTPM Support
> +
> +    Status: Supported, x86 only

How's that different from the "Xen PV TPM" item above?

> +
> +### Intel/TXT ???
> +
> +    Status: ???
> +
> +TXT-based integrity system for the Linux kernel and Xen hypervisor
> +
> +[XXX]
> +
> +## Hardware
> +
> +### x86/Nested Virtualization
> +
> +    Status: Experimental

Status, x86: Experimental.

> +
> +Running a hypervisor inside an HVM guest

I would write that as: "Providing hardware virtualization extensions
to HVM guests."

> +
> +### x86/HVM iPXE
> +
> +    Status: Supported, with caveats
> +
> +Booting a guest via PXE.
> +PXE inherently places full trust of the guest in the network,
> +and so should only be used
> +when the guest network is under the same administrative control
> +as the guest itself.

Hm, not sure why this needs to be spelled out, it's just like running
any bootloader/firmware inside a HVM guest, which I'm quite sure we
are not going to list here.

Ie: I don't see us listing OVMF, SeaBIOS or ROMBIOS, simply because
they run inside the guest, so if they are able to cause security
issues, anything else is also capable of causing them.

> +
> +### x86/Physical CPU Hotplug
> +
> +    Status: Supported
> +
> +### x86/Physical Memory Hotplug
> +
> +    Status: Supported
> +
> +### x86/PCI Passthrough PV
> +
> +    Status: Supported, Not security supported
> +
> +PV passthrough cannot be done safely.
> +
> +[XXX Not even with an IOMMU?]
> +
> +### x86/PCI Passthrough HVM
> +
> +    Status: Supported, with caveats
> +
> +Many hardware device and motherboard combinations are not possible to use safely.
> +The XenProject will support bugs in PCI passthrough for Xen,
> +but the user is responsible to ensure that the hardware combination they use
> +is sufficiently secure for their needs,
> +and should assume that any combination is insecure
> +unless they have reason to believe otherwise.
> +
> +### ARM/Non-PCI device passthrough
> +
> +    Status: Supported

I guess non-pci devices on ARM also use the IOMMU? (SMMU)

> +
> +### x86/Advanced Vector eXtension
> +
> +    Status: Supported
> +
> +### Intel Platform QoS Technologies
> +
> +    Status: Preview
> +
> +### ARM/ACPI (host)
> +
> +    Status: Experimental
> +
> +### ARM/SMMU
> +
> +    Status: Supported, with caveats
> +
> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.

I'm not sure of the purpose of this sentence, it's quite clear that
the SMMU is only supported if available. Also, I'm not sure this
should be spelled out in this document, x86 doesn't have a VT-d or SVM
section.

> +
> +### ARM/ITS
> +
> +    Status: experimental
> +
> +[XXX What is this?]
> +
> +### ARM: 16K and 64K pages in guests

Newline

> +    Status: Supported, with caveats
> +
> +No support for QEMU backends in a 16K or 64K domain.
> +

Extra newline.

> +
> +# Format and definitions
> +
> +This file contains prose, and machine-readable fragments.
> +The data in a machine-readable fragment relate to
> +the section and subection in which it is fine.
                                         ^ belongs?

> +
> +The file is in markdown format.
> +The machine-readable fragments are markdown literals
> +containing RFC-822-like (deb822-like) data.
> +
> +## Keys found in the Feature Support subsections
> +
> +### Status
> +
> +This gives the overall status of the feature,
> +including security support status, functional completeness, etc.
> +Refer to the detailed definitions below.
> +
> +If support differs based on implementation
> +(for instance, x86 / ARM, Linux / QEMU / FreeBSD),
> +one line for each set of implementations will be listed.
> +
> +### Restrictions
> +
> +This is a summary of any restrictions which apply,
> +particularly to functional or security support.
> +
> +Full details of restrictions may be provided in the prose
> +section of the feature entry,
> +if a Restrictions tag is present.

Formatting seems weird IMHO.

> +
> +### Limit-Security
> +
> +For size limits.
> +This figure shows the largest configuration which will receive
> +security support.
> +This does not mean that such a configuration will actually work.
> +This limit will only be listed explicitly
> +if it is different than the theoretical limit.

There's no usage of this at all in the document I think.

> +
> +### Limit
> +
> +This figure shows a theoretical size limit.
> +This does not mean that such a large configuration will actually work.

That doesn't make us look especially good, but anyway.

[...]
> +### Security supported
> +
> +Will XSAs be issued if security-related bugs are discovered
> +in the functionality?
> +
> +If "no",
> +anyone who finds a security-related bug in the feature
> +will be advised to
> +post it publicly to the Xen Project mailing lists
> +(or contact another security response team,
> +if a relevant one exists).
> +
> +Bugs found after the end of **Security-Support-Until**
> +in the Release Support section will receive an XSA
> +if they also affect newer, security-supported, versions of Xen.
> +However,
> +the Xen Project will not provide official fixes
> +for non-security-supported versions.

Again weird formatting above (also elsewhere).

Thanks, Roger.
Jan Beulich Aug. 31, 2017, 12:40 p.m. UTC | #6
>>> On 31.08.17 at 13:25, <roger.pau@citrix.com> wrote:
> On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
>> +## Limits/Guest
>> +
>> +### Virtual CPUs
>> +
>> +    Limit, x86 PV: 512
>> +    Limit, x86 HVM: 128
> 
> There has already been some discussion about the HVM vCPU limit due to
> other topics, is Xen really committed to providing security support
> for this case?
> 
> I would very much like to have a host in osstest capable of creating
> such a guest, plus maybe some XTF tests to stress it.

The problem is that whether it works well depends on the workload you
put inside the guest. Simply booting such a guest is quite likely going
to be fine (I tried it a long while ago without seeing any issues).

Jan
Jan Beulich Aug. 31, 2017, 12:46 p.m. UTC | #7
>>> On 31.08.17 at 12:27, <george.dunlap@citrix.com> wrote:
> vMCE: Is MCE an x86-only thing, or could this conceivably be extended
> to ARM?

I think this can't be reasonably extended beyond x86 (and,
considering their similar origin, ia64).

> +## Tooling
> +
> +### gdbsx
> +
> +    Status, x86: Supported
> +
> +Debugger to debug ELF guests
> +
> +### vPMU
> +
> +    Status, x86: Supported, Not security supported
> +
> +Virtual Performance Management Unit for HVM guests

Why is this under Tooling?

> +## Scalability
> +
> +### 1GB/2MB super page support
> +
> +    Status: Supported

Is this a host, guest, CPU, and/or IOMMU capability? Do the same
superpage sizes apply to 16k/64k-page-size ARM? If host, here as
well as ...

> +### Fair locks (ticket-locks)
> +
> +    Status: Supported

... here I wonder whether these are legitimately on this list in the
first place. Admins have no way to avoid their use.

> +### Live Patching
> +
> +    Status: Supported, x86 only
> +
> +Compile time disabled

But we're settled to change that, aren't we? It was even meant to be
so in 4.9, but then didn't make it.

> +### Virtual Machine Introspection
> +
> +    Status: Supported, x86 only

Including security support?

> +### x86/PCI Passthrough PV
> +
> +    Status: Supported, Not security supported
> +
> +PV passthrough cannot be done safely.
> +
> +[XXX Not even with an IOMMU?]

It depends who you ask. I think it would be okay to use ...

> +### x86/PCI Passthrough HVM
> +
> +    Status: Supported, with caveats
> +
> +Many hardware device and motherboard combinations are not possible to use safely.
> +The XenProject will support bugs in PCI passthrough for Xen,
> +but the user is responsible to ensure that the hardware combination they use
> +is sufficiently secure for their needs,
> +and should assume that any combination is insecure
> +unless they have reason to believe otherwise.

... this for PV+IOMMU too.

> +### x86/Advanced Vector eXtension
> +
> +    Status: Supported

How fine-grained do we want this document to be? If this one is a
valid entry, then many other CPUID bits will need to have entries
too.

Having reached the end of the list I further wonder whether we
shouldn't add information on various hypercalls and their subops.
I.e. a full walk through include/public/ may be needed to see
what additional entries may be necessary or desirable.

> +# Format and definitions
> +
> +This file contains prose, and machine-readable fragments.
> +The data in a machine-readable fragment relate to
> +the section and subection in which it is fine.

"subsection" and s/fine/found/ ?

> +## Definition of Status labels
> +
> +Each Status value corresponds to levels of security support,
> +testing, stability, etc., as follows:
> +
> +### Experimental
> +
> +    Functional completeness: No
> +    Functional stability: Here be dragons
> +    Interface stability: Not stable
> +    Security supported: No
> +
> +### Tech Preview

I think most if not all entries using this say just "Preview" - I think
the terms would better fully match.

Jan
Wei Liu Sept. 1, 2017, 3 p.m. UTC | #8
On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
> +### Direct-boot kernel image format
> +
> +    Supported, x86: bzImage

Do you mean booting a PV guest? If so there are a few more formats.

> +    Supported, ARM32: zImage
> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> +
> +Format which the toolstack accept for direct-boot kernels
[...]
> +### JSON support for xl
> +
> +    Status: Preview
> +

What is this?

> +### AHCI support for xl
> +
> +    Status, x86: Supported
> +

There is only one knob to change; I'm not sure whether it makes sense to
list it separately.

> +### Soft-reset for xl
> +
> +    Status: Supported
> +

We never tested this in osstest so I'm not sure if this is the
correct status. Furthermore there are also moving parts in the hypervisor.


> +
> +### Online resize of virtual disks
> +
> +    Status: Supported

What is this? Is this part of the PV block protocol?
George Dunlap Sept. 7, 2017, 10:49 a.m. UTC | #9
On 08/31/2017 12:25 PM, Roger Pau Monne wrote:
> On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
>> Add a machine-readable file to describe what features are in what
>> state of being 'supported', as well as information about how long this
>> release will be supported, and so on.
>>
>> The document should be formatted using "semantic newlines" [1], to make
>> changes easier.
>>
>> Signed-off-by: Ian Jackson <ian.jackson@citrix.com>
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Thanks for the thorough review!  Some responses...


>> +### x86/PV-on-HVM
> 
> Do we really consider this a guest type? From both Xen and the
> toolstack PoV this is just a HVM guest. What's more, I'm not really
> sure xl/libxl has the right options to create a HVM guest _without_
> exposing any PV interfaces.
> 
> Ie: can a HVM guest without PV timers and PV event channels
> actually be created? Or even without having the MSR to initialize the
> hypercall page.

This document has its sources in the "feature support" page.  "PVHVM" is
a collective term that was used at the time for exposing a number of
individual interfaces to the guest; I think a lot of that work happened
around the 4.2-4.3 timeframe.  And *one* of the goals, if I understand
correctly, is to allow the automatic generation of such a table from the
Xen sources.

It may be that we don't need to mention this as a separate feature
anymore; or it may be that we can categorize this differently somehow --
I'm open to suggestions here.

>> +    Status: Supported
>> +
>> +Fully virtualised guest using PV extensions/drivers for improved performance
>> +
>> +Requires hardware virtualisation support
>> +
>> +### x86/PVH guest
>> +
>> +    Status: Preview
>> +
>> +PVHv2 guest support
>> +
>> +Requires hardware virtualisation support
>> +
>> +### x86/PVH dom0
>               ^ v2
>> +
>> +    Status: Experimental
> 
> The status of this is just "not finished". We need at least the PCI
> emulation series for having a half-functional PVHv2 Dom0.

From the definition of 'Experimental':

    Functional completeness: No
    Functional stability: Here be dragons
    Interface stability: Not stable
    Security supported: No

"Not finished" -> Functional completeness: No -> Experimental.

If there's no way of doing anything with dom0 at all we should probably
just remove it from the list.

>> +PVHv2 domain 0 support
>> +
>> +### ARM guest
>> +
>> +    Status: Supported
>> +
>> +ARM only has one guest type at the moment
>> +
>> +## Limits/Host
>> +
>> +### CPUs
>> +
>> +    Limit, x86: 4095
>> +    Limit, ARM32: 8
>> +    Limit, ARM64: 128
>> +
>> +Note that for x86, very large number of cpus may not work/boot,
>> +but we will still provide security support
>> +
>> +### x86/RAM
>> +
>> +    Limit, x86: 16TiB
>> +    Limit, ARM32: 16GiB
>> +    Limit, ARM64: 5TiB
>> +
>> +[XXX: Andy to suggest what this should say for x86]
>> +
>> +## Limits/Guest
>> +
>> +### Virtual CPUs
>> +
>> +    Limit, x86 PV: 512
>> +    Limit, x86 HVM: 128
> 
> There has already been some discussion about the HVM vCPU limit due to
> other topics, is Xen really committed to providing security support
> for this case?
> 
> I would very much like to have a host in osstest capable of creating
> such a guest, plus maybe some XTF tests to stress it.

This is just copied from our currently-advertised limits.  Feel free to
propose a different limit.  In fact, this seems like a good place to use
Limit-Security (which, as you point out below, is defined but not used in
the document as posted).

>> +    Limit, ARM32: 8
>> +    Limit, ARM64: 128
>> +
>> +### x86/PV/Virtual RAM
>        ^ This seems wrong, "Guest RAM" maybe?

Oops -- Yeah, missed that one!

>> +
>> +    Limit, x86 PV: >1TB
> 
>  > 1TB? that seems kind of vague.

That's what I was given. :-)  Indeed, we need something more concrete --
I'll let someone who knows better propose something.

>> +    Limit, x86 HVM: 1TB
>> +    Limit, ARM32: 16GiB
>> +    Limit, ARM64: 1TB
>> +
>> +### x86 PV/Event Channels
>> +
>> +    Limit: 131072
>> +
>> +## Toolstack
>> +
>> +### xl
>> +
>> +    Status: Supported
>> +
>> +### Direct-boot kernel image format
>> +
>> +    Supported, x86: bzImage
> 
> This should be:
> 
> Supported, x86: bzImage, ELF
> 
> FreeBSD kernel is just a plain ELF binary that's loaded using
> libelf. It should also be suitable for ARM, but I have no idea whether
> it has been tested on ARM at all.

Ack

> 
>> +    Supported, ARM32: zImage
>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
>> +
>> +Format which the toolstack accept for direct-boot kernels
>> +
>> +### Qemu based disk backend (qdisk) for xl
>> +
>> +    Status: Supported
>> +
>> +### Open vSwitch integration for xl
>> +
>> +    Status: Supported
>> +
>> +### systemd support for xl
>> +
>> +    Status: Supported
>> +
>> +### JSON support for xl
>> +
>> +    Status: Preview
>> +
>> +### AHCI support for xl
>> +
>> +    Status, x86: Supported
>> +
>> +### ACPI guest
>> +
>> +    Status, ARM: Preview
>        Status: Supported
> 
> HVM guests have been using ACPI for a long time on x86.

You mean 'Status, x86 HVM: Supported', I take it?


>> +### Virtual cpu hotplug
>> +
>> +    Status, ARM: Supported
> 
> Status: Supported
> 
> On x86 it is supported for both HVM and PV. HVM can use ACPI, PV uses
> xenstore.

Ack

>> +### Guest serial sonsole
>> +
>> +    Status: Supported
>> +
>> +Logs key hypervisor and Dom0 kernel events to a file
> 
> What's "Guest serial console"? Is it xenconsoled? Does it log Dom0
> kernel events?

Oh -- sorry, I changed the title because I couldn't figure out what it
was supposed to mean, but apparently didn't read the description very
well.  But of course the description is bogus anyway -- host serial
consoles don't log things to a file.

Lars, what was originally meant here?

>> +### Transcendent Memory
>> +
>> +    Status: Experimental
> 
> Some text here might be nice, although I don't even know myself what
> the purpose of tmem is.

Konrad / Boris, do you want to add anything?

I could come up with a short description too.

>> +### Fair locks (ticket-locks)
>> +
>> +    Status: Supported
>> +
>> +[XXX Is this host ticket locks?  Or some sort of guest PV ticket locks?  If the former it doesn't make any sense to call it 'supported' because they're either there or not.]
> 
> Isn't that the spinlock implementation used by Xen internally? In any
> case, I don't think this should be on the list at all.

I was tidying up a list I got from Ian, who in turn got it from Lars.
Your interpretation (and your conclusion) seems best to me, but I wanted
to give them an opportunity to say otherwise.

>> +### Blkfront
>> +
>> +    Status, Linux: Supported
>> +    Status, FreeBSD: Supported, Security support external
> 
> Status, NetBSD: Supported, Security support external
> 
>> +    Status, Windows: Supported [XXX]
>> +
>> +Guest-side driver capable of speaking the Xen PV block protocol
> 
> It feels kind of silly to list code that's not part of our project; I
> understand this is done because Linux lacks a security process and we
> are nice people, but IMHO this should be managed by the security team
> of each external project (or live with the fact that there's none).

Well the purpose of this document isn't *only* to say what's security
supported; it's also to help define new feature support, set
expectations for functionality, &c.

Additionally, regarding security:

1. For the most part our project wrote the Linux code, so it makes sense
for us to support it

2. Windows is included as well, and that is explicitly a XenProject
subproject.

Maybe we should just have a section that points out that most code is
maintained by the projects that contain it, so we don't have to repeat it?

>> +### Netfront
>> +
>> +    Status, Linux: Supported
>> +    Status, FreeBSD: Supported, Security support external
> 
> Status, NetBSD: Supported, Security support external
> Status, OpenBSD: Supported, Security support external
> 
>> +    States, Windows: Supported [XXX]
>> +
>> +Guest-side driver capable of speaking the Xen PV networking protocol
> 
> https://www.freebsd.org/security/
> http://www.netbsd.org/support/security/
> https://www.openbsd.org/security.html

Ack

>> +### Xen Framebuffer
>> +
>> +    Status, Linux (xen-fbfront): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
>> +
>> +[XXX FreeBSD? NetBSD?]
> 
> I don't think so.

Thanks

> 
>> +
>> +### Xen Console
>> +
>> +    Status, Linux (hvc_xen): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV console protocol
>> +
>> +[XXX FreeBSD? NetBSD? Windows?]
> 
> Status NetBSD, FreeBSD: Supported, Security support external
> 
> [...]
>> +Host-side implementaiton of the Xen PV framebuffer protocol
>> +
>> +### Xen Console
>> +
>> +    Status, Linux: Supported
> 
> There's no Linux host side (backend) of the PV console, it's
> xenconsoled. It should be:
> 
> Status: Supported
> 
> IMHO.

What you say makes sense, but I didn't pull the 'QEMU' thing out of
nowhere -- I'm pretty sure that was listed somewhere.  Let me see if I
can dig that out.

>> +    Status, QEMU: Supported
>> +
>> +Host-side implementation of the Xen PV console protocol
>> +
>> +### Xen PV keyboard
>> +
>> +    Status, Linux: Supported
> 
> Is there a Linux backend for this? I thought the only backend was in
> QEMU.

Oh, I bet this is where I was getting confused.

>> +### Xen PV USB
>> +
>> +    Status, Linux: Experimental
>> +    Status, QEMU: Supported
> 
> Not sure about this either, do we consider both the PV backend and the
> QEMU emulation? Is the USB PV backend inside of Linux?

There exist patches floating around for Linux PVUSB backend that worked
at some point.

In the case of QEMU, I'm talking specifically about the PVUSB backend
that Juergen implemented (similar to the blkback instance in QEMU).
That was checked in some time ago and I'm pretty sure is being actively
used by SuSE.

>> +### Xen PV TPM
>> +
>> +    Status, Linux: Supported
> 
> Again this backend runs in user-space IIRC, which means it's not Linux
> specific.

Ack

>> +### Online resize of virtual disks
>> +
>> +    Status: Supported
> 
> That pretty much depends on where you are actually storing your disks
> I guess. I'm not sure we want to make such compromises.

What do you mean?

>> +### Live Patching
>> +
>> +    Status: Supported, x86 only
> 
> Status, x86: Supported
> Status, ARM: Preview | Experimental?
> 
> Not sure which one is best.

Ah, missed this one, thanks.

>> +### Virtual Machine Introspection
>> +
>> +    Status: Supported, x86 only
> 
> Status, x86: Supported.

Ack

>> +### vTPM Support
>> +
>> +    Status: Supported, x86 only
> 
> How's that different from the "Xen PV TPM" item above?

Yeah, missed this duplication.  I'll remove this one.

>> +### Intel/TXT ???
>> +
>> +    Status: ???
>> +
>> +TXT-based integrity system for the Linux kernel and Xen hypervisor
>> +
>> +[XXX]
>> +
>> +## Hardware
>> +
>> +### x86/Nested Virtualization
>> +
>> +    Status: Experimental
> 
> Status, x86: Experimental.

Ack.

>> +
>> +Running a hypervisor inside an HVM guest
> 
> I would write that as: "Providing hardware virtualization extensions
> to HVM guests."

Good catch -- actually we should probably have a separate entry for
Nested PV (which works -- not sure whether we want to support it or not).

>> +### x86/HVM iPXE
>> +
>> +    Status: Supported, with caveats
>> +
>> +Booting a guest via PXE.
>> +PXE inherently places full trust of the guest in the network,
>> +and so should only be used
>> +when the guest network is under the same administrative control
>> +as the guest itself.
> 
> Hm, not sure why this needs to be spelled out, it's just like running
> any bootloader/firmware inside a HVM guest, which I'm quite sure we
> are not going to list here.
> 
> Ie: I don't see us listing OVMF, SeaBIOS or ROMBIOS, simply because
> they run inside the guest, so if they are able to cause security
> issues, anything else is also capable of causing them.

Well iPXE is a feature, so we have to say something about it; and there
was a long discussion at the Summit about whether we should list iPXE as
"security supported", because *by design* it just runs random code that
someone sends it over the network.  But if we say it's not supported, it
makes it sound like we think you shouldn't use it.

Above was the agreed-upon compromise: to say it was supported but warn
people what "supported" means.

>> +### ARM/SMMU
>> +
>> +    Status: Supported, with caveats
>> +
>> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
> 
> I'm not sure of the purpose of this sentence, it's quite clear that
> the SMMU is only supported if available. Also, I'm not sure this
> should be spelled out in this document, x86 doesn't have a VT-d or SVM
> section.

This sentence means, "An SMMU designed by ARM", as opposed to an SMMU
(or SMMU-like thing) designed by someone other than ARM.  (And yes, I
understand that such things existed before the ARM SMMU came out.)

I think people running ARM systems will understand what the sentence means.

>> +### ARM/ITS
>> +
>> +    Status: experimental
>> +
>> +[XXX What is this?]
>> +
>> +### ARM: 16K and 64K pages in guests
> 
> Newline

Ack

> 
>> +    Status: Supported, with caveats
>> +
>> +No support for QEMU backends in a 16K or 64K domain.
>> +
> 
> Extra newline.

Ack

>> +# Format and definitions
>> +
>> +This file contains prose, and machine-readable fragments.
>> +The data in a machine-readable fragment relate to
>> +the section and subection in which it is fine.
>                                          ^ belongs?

I think this should probably be 'found'.

>> +The file is in markdown format.
>> +The machine-readable fragments are markdown literals
>> +containing RFC-822-like (deb822-like) data.
>> +
>> +## Keys found in the Feature Support subsections
>> +
>> +### Status
>> +
>> +This gives the overall status of the feature,
>> +including security support status, functional completeness, etc.
>> +Refer to the detailed definitions below.
>> +
>> +If support differs based on implementation
>> +(for instance, x86 / ARM, Linux / QEMU / FreeBSD),
>> +one line for each set of implementations will be listed.
>> +
>> +### Restrictions
>> +
>> +This is a summary of any restrictions which apply,
>> +particularly to functional or security support.
>> +
>> +Full details of restrictions may be provided in the prose
>> +section of the feature entry,
>> +if a Restrictions tag is present.
> 
> Formatting seems weird IMHO.

To quote the changelog:

"The document should be formatted using "semantic newlines" [1], to make
changes easier.

"[1] http://rhodesmill.org/brandon/2012/one-sentence-per-line/"

>> +### Limit-Security
>> +
>> +For size limits.
>> +This figure shows the largest configuration which will receive
>> +security support.
>> +This does not mean that such a configuration will actually work.
>> +This limit will only be listed explicitly
>> +if it is different than the theoretical limit.
> 
> There's no usage of this at all in the document I think.

There was, but all the "Limit-Security" options were the same as the
"Limit" options, so they all ended up taken out.  I expect that at least
a handful will make their way into the final document.
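
For instance (numbers purely illustrative), an entry where the two
differ might end up reading:

    Limit, x86 HVM: 512
    Limit-Security, x86 HVM: 128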

Thanks!
 -George
George Dunlap Sept. 7, 2017, 11:31 a.m. UTC | #10
On 08/31/2017 01:46 PM, Jan Beulich wrote:
>>>> On 31.08.17 at 12:27, <george.dunlap@citrix.com> wrote:
>> vMCE: Is MCE an x86-only thing, or could this conceivably be extended
>> to ARM?
> 
> I think this can't be reasonably extended beyond x86 (and,
> considering their similar origin, ia64).

OK, I'll change this to "x86/vMCE" then.

>> +## Tooling
>> +
>> +### gdbsx
>> +
>> +    Status, x86: Supported
>> +
>> +Debugger to debug ELF guests
>> +
>> +### vPMU
>> +
>> +    Status, x86: Supported, Not security supported
>> +
>> +Virtual Performance Management Unit for HVM guests
> 
> Why is this under Tooling?

Perhaps 'tooling' isn't the right name for this section; it includes:

- gdbsx
- vpmu
- guest serial console
- xentrace
- gcov

All of the other features have something to do with looking into the
guest / hypervisor and figuring out what's wrong.

But in any case, vPMU is more about allowing in-guest tools to analyze
the performance of the guest itself; as such it should probably live
somewhere else.  I've moved it under "## Hardware".

>> +## Scalability
>> +
>> +### 1GB/2MB super page support
>> +
>> +    Status: Supported
> 
> Is this a host, guest, CPU, and/or IOMMU capability? Do the same
> superpage sizes apply to 16k/64k-page-size ARM? 

I'd say the useful thing to talk about is guest support.  Let me think
about how to reword this.

>> +### Fair locks (ticket-locks)
>> +
>> +    Status: Supported
> 
> ... here I wonder whether these are legitimately on this list in the
> first place. Admins have no way to avoid their use.

I've deleted this item.

>> +### Live Patching
>> +
>> +    Status: Supported, x86 only
>> +
>> +Compile time disabled
> 
> Bu we're settled to change that, aren't we? It was even meant to be
> so in 4.9, but then didn't make it.

Change the compile time disabling?  I don't really know. :-)

What gets checked in should ideally be true at the time it's checked in.

>> +### Virtual Machine Introspection
>> +
>> +    Status: Supported, x86 only
> 
> Including security support?

Not sure, actually.  Opinions?

>> +### x86/Advanced Vector eXtension
>> +
>> +    Status: Supported
> 
> How fine-grained do we want this document to be? If this one is a
> valid entry, then many other CPUID bits will need to have entries
> too.

Well remember that this list came from the "Feature support matrix",
which was also meant to announce / brag about new features we were
developing.

This is already really long.  Anything that becomes accessible to guests
by default (which AVX instructions are) must be supported (including
security support).  I wonder if there's a better way to specify this
sort of thing.

> Having reached the end of the list I further wonder whether we
> shouldn't add information on various hypercalls and their subops.
> I.e. a full walk through include/public/ may be needed to see
> what additional entries may be necessary or desirable.

Yes, probably useful.

>> +# Format and definitions
>> +
>> +This file contains prose, and machine-readable fragments.
>> +The data in a machine-readable fragment relate to
>> +the section and subection in which it is fine.
> 
> "subsection" and s/fine/found/ ?

Ack.

> 
>> +## Definition of Status labels
>> +
>> +Each Status value corresponds to levels of security support,
>> +testing, stability, etc., as follows:
>> +
>> +### Experimental
>> +
>> +    Functional completeness: No
>> +    Functional stability: Here be dragons
>> +    Interface stability: Not stable
>> +    Security supported: No
>> +
>> +### Tech Preview
> 
> I think most if not all entries using this say just "Preview" - I think
> the terms would better fully match.

I like 'Tech Preview' better, so unless someone objects I'll change them
all to 'Tech Preview'.

I was originally using only 'Preview' because I thought a single word
would be easier to parse; but we have "not security supported"
anyway, so might as well go with what sounds better.

Thanks,
 -George
Jan Beulich Sept. 7, 2017, 11:50 a.m. UTC | #11
>>> On 07.09.17 at 13:31, <george.dunlap@citrix.com> wrote:
> On 08/31/2017 01:46 PM, Jan Beulich wrote:
>>>>> On 31.08.17 at 12:27, <george.dunlap@citrix.com> wrote:
>>> +### Live Patching
>>> +
>>> +    Status: Supported, x86 only
>>> +
>>> +Compile time disabled
>> 
>> But we're settled to change that, aren't we? It was even meant to be
>> so in 4.9, but then didn't make it.
> 
> Change the compile time disabling?  I don't really know. :-)

Yeah, well, that series is taking awfully long to become ready to go
in. Konrad?

> What gets checked in should ideally be true at the time it's checked in.

Agreed.

>>> +### Virtual Machine Introspection
>>> +
>>> +    Status: Supported, x86 only
>> 
>> Including security support?
> 
> Not sure, actually.  Opinions?

So far it was my understanding that this is at best preview.

>>> +### x86/Advanced Vector eXtension
>>> +
>>> +    Status: Supported
>> 
>> How fine-grained do we want this document to be? If this one is a
>> valid entry, then many other CPUID bits will need to have entries
>> too.
> 
> Well remember that this list came from the "Feature support matrix",
> which was also meant to announce / brag about new features we were
> developing.
> 
> This is already really long.  Anything that becomes accessible to guests
> by default (which AVX instructions are) must be supported (including
> security support).  I wonder if there's a better way to specify this
> sort of thing.

One option may be to refer to public/arch-x86/cpufeatureset.h,
but of course that would require it to gain support annotations,
which in turn may be ugly. Short of enumerating all supported
CPUID flags here, I can't think of better alternatives.

Jan
George Dunlap Sept. 7, 2017, 1:52 p.m. UTC | #12
On 09/01/2017 04:00 PM, Wei Liu wrote:
> On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
>> +### Direct-boot kernel image format
>> +
>> +    Supported, x86: bzImage
> 
> Do you mean booting a PV guest? If so there are a few more formats.
> 
>> +    Supported, ARM32: zImage
>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
>> +
> >> +Format which the toolstack accepts for direct-boot kernels
> [...]
>> +### JSON support for xl
>> +
>> +    Status: Preview
>> +
> 
> What is this?

JSON output; e.g., `xl list -l`.

Perhaps this should be called 'JSON output support'. :-)

>> +### AHCI support for xl
>> +
>> +    Status, x86: Supported
>> +
> 
> There is only one knob to change; I'm not sure whether it makes sense to
> list it separately.
> 
>> +### Soft-reset for xl
>> +
>> +    Status: Supported
>> +
> 
> We never tested this in osstest so I'm not sure if this is the
> correct status. Furthermore there are also moving parts in the hypervisor.

Hmm, maybe this would go better under a hypervisor section somewhere; as
you say, the core functionality doesn't reside in xl; xl just enables it.

Strangely enough, we don't have a simple 'hypervisor' section.

>> +### Online resize of virtual disks
>> +
>> +    Status: Supported
> 
> What is this? Is this part of the PV block protocol?

I think so, yes.

 -George
Roger Pau Monne Sept. 7, 2017, 1:57 p.m. UTC | #13
On Thu, Sep 07, 2017 at 11:49:00AM +0100, George Dunlap wrote:
> On 08/31/2017 12:25 PM, Roger Pau Monne wrote:
> > On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
> >> +### x86/PV-on-HVM
> > 
> > Do we really consider this a guest type? From both Xen and the
> > toolstack PoV this is just a HVM guest. What's more, I'm not really
> > sure xl/libxl has the right options to create a HVM guest _without_
> > exposing any PV interfaces.
> > 
> Ie: can a HVM guest without PV timers and PV event channels
> > actually be created? Or even without having the MSR to initialize the
> > hypercall page.
> 
> This document has its sources in the "feature support" page.  "PVHVM" is
> a collective term that was used at the time for exposing a number of
> individual interfaces to the guest; I think a lot of that work happened
> around the 4.2-4.3 timeframe.  And *one* of the goals, if I understand
> correctly, is to allow the automatic generation of such a table from the
> Xen sources.
> 
> It may be that we don't need to mention this as a separate feature
> anymore; or it may be that we can categorize this differently somehow --
> I'm open to suggestions here.

We marketed this as PVHVM, but I think this term applies to OSes
rather than Xen guest types.

From a Xen PoV, they are just HVM OSes.  Some of them make use of more
PV interfaces than others, but all HVM guests have the same set of
interfaces available to them, and thus the same surface of attack.

I don't think it makes sense to list PVHVM as a guest type in the list
of supported features.

> >> +### x86/PVH dom0
> >               ^ v2
> >> +
> >> +    Status: Experimental
> > 
> > The status of this is just "not finished". We need at least the PCI
> > emulation series for having a half-functional PVHv2 Dom0.
> 
> From the definition of 'Experimental':
> 
>     Functional completeness: No
>     Functional stability: Here be dragons
>     Interface stability: Not stable
>     Security supported: No
> 
> "Not finished" -> Functional completeness: No -> Experimental.
> 
> If there's no way of doing anything with dom0 at all we should probably
> just remove it from the list.

Right now it should be removed according to the logic above.

> >> +### ACPI guest
> >> +
> >> +    Status, ARM: Preview
> >        Status: Supported
> > 
> > HVM guests have been using ACPI for a long time on x86.
> 
> You mean 'Status, x86 HVM: Supported', I take it?

Right.

> >> +### Online resize of virtual disks
> >> +
> >> +    Status: Supported
> > 
> > That pretty much depends on where you are actually storing your disks
> > I guess. I'm not sure we want to make such compromises.
> 
> What do you mean?

I'm not sure online resizing is something that needs spelling out
separately, it's part of how the block protocol works, just like
indirect descriptors or persistent grants (which are also not listed
here, and I think that's fine).

> >> +### x86/HVM iPXE
> >> +
> >> +    Status: Supported, with caveats
> >> +
> >> +Booting a guest via PXE.
> >> +PXE inherently places full trust of the guest in the network,
> >> +and so should only be used
> >> +when the guest network is under the same administrative control
> >> +as the guest itself.
> > 
> > Hm, not sure why this needs to be spelled out, it's just like running
> > any bootloader/firmware inside a HVM guest, which I'm quite sure we
> > are not going to list here.
> > 
> > Ie: I don't see us listing OVMF, SeaBIOS or ROMBIOS, simply because
> > they run inside the guest, so if they are able to cause security
> > issues, anything else is also capable of causing them.
> 
> Well iPXE is a feature, so we have to say something about it; and there
> was a long discussion at the Summit about whether we should list iPXE as
> "security supported", because *by design* it just runs random code that
> someone sends it over the network.  But if we say it's not supported, it
> makes it sound like we think you shouldn't use it.
> 
> Above was the agreed-upon compromise: to say it was supported but warn
> people what "supported" means.

Hm, I'm still not sure this should be explicitly listed here.

Running random code inside of a guest is not a problem from Xen's PoV,
and we would never issue a XSA, unless such code is able to break
outside of the guest, in which case it doesn't matter whether the code
has been randomly fetched from the network.

IMHO iPXE is just like any other firmware that Xen supports, such as
OVMF/SeaBIOS/ROMBIOS, and I don't see them listed here. I'm not sure
in which way iPXE is special from the other ones that requires such an
entry in the support document.

> >> +### ARM/SMMU
> >> +
> >> +    Status: Supported, with caveats
> >> +
> >> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
> > 
> > I'm not sure of the purpose of this sentence, it's quite clear that
> > the SMMU is only supported if available. Also, I'm not sure this
> > should be spelled out in this document, x86 doesn't have a VT-d or SVM
> > section.
> 
> This sentence means, "An SMMU designed by ARM", as opposed to an SMMU
> (or SMMU-like thing) designed by someone other than ARM.  (And yes, I
> understand that such things existed before the ARM SMMU came out.)
> 
> I think people running ARM systems will understand what the sentence means.

Oh, thanks. I didn't know there was such a difference in the ARM
world.

Thanks, Roger.
George Dunlap Sept. 7, 2017, 2:42 p.m. UTC | #14
On 09/07/2017 02:57 PM, Roger Pau Monné wrote:
> On Thu, Sep 07, 2017 at 11:49:00AM +0100, George Dunlap wrote:
>> On 08/31/2017 12:25 PM, Roger Pau Monne wrote:
>>> On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:

[snip]

>>>> +### x86/PVH dom0
>>>               ^ v2
>>>> +
>>>> +    Status: Experimental
>>>
>>> The status of this is just "not finished". We need at least the PCI
>>> emulation series for having a half-functional PVHv2 Dom0.
>>
>> From the definition of 'Experimental':
>>
>>     Functional completeness: No
>>     Functional stability: Here be dragons
>>     Interface stability: Not stable
>>     Security supported: No
>>
>> "Not finished" -> Functional completeness: No -> Experimental.
>>
>> If there's no way of doing anything with dom0 at all we should probably
>> just remove it from the list.
> 
> Right now it should be removed according to the logic above.

Fair enough.

>>>> +### Online resize of virtual disks
>>>> +
>>>> +    Status: Supported
>>>
>>> That pretty much depends on where you are actually storing your disks
>>> I guess. I'm not sure we want to make such compromises.
>>
>> What do you mean?
> 
> I'm not sure online resizing is something that needs spelling out
> separately, it's part of how the block protocol works, just like
> indirect descriptors or persistent grants (which are also not listed
> here, and I think that's fine).
> 
>>>> +### x86/HVM iPXE
>>>> +
>>>> +    Status: Supported, with caveats
>>>> +
>>>> +Booting a guest via PXE.
>>>> +PXE inherently places full trust of the guest in the network,
>>>> +and so should only be used
>>>> +when the guest network is under the same administrative control
>>>> +as the guest itself.
>>>
>>> Hm, not sure why this needs to be spelled out, it's just like running
>>> any bootloader/firmware inside a HVM guest, which I'm quite sure we
>>> are not going to list here.
>>>
>>> Ie: I don't see us listing OVMF, SeaBIOS or ROMBIOS, simply because
>>> they run inside the guest, so if they are able to cause security
>>> issues, anything else is also capable of causing them.
>>
>> Well iPXE is a feature, so we have to say something about it; and there
>> was a long discussion at the Summit about whether we should list iPXE as
>> "security supported", because *by design* it just runs random code that
>> someone sends it over the network.  But if we say it's not supported, it
>> makes it sound like we think you shouldn't use it.
>>
>> Above was the agreed-upon compromise: to say it was supported but warn
>> people what "supported" means.
> 
> Hm, I'm still not sure this should be explicitly listed here.
> 
> Running random code inside of a guest is not a problem from Xen's PoV,
> and we would never issue a XSA, unless such code is able to break
> outside of the guest, in which case it doesn't matter whether the code
> has been randomly fetched from the network.

That's not true at all.  There are two security boundaries within the
guest: user space -> kernel space, and outside -> [anything].  If there
was a bug in the QEMU network card which allowed a crafted packet to
write into unauthorized memory in the guest, that would be a security
vulnerability, as would a bug which allowed a guest user to make
unauthorized changes to the guest kernel memory (or unauthorized access
to guest devices, &c).

> IMHO iPXE is just like any other firmware that Xen supports, such as
> OVMF/SeaBIOS/ROMBIOS, and I don't see them listed here. I'm not sure
> in which way iPXE is special from the other ones that requires such an
> entry in the support document.

Well first of all, the purpose of this document is in fact in part to
enumerate features, not only to talk about security support.  As such,
we probably should list OVMF / BIOS booting as features. :-)

Secondly, I agree that it's hard to imagine what a genuine security bug
in one of these things (iPXE, BIOS, OVMF) would look like.  However, *if
we did find* a bug in one of those pieces of code that allowed an entity
(either guest user or someone not in the guest at all) to make
unauthorized changes, we would definitely need to issue an XSA for it.
That is what "security supported" means.

[snip]

> From a Xen PoV, they are just HVM OSes.  Some of them make use of more
> PV interfaces than others, but all HVM guests have the same set of
> interfaces available to them, and thus the same surface of attack.

Similar to the argument above: XSAs will not only be issued for
violations of the guest -> Xen privilege boundary, but also for the
guest user -> guest kernel privilege boundary.  If one of the
PVHVM-related features has a bug such that it allows a user -> kernel
escalation, we will issue an XSA for it.

> I don't think it makes sense to list PVHVM as a guest type in the list
> of supported features.

Maybe we should group these as "PVHVM acceleration" and put them in a
different section, rather than calling them a separate guest type.

[snip]

>>>> +### ARM/SMMU
>>>> +
>>>> +    Status: Supported, with caveats
>>>> +
>>>> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
>>>
>>> I'm not sure of the purpose of this sentence, it's quite clear that
>>> the SMMU is only supported if available. Also, I'm not sure this
>>> should be spelled out in this document, x86 doesn't have a VT-d or SVM
>>> section.
>>
>> This sentence means, "An SMMU designed by ARM", as opposed to an SMMU
>> (or SMMU-like thing) designed by someone other than ARM.  (And yes, I
>> understand that such things existed before the ARM SMMU came out.)
>>
>> I think people running ARM systems will understand what the sentence means.
> 
> Oh, thanks. I didn't know there was such a difference in the ARM
> world.

Yeah, I think in the x86 world Intel basically both designs and makes
all the chips necessary to build a system; in the historical ARM
ecosystem, a lot of the necessary "motherboard" chips have been made or
designed by the people building the embedded system.  So it was more
natural, before ARM had its own SMMU design, for a vendor to step up and
design / build one of their own.

But apparently in this case that one wasn't very good, and was quickly
superseded by the ARM one.  But since it exists, we have to clarify that
it's only the ARM-designed one which is supported.

 -George
Wei Liu Sept. 7, 2017, 2:56 p.m. UTC | #15
On Thu, Sep 07, 2017 at 02:52:49PM +0100, George Dunlap wrote:
> On 09/01/2017 04:00 PM, Wei Liu wrote:
> > On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
> >> +### Direct-boot kernel image format
> >> +
> >> +    Supported, x86: bzImage
> > 
> > Do you mean booting a PV guest? If so there are a few more formats.
> > 
> >> +    Supported, ARM32: zImage
> >> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> >> +
> >> +Format which the toolstack accepts for direct-boot kernels
> > [...]
> >> +### JSON support for xl
> >> +
> >> +    Status: Preview
> >> +
> > 
> > What is this?
> 
> JSON output; e.g., `xl list -l`.
> 
> Perhaps this should be called 'JSON output support'. :-)
> 

OK. Anyway, no security support for this please. I'm not even very sure
if the output is going to be stable.

> >> +### AHCI support for xl
> >> +
> >> +    Status, x86: Supported
> >> +
> > 
>>> There is only one knob to change; I'm not sure whether it makes sense to
> > list it separately.
> > 
> >> +### Soft-reset for xl
> >> +
> >> +    Status: Supported
> >> +
> > 
>>> We never tested this in osstest so I'm not sure if this is the
>>> correct status. Furthermore there are also moving parts in the hypervisor.
> 
> Hmm, maybe this would go better under a hypervisor section somewhere; as
> you say, the core functionality doesn't reside in xl, xl just enables it.
> 

A bit more than that, there are moving parts in libxl to handle that as
well -- some initialisation needs to be skipped or whatever, some can't.
Stefano Stabellini Sept. 7, 2017, 9:36 p.m. UTC | #16
On Thu, 31 Aug 2017, Roger Pau Monne wrote:
> > +### ARM/Non-PCI device passthrough
> > +
> > +    Status: Supported
> 
> I guess non-pci devices on ARM also use the IOMMU? (SMMU)

Yes, they do.


> > +### ARM/SMMU
> > +
> > +    Status: Supported, with caveats
> > +
> > +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
> 
> I'm not sure of the purpose of this sentence, it's quite clear that
> the SMMU is only supported if available. Also, I'm not sure this
> should be spelled out in this document, x86 doesn't have a VT-d or SVM
> section.

As George wrote, there are many SMMUs in the market for ARM based
platforms, not all of them of ARM design.
Stefano Stabellini Sept. 7, 2017, 9:38 p.m. UTC | #17
On Thu, 31 Aug 2017, Jan Beulich wrote:
> >>> On 31.08.17 at 12:27, <george.dunlap@citrix.com> wrote:
> > vMCE: Is MCE an x86-only thing, or could this conceivably by extended
> > to ARM?
> 
> I think this can't be reasonably extended beyond x86 (and,
> considering their similar origin, ia64).

Yes, Jan is right. ARM has SErrors today, and might have something
better in the future, but I doubt they will be called MCEs anyway.
Stefano Stabellini Sept. 7, 2017, 9:54 p.m. UTC | #18
On Thu, 31 Aug 2017, George Dunlap wrote:
> +### Direct-boot kernel image format
> +
> +    Supported, x86: bzImage
> +    Supported, ARM32: zImage
> +    Supported, ARM64: Image [XXX - Not sure if this is correct]

On ARM64 it's called Image.gz.


> +Format which the toolstack accepts for direct-boot kernels
> +
> +### Qemu based disk backend (qdisk) for xl
> +
> +    Status: Supported
> +
> +### Open vSwitch integration for xl
> +
> +    Status: Supported
> +
> +### systemd support for xl
> +
> +    Status: Supported
> +
> +### JSON support for xl
> +
> +    Status: Preview
> +
> +### AHCI support for xl
> +
> +    Status, x86: Supported
> +
> +### ACPI guest
> +
> +    Status, ARM: Preview
> +
> +### PVUSB support for xl
> +
> +    Status: Supported
> +
> +### HVM USB passthrough for xl
> +
> +    Status, x86: Supported
> +
> +### QEMU backend hotplugging for xl
> +
> +    Status: Supported
> +
> +### Soft-reset for xl
> +
> +    Status: Supported
> +
> +### Virtual cpu hotplug
> +
> +    Status, ARM: Supported
> +
> +## Toolstack/3rd party
> +
> +### libvirt driver for xl
> +
> +    Status: Supported, Security support external
> +
> +Security support for libvirt is provided by the libvirt project.
> +See https://libvirt.org/securityprocess.html
> +
> +## Tooling
> +
> +### gdbsx
> +
> +    Status, x86: Supported
> +
> +Debugger to debug ELF guests
> +
> +### vPMU
> +
> +    Status, x86: Supported, Not security supported
> +
> +Virtual Performance Management Unit for HVM guests
> +
> +Disabled by default (enable with hypervisor command line option).
> +This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
> +
> +### Guest serial console
> +
> +    Status: Supported
> +
> +Logs key hypervisor and Dom0 kernel events to a file
> +
> +### xentrace
> +
> +    Status, x86: Supported
> +
> +Tool to capture Xen trace buffer data
> +
> +### gcov
> +
> +    Status: Supported, Not security supported
> +
> +## Memory Management
> +
> +### Memory Ballooning
> +
> +    Status: Supported
> +
> +### Memory Sharing
> +
> +    Status, x86 HVM: Preview
> +    Status, ARM: Preview
> +
> +Allow sharing of identical pages between guests
> +
> +### Memory Paging
> +
> +    Status, x86 HVM: Experimental
> +
> +Allow pages belonging to guests to be paged to disk
> +
> +### Transcendent Memory
> +
> +    Status: Experimental
> +
> +### Alternative p2m
> +
> +    Status, x86: Preview
> +
> +Allows external monitoring of hypervisor memory using Intel EPT by maintaining multiple physical-memory-to-machine-physical (p2m) mappings
> +
> +[XXX Should this be x86/Alternative p2m?]

No, the technology could be available on ARM.


> +## Resource Management
> +
> +### CPU Pools
> +
> +    Status: Supported
> +
> +Groups physical cpus into distinct groups called "cpupools",
> +with each pool having the capability of using different schedulers and scheduling properties.
> +
> +### Credit Scheduler
> +
> +    Status: Supported
> +
> +The default scheduler, which is a weighted proportional fair share virtual CPU scheduler.
> +
> +### Credit2 Scheduler
> +
> +    Status: Supported
> +
> +Credit2 is a general purpose scheduler for Xen,
> +designed with particular focus on fairness, responsiveness and scalability
> +
> +### RTDS based Scheduler
> +
> +    Status: Experimental
> +
> +A soft real-time CPU scheduler built to provide guaranteed CPU capacity to guest VMs on SMP hosts
> +
> +### ARINC653 Scheduler
> +
> +    Status: Supported, Not security supported
> +
> +A periodically repeating fixed timeslice scheduler. Multicore support is not yet implemented.
> +
> +### Null Scheduler
> +
> +    Status: Experimental
> +
> +A very simple, very static scheduling policy that always schedules the same vCPU(s) on the same pCPU(s). It is designed for maximum determinism and minimum overhead on embedded platforms.

Can we say more than Experimental? I think it should be at least Tech
Preview.


> +### Numa scheduler affinity
> +
> +    Status, x86: Supported
> +
> +Enables Numa aware scheduling in Xen
> +
> +## Scalability
> +
> +### 1GB/2MB super page support
> +
> +    Status: Supported
> +
> +### x86/Deliver events to PVHVM guests using Xen event channels
> +
> +    Status: Supported
> +
> +### Fair locks (ticket-locks)
> +
> +    Status: Supported
> +
> +[XXX Is this host ticket locks?  Or some sort of guest PV ticket locks?  If the former it doesn't make any sense to call it 'supported' because they're either there or not.]
> +
> +## High Availability and Fault Tolerance
> +
> +### Live Migration, Save & Restore
> +
> +    Status, x86: Supported
> +
> +### Remus Fault Tolerance
> +
> +    Status: Experimental
> +
> +### COLO Manager
> +
> +    Status: Experimental
> +
> +### vMCE
> +
> +    Status, x86: Supported
> +
> +Forward Machine Check Exceptions to appropriate guests
> +
> +## Virtual driver support, guest side
> +
> +### Blkfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    Status, Windows: Supported [XXX]
> +
> +Guest-side driver capable of speaking the Xen PV block protocol
> +
> +### Netfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    Status, Windows: Supported [XXX]
> +
> +Guest-side driver capable of speaking the Xen PV networking protocol
> +
> +### Xen Framebuffer

Please write "Xen Framebuffer Frontend" in the title.


> +    Status, Linux (xen-fbfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
> +
> +[XXX FreeBSD? NetBSD?]
> +
> +### Xen Console

Please write frontend in the title


> +    Status, Linux (hvc_xen): Supported
> +
> +Guest-side driver capable of speaking the Xen PV console protocol
> +
> +[XXX FreeBSD? NetBSD? Windows?]
> +
> +### Xen PV keyboard

Please write frontend in the title


> +    Status, Linux (xen-kbdfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV keyboard protocol
> +
> +### Xen PVUSB protocol

Please write frontend in the title


> +    Status, Linux: Supported
> +
> +### Xen PV SCSI protocol

Please write frontend in the title


> +
> +    Status, Linux: [XXX]
> +
> +### Xen TPMfront
> +
> +    Status, Linux (xen-tpmfront): Preview
> +
> +Guest-side driver capable of speaking the Xen PV TPM protocol
> +
> +### Xen 9pfs frontend
> +
> +   Status, Linux: Preview
> +
> +Guest-side driver capable of speaking the Xen 9pfs protocol
> +
> +### PVCalls frontend
> +
> +   Status, Linux: Preview
> +
> +Guest-side driver capable of making pv system calls
> +
> +## Virtual device support, host side
> +
> +### Blkback
> +
> +    Status, Linux (blkback): Supported
> +    Status, FreeBSD (blkback): Supported
> +    Status, QEMU (xen_disk): Supported
> +    Status, Blktap2: Deprecated
> +
> +Host-side implementations of the Xen PV block protocol
> +
> +### Netback
> +
> +    Status, Linux (netback): Supported
> +    Status, FreeBSD (netback): Supported
> +    Status, QEMU (xen_nic): Experimental

I suggest to Deprecate xen_nic


> +Host-side implementations of the Xen PV network protocol
> +
> +### Xen Framebuffer

Please write backend in the title


> +    Status, Linux: Supported
> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV framebuffer protocol
> +
> +### Xen Console
> +

Please write backend in the title


> +    Status, Linux: Supported
> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV console protocol
> +
> +### Xen PV keyboard
> +

Please write backend in the title


> +    Status, Linux: Supported
> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV keyboard protocol
> +
> +### Xen PV USB
> +

Please write backend in the title


> +    Status, Linux: Experimental
> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV USB protocol
> +
> +### Xen PV SCSI protocol

Please write backend in the title


> +
> +    Status, Linux: [XXX]
> +
> +### Xen PV TPM

Please write backend in the title


> +
> +    Status, Linux: Supported
> +
> +### Xen 9pfs

Please write backend in the title


> +
> +    Status, QEMU: Preview
> +
> +### PVCalls

Please write backend in the title


> +
> +    Status, Linux: Preview
> +
> +### Online resize of virtual disks
> +
> +    Status: Supported
> +
> +## Security
> +
> +### Driver Domains
> +
> +    Status: Supported
> +
> +### Device Model Stub Domains
> +
> +    Status: Supported, with caveats
> +
> +Vulnerabilities of a device model stub domain to a hostile driver domain are excluded from security support.
> +
> +### KCONFIG Expert
> +
> +    Status: Experimental
> +
> +### Live Patching
> +
> +    Status: Supported, x86 only
> +
> +Compile time disabled
> +
> +### Virtual Machine Introspection
> +
> +    Status: Supported, x86 only
> +
> +### XSM & FLASK
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +### XSM & FLASK support for IS_PRIV
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +### vTPM Support
> +
> +    Status: Supported, x86 only

This should probably be x86/vTPM. TPM, the way we are discussing it, is
an x86-only implementation. ARM-based alternatives are not called TPM
AFAIK.



> +### Intel/TXT ???

Same here


> +    Status: ???
> +
> +TXT-based integrity system for the Linux kernel and Xen hypervisor
> +
> +[XXX]
> +
> +## Hardware
> +
> +### x86/Nested Virtualization
> +
> +    Status: Experimental
> +
> +Running a hypervisor inside an HVM guest
> +
> +### x86/HVM iPXE
> +
> +    Status: Supported, with caveats
> +
> +Booting a guest via PXE.
> +PXE inherently places full trust of the guest in the network,
> +and so should only be used
> +when the guest network is under the same administrative control
> +as the guest itself.
> +
> +### x86/Physical CPU Hotplug
> +
> +    Status: Supported
> +
> +### x86/Physical Memory Hotplug
> +
> +    Status: Supported
> +
> +### x86/PCI Passthrough PV
> +
> +    Status: Supported, Not security supported
> +
> +PV passthrough cannot be done safely.
> +
> +[XXX Not even with an IOMMU?]
> +
> +### x86/PCI Passthrough HVM
> +
> +    Status: Supported, with caveats
> +
> +Many hardware device and motherboard combinations cannot be used safely.
> +The XenProject will support bugs in PCI passthrough for Xen,
> +but the user is responsible for ensuring that the hardware combination they use
> +is sufficiently secure for their needs,
> +and should assume that any combination is insecure
> +unless they have reason to believe otherwise.
> +
> +### ARM/Non-PCI device passthrough
> +
> +    Status: Supported
> +
> +### x86/Advanced Vector eXtension
> +
> +    Status: Supported
> +
> +### Intel Platform QoS Technologies
> +
> +    Status: Preview
> +
> +### ARM/ACPI (host)
> +
> +    Status: Experimental
> +
> +### ARM/SMMU
> +
> +    Status: Supported, with caveats
> +
> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
> +
> +### ARM/ITS
> +
> +    Status: Experimental
> +
> +[XXX What is this?]

A particularly complex extension to the interrupt controller.


> +### ARM: 16K and 64K pages in guests
> +    Status: Supported, with caveats
> +
> +No support for QEMU backends in a 16K or 64K domain.
> +
> +
> +# Format and definitions
> +
> +This file contains prose, and machine-readable fragments.
> +The data in a machine-readable fragment relate to
> +the section and subection in which it is fine.
> +
> +The file is in markdown format.
> +The machine-readable fragments are markdown literals
> +containing RFC-822-like (deb822-like) data.
> +
> +## Keys found in the Feature Support subsections
> +
> +### Status
> +
> +This gives the overall status of the feature,
> +including security support status, functional completeness, etc.
> +Refer to the detailed definitions below.
> +
> +If support differs based on implementation
> +(for instance, x86 / ARM, Linux / QEMU / FreeBSD),
> +one line for each set of implementations will be listed.
> +
> +### Restrictions
> +
> +This is a summary of any restrictions which apply,
> +particularly to functional or security support.
> +
> +Full details of restrictions may be provided in the prose
> +section of the feature entry,
> +if a Restrictions tag is present.
> +
> +### Limit-Security
> +
> +For size limits.
> +This figure shows the largest configuration which will receive
> +security support.
> +This does not mean that such a configuration will actually work.
> +This limit will only be listed explicitly
> +if it is different than the theoretical limit.
> +
> +### Limit
> +
> +This figure shows a theoretical size limit.
> +This does not mean that such a large configuration will actually work.
> +
> +## Definition of Status labels
> +
> +Each Status value corresponds to levels of security support,
> +testing, stability, etc., as follows:
> +
> +### Experimental
> +
> +    Functional completeness: No
> +    Functional stability: Here be dragons
> +    Interface stability: Not stable
> +    Security supported: No
> +
> +### Tech Preview
> +
> +    Functional completeness: Yes
> +    Functional stability: Quirky
> +    Interface stability: Provisionally stable
> +    Security supported: No
> +
> +#### Supported
> +
> +    Functional completeness: Yes
> +    Functional stability: Normal
> +    Interface stability: Yes
> +    Security supported: Yes
> +
> +#### Deprecated
> +
> +    Functional completeness: Yes
> +    Functional stability: Quirky
> +    Interface stability: No (as in, may disappear the next release)
> +    Security supported: Yes
> +
> +All of these may appear in modified form.  There are several
> +interfaces, for instance, which are officially declared as not stable;
> +in such a case this feature may be described as "Stable / Interface
> +not stable".
> +
> +## Definition of the status label interpretation tags
> +
> +### Functionally complete
> +
> +Does it behave like a fully functional feature?
> +Does it work on all expected platforms,
> +or does it only work for a very specific sub-case?
> +Does it have a sensible UI,
> +or do you have to have a deep understanding of the internals
> +to get it to work properly?
> +
> +### Functional stability
> +
> +What is the risk of it exhibiting bugs?
> +
> +General answers to the above:
> +
> + * **Here be dragons**
> +
> +   Pretty likely to still crash / fail to work.
> +   Not recommended unless you like life on the bleeding edge.
> +
> + * **Quirky**
> +
> +   Mostly works but may have odd behavior here and there.
> +   Recommended for playing around or for non-production use cases.
> +
> + * **Normal**
> +
> +   Ready for production use
> +
> +### Interface stability
> +
> +If I build a system based on the current interfaces,
> +will they still work when I upgrade to the next version?
> +
> + * **Not stable**
> +
> +   Interface is still in the early stages and
> +   still fairly likely to be broken in future updates.
> +
> + * **Provisionally stable**
> +
> +   We're not yet promising backwards compatibility,
> +   but we think this is probably the final form of the interface.
> +   It may still require some tweaks.
> +
> + * **Stable**
> +
> +   We will try very hard to avoid breaking backwards compatibility,
> +   and to fix any regressions that are reported.
> +
> +### Security supported
> +
> +Will XSAs be issued if security-related bugs are discovered
> +in the functionality?
> +
> +If "no",
> +anyone who finds a security-related bug in the feature
> +will be advised to
> +post it publicly to the Xen Project mailing lists
> +(or contact another security response team,
> +if a relevant one exists).
> +
> +Bugs found after the end of **Security-Support-Until**
> +in the Release Support section will receive an XSA
> +if they also affect newer, security-supported, versions of Xen.
> +However,
> +the Xen Project will not provide official fixes
> +for non-security-supported versions.
> +
> +Three common 'diversions' from the 'Supported' category
> +are given the following labels:
> +
> +  * **Supported, Not security supported**
> +
> +    Functionally complete, normal stability,
> +    interface stable, but no security support
> +
> +  * **Supported, Security support external**
> +  
> +    This feature is security supported
> +    by a different organization (not the XenProject).
> +    Links to that organization's security process
> +    will be given in the description.
> +
> +  * **Supported, with caveats**
> +
> +    This feature is security supported only under certain conditions,
> +    or support is given only for certain aspects of the feature,
> +    or the feature should be used with care
> +    because it is easy to use insecurely without knowing it.
> +    Additional details will be given in the description.
> +
> +### Interaction with other features
> +
> +Not all features interact well with all other features.
> +Some features are only for HVM guests; some don't work with migration, &c.
> -- 
> 2.14.1
>
Roger Pau Monne Sept. 8, 2017, 9:38 a.m. UTC | #19
On Thu, Sep 07, 2017 at 02:54:11PM -0700, Stefano Stabellini wrote:
> On Thu, 31 Aug 2017, George Dunlap wrote:
> > +### Direct-boot kernel image format
> > +
> > +    Supported, x86: bzImage
> > +    Supported, ARM32: zImage
> > +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> 
> On ARM64 it's called Image.gz.

Just curious, but where's the spec for this format? I cannot seem to
find it anywhere.

Are those just ELF files compressed using different algorithms? If so
it would be good to separate the decompression from the actual
executable format that Xen supports.

Roger.
Stefano Stabellini Sept. 8, 2017, 7:37 p.m. UTC | #20
On Fri, 8 Sep 2017, Roger Pau Monné wrote:
> On Thu, Sep 07, 2017 at 02:54:11PM -0700, Stefano Stabellini wrote:
> > On Thu, 31 Aug 2017, George Dunlap wrote:
> > > +### Direct-boot kernel image format
> > > +
> > > +    Supported, x86: bzImage
> > > +    Supported, ARM32: zImage
> > > +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> > 
> > On ARM64 it's called Image.gz.
> 
> Just curious, but where's the spec for this format? I cannot seem to
> find it anywhere.
> 
> Are those just ELF files compressed using different algorithms? If so
> it would be good to separate the decompression from the actual
> executable format that Xen supports.

No, it is not an ELF, it is almost the same as zImage:

https://www.kernel.org/doc/Documentation/arm64/booting.txt
George Dunlap Sept. 11, 2017, 2:16 p.m. UTC | #21
On 09/07/2017 10:54 PM, Stefano Stabellini wrote:
> On Thu, 31 Aug 2017, George Dunlap wrote:
>> +### Direct-boot kernel image format
>> +
>> +    Supported, x86: bzImage
>> +    Supported, ARM32: zImage
>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> 
> On ARM64 it's called Image.gz.

Ack.


>> +### Alternative p2m
>> +
>> +    Status, x86: Preview
>> +
>> +Allows external monitoring of hypervisor memory using Intel EPT by maintaining multiple physical-memory-to-machine-physical (p2m) mappings
>> +
>> +[XXX Should this be x86/Alternative p2m?]
> 
> No, the technology could be available on ARM.

Yup, got that change already.

>> +### Null Scheduler
>> +
>> +    Status: Experimental
>> +
>> +A very simple, very static scheduling policy that always schedules the same vCPU(s) on the same pCPU(s). It is designed for maximum determinism and minimum overhead on embedded platforms.
> 
> Can we say more than Experimental? I think it should be at least Tech
> Preview.

I was going to wait for Dario to respond to this (I had just copied what
was already there).  Tech Preview should look like this:

    Functional completeness: Yes
    Functional stability: Quirky
    Interface stability: Provisionally stable
    Security supported: No

I think that's probably accurate.  Dario?

>> +### Xen Framebuffer
> 
> Please write "Xen Framebuffer Frontend" in the title.

It is in a section labelled 'guest side'.  On the other hand, the list
is long, and the headings in markdown aren't actually that easy to scan
in text mode.

Let me give it some thought. (I'll put an XXX to make sure it gets
considered.)

>> +### Netback
>> +
>> +    Status, Linux (netback): Supported
>> +    Status, FreeBSD (netback): Supported
>> +    Status, QEMU (xen_nic): Experimental
> 
> I suggest to Deprecate xen_nic

That's fine with me.  Anthony?

>> +### vTPM Support
>> +
>> +    Status: Supported, x86 only
> 
> This should probably be x86/vTPM. TPM, the way we are discussing it, is
> an x86-only implementation. ARM-based alternatives are not called TPM
> AFAIK.

Someone said that because this was implemented entirely in userspace,
there's no reason the PV TPM couldn't work on ARM.  OTOH I suppose it
would be a lot less valuable if there weren't a physical TPM to back it up.

Any thoughts on that?

>> +### Intel/TXT ???
> 
> Same here

Well unless someone actually says something about this I'm just going to
delete it.

>> +### ARM/ITS
>> +
>> +    Status: experimental
>> +
>> +[XXX What is this?]
> 
> A particularly complex extension to the interrupt controller.

But what people reading this want to know isn't how complicated it is,
but what it would be for.

I could put "An extension to the ARM interrupt controller", but it would
be nice if I could also say, "...that implements $FEATURE" or
"...targeted at $APPLICATION".

Thanks for the feedback,
 -George
George Dunlap Sept. 11, 2017, 2:22 p.m. UTC | #22
On 09/07/2017 03:56 PM, Wei Liu wrote:
> On Thu, Sep 07, 2017 at 02:52:49PM +0100, George Dunlap wrote:
>> On 09/01/2017 04:00 PM, Wei Liu wrote:
>>> On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
>>>> +### Direct-boot kernel image format
>>>> +
>>>> +    Supported, x86: bzImage
>>>
>>> Do you mean booting a PV guest? If so there are a few more formats.
>>>
>>>> +    Supported, ARM32: zImage
>>>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
>>>> +
>>>> +Format which the toolstack accepts for direct-boot kernels
>>> [...]
>>>> +### JSON support for xl
>>>> +
>>>> +    Status: Preview
>>>> +
>>>
>>> What is this?
>>
>> JSON output; e.g., `xl list -l`.
>>
>> Perhaps this should be called 'JSON output support'. :-)
>>
> 
> OK. Anyway, no security support for this please. I'm not even very sure
> if the output is going to be stable.

"Tech Preview" means no security support.  But given how incomplete it
is, maybe "Experimental" would be a better designation.

> 
>>>> +### AHCI support for xl
>>>> +
>>>> +    Status, x86: Supported
>>>> +
>>>
>>> There is only one knob to change, I'm not sure whether makes sense to
>>> list it separately.
>>>
>>>> +### Soft-reset for xl
>>>> +
>>>> +    Status: Supported
>>>> +
>>>
>>> We never tested this in osstest so I'm not sure about if this is the
>>> correct status. Furthermore there is also moving parts in hypervisor.
>>
>> Hmm, maybe this would go better under a hypervisor section somewhere; as
>> you say, the core functionality doesn't reside in xl; xl just enables it.
>>
> 
> A bit more than that, there are moving parts in libxl to handle that as
> well -- some initialisation needs to be skipped or whatever, some can't.

A large proportion of features require support both in the hypervisor
and in the toolstack.  It doesn't make sense to talk about them
separately; it makes sense to put them where the "core" of their
implementation resides.

 -George
Anthony PERARD Sept. 11, 2017, 3:02 p.m. UTC | #23
On Mon, Sep 11, 2017 at 03:16:13PM +0100, George Dunlap wrote:
> On 09/07/2017 10:54 PM, Stefano Stabellini wrote:
> > On Thu, 31 Aug 2017, George Dunlap wrote:
> >> +### Netback
> >> +
> >> +    Status, Linux (netback): Supported
> >> +    Status, FreeBSD (netback): Supported
> >> +    Status, QEMU (xen_nic): Experimental
> > 
> > I suggest to Deprecate xen_nic
> 
> That's fine with me.  Anthony?

Yes, that's fine by me.  xen_nic is only for PV guests, and I don't know
how it can be used; there does not seem to be any support in libxl.
George Dunlap Sept. 11, 2017, 3:07 p.m. UTC | #24
On 09/11/2017 04:02 PM, Anthony PERARD wrote:
> On Mon, Sep 11, 2017 at 03:16:13PM +0100, George Dunlap wrote:
>> On 09/07/2017 10:54 PM, Stefano Stabellini wrote:
>>> On Thu, 31 Aug 2017, George Dunlap wrote:
>>>> +### Netback
>>>> +
>>>> +    Status, Linux (netback): Supported
>>>> +    Status, FreeBSD (netback): Supported
>>>> +    Status, QEMU (xen_nic): Experimental
>>>
>>> I suggest to Deprecate xen_nic
>>
>> That's fine with me.  Anthony?
> 
> Yes, that's fine by me.  xen_nic is only for PV guests, and I don't know
> how it can be used; there does not seem to be any support in libxl.

Is this a holdover from 'xenner', which was supposed to allow you to run
a Xen guest on a non-Xen system?

Anyway, I'm happy to call it experimental, or just to leave it off
entirely if it can't actually be used.

 -George
Anthony PERARD Sept. 11, 2017, 3:21 p.m. UTC | #25
On Mon, Sep 11, 2017 at 04:07:08PM +0100, George Dunlap wrote:
> On 09/11/2017 04:02 PM, Anthony PERARD wrote:
> > On Mon, Sep 11, 2017 at 03:16:13PM +0100, George Dunlap wrote:
> >> On 09/07/2017 10:54 PM, Stefano Stabellini wrote:
> >>> On Thu, 31 Aug 2017, George Dunlap wrote:
> >>>> +### Netback
> >>>> +
> >>>> +    Status, Linux (netback): Supported
> >>>> +    Status, FreeBSD (netback): Supported
> >>>> +    Status, QEMU (xen_nic): Experimental
> >>>
> >>> I suggest to Deprecate xen_nic
> >>
> >> That's fine with me.  Anthony?
> > 
> > Yes, that's fine by me.  xen_nic is only for PV guests, and I don't know
> > how it can be used; there does not seem to be any support in libxl.
> 
> Is this a holdover from 'xenner', which was supposed to allow you to run
> a Xen guest on a non-Xen system?

Yes, it looks like it can be used with xenner.

> Anyway, I'm happy to call it experimental, or just to leave it off
> entirely if it can't actually be used.
> 
>  -George
Julien Grall Sept. 11, 2017, 3:54 p.m. UTC | #26
Hi,

Sorry I missed the e-mail. It seems I was not CCed on it.

On 07/09/17 22:36, Stefano Stabellini wrote:
> On Thu, 31 Aug 2017, Roger Pau Monne wrote:
>>> +### ARM/Non-PCI device passthrough
>>> +
>>> +    Status: Supported
>>
>> I guess non-pci devices on ARM also use the IOMMU? (SMMU)
> 
> Yes, they do.
> 
> 
>>> +### ARM/SMMU
>>> +
>>> +    Status: Supported, with caveats
>>> +
>>> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
>>
>> I'm not sure of the purpose of this sentence, it's quite clear that
>> the SMMU is only supported if available. Also, I'm not sure this
>> should be spelled out in this document, x86 doesn't have a VT-d or SVM
>> section.
> 
> As George wrote, there are many SMMUs in the market for ARM based
> platforms, not all of them of ARM design.

Few remarks here.

Firstly, what do you mean by Arm design? Is it spec compliant (i.e.
SMMUv1, SMMUv2, SMMUv3)? Or is it an implementation coming from Arm
(SMMU-400, SMMU-401, SMMU-500, ...)?

At the moment we have no support of SMMUv3 at all (this would be a 
separate driver as the spec is very different).

Regarding SMMUv1 and SMMUv2: technically we should support all SMMUs
which are compliant with the spec, providing no workarounds are
necessary (yes, there is some hardware that is only 99.9% compliant).

But we can't even claim that we support the Arm implementations. At least
SMMU-401 (used by Seattle and Versatile Express) is not supported.

Furthermore, Arm may release new IP in the future. Does it mean we 
support them by default?

So there are some clarifications needed on what we actually support.

If we decide the support status is based on hardware, then it raises the
question of what to do about other specifications (e.g. GICv2, GICv3, GICv4).
Each vendor is free to provide its own implementation (not necessarily
bug-free and fully compliant).

Cheers,
Julien Grall Sept. 11, 2017, 4 p.m. UTC | #27
On 07/09/17 22:54, Stefano Stabellini wrote:
> On Thu, 31 Aug 2017, George Dunlap wrote:
>> +### Direct-boot kernel image format
>> +
>> +    Supported, x86: bzImage
>> +    Supported, ARM32: zImage
>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> 
> On ARM64 it's called Image.gz.

That's not true. Linux produces an Image. You can compress after if you 
want, but it is not the default.

[...]

>> +### ARM/ITS
>> +
>> +    Status: experimental
>> +
>> +[XXX What is this?]
> 
> A particularly complex extension to the interrupt controller.

To elaborate: it is an extension of GICv3 to support MSI. So it would be
better to name it ARM/GICv3 ITS

Cheers,
George Dunlap Sept. 11, 2017, 4:04 p.m. UTC | #28
On 09/11/2017 05:00 PM, Julien Grall wrote:
> 
> 
> On 07/09/17 22:54, Stefano Stabellini wrote:
>> On Thu, 31 Aug 2017, George Dunlap wrote:
>>> +### Direct-boot kernel image format
>>> +
>>> +    Supported, x86: bzImage
>>> +    Supported, ARM32: zImage
>>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
>>
>> On ARM64 it's called Image.gz.
> 
> That's not true. Linux produces an Image. You can compress after if you
> want, but it is not the default.

I've left it as 'Image'.

> 
> [...]
> 
>>> +### ARM/ITS
>>> +
>>> +    Status: experimental
>>> +
>>> +[XXX What is this?]
>>
>> A particularly complex extension to the interrupt controller.
> 
> To complete, it is an extension of GICv3 to support MSI. So it would be
> better to name it ARM/GICv3 ITS

Renamed it and added the following description:

    Extension to the GICv3 interrupt controller to support MSI.

 -George
George Dunlap Sept. 11, 2017, 4:15 p.m. UTC | #29
On 09/11/2017 04:54 PM, Julien Grall wrote:
> Hi,
> 
> Sorry I missed the e-mail. It seems I was not CCed on it.

Sorry -- already had a pretty large CC list.  I'll add you for the next one.


>>>> +### ARM/SMMU
>>>> +
>>>> +    Status: Supported, with caveats
>>>> +
>>>> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not
>>>> supported.
>>>
>>> I'm not sure of the purpose of this sentence, it's quite clear that
>>> the SMMU is only supported if available. Also, I'm not sure this
>>> should be spelled out in this document, x86 doesn't have a VT-d or SVM
>>> section.
>>
>> As George wrote, there are many SMMUs in the market for ARM based
>> platforms, not all of them of ARM design.
> 
> Few remarks here.
> 
>> Firstly, what do you mean by Arm design? Is it spec compliant (i.e.
>> SMMUv1, SMMUv2, SMMUv3)? Or is it an implementation coming from Arm
>> (SMMU-400, SMMU-401, SMMU-500, ...)?

Well, as you and Stefano are going to be primarily doing security
support, I think you should go with whatever you think is most reasonable
for you to support, and whatever communicates best to your users what
functionality actually works and what will be security supported.

> At the moment we have no support of SMMUv3 at all (this would be a
> separate driver as the spec is very different).
> 
>> Regarding SMMUv1 and SMMUv2: technically we should support all SMMUs
>> which are compliant with the spec, providing no workarounds are
>> necessary (yes, there is some hardware that is only 99.9% compliant).
> 
>> But we can't even claim that we support the Arm implementations. At least
> SMMU-401 (used by Seattle and Versatile Express) is not supported.
> 
> Furthermore, Arm may release new IP in the future. Does it mean we
> support them by default?
> 
> So there are some clarifications needed on what we actually support.
> 
>> If we decide the support status is based on hardware, then it raises the
>> question of what to do about other specifications (e.g. GICv2, GICv3, GICv4).
>> Each vendor is free to provide its own implementation (not necessarily
>> bug-free and fully compliant).

On the whole it sounds like we ought to have separate stanzas for SMMUv1
and SMMUv2.

I'd say focus on accurately implementing the spec.  Call out specific
non-compliant implementations as and when you feel like you need to be
specific.

Shall I make this:

---
### ARM/SMMUv1

    Status: Supported

### ARM/SMMUv2

    Status: Supported
---

Will that communicate effectively that you only support ARM-spec SMMUs?
Or do we need to add some extra verbiage to make sure people know that
non-ARM specs are not supported?

 -George
Julien Grall Sept. 11, 2017, 4:16 p.m. UTC | #30
On 11/09/17 15:16, George Dunlap wrote:
> On 09/07/2017 10:54 PM, Stefano Stabellini wrote:
>> On Thu, 31 Aug 2017, George Dunlap wrote:
>>> +### Direct-boot kernel image format
>>> +
>>> +    Supported, x86: bzImage
>>> +    Supported, ARM32: zImage
>>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
>>
>> On ARM64 it's called Image.gz.
> 
> Ack.
>>> +### vTPM Support
>>> +
>>> +    Status: Supported, x86 only
>>
>> This should probably be x86/vTPM. TPM, the way we are discussing it, is
>> an x86-only implementation. ARM-based alternatives are not called TPM
>> AFAIK.
> 
> Someone said that because this was implemented entirely in userspace,
> there's no reason the PV TPM couldn't work on ARM.  OTOH I suppose it
> would be a lot less valuable if there weren't a physical TPM to back it up.
> 
> Any thoughts on that?

Per my understanding TPM is a specification and not tied to Arm, x86 or
anything else. So provided the PV driver is agnostic to x86, it should work.
Note that I haven't looked at the code, nor am I aware of anyone who has
tested it.

Cheers,
Julien Grall Sept. 11, 2017, 4:21 p.m. UTC | #31
On 11/09/17 17:15, George Dunlap wrote:
> On 09/11/2017 04:54 PM, Julien Grall wrote:
>> Hi,
>>
>> Sorry I missed the e-mail. It seems I was not CCed on it.
> 
> Sorry -- already had a pretty large CC list.  I'll add you for the next one.
> 
> 
>>>>> +### ARM/SMMU
>>>>> +
>>>>> +    Status: Supported, with caveats
>>>>> +
>>>>> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not
>>>>> supported.
>>>>
>>>> I'm not sure of the purpose of this sentence, it's quite clear that
>>>> the SMMU is only supported if available. Also, I'm not sure this
>>>> should be spelled out in this document, x86 doesn't have a VT-d or SVM
>>>> section.
>>>
>>> As George wrote, there are many SMMUs in the market for ARM based
>>> platforms, not all of them of ARM design.
>>
>> Few remarks here.
>>
>> Firstly, what do you mean by Arm design? Is it spec compliant (i.e.
>> SMMUv1, SMMUv2, SMMUv3)? Or is it an implementation coming from Arm
>> (SMMU-400, SMMU-401, SMMU-500, ...)?
> 
> Well, as you and Stefano are going to be primarily doing security
> support, I think you should go with whatever you think is most reasonable
> for you to support, and whatever communicates best to your users what
> functionality actually works and what will be security supported.
> 
>> At the moment we have no support of SMMUv3 at all (this would be a
>> separate driver as the spec is very different).
>>
>> Regarding SMMUv1 and SMMUv2: technically we should support all SMMUs
>> which are compliant with the spec, providing no workarounds are
>> necessary (yes, there is some hardware that is only 99.9% compliant).
>>
>> But we can't even claim that we support the Arm implementations. At least
>> SMMU-401 (used by Seattle and Versatile Express) is not supported.
>>
>> Furthermore, Arm may release new IP in the future. Does it mean we
>> support them by default?
>>
>> So there are some clarifications needed on what we actually support.
>>
>> If we decide the support status is based on hardware, then it raises the
>> question of what to do about other specifications (e.g. GICv2, GICv3, GICv4).
>> Each vendor is free to provide its own implementation (not necessarily
>> bug-free and fully compliant).
> 
> On the whole it sounds like we ought to have separate stanzas for SMMUv1
> and SMMUv2.
> 
> I'd say focus on accurately implementing the spec.  Call out specific
> non-compliant implementations as and when you feel like you need to be
> specific.
> 
> Shall I make this:
> 
> ---
> ### ARM/SMMUv1
> 
>      Status: Supported
> 
> ### ARM/SMMUv2
> 
>      Status: Supported
> ---
> 
> Will that communicate effectively that you only support ARM-spec SMMUs?
> Or do we need to add some extra verbiage to make sure people know that
> non-ARM specs are not supported?

I think this would be fine.

Cheers,
George Dunlap Sept. 11, 2017, 4:28 p.m. UTC | #32
On 09/11/2017 05:16 PM, Julien Grall wrote:
> 
> 
> On 11/09/17 15:16, George Dunlap wrote:
>> On 09/07/2017 10:54 PM, Stefano Stabellini wrote:
>>> On Thu, 31 Aug 2017, George Dunlap wrote:
>>>> +### Direct-boot kernel image format
>>>> +
>>>> +    Supported, x86: bzImage
>>>> +    Supported, ARM32: zImage
>>>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
>>>
>>> On ARM64 it's called Image.gz.
>>
>> Ack.
>>>> +### vTPM Support
>>>> +
>>>> +    Status: Supported, x86 only
>>>
>>> This should probably be x86/vTPM. TPM, the way we are discussing it, is
>>> an x86-only implementation. ARM-based alternatives are not called TPM
>>> AFAIK.
>>
>> Someone said that because this was implemented entirely in userspace,
>> there's no reason the PV TPM couldn't work on ARM.  OTOH I suppose it
>> would be a lot less valuable if there weren't a physical TPM to back
>> it up.
>>
>> Any thoughts on that?
> 
> Per my understanding TPM is a specification and not tied to Arm, x86 or
> anything else. So provided the PV driver is agnostic to x86, it should work.
> Note that I haven't looked at the code, nor am I aware of anyone who has
> tested it.

OK -- in my local copy I'm not making a distinction between x86 and ARM
then.

But I do wonder if we should make this 'Tech Preview', since it's not
being tested by osstest, and the most recent message from the maintainer
wasn't terribly promising [1].

 -George

[1]
marc.info/?i=<E0A769A898ADB6449596C41F51EF62C6B06031@SZXEMI506-MBX.china.huawei.com>
Stefano Stabellini Sept. 11, 2017, 4:39 p.m. UTC | #33
On Mon, 11 Sep 2017, Julien Grall wrote:
> On 07/09/17 22:54, Stefano Stabellini wrote:
> > On Thu, 31 Aug 2017, George Dunlap wrote:
> > > +### Direct-boot kernel image format
> > > +
> > > +    Supported, x86: bzImage
> > > +    Supported, ARM32: zImage
> > > +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> > 
> > On ARM64 it's called Image.gz.
> 
> That's not true. Linux produces an Image. You can compress after if you want,
> but it is not the default.

Are you sure? Why do you say Image is the default? If I do `make
help', the result is:


Architecture specific targets (arm64):
* Image.gz      - Compressed kernel image (arch/arm64/boot/Image.gz)
  Image         - Uncompressed kernel image (arch/arm64/boot/Image)
* dtbs          - Build device tree blobs for enabled boards
  dtbs_install  - Install dtbs to /boot/dtbs/4.13.0-rc1+
  install       - Install uncompressed kernel
  zinstall      - Install compressed kernel
                  Install using (your) ~/bin/installkernel or
                  (distribution) /sbin/installkernel or
                  install to $(INSTALL_PATH) and run lilo


Note that the default build targets are the ones that are starred.
Julien Grall Sept. 11, 2017, 6:03 p.m. UTC | #34
On 11/09/17 17:39, Stefano Stabellini wrote:
> On Mon, 11 Sep 2017, Julien Grall wrote:
>> On 07/09/17 22:54, Stefano Stabellini wrote:
>>> On Thu, 31 Aug 2017, George Dunlap wrote:
>>>> +### Direct-boot kernel image format
>>>> +
>>>> +    Supported, x86: bzImage
>>>> +    Supported, ARM32: zImage
>>>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
>>>
>>> On ARM64 it's called Image.gz.
>>
>> That's not true. Linux produces an Image. You can compress it afterwards
>> if you want, but that is not the default.
> 
> Are you sure? Why do you say Image is the default? If I do `make
> help', the result is:

The format is called Image. Image.gz is just a compressed version, and
unlike zImage you can't boot it directly without the help of an external
loader to uncompress it.

For instance, today, Xen is not able to decompress Image.gz by itself. 
It relies on the bootloader (assuming it has support for it).
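
To make that concrete, here is a sketch of direct-booting an arm64 guest
with an uncompressed Image (paths, flags and options below are
illustrative only, not tested):

    # Xen cannot decompress Image.gz itself, so decompress it first
    gunzip -c arch/arm64/boot/Image.gz > /boot/guests/foo/Image

    # xl guest config fragment: point "kernel=" at the uncompressed Image
    kernel  = "/boot/guests/foo/Image"
    ramdisk = "/boot/guests/foo/initrd.img"
    extra   = "console=hvc0 root=/dev/xvda1"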

Cheers,

> 
> 
> Architecture specific targets (arm64):
> * Image.gz      - Compressed kernel image (arch/arm64/boot/Image.gz)
>    Image         - Uncompressed kernel image (arch/arm64/boot/Image)
> * dtbs          - Build device tree blobs for enabled boards
>    dtbs_install  - Install dtbs to /boot/dtbs/4.13.0-rc1+
>    install       - Install uncompressed kernel
>    zinstall      - Install compressed kernel
>                    Install using (your) ~/bin/installkernel or
>                    (distribution) /sbin/installkernel or
>                    install to $(INSTALL_PATH) and run lilo
> 
> 
> Note that the default build targets are the ones that are starred.
>
Rich Persaud Sept. 11, 2017, 8:13 p.m. UTC | #35
On Sep 11, 2017, at 10:16, George Dunlap <george.dunlap@citrix.com> wrote:
> 
>>> +### vTPM Support
>>> +
>>> +    Status: Supported, x86 only
>> 
>> This should probably be x86/vTPM. TPM, the way we are discussing it, is
>> an x86-only implementation. ARM-based alternatives are not called TPM
>> AFAIK.
> 
> Someone said that because this was implemented entirely in userspace,
> there's no reason the PV TPM couldn't work on ARM.  OTOH I suppose it
> would be a lot less valuable if there weren't a physical TPM to back it up.
> 
> Any thoughts on that?

Physical TPMs are present on both x86 and ARM Chromebooks:

  https://www.chromium.org/developers/design-documents/tpm-usage

e.g. see Step 9 in this Samsung Series 3 teardown, "Infineon SLB9635":

  https://www.ifixit.com/Teardown/Samsung+Chromebook+Series+3+Teardown/12225

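For context, attaching the PV vTPM is just a pair of xl.cfg stanzas; a
sketch from memory of the flow in docs/misc/vtpm.txt (domain names and
the UUID below are placeholders):

    # vTPM domain config: connect this vTPM to the vTPM manager domain
    name = "myvtpm"
    vtpm = [ "backend=vtpmmgr,uuid=<per-guest-uuid>" ]

    # guest config: attach the guest's TPM frontend to that vTPM domain
    vtpm = [ "backend=myvtpm" ]
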

>>> +### Intel/TXT ???
>> 
>> Same here
> 
> Well unless someone actually says something about this I'm just going to
> go delete it.

That's one way to motivate a response :)

Slide 11 of Joe Cihula's 2007 presentation documents the Xen changes for TXT: 

  http://www-archive.xenproject.org/files/xensummit_fall07/23_JosephCihula.pdf

More info in the 2007 patch and the Linux kernel doc:

  http://old-list-archives.xen.org/archives/html/xen-devel/2007-10/msg00897.html
  https://www.kernel.org/doc/Documentation/intel_txt.txt

Intel TXT is used with Xen by (at least) Qubes, OpenXT and Skyport Systems.  There was a design discussion at Xen Summit about implementing a frequently-used subset of tboot logic in Xen.  Hopefully Intel TXT will continue to be a Xen feature with security support.

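For anyone who wants to try it, the boot chain is simply tboot loaded
ahead of Xen; roughly, a GRUB2 entry like the following (file names,
options and the SINIT module are illustrative -- see the tboot docs):

    menuentry 'Xen (TXT measured launch)' {
        multiboot /boot/tboot.gz logging=serial,memory
        module    /boot/xen.gz console=com1 dom0_mem=2048M
        module    /boot/vmlinuz console=hvc0 root=/dev/sda1 ro
        module    /boot/initrd.img
        module    /boot/SINIT_ACM.BIN   # platform-matched SINIT module
    }
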
Rich
Stefano Stabellini Sept. 11, 2017, 8:57 p.m. UTC | #36
On Mon, 11 Sep 2017, Rich Persaud wrote:
> On Sep 11, 2017, at 10:16, George Dunlap <george.dunlap@citrix.com> wrote:
> 
>>>> +### vTPM Support
>>>> +
>>>> +    Status: Supported, x86 only
>>>
>>> This should probably be x86/vTPM. TPM, the way we are discussing it, is
>>> an x86-only implementation. ARM-based alternatives are not called TPM
>>> AFAIK.
>>
>> Someone said that because this was implemented entirely in userspace,
>> there's no reason the PV TPM couldn't work on ARM.  OTOH I suppose it
>> would be a lot less valuable if there weren't a physical TPM to back it up.
>>
>> Any thoughts on that?
> 
> Physical TPMs are present on both x86 and ARM Chromebooks:
> 
>   https://www.chromium.org/developers/design-documents/tpm-usage
> 
> e.g. see Step 9 in this Samsung Series 3 teardown, "Infineon SLB9635":
> 
>   https://www.ifixit.com/Teardown/Samsung+Chromebook+Series+3+Teardown/12225

Interesting. In that case, I am OK with keeping "Status: Supported, x86
only".


>>>> +### Intel/TXT ???
>>>
>>> Same here
>>
>> Well unless someone actually says something about this I'm just going to
>> go delete it.
>
> That's one way to motivate a response :)
> 
> Slide 11 of Joe Cihula's 2007 presentation documents the Xen changes for TXT: 
> 
>   http://www-archive.xenproject.org/files/xensummit_fall07/23_JosephCihula.pdf
> 
> More info in the 2007 patch and the Linux kernel doc:
> 
>   http://old-list-archives.xen.org/archives/html/xen-devel/2007-10/msg00897.html
>   https://www.kernel.org/doc/Documentation/intel_txt.txt
> 
> Intel TXT is used with Xen by (at least) Qubes, OpenXT and Skyport Systems.  There was a design discussion at Xen Summit about implementing a frequently-used subset of tboot
> logic in Xen.  Hopefully Intel TXT will continue to be a Xen feature with security support.

From intel_txt.txt, this really seems to be only available on x86
platforms.
Dario Faggioli Sept. 12, 2017, 8:26 a.m. UTC | #37
On Mon, 2017-09-11 at 15:16 +0100, George Dunlap wrote:
> On 09/07/2017 10:54 PM, Stefano Stabellini wrote:
> > On Thu, 31 Aug 2017, George Dunlap wrote:
> > > 
> > > +### Null Scheduler
> > > +
> > > +    Status: Experimental
> > > +
> > > 
> > Can we say more than Experimental? I think it should be at least
> > Tech
> > Preview.
> 
> I was going to wait for Dario to respond to this (I had just copied
> what
> was already there).  Tech Preview should look like this:
> 
>     Functional completeness: Yes
>     Functional stability: Quirky
>     Interface stability: Provisionally stable
>     Security supported: No
> 
> I think that's probably accurate.  Dario?
> 
Yes, I think 'Tech Preview' is ok.

Next step would be adding it to OSSTest, at which point, we can start
to think about calling it supported.
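
For anyone wanting to play with it in the meantime, the usual approach
is to give it its own cpupool; something like the following (CPU numbers
made up):

    # create a pool running the null scheduler and move a pCPU into it
    xl cpupool-create name=\"pool-null\" sched=\"null\"
    xl cpupool-cpu-remove Pool-0 3
    xl cpupool-cpu-add pool-null 3

    # run a guest under the null scheduler
    xl cpupool-migrate my-guest pool-null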

Dario
Konrad Rzeszutek Wilk Sept. 14, 2017, 5:58 p.m. UTC | #38
On Thu, Sep 07, 2017 at 05:50:13AM -0600, Jan Beulich wrote:
> >>> On 07.09.17 at 13:31, <george.dunlap@citrix.com> wrote:
> > On 08/31/2017 01:46 PM, Jan Beulich wrote:
> >>>>> On 31.08.17 at 12:27, <george.dunlap@citrix.com> wrote:
> >>> +### Live Patching
> >>> +
> >>> +    Status: Supported, x86 only
> >>> +
> >>> +Compile time disabled
> >> 
> >> Bu we're settled to change that, aren't we? It was even meant to be
> >> so in 4.9, but then didn't make it.
> > 
> > Change the compile time disabling?  I don't really know. :-)
> 
> Yeah, well, that series is taking awfully long to become ready to go
> in. Konrad?

Just posted it:

https://lists.xen.org/archives/html/xen-devel/2017-09/msg01156.html
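
For reference, once that lands the expected flow is roughly the
following (the payload name is hypothetical; subcommands are from the
xen-livepatch tool, quoted from memory):

    # build time: enable CONFIG_LIVEPATCH (currently gated behind EXPERT)
    make -C xen menuconfig

    # runtime: upload, apply and inspect a payload
    xen-livepatch upload xsa-NNN /usr/lib/xen/livepatch/xsa-NNN.livepatch
    xen-livepatch apply xsa-NNN
    xen-livepatch list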
diff mbox

Patch

diff --git a/SUPPORT.md b/SUPPORT.md
new file mode 100644
index 0000000000..283cbeb725
--- /dev/null
+++ b/SUPPORT.md
@@ -0,0 +1,770 @@ 
+# Support statement for this release
+
+This document describes the support status and in particular the
+security support status of the Xen branch within which you find it.
+
+See the bottom of the file for the definitions of the support status
+levels etc.
+
+# Release Support
+
+    Xen-Version: 4.10-unstable
+    Initial-Release: n/a
+    Supported-Until: TBD
+    Security-Support-Until: Unreleased - not yet security-supported
+
+# Feature Support
+
+## Host Architecture
+
+### x86-64
+
+    Status: Supported
+
+### ARM v7 + Virtualization Extensions
+
+    Status: Supported
+
+### ARM v8
+
+    Status: Supported
+
+## Guest Type
+
+### x86/PV
+
+    Status: Supported
+
+Traditional Xen Project PV guest
+
+### x86/HVM
+
+    Status: Supported
+
+Fully virtualised guest using hardware virtualisation extensions
+
+Requires hardware virtualisation support
+
+### x86/PV-on-HVM
+
+    Status: Supported
+
+Fully virtualised guest using PV extensions/drivers for improved performance
+
+Requires hardware virtualisation support
+
+### x86/PVH guest
+
+    Status: Tech Preview
+
+PVHv2 guest support
+
+Requires hardware virtualisation support
+
+### x86/PVH dom0
+
+    Status: Experimental
+
+PVHv2 domain 0 support
+
+### ARM guest
+
+    Status: Supported
+
+ARM only has one guest type at the moment
+
+## Limits/Host
+
+### CPUs
+
+    Limit, x86: 4095
+    Limit, ARM32: 8
+    Limit, ARM64: 128
+
+Note that for x86, a very large number of CPUs may not work/boot,
+but we will still provide security support
+
+### RAM
+
+    Limit, x86: 16TiB
+    Limit, ARM32: 16GiB
+    Limit, ARM64: 5TiB
+
+[XXX: Andy to suggest what this should say for x86]
+
+## Limits/Guest
+
+### Virtual CPUs
+
+    Limit, x86 PV: 512
+    Limit, x86 HVM: 128
+    Limit, ARM32: 8
+    Limit, ARM64: 128
+
+### Virtual RAM
+
+    Limit, x86 PV: >1TB
+    Limit, x86 HVM: 1TB
+    Limit, ARM32: 16GiB
+    Limit, ARM64: 1TB
+
+### x86/PV/Event Channels
+
+    Limit: 131072
+
+## Toolstack
+
+### xl
+
+    Status: Supported
+
+### Direct-boot kernel image format
+
+    Supported, x86: bzImage
+    Supported, ARM32: zImage
+    Supported, ARM64: Image [XXX - Not sure if this is correct]
+
+Format which the toolstack accepts for direct-boot kernels
+
+### Qemu based disk backend (qdisk) for xl
+
+    Status: Supported
+
+### Open vSwitch integration for xl
+
+    Status: Supported
+
+### systemd support for xl
+
+    Status: Supported
+
+### JSON support for xl
+
+    Status: Tech Preview
+
+### AHCI support for xl
+
+    Status, x86: Supported
+
+### ACPI guest
+
+    Status, ARM: Tech Preview
+
+### PVUSB support for xl
+
+    Status: Supported
+
+### HVM USB passthrough for xl
+
+    Status, x86: Supported
+
+### QEMU backend hotplugging for xl
+
+    Status: Supported
+
+### Soft-reset for xl
+
+    Status: Supported
+
+### Virtual CPU hotplug
+
+    Status, ARM: Supported
+
+## Toolstack/3rd party
+
+### libvirt driver for xl
+
+    Status: Supported, Security support external
+
+Security support for libvirt is provided by the libvirt project.
+See https://libvirt.org/securityprocess.html
+
+## Tooling
+
+### gdbsx
+
+    Status, x86: Supported
+
+Debugger to debug ELF guests
+
+### vPMU
+
+    Status, x86: Supported, Not security supported
+
+Virtual Performance Monitoring Unit for HVM guests
+
+Disabled by default (enable with hypervisor command line option).
+This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
+
+### Guest serial console
+
+    Status: Supported
+
+Logs key hypervisor and Dom0 kernel events to a file
+
+### xentrace
+
+    Status, x86: Supported
+
+Tool to capture Xen trace buffer data
+
+### gcov
+
+    Status: Supported, Not security supported
+
+## Memory Management
+
+### Memory Ballooning
+
+    Status: Supported
+
+### Memory Sharing
+
+    Status, x86 HVM: Tech Preview
+    Status, ARM: Tech Preview
+
+Allow sharing of identical pages between guests
+
+### Memory Paging
+
+    Status, x86 HVM: Experimental
+
+Allow pages belonging to guests to be paged to disk
+
+### Transcendent Memory
+
+    Status: Experimental
+
+### Alternative p2m
+
+    Status, x86: Tech Preview
+
+Allows external monitoring of hypervisor memory using Intel EPT by maintaining multiple guest-physical to machine-physical memory mappings
+
+[XXX Should this be x86/Alternative p2m?]
+
+## Resource Management
+
+### CPU Pools
+
+    Status: Supported
+
+Groups physical CPUs into distinct pools called "cpupools",
+with each pool able to use a different scheduler and scheduling properties.
+
+### Credit Scheduler
+
+    Status: Supported
+
+The default scheduler, which is a weighted proportional fair share virtual CPU scheduler.
+
+### Credit2 Scheduler
+
+    Status: Supported
+
+Credit2 is a general purpose scheduler for Xen,
+designed with particular focus on fairness, responsiveness and scalability
+
+### RTDS based Scheduler
+
+    Status: Experimental
+
+A soft real-time CPU scheduler built to provide guaranteed CPU capacity to guest VMs on SMP hosts
+
+### ARINC653 Scheduler
+
+    Status: Supported, Not security supported
+
+A periodically repeating fixed timeslice scheduler. Multicore support is not yet implemented.
+
+### Null Scheduler
+
+    Status: Experimental
+
+A very simple, very static scheduling policy that always schedules the same vCPU(s) on the same pCPU(s). It is designed for maximum determinism and minimum overhead on embedded platforms.
+
+### NUMA scheduler affinity
+
+    Status, x86: Supported
+
+Enables NUMA-aware scheduling in Xen
+
+## Scalability
+
+### 1GB/2MB super page support
+
+    Status: Supported
+
+### x86/Deliver events to PVHVM guests using Xen event channels
+
+    Status: Supported
+
+### Fair locks (ticket-locks)
+
+    Status: Supported
+
+[XXX Is this host ticket locks?  Or some sort of guest PV ticket locks?  If the former it doesn't make any sense to call it 'supported' because they're either there or not.]
+
+## High Availability and Fault Tolerance
+
+### Live Migration, Save & Restore
+
+    Status, x86: Supported
+
+### Remus Fault Tolerance
+
+    Status: Experimental
+
+### COLO Manager
+
+    Status: Experimental
+
+### vMCE
+
+    Status, x86: Supported
+
+Forwards Machine Check Exceptions to appropriate guests
+
+## Virtual driver support, guest side
+
+### Blkfront
+
+    Status, Linux: Supported
+    Status, FreeBSD: Supported, Security support external
+    Status, Windows: Supported [XXX]
+
+Guest-side driver capable of speaking the Xen PV block protocol
+
+### Netfront
+
+    Status, Linux: Supported
+    Status, FreeBSD: Supported, Security support external
+    Status, Windows: Supported [XXX]
+
+Guest-side driver capable of speaking the Xen PV networking protocol
+
+### Xen Framebuffer
+
+    Status, Linux (xen-fbfront): Supported
+
+Guest-side driver capable of speaking the Xen PV Framebuffer protocol
+
+[XXX FreeBSD? NetBSD?]
+
+### Xen Console
+
+    Status, Linux (hvc_xen): Supported
+
+Guest-side driver capable of speaking the Xen PV console protocol
+
+[XXX FreeBSD? NetBSD? Windows?]
+
+### Xen PV keyboard
+
+    Status, Linux (xen-kbdfront): Supported
+
+Guest-side driver capable of speaking the Xen PV keyboard protocol
+
+### Xen PVUSB protocol
+
+    Status, Linux: Supported
+
+### Xen PV SCSI protocol
+
+    Status, Linux: [XXX]
+
+### Xen TPMfront
+
+    Status, Linux (xen-tpmfront): Tech Preview
+
+Guest-side driver capable of speaking the Xen PV TPM protocol
+
+### Xen 9pfs frontend
+
+    Status, Linux: Tech Preview
+
+Guest-side driver capable of speaking the Xen 9pfs protocol
+
+### PVCalls frontend
+
+    Status, Linux: Tech Preview
+
+Guest-side driver capable of making PV system calls
+
+## Virtual device support, host side
+
+### Blkback
+
+    Status, Linux (blkback): Supported
+    Status, FreeBSD (blkback): Supported
+    Status, QEMU (xen_disk): Supported
+    Status, Blktap2: Deprecated
+
+Host-side implementations of the Xen PV block protocol
+
+### Netback
+
+    Status, Linux (netback): Supported
+    Status, FreeBSD (netback): Supported
+    Status, QEMU (xen_nic): Experimental
+
+Host-side implementations of Xen PV network protocol
+
+### Xen Framebuffer
+
+    Status, Linux: Supported
+    Status, QEMU: Supported
+
+Host-side implementation of the Xen PV framebuffer protocol
+
+### Xen Console
+
+    Status, Linux: Supported
+    Status, QEMU: Supported
+
+Host-side implementation of the Xen PV console protocol
+
+### Xen PV keyboard
+
+    Status, Linux: Supported
+    Status, QEMU: Supported
+
+Host-side implementation of the Xen PV keyboard protocol
+
+### Xen PV USB
+
+    Status, Linux: Experimental
+    Status, QEMU: Supported
+
+Host-side implementation of the Xen PV USB protocol
+
+### Xen PV SCSI protocol
+
+    Status, Linux: [XXX]
+
+### Xen PV TPM
+
+    Status, Linux: Supported
+
+### Xen 9pfs
+
+    Status, QEMU: Tech Preview
+
+### PVCalls
+
+    Status, Linux: Tech Preview
+
+### Online resize of virtual disks
+
+    Status: Supported
+
+## Security
+
+### Driver Domains
+
+    Status: Supported
+
+### Device Model Stub Domains
+
+    Status: Supported, with caveats
+
+Vulnerabilities of a device model stub domain to a hostile driver domain are excluded from security support.
+
+### KCONFIG Expert
+
+    Status: Experimental
+
+### Live Patching
+
+    Status: Supported, x86 only
+
+Compile time disabled
+
+### Virtual Machine Introspection
+
+    Status: Supported, x86 only
+
+### XSM & FLASK
+
+    Status: Experimental
+
+Compile time disabled
+
+### XSM & FLASK support for IS_PRIV
+
+    Status: Experimental
+
+Compile time disabled
+
+### vTPM Support
+
+    Status: Supported, x86 only
+
+### Intel/TXT ???
+
+    Status: ???
+
+TXT-based integrity system for the Linux kernel and Xen hypervisor
+
+[XXX]
+
+## Hardware
+
+### x86/Nested Virtualization
+
+    Status: Experimental
+
+Running a hypervisor inside an HVM guest
+
+### x86/HVM iPXE
+
+    Status: Supported, with caveats
+
+Booting a guest via PXE.
+PXE inherently places full trust of the guest in the network,
+and so should only be used
+when the guest network is under the same administrative control
+as the guest itself.
+
+### x86/Physical CPU Hotplug
+
+    Status: Supported
+
+### x86/Physical Memory Hotplug
+
+    Status: Supported
+
+### x86/PCI Passthrough PV
+
+    Status: Supported, Not security supported
+
+PV passthrough cannot be done safely.
+
+[XXX Not even with an IOMMU?]
+
+### x86/PCI Passthrough HVM
+
+    Status: Supported, with caveats
+
+Many hardware device and motherboard combinations cannot be used safely.
+The Xen Project will support bugs in PCI passthrough for Xen,
+but the user is responsible for ensuring that the hardware combination they use
+is sufficiently secure for their needs,
+and should assume that any combination is insecure
+unless they have reason to believe otherwise.
+
+### ARM/Non-PCI device passthrough
+
+    Status: Supported
+
+### x86/Advanced Vector eXtension
+
+    Status: Supported
+
+### Intel Platform QoS Technologies
+
+    Status: Tech Preview
+
+### ARM/ACPI (host)
+
+    Status: Experimental
+
+### ARM/SMMU
+
+    Status: Supported, with caveats
+
+Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
+
+### ARM/ITS
+
+    Status: Experimental
+
+[XXX What is this?]
+
+### ARM/16K and 64K pages in guests
+    Status: Supported, with caveats
+
+No support for QEMU backends in a 16K or 64K domain.
+
+
+# Format and definitions
+
+This file contains prose, and machine-readable fragments.
+The data in a machine-readable fragment relate to
+the section and subsection in which the fragment is found.
+
+The file is in markdown format.
+The machine-readable fragments are markdown literals
+containing RFC-822-like (deb822-like) data.
+
+## Keys found in the Feature Support subsections
+
+### Status
+
+This gives the overall status of the feature,
+including security support status, functional completeness, etc.
+Refer to the detailed definitions below.
+
+If support differs based on implementation
+(for instance, x86 / ARM, Linux / QEMU / FreeBSD),
+one line for each set of implementations will be listed.
+
+### Restrictions
+
+This is a summary of any restrictions which apply,
+particularly to functional or security support.
+
+Full details of restrictions may be provided in the prose
+section of the feature entry,
+if a Restrictions tag is present.
+
+### Limit-Security
+
+For size limits.
+This figure shows the largest configuration which will receive
+security support.
+This does not mean that such a configuration will actually work.
+This limit will only be listed explicitly
+if it is different than the theoretical limit.
+
+### Limit
+
+This figure shows a theoretical size limit.
+This does not mean that such a large configuration will actually work.
+
+## Definition of Status labels
+
+Each Status value corresponds to levels of security support,
+testing, stability, etc., as follows:
+
+### Experimental
+
+    Functional completeness: No
+    Functional stability: Here be dragons
+    Interface stability: Not stable
+    Security supported: No
+
+### Tech Preview
+
+    Functional completeness: Yes
+    Functional stability: Quirky
+    Interface stability: Provisionally stable
+    Security supported: No
+
+### Supported
+
+    Functional completeness: Yes
+    Functional stability: Normal
+    Interface stability: Yes
+    Security supported: Yes
+
+### Deprecated
+
+    Functional completeness: Yes
+    Functional stability: Quirky
+    Interface stability: No (as in, may disappear the next release)
+    Security supported: Yes
+
+All of these may appear in modified form.  There are several
+interfaces, for instance, which are officially declared as not stable;
+in such a case the feature may be described as "Supported / Interface
+not stable".
+
+## Definition of the status label interpretation tags
+
+### Functional completeness
+
+Does it behave like a fully functional feature?
+Does it work on all expected platforms,
+or does it only work for a very specific sub-case?
+Does it have a sensible UI,
+or do you have to have a deep understanding of the internals
+to get it to work properly?
+
+### Functional stability
+
+What is the risk of it exhibiting bugs?
+
+General answers to the above:
+
+ * **Here be dragons**
+
+   Pretty likely to still crash / fail to work.
+   Not recommended unless you like life on the bleeding edge.
+
+ * **Quirky**
+
+   Mostly works but may have odd behavior here and there.
+   Recommended for playing around or for non-production use cases.
+
+ * **Normal**
+
+   Ready for production use
+
+### Interface stability
+
+If I build a system based on the current interfaces,
+will they still work when I upgrade to the next version?
+
+ * **Not stable**
+
+   Interface is still in the early stages and
+   still fairly likely to be broken in future updates.
+
+ * **Provisionally stable**
+
+   We're not yet promising backwards compatibility,
+   but we think this is probably the final form of the interface.
+   It may still require some tweaks.
+
+ * **Stable**
+
+   We will try very hard to avoid breaking backwards compatibility,
+   and to fix any regressions that are reported.
+
+### Security supported
+
+Will XSAs be issued if security-related bugs are discovered
+in the functionality?
+
+If "no",
+anyone who finds a security-related bug in the feature
+will be advised to
+post it publicly to the Xen Project mailing lists
+(or contact another security response team,
+if a relevant one exists).
+
+Bugs found after the end of **Security-Support-Until**
+in the Release Support section will receive an XSA
+if they also affect newer, security-supported, versions of Xen.
+However,
+the Xen Project will not provide official fixes
+for non-security-supported versions.
+
+Three common variations from the 'Supported' category
+are given the following labels:
+
+  * **Supported, Not security supported**
+
+    Functionally complete, normal stability,
+    interface stable, but no security support
+
+  * **Supported, Security support external**
+  
+    This feature is security supported
+    by a different organization (not the Xen Project).
+    Links to that organization's security process
+    will be given in the description.
+
+  * **Supported, with caveats**
+
+    This feature is security supported only under certain conditions,
+    or support is given only for certain aspects of the feature,
+    or the feature should be used with care
+    because it is easy to use insecurely without knowing it.
+    Additional details will be given in the description.
+
+### Interaction with other features
+
+Not all features interact well with all other features.
+Some features are only for HVM guests; some don't work with migration, &c.
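
As a rough illustration of the "machine-readable" claim above, the
status stanzas can be pulled out with a throwaway one-liner (a sketch,
not part of the patch; it assumes the four-space-indented
"Key, qualifier: value" layout used throughout):

    # print each feature heading together with its Status/Limit lines
    awk '/^### /{feat=substr($0,5)}
         /^    (Status|Limit)/{print feat ":" $0}' SUPPORT.md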