[v4,29/32] vfio-pci/zdev: add DTSM to clp group capability

Message ID 20220314194451.58266-30-mjrosato@linux.ibm.com
State New, archived
Series KVM: s390: enable zPCI for interpretive execution

Commit Message

Matthew Rosato March 14, 2022, 7:44 p.m. UTC
The DTSM, or designation type supported mask, indicates what IOAT formats
are available to the guest.  For an interpreted device, userspace will not
know what format(s) the IOAT assist supports, so pass it via the
capability chain.  Since the value belongs to the Query PCI Function Group
clp, let's extend the existing capability with a new version.

Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
---
 drivers/vfio/pci/vfio_pci_zdev.c | 12 ++++++++++--
 include/uapi/linux/vfio_zdev.h   |  3 +++
 2 files changed, 13 insertions(+), 2 deletions(-)

Comments

Jason Gunthorpe March 14, 2022, 9:49 p.m. UTC | #1
On Mon, Mar 14, 2022 at 03:44:48PM -0400, Matthew Rosato wrote:
> The DTSM, or designation type supported mask, indicates what IOAT formats
> are available to the guest.  For an interpreted device, userspace will not
> know what format(s) the IOAT assist supports, so pass it via the
> capability chain.  Since the value belongs to the Query PCI Function Group
> clp, let's extend the existing capability with a new version.

Why is this on the VFIO device?

Maybe I don't quite understand it right, but the IOAT is the
'userspace page table'?

That is something that should be modeled as a nested iommu domain.

Querying the formats and any control logic for this should be on the
iommu side not built into VFIO.

Jason

Matthew Rosato March 15, 2022, 2:39 p.m. UTC | #2
On 3/14/22 5:49 PM, Jason Gunthorpe wrote:
> On Mon, Mar 14, 2022 at 03:44:48PM -0400, Matthew Rosato wrote:
>> The DTSM, or designation type supported mask, indicates what IOAT formats
>> are available to the guest.  For an interpreted device, userspace will not
>> know what format(s) the IOAT assist supports, so pass it via the
>> capability chain.  Since the value belongs to the Query PCI Function Group
>> clp, let's extend the existing capability with a new version.
> 
> Why is this on the VFIO device?

Current vfio_pci_zdev support adds a series of capabilities to the
VFIO_DEVICE_GET_INFO capability chain.  These capabilities all carry
output values associated with what are basically s390x query instructions.

The capability being extended by this patch is used to populate a
response to the 'query this zPCI group' (CLP Query PCI Function Group)
instruction.
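
For reference, a minimal userspace sketch of how one of these
capabilities would be located on the VFIO_DEVICE_GET_INFO chain is
below; the find_zpci_group_cap() helper name and the simplified buffer
handling are illustrative only (a real caller would re-issue the ioctl
with the argsz value the kernel reports back):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <linux/vfio_zdev.h>

static struct vfio_device_info_cap_zpci_group *
find_zpci_group_cap(int device_fd, void *buf, size_t bufsz)
{
	struct vfio_device_info *info = buf;
	struct vfio_info_cap_header *hdr;
	__u32 off;

	memset(buf, 0, bufsz);
	info->argsz = bufsz;
	if (ioctl(device_fd, VFIO_DEVICE_GET_INFO, info) < 0)
		return NULL;
	if (!(info->flags & VFIO_DEVICE_FLAGS_CAPS) || !info->cap_offset)
		return NULL;

	/* Each header's 'next' field is a byte offset from the start of buf. */
	for (off = info->cap_offset; off; off = hdr->next) {
		hdr = (struct vfio_info_cap_header *)((char *)buf + off);
		if (hdr->id == VFIO_DEVICE_INFO_CAP_ZPCI_GROUP)
			return (struct vfio_device_info_cap_zpci_group *)hdr;
	}
	return NULL;
}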

> 
> Maybe I don't quite understand it right, but the IOAT is the
> 'userspace page table'?

IOAT = I/O Address Translation tables, the head of which is called the
IOTA (translation anchor).  But yes, this would generally refer to the
guest DMA tables.

Specifically, here we are only talking about the DTSM, which describes
the formats that the guest is allowed to use for its address translation
tables, because the hardware (or in our case the intermediary kvm iommu)
can only operate on certain formats.

> 
> That is something that should be modeled as a nested iommu domain.
> 
> Querying the formats and any control logic for this should be on the
> iommu side not built into VFIO.

I agree that the DTSM is really controlled by what the IOMMU domain can 
support (e.g. what guest table formats it can actually operate on) and 
so the DTSM value should come from there vs out of KVM; but is there 
harm in including the query response data here along with the rest of 
the response information for 'query this zPCI group'?

Jason Gunthorpe March 15, 2022, 2:56 p.m. UTC | #3
On Tue, Mar 15, 2022 at 10:39:18AM -0400, Matthew Rosato wrote:
> > That is something that should be modeled as a nested iommu domain.
> > 
> > Querying the formats and any control logic for this should be on the
> > iommu side not built into VFIO.
> 
> I agree that the DTSM is really controlled by what the IOMMU domain can
> support (e.g. what guest table formats it can actually operate on) and so
> the DTSM value should come from there vs out of KVM; but is there harm in
> including the query response data here along with the rest of the response
> information for 'query this zPCI group'?

'Harm'? No, but I think it is the wrong encapsulation and layering.

Jason

Patch

diff --git a/drivers/vfio/pci/vfio_pci_zdev.c b/drivers/vfio/pci/vfio_pci_zdev.c
index 4a653ce480c7..aadd2b58b822 100644
--- a/drivers/vfio/pci/vfio_pci_zdev.c
+++ b/drivers/vfio/pci/vfio_pci_zdev.c
@@ -13,6 +13,7 @@ 
 #include <linux/vfio_zdev.h>
 #include <asm/pci_clp.h>
 #include <asm/pci_io.h>
+#include <asm/kvm_pci.h>
 
 #include <linux/vfio_pci_core.h>
 
@@ -44,16 +45,23 @@  static int zpci_group_cap(struct zpci_dev *zdev, struct vfio_info_cap *caps)
 {
 	struct vfio_device_info_cap_zpci_group cap = {
 		.header.id = VFIO_DEVICE_INFO_CAP_ZPCI_GROUP,
-		.header.version = 1,
+		.header.version = 2,
 		.dasm = zdev->dma_mask,
 		.msi_addr = zdev->msi_addr,
 		.flags = VFIO_DEVICE_INFO_ZPCI_FLAG_REFRESH,
 		.mui = zdev->fmb_update,
 		.noi = zdev->max_msi,
 		.maxstbl = ZPCI_MAX_WRITE_SIZE,
-		.version = zdev->version
+		.version = zdev->version,
+		.dtsm = 0
 	};
 
+	/* Some values are different for interpreted devices */
+	if (zdev->kzdev) {
+		cap.maxstbl = zdev->maxstbl;
+		cap.dtsm = kvm_s390_pci_get_dtsm(zdev);
+	}
+
 	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
 }
 
diff --git a/include/uapi/linux/vfio_zdev.h b/include/uapi/linux/vfio_zdev.h
index 78c022af3d29..29351687e914 100644
--- a/include/uapi/linux/vfio_zdev.h
+++ b/include/uapi/linux/vfio_zdev.h
@@ -50,6 +50,9 @@  struct vfio_device_info_cap_zpci_group {
 	__u16 noi;		/* Maximum number of MSIs */
 	__u16 maxstbl;		/* Maximum Store Block Length */
 	__u8 version;		/* Supported PCI Version */
+	/* End of version 1 */
+	__u8 dtsm;		/* Supported IOAT Designations */
+	/* End of version 2 */
 };
 
 /**
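
The "End of version 1" / "End of version 2" comments mark where each
revision of the structure stops.  A hypothetical consumer that knows
about version 2 should check header.version before reading dtsm, since
a kernel that only implements version 1 does not provide the field at
all.  A sketch, reusing the illustrative find_zpci_group_cap() helper
from above:

	char buf[1024];	/* assumed large enough for the info struct + caps */
	struct vfio_device_info_cap_zpci_group *grp;
	__u8 dtsm = 0;

	grp = find_zpci_group_cap(device_fd, buf, sizeof(buf));
	if (grp && grp->header.version >= 2)
		dtsm = grp->dtsm;	/* supported IOAT designation types */
	/* header.version == 1: kernel predates dtsm, leave it as 0 */

Note that even a version 2 kernel reports dtsm = 0 when the device is
not being interpreted (no zdev->kzdev), so a value of 0 means "no DTSM
available" in either case.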