
[v3,04/15] iommufd: Overview documentation

Message ID 4-v3-402a7d6459de+24b-iommufd_jgg@nvidia.com (mailing list archive)
State Not Applicable
Delegated to: Netdev Maintainers
Series IOMMUFD Generic interface


Commit Message

Jason Gunthorpe Oct. 25, 2022, 6:12 p.m. UTC
From: Kevin Tian <kevin.tian@intel.com>

Add iommufd to the documentation tree.

Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 Documentation/userspace-api/index.rst   |   1 +
 Documentation/userspace-api/iommufd.rst | 222 ++++++++++++++++++++++++
 2 files changed, 223 insertions(+)
 create mode 100644 Documentation/userspace-api/iommufd.rst

Comments

Bagas Sanjaya Oct. 26, 2022, 4:17 a.m. UTC | #1
On Tue, Oct 25, 2022 at 03:12:13PM -0300, Jason Gunthorpe wrote:
> From: Kevin Tian <kevin.tian@intel.com>
> 
> Add iommufd to the documentation tree.
> 

Better say "Document overview to iommufd".

> diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
> new file mode 100644
> index 00000000000000..3e1856469d96dd
> --- /dev/null
> +++ b/Documentation/userspace-api/iommufd.rst
> @@ -0,0 +1,222 @@
> +.. SPDX-License-Identifier: GPL-2.0+
> +
> +=======
> +IOMMUFD
> +=======
> +
> +:Author: Jason Gunthorpe
> +:Author: Kevin Tian
> +
> +Overview
> +========
> +
> +IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
> +IO page tables that point at user space memory. It intends to be general and
> +consumable by any driver that wants to DMA to userspace. These drivers are
> +eventually expected to deprecate any internal IOMMU logic, if existing (e.g.
> +vfio_iommu_type1.c).
> +
> +At minimum iommufd provides a universal support of managing I/O address spaces
> +and I/O page tables for all IOMMUs, with room in the design to add non-generic
> +features to cater to specific hardware functionality.
> +
> +In this context the capital letter (IOMMUFD) refers to the subsystem while the
> +small letter (iommufd) refers to the file descriptors created via /dev/iommu to
> +run the user API over.
> +
> +Key Concepts
> +============
> +
> +User Visible Objects
> +--------------------
> +
> +Following IOMMUFD objects are exposed to userspace:
> +
> +- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS) allowing map/unmap
> +  of user space memory into ranges of I/O Virtual Address (IOVA).
> +
> +  The IOAS is a functional replacement for the VFIO container, and like the VFIO
> +  container copies its IOVA map to a list of iommu_domains held within it.
> +
> +- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
> +  external driver.
> +
> +- IOMMUFD_OBJ_HW_PAGETABLE, representing an actual hardware I/O page table (i.e.
> +  a single struct iommu_domain) managed by the iommu driver.
> +
> +  The IOAS has a list of HW_PAGETABLES that share the same IOVA mapping and the
> +  IOAS will synchronize its mapping with each member HW_PAGETABLE.
> +
> +All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
> +
> +Linkage between user-visible objects and external kernel datastructures are
> +reflected by the arrows, with numbers referring to certain
> +operations creating the objects and links::
> +
> +  _________________________________________________________
> + |                         iommufd                         |
> + |       [1]                                               |
> + |  _________________                                      |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |        [3]                 [2]      |
> + | |                 |    ____________         __________  |
> + | |      IOAS       |<--|            |<------|          | |
> + | |                 |   |HW_PAGETABLE|       |  DEVICE  | |
> + | |                 |   |____________|       |__________| |
> + | |                 |         |                   |       |
> + | |                 |         |                   |       |
> + | |                 |         |                   |       |
> + | |                 |         |                   |       |
> + | |                 |         |                   |       |
> + | |_________________|         |                   |       |
> + |         |                   |                   |       |
> + |_________|___________________|___________________|_______|
> +           |                   |                   |
> +           |              _____v______      _______v_____
> +           | PFN storage |            |    |             |
> +           |------------>|iommu_domain|    |struct device|
> +                         |____________|    |_____________|
> +
> +1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. One iommufd can
> +   hold multiple IOAS objects. IOAS is the most generic object and does not
> +   expose interfaces that are specific to single IOMMU drivers. All operations
> +   on the IOAS must operate equally on each of the iommu_domains that are inside
> +   it.
> +
> +2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
> +   to bind a device to an iommufd. The external driver is expected to implement
> +   proper uAPI for userspace to initiate the binding operation. Successful
> +   completion of this operation establishes the desired DMA ownership over the
> +   device. The external driver must set driver_managed_dma flag and must not
> +   touch the device until this operation succeeds.
> +
> +3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the IOMMUFD
> +   kAPI to attach a bound device to an IOAS. Similarly the external driver uAPI
> +   allows userspace to initiate the attaching operation. If a compatible
> +   pagetable already exists then it is reused for the attachment. Otherwise a
> +   new pagetable object (and a new iommu_domain) is created. Successful
> +   completion of this operation sets up the linkages among an IOAS, a device and
> +   an iommu_domain. Once this completes the device could do DMA.
> +
> +   Every iommu_domain inside the IOAS is also represented to userspace as a
> +   HW_PAGETABLE object.
> +
> +   NOTE: Future additions to IOMMUFD will provide an API to create and
> +   manipulate the HW_PAGETABLE directly.
> +
> +One device can only bind to one iommufd (due to DMA ownership claim) and attach
> +to at most one IOAS object (no support of PASID yet).
> +
> +Currently only PCI device is allowed.
> +
> +Kernel Datastructure
> +--------------------
> +
> +User visible objects are backed by following datastructures:
> +
> +- iommufd_ioas for IOMMUFD_OBJ_IOAS.
> +- iommufd_device for IOMMUFD_OBJ_DEVICE.
> +- iommufd_hw_pagetable for IOMMUFD_OBJ_HW_PAGETABLE.
> +
> +Several terminologies when looking at these datastructures:
> +
> +- Automatic domain, referring to an iommu domain created automatically when
> +  attaching a device to an IOAS object. This is compatible to the semantics of
> +  VFIO type1.
> +
> +- Manual domain, referring to an iommu domain designated by the user as the
> +  target pagetable to be attached to by a device. Though currently no user API
> +  for userspace to directly create such domain, the datastructure and algorithms
> +  are ready for that usage.
> +
> +- In-kernel user, referring to something like a VFIO mdev that is accessing the
> +  IOAS and using a 'struct page \*' for CPU based access. Such users require an
> +  isolation granularity smaller than what an iommu domain can afford. They must
> +  manually enforce the IOAS constraints on DMA buffers before those buffers can
> +  be accessed by mdev. Though no kernel API for an external driver to bind a
> +  mdev, the datastructure and algorithms are ready for such usage.
> +
> +iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
> +mapped to memory pages, composed of:
> +
> +- struct io_pagetable holding the IOVA map
> +- struct iopt_areas representing populated portions of IOVA
> +- struct iopt_pages representing the storage of PFNs
> +- struct iommu_domain representing the IO page table in the IOMMU
> +- struct iopt_pages_access representing in-kernel users of PFNs
> +- struct xarray pinned_pfns holding a list of pages pinned by
> +   in-kernel Users
> +
> +Each iopt_pages represents a logical linear array of full PFNs.  The PFNs are
> +ultimately derived from userspace VAs via an mm_struct. Once they have been
> +pinned the PFN is stored in an iommu_domain's IOPTEs or inside the pinned_pages
> +xarray if they are being "software accessed".
> +
> +PFNs have to be copied between all combinations of storage locations, depending
> +on what domains are present and what kinds of in-kernel "software access" users
> +exist. The mechanism ensures that a page is pinned only once.
> +
> +An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
> +list of iommu_domains that mirror the IOVA to PFN map.
> +
> +Multiple io_pagetable's, through their iopt_area's, can share a single
> +iopt_pages which avoids multi-pinning and double accounting of page consumption.
> +
> +iommufd_ioas is sharable between subsystems, e.g. VFIO and VDPA, as long as
> +devices managed by different subsystems are bound to a same iommufd.
> +
> +IOMMUFD User API
> +================
> +
> +.. kernel-doc:: include/uapi/linux/iommufd.h
> +
> +IOMMUFD Kernel API
> +==================
> +
> +The IOMMUFD kAPI is device-centric with group-related tricks managed behind the
> +scene. This allows the external driver calling such kAPI to implement a simple
> +device-centric uAPI for connecting its device to an iommufd, instead of
> +explicitly imposing the group semantics in its uAPI (as VFIO does).
> +
> +.. kernel-doc:: drivers/iommu/iommufd/device.c
> +   :export:
> +
> +VFIO and IOMMUFD
> +----------------
> +
> +Connecting a VFIO device to iommufd can be done in two approaches.
> +
> +First is a VFIO compatible way by directly implementing the /dev/vfio/vfio
> +container IOCTLs by mapping them into io_pagetable operations. Doing so allows
> +the use of iommufd in legacy VFIO applications by symlinking /dev/vfio/vfio to
> +/dev/iommufd or extending VFIO to SET_CONTAINER using an iommufd instead of a
> +container fd.
> +
> +The second approach directly extends VFIO to support a new set of device-centric
> +user API based on aforementioned IOMMUFD kernel API. It requires userspace
> +change but better matches the IOMMUFD API semantics and easier to support new
> +iommufd features when comparing it to the first approach.
> +
> +Currently both approaches are still work-in-progress.
> +
> +There are still a few gaps to be resolved to catch up with VFIO type1, as
> +documented in iommufd_vfio_check_extension().
> +
> +Future TODOs
> +============
> +
> +Currently IOMMUFD supports only kernel-managed I/O page table, similar to VFIO
> +type1. New features on the radar include:
> +
> + - Binding iommu_domain's to PASID/SSID
> + - Userspace page tables, for ARM, x86 and S390
> + - Kernel bypass'd invalidation of user page tables
> + - Re-use of the KVM page table in the IOMMU
> + - Dirty page tracking in the IOMMU
> + - Runtime Increase/Decrease of IOPTE size
> + - PRI support with faults resolved in userspace

What are "external driver"? Device drivers (most likely)? This is the
first time I hear the term.

What about the wording below instead?

---- >8 ----

diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
index 3e1856469d96dd..49fda5f706ff58 100644
--- a/Documentation/userspace-api/iommufd.rst
+++ b/Documentation/userspace-api/iommufd.rst
@@ -10,19 +10,19 @@ IOMMUFD
 Overview
 ========
 
-IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
-IO page tables that point at user space memory. It intends to be general and
-consumable by any driver that wants to DMA to userspace. These drivers are
-eventually expected to deprecate any internal IOMMU logic, if existing (e.g.
+IOMMUFD is the user API to control the IOMMU subsystem as it relates to
+managing IO page tables using file descriptors. It intends to be general and
+consumable by any driver that wants to expose DMA to userspace. These drivers
+are eventually expected to deprecate any internal IOMMU logic if exists (e.g.
 vfio_iommu_type1.c).
 
-At minimum iommufd provides a universal support of managing I/O address spaces
+At minimum iommufd provides universal support of managing I/O address spaces
 and I/O page tables for all IOMMUs, with room in the design to add non-generic
 features to cater to specific hardware functionality.
 
 In this context the capital letter (IOMMUFD) refers to the subsystem while the
-small letter (iommufd) refers to the file descriptors created via /dev/iommu to
-run the user API over.
+small letter (iommufd) refers to the file descriptors created via /dev/iommu
+for use by uAPI.
 
 Key Concepts
 ============
@@ -32,26 +32,26 @@ User Visible Objects
 
 Following IOMMUFD objects are exposed to userspace:
 
-- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS) allowing map/unmap
-  of user space memory into ranges of I/O Virtual Address (IOVA).
+- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS), allowing
+  map/unmap of user space memory into ranges of I/O Virtual Address (IOVA).
 
-  The IOAS is a functional replacement for the VFIO container, and like the VFIO
-  container copies its IOVA map to a list of iommu_domains held within it.
+  The IOAS is a functional replacement for the VFIO container, and like the
+  VFIO container it copies IOVA map to a list of iommu_domains held within it.
 
-- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
+- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by the
   external driver.
 
-- IOMMUFD_OBJ_HW_PAGETABLE, representing an actual hardware I/O page table (i.e.
-  a single struct iommu_domain) managed by the iommu driver.
+- IOMMUFD_OBJ_HW_PAGETABLE, representing an actual hardware I/O page table
+  (i.e. a single struct iommu_domain) managed by the iommu driver.
 
-  The IOAS has a list of HW_PAGETABLES that share the same IOVA mapping and the
-  IOAS will synchronize its mapping with each member HW_PAGETABLE.
+  The IOAS has a list of HW_PAGETABLES that share the same IOVA mapping and
+  it will synchronize its mapping with each member HW_PAGETABLE.
 
 All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
 
-Linkage between user-visible objects and external kernel datastructures are
-reflected by the arrows, with numbers referring to certain
-operations creating the objects and links::
+The diagram below shows relationship between user-visible objects and kernel
+datastructures (external to iommufd), with numbers referred to operations
+creating the objects and links::
 
   _________________________________________________________
  |                         iommufd                         |
@@ -82,37 +82,38 @@ operations creating the objects and links::
            |------------>|iommu_domain|    |struct device|
                          |____________|    |_____________|
 
-1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. One iommufd can
+1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. A iommufd can
    hold multiple IOAS objects. IOAS is the most generic object and does not
    expose interfaces that are specific to single IOMMU drivers. All operations
-   on the IOAS must operate equally on each of the iommu_domains that are inside
-   it.
+   on the IOAS must operate equally on each of the iommu_domains inside of it.
 
 2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
-   to bind a device to an iommufd. The external driver is expected to implement
-   proper uAPI for userspace to initiate the binding operation. Successful
-   completion of this operation establishes the desired DMA ownership over the
-   device. The external driver must set driver_managed_dma flag and must not
-   touch the device until this operation succeeds.
+   to bind a device to an iommufd. The driver is expected to implement
+   proper uAPI to initiate the binding operation. Successful completion of
+   this operation establishes the desired DMA ownership over the device. The
+   driver must also set driver_managed_dma flag and must not touch the device
+   until this operation succeeds.
 
-3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the IOMMUFD
-   kAPI to attach a bound device to an IOAS. Similarly the external driver uAPI
-   allows userspace to initiate the attaching operation. If a compatible
-   pagetable already exists then it is reused for the attachment. Otherwise a
-   new pagetable object (and a new iommu_domain) is created. Successful
-   completion of this operation sets up the linkages among an IOAS, a device and
-   an iommu_domain. Once this completes the device could do DMA.
+3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the
+   IOMMUFD kAPI to attach a bounded device to an IOAS. Similarly the external
+   driver uAPI allows userspace to initiate the attaching operation. If a
+   compatible pagetable already exists then it is reused for the attachment.
+   Otherwise a new pagetable object and iommu_domain is created. Successful
+   completion of this operation sets up the linkages among IOAS, device and
+   iommu_domain. Once this completes the device could do DMA.
 
    Every iommu_domain inside the IOAS is also represented to userspace as a
    HW_PAGETABLE object.
 
-   NOTE: Future additions to IOMMUFD will provide an API to create and
-   manipulate the HW_PAGETABLE directly.
+   .. note::
 
-One device can only bind to one iommufd (due to DMA ownership claim) and attach
+      Future IOMMUFD updates will provide an API to create and
+      manipulate the HW_PAGETABLE directly.
+
+A device can only bind to an iommufd due to DMA ownership claim and attach
 to at most one IOAS object (no support of PASID yet).
 
-Currently only PCI device is allowed.
+Currently only PCI device is allowed to use IOMMUFD.
 
 Kernel Datastructure
 --------------------
@@ -125,21 +126,21 @@ User visible objects are backed by following datastructures:
 
 Several terminologies when looking at these datastructures:
 
-- Automatic domain, referring to an iommu domain created automatically when
+- Automatic domain - refers to an iommu domain created automatically when
   attaching a device to an IOAS object. This is compatible to the semantics of
   VFIO type1.
 
-- Manual domain, referring to an iommu domain designated by the user as the
-  target pagetable to be attached to by a device. Though currently no user API
-  for userspace to directly create such domain, the datastructure and algorithms
-  are ready for that usage.
+- Manual domain - refers to an iommu domain designated by the user as the
+  target pagetable to be attached to by a device. Though currently there are
+  no uAPIs to directly create such domain, the datastructure and algorithms
+  are ready for handling that use case.
 
-- In-kernel user, referring to something like a VFIO mdev that is accessing the
-  IOAS and using a 'struct page \*' for CPU based access. Such users require an
+- In-kernel user - refers to something like a VFIO mdev that is accessing the
+  IOAS and using a 'struct page \*' for CPU based access. Such users require
   isolation granularity smaller than what an iommu domain can afford. They must
   manually enforce the IOAS constraints on DMA buffers before those buffers can
-  be accessed by mdev. Though no kernel API for an external driver to bind a
-  mdev, the datastructure and algorithms are ready for such usage.
+  be accessed by mdev. Although there are no kernel drivers APIs to bind a
+  mdev, the datastructure and algorithms are ready for handling that use case.
 
 iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
 mapped to memory pages, composed of:
@@ -149,13 +150,12 @@ mapped to memory pages, composed of:
 - struct iopt_pages representing the storage of PFNs
 - struct iommu_domain representing the IO page table in the IOMMU
 - struct iopt_pages_access representing in-kernel users of PFNs
-- struct xarray pinned_pfns holding a list of pages pinned by
-   in-kernel Users
+- struct xarray pinned_pfns holding a list of pages pinned by in-kernel users
 
-Each iopt_pages represents a logical linear array of full PFNs.  The PFNs are
+Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
 ultimately derived from userspace VAs via an mm_struct. Once they have been
-pinned the PFN is stored in an iommu_domain's IOPTEs or inside the pinned_pages
-xarray if they are being "software accessed".
+pinned the PFN is stored in IOPTEs of the iommu_domain or inside the
+pinned_pages xarray if they are being "software accessed".
 
 PFNs have to be copied between all combinations of storage locations, depending
 on what domains are present and what kinds of in-kernel "software access" users
@@ -164,8 +164,9 @@ exist. The mechanism ensures that a page is pinned only once.
 An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
 list of iommu_domains that mirror the IOVA to PFN map.
 
-Multiple io_pagetable's, through their iopt_area's, can share a single
-iopt_pages which avoids multi-pinning and double accounting of page consumption.
+Multiple io_pagetable-s, through their iopt_area-s, can share a single
+iopt_pages which avoids multi-pinning and double accounting of page
+consumption.
 
 iommufd_ioas is sharable between subsystems, e.g. VFIO and VDPA, as long as
 devices managed by different subsystems are bound to a same iommufd.
@@ -179,9 +180,9 @@ IOMMUFD Kernel API
 ==================
 
 The IOMMUFD kAPI is device-centric with group-related tricks managed behind the
-scene. This allows the external driver calling such kAPI to implement a simple
+scene. This allows the external drivers calling such kAPI to implement a simple
 device-centric uAPI for connecting its device to an iommufd, instead of
-explicitly imposing the group semantics in its uAPI (as VFIO does).
+explicitly imposing the group semantics in its uAPI as VFIO does.
 
 .. kernel-doc:: drivers/iommu/iommufd/device.c
    :export:
@@ -189,7 +190,7 @@ explicitly imposing the group semantics in its uAPI (as VFIO does).
 VFIO and IOMMUFD
 ----------------
 
-Connecting a VFIO device to iommufd can be done in two approaches.
+Connecting a VFIO device to iommufd can be done in two ways.
 
 First is a VFIO compatible way by directly implementing the /dev/vfio/vfio
 container IOCTLs by mapping them into io_pagetable operations. Doing so allows
@@ -197,10 +198,9 @@ the use of iommufd in legacy VFIO applications by symlinking /dev/vfio/vfio to
 /dev/iommufd or extending VFIO to SET_CONTAINER using an iommufd instead of a
 container fd.
 
-The second approach directly extends VFIO to support a new set of device-centric
-user API based on aforementioned IOMMUFD kernel API. It requires userspace
-change but better matches the IOMMUFD API semantics and easier to support new
-iommufd features when comparing it to the first approach.
+The second approach directly extends VFIO to support a new set of
+device-centric user API based on aforementioned IOMMUFD kernel API. It requires userspace change but better matches the IOMMUFD API semantics and easier to
+support new iommufd features when comparing it to the first approach.
 
 Currently both approaches are still work-in-progress.
 
Thanks.
Jason Gunthorpe Oct. 28, 2022, 7:09 p.m. UTC | #2
On Wed, Oct 26, 2022 at 11:17:24AM +0700, Bagas Sanjaya wrote:
> > + - Binding iommu_domain's to PASID/SSID
> > + - Userspace page tables, for ARM, x86 and S390
> > + - Kernel bypass'd invalidation of user page tables
> > + - Re-use of the KVM page table in the IOMMU
> > + - Dirty page tracking in the IOMMU
> > + - Runtime Increase/Decrease of IOPTE size
> > + - PRI support with faults resolved in userspace
> 
> What are "external driver"? Device drivers (most likely)? This is the
> first time I hear the term.

iommufd sits between two drivers: we have the iommu subsystem driver,
and we have the "external" driver, which would be the subsystem driver
using the iommufd kAPI.
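
For illustration, the bind/attach flow such an external driver runs
looks roughly like the sketch below; the kAPI names follow this series,
but treat the exact signatures as assumptions rather than the final
interface (ictx, dev and ioas_id are assumed to come from the driver's
own context):

  struct iommufd_device *idev;
  u32 device_id, pt_id = ioas_id; /* IOAS id chosen by userspace */
  int rc;

  /* Claims DMA ownership of the physical device */
  idev = iommufd_device_bind(ictx, dev, &device_id);
  if (IS_ERR(idev))
          return PTR_ERR(idev);

  /* Creates or reuses a HW_PAGETABLE linking the device to the IOAS */
  rc = iommufd_device_attach(idev, &pt_id);
  if (rc) {
          iommufd_device_unbind(idev);
          return rc;
  }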

> -IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
> -IO page tables that point at user space memory. It intends to be general and
> -consumable by any driver that wants to DMA to userspace. These drivers are
> -eventually expected to deprecate any internal IOMMU logic, if existing (e.g.
> +IOMMUFD is the user API to control the IOMMU subsystem as it relates to
> +managing IO page tables using file descriptors. It intends to be general and
                
I added "from userspace":

 IO page tables from userspace using file descriptors. It intends to be general

>  In this context the capital letter (IOMMUFD) refers to the subsystem while the
> -small letter (iommufd) refers to the file descriptors created via /dev/iommu to
> -run the user API over.
> +small letter (iommufd) refers to the file descriptors created via /dev/iommu
> +for use by uAPI.

"use by userspace", uaPI reads weird

  
>  Key Concepts
>  ============
> @@ -32,26 +32,26 @@ User Visible Objects
>  
>  Following IOMMUFD objects are exposed to userspace:
>  
> -- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS) allowing map/unmap
> -  of user space memory into ranges of I/O Virtual Address (IOVA).
> +- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS), allowing
> +  map/unmap of user space memory into ranges of I/O Virtual Address (IOVA).
>  
> -  The IOAS is a functional replacement for the VFIO container, and like the VFIO
> -  container copies its IOVA map to a list of iommu_domains held within it.
> +  The IOAS is a functional replacement for the VFIO container, and like the
> +  VFIO container it copies IOVA map to a list of iommu_domains held within it.

"it copies IOVA map" is not good grammar, how about

  VFIO container it copies an IOVA map to a list of iommu_domains held within it.

> -- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
> +- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by the
>    external driver.

I would say 'an' is correct here since there can be multiple external
drivers using iommufd.

> -1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. One iommufd can
> +1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. A iommufd can

"an iommufd"

>  2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
> -   to bind a device to an iommufd. The external driver is expected to implement
> -   proper uAPI for userspace to initiate the binding operation. 

"uAPI" -> "set of ioctls":

The driver is expected to implement a proper
set of ioctls to allow userspace to initiate the binding operation.

> +3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the
> +   IOMMUFD kAPI to attach a bounded device to an IOAS. Similarly the external

I think bound is correct here

> -- In-kernel user, referring to something like a VFIO mdev that is accessing the
> -  IOAS and using a 'struct page \*' for CPU based access. Such users require an
> +- In-kernel user - refers to something like a VFIO mdev that is accessing the
> +  IOAS and using a 'struct page \*' for CPU based access. Such users require
>    isolation granularity smaller than what an iommu domain can afford. They must
>    manually enforce the IOAS constraints on DMA buffers before those buffers can
> -  be accessed by mdev. Though no kernel API for an external driver to bind a
> -  mdev, the datastructure and algorithms are ready for such usage.
> +  be accessed by mdev. Although there are no kernel drivers APIs to bind a
> +  mdev, the datastructure and algorithms are ready for handling that use case.

Ah, this is outdated; we now have a kernel API to bind mdev drivers:

- In-kernel user - refers to something like a VFIO mdev that is using the
  IOMMUFD access interface to access the IOAS. This starts by creating an
  iommufd_access object that is similar to the domain binding a physical device
  would do. The access object will then allow converting IOVA ranges into struct
  page * lists, or doing direct read/write to an IOVA.
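
As a sketch of that access interface: an mdev-style driver that has
created its iommufd_access object would do something like the below.
The function names match the access kAPI this refers to, but the exact
signatures here are assumptions since the series was still evolving:

  /* Pin an IOVA range and collect the backing struct page pointers */
  rc = iommufd_access_pin_pages(access, iova, length, pages, flags);

  /* Or copy to/from an IOVA without holding pins */
  rc = iommufd_access_rw(access, iova, buffer, len, flags);

  /* Drop the pins once the mdev is done with the memory */
  iommufd_access_unpin_pages(access, iova, length);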

> -Each iopt_pages represents a logical linear array of full PFNs.  The PFNs are
> +Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
>  ultimately derived from userspave VAs via an mm_struct. Once they have been
> -pinned the PFN is stored in an iommu_domain's IOPTEs or inside the pinned_pages
> -xarray if they are being "software accessed".
> +pinned the PFN is stored in IOPTEs of the iommu_domain or inside the

> -Multiple io_pagetable's, through their iopt_area's, can share a single
> -iopt_pages which avoids multi-pinning and double accounting of page consumption.
> +Multiple io_pagetable-s, through their iopt_area-s, can share a single
> +iopt_pages which avoids multi-pinning and double accounting of page
> +consumption.

I've never seen the use of - to pluralize like this before??

I took substantially all of these edits, aside from the notes above

Thanks!
Jason

diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
index 3e1856469d96dd..64a135f3055adc 100644
--- a/Documentation/userspace-api/iommufd.rst
+++ b/Documentation/userspace-api/iommufd.rst
@@ -11,18 +11,18 @@ Overview
========

IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
IO page tables [-that point at user space memory.-]{+from userspace using file descriptors.+} It intends to be general
and consumable by any driver that wants to {+expose+} DMA to userspace. These
drivers are eventually expected to deprecate any internal IOMMU [-logic,-]{+logic+} if [-existing-]{+exists+}
(e.g. vfio_iommu_type1.c).

At minimum iommufd provides[-a-] universal support of managing I/O address spaces and
I/O page tables for all IOMMUs, with room in the design to add non-generic
features to cater to specific hardware functionality.

In this context the capital letter (IOMMUFD) refers to the subsystem while the
small letter (iommufd) refers to the file descriptors created via /dev/iommu [-to-]
[-run the user API over.-]{+for+}
{+use by userspace.+}

Key Concepts
============
@@ -32,26 +32,26 @@ User Visible Objects

Following IOMMUFD objects are exposed to userspace:

- IOMMUFD_OBJ_IOAS, representing an I/O address space [-(IOAS)-]{+(IOAS),+} allowing map/unmap
  of user space memory into ranges of I/O Virtual Address (IOVA).

  The IOAS is a functional replacement for the VFIO container, and like the VFIO
  container {+it+} copies [-its-]{+an+} IOVA map to a list of iommu_domains held within it.

- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
  external driver.

- IOMMUFD_OBJ_HW_PAGETABLE, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by the iommu driver.

  The IOAS has a list of HW_PAGETABLES that share the same IOVA mapping and
  [-the-]
[-  IOAS-]{+it+} will synchronize its mapping with each member HW_PAGETABLE.

All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.

[-Linkage-]{+The diagram below shows relationship+} between user-visible objects and[-external-] kernel
datastructures [-are-]
[-reflected by the arrows,-]{+(external to iommufd),+} with numbers [-referring-]{+referred+} to[-certain-] operations
creating the objects and links::

  _________________________________________________________
 |                         iommufd                         |
@@ -82,37 +82,38 @@ operations creating the objects and links::
           |------------>|iommu_domain|    |struct device|
                         |____________|    |_____________|

1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. [-One-]{+An+} iommufd can
   hold multiple IOAS objects. IOAS is the most generic object and does not
   expose interfaces that are specific to single IOMMU drivers. All operations
   on the IOAS must operate equally on each of the iommu_domains[-that are-] inside {+of+} it.

2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
   to bind a device to an iommufd. The[-external-] driver is expected to implement [-proper uAPI for-]{+a proper+}
{+   set of ioctls to allow+} userspace to initiate the binding operation.
   Successful completion of this operation establishes the desired DMA ownership
   over the device. The[-external-] driver must {+also+} set {+the+} driver_managed_dma flag and
   must not touch the device until this operation succeeds.

3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the IOMMUFD
   kAPI to attach a bound device to an IOAS. Similarly the external driver uAPI
   allows userspace to initiate the attaching operation. If a compatible
   pagetable already exists then it is reused for the attachment. Otherwise a
   new pagetable object [-(and a new iommu_domain)-]{+and iommu_domain+} is created. Successful completion of
   this operation sets up the linkages among[-an-] IOAS,[-a-] device and[-an-] iommu_domain. Once
   this completes the device could do DMA.

   Every iommu_domain inside the IOAS is also represented to userspace as a
   HW_PAGETABLE object.

   [-NOTE: Future additions to IOMMUFD will provide an API to create and-]
[-   manipulate the HW_PAGETABLE directly.-]{+.. note::+}

      [-One device can only bind-]{+Future IOMMUFD updates will provide an API+} to [-one iommufd (due to DMA ownership claim)-]{+create+} and [-attach-]
[-to at most one IOAS object (no support of PASID yet).-]{+manipulate the+}
{+      HW_PAGETABLE directly.+}

{+A device can only bind to an iommufd due to DMA ownership claim and attach to at+}
{+most one IOAS object (no support of PASID yet).+}

Currently only PCI device is [-allowed.-]{+allowed to use IOMMUFD.+}

Kernel Datastructure
--------------------
@@ -125,21 +126,20 @@ User visible objects are backed by following datastructures:

Several terminologies when looking at these datastructures:

- Automatic [-domain, referring-]{+domain - refers+} to an iommu domain created automatically when
  attaching a device to an IOAS object. This is compatible to the semantics of
  VFIO type1.

- Manual [-domain, referring-]{+domain - refers+} to an iommu domain designated by the user as the
  target pagetable to be attached to by a device. Though currently {+there are+}
  no [-user API-]
[-  for userspace-]{+uAPIs+} to directly create such domain, the datastructure and algorithms
  are ready for {+handling+} that [-usage.-]{+use case.+}

- In-kernel [-user, referring-]{+user - refers+} to something like a VFIO mdev that is[-accessing the-]
[-  IOAS and-] using[-a 'struct page \*' for CPU based access. Such users require an-]
[-  isolation granularity smaller than what an iommu domain can afford. They must-]
[-  manually enforce-] the
  [-IOAS constraints on DMA buffers before those buffers can-]
[-  be accessed-]{+IOMMUFD access interface to access the IOAS. This starts+} by [-mdev. Though no kernel API for-]{+creating+} an
  [-external driver-]{+iommufd_access object that is similar+} to[-bind a-]
[-  mdev,-] the [-datastructure and algorithms are ready for such usage.-]{+domain binding a physical device+}
{+  would do. The access object will then allow converting IOVA ranges into struct+}
{+  page * lists, or doing direct read/write to an IOVA.+}

iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
mapped to memory pages, composed of:
@@ -149,13 +149,12 @@ mapped to memory pages, composed of:
- struct iopt_pages representing the storage of PFNs
- struct iommu_domain representing the IO page table in the IOMMU
- struct iopt_pages_access representing in-kernel users of PFNs
- struct xarray pinned_pfns holding a list of pages pinned by in-kernel [-Users-]{+users+}

Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
ultimately derived from userspace VAs via an mm_struct. Once they have been
pinned the PFN is stored in[-an iommu_domain's-] IOPTEs {+of an iommu_domain+} or inside the pinned_pages
xarray if they [-are being "software accessed".-]{+have been pinned through an iommufd_access.+}

PFNs have to be copied between all combinations of storage locations, depending
on what domains are present and what kinds of in-kernel "software access" users
@@ -164,8 +163,9 @@ exist. The mechanism ensures that a page is pinned only once.
An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
list of iommu_domains that mirror the IOVA to PFN map.

Multiple [-io_pagetable's,-]{+io_pagetable-s,+} through their [-iopt_area's,-]{+iopt_area-s,+} can share a single
iopt_pages which avoids multi-pinning and double accounting of page
consumption.

iommufd_ioas is sharable between subsystems, e.g. VFIO and VDPA, as long as
devices managed by different subsystems are bound to a same iommufd.
@@ -179,9 +179,9 @@ IOMMUFD Kernel API
==================

The IOMMUFD kAPI is device-centric with group-related tricks managed behind the
scene. This allows the external [-driver-]{+drivers+} calling such kAPI to implement a simple
device-centric uAPI for connecting its device to an iommufd, instead of
explicitly imposing the group semantics in its uAPI [-(as-]{+as+} VFIO [-does).-]{+does.+}

.. kernel-doc:: drivers/iommu/iommufd/device.c
   :export:
@@ -189,7 +189,7 @@ explicitly imposing the group semantics in its uAPI (as VFIO does).
VFIO and IOMMUFD
----------------

Connecting a VFIO device to iommufd can be done in two [-approaches.-]{+ways.+}

First is a VFIO compatible way by directly implementing the /dev/vfio/vfio
container IOCTLs by mapping them into io_pagetable operations. Doing so allows

Patch

diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
index c78da9ce0ec44e..f16337bdb8520f 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -25,6 +25,7 @@  place where this information is gathered.
    ebpf/index
    ioctl/index
    iommu
+   iommufd
    media/index
    netlink/index
    sysfs-platform_profile
diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
new file mode 100644
index 00000000000000..3e1856469d96dd
--- /dev/null
+++ b/Documentation/userspace-api/iommufd.rst
@@ -0,0 +1,222 @@ 
+.. SPDX-License-Identifier: GPL-2.0+
+
+=======
+IOMMUFD
+=======
+
+:Author: Jason Gunthorpe
+:Author: Kevin Tian
+
+Overview
+========
+
+IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
+IO page tables that point at user space memory. It intends to be general and
+consumable by any driver that wants to DMA to userspace. These drivers are
+eventually expected to deprecate any internal IOMMU logic, if existing (e.g.
+vfio_iommu_type1.c).
+
+At minimum iommufd provides a universal support of managing I/O address spaces
+and I/O page tables for all IOMMUs, with room in the design to add non-generic
+features to cater to specific hardware functionality.
+
+In this context the capital letter (IOMMUFD) refers to the subsystem while the
+small letter (iommufd) refers to the file descriptors created via /dev/iommu to
+run the user API over.
+
+Key Concepts
+============
+
+User Visible Objects
+--------------------
+
+Following IOMMUFD objects are exposed to userspace:
+
+- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS) allowing map/unmap
+  of user space memory into ranges of I/O Virtual Address (IOVA).
+
+  The IOAS is a functional replacement for the VFIO container, and like the VFIO
+  container copies its IOVA map to a list of iommu_domains held within it.
+
+- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
+  external driver.
+
+- IOMMUFD_OBJ_HW_PAGETABLE, representing an actual hardware I/O page table (i.e.
+  a single struct iommu_domain) managed by the iommu driver.
+
+  The IOAS has a list of HW_PAGETABLES that share the same IOVA mapping and the
+  IOAS will synchronize its mapping with each member HW_PAGETABLE.
+
+All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
+
+Linkage between user-visible objects and external kernel datastructures are
+reflected by the arrows, with numbers referring to certain
+operations creating the objects and links::
+
+  _________________________________________________________
+ |                         iommufd                         |
+ |       [1]                                               |
+ |  _________________                                      |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |        [3]                 [2]      |
+ | |                 |    ____________         __________  |
+ | |      IOAS       |<--|            |<------|          | |
+ | |                 |   |HW_PAGETABLE|       |  DEVICE  | |
+ | |                 |   |____________|       |__________| |
+ | |                 |         |                   |       |
+ | |                 |         |                   |       |
+ | |                 |         |                   |       |
+ | |                 |         |                   |       |
+ | |                 |         |                   |       |
+ | |_________________|         |                   |       |
+ |         |                   |                   |       |
+ |_________|___________________|___________________|_______|
+           |                   |                   |
+           |              _____v______      _______v_____
+           | PFN storage |            |    |             |
+           |------------>|iommu_domain|    |struct device|
+                         |____________|    |_____________|
+
+1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. One iommufd can
+   hold multiple IOAS objects. IOAS is the most generic object and does not
+   expose interfaces that are specific to single IOMMU drivers. All operations
+   on the IOAS must operate equally on each of the iommu_domains that are inside
+   it.
+
+2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
+   to bind a device to an iommufd. The external driver is expected to implement
+   proper uAPI for userspace to initiate the binding operation. Successful
+   completion of this operation establishes the desired DMA ownership over the
+   device. The external driver must set driver_managed_dma flag and must not
+   touch the device until this operation succeeds.
+
+3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the IOMMUFD
+   kAPI to attach a bound device to an IOAS. Similarly the external driver uAPI
+   allows userspace to initiate the attaching operation. If a compatible
+   pagetable already exists then it is reused for the attachment. Otherwise a
+   new pagetable object (and a new iommu_domain) is created. Successful
+   completion of this operation sets up the linkages among an IOAS, a device and
+   an iommu_domain. Once this completes the device could do DMA.
+
+   Every iommu_domain inside the IOAS is also represented to userspace as a
+   HW_PAGETABLE object.
+
+   NOTE: Future additions to IOMMUFD will provide an API to create and
+   manipulate the HW_PAGETABLE directly.
+
+One device can only bind to one iommufd (due to DMA ownership claim) and attach
+to at most one IOAS object (no support of PASID yet).
+
+Currently only PCI device is allowed.
+
+Kernel Datastructure
+--------------------
+
+User visible objects are backed by following datastructures:
+
+- iommufd_ioas for IOMMUFD_OBJ_IOAS.
+- iommufd_device for IOMMUFD_OBJ_DEVICE.
+- iommufd_hw_pagetable for IOMMUFD_OBJ_HW_PAGETABLE.
+
+Several terminologies when looking at these datastructures:
+
+- Automatic domain, referring to an iommu domain created automatically when
+  attaching a device to an IOAS object. This is compatible to the semantics of
+  VFIO type1.
+
+- Manual domain, referring to an iommu domain designated by the user as the
+  target pagetable to be attached to by a device. Though currently no user API
+  for userspace to directly create such domain, the datastructure and algorithms
+  are ready for that usage.
+
+- In-kernel user, referring to something like a VFIO mdev that is accessing the
+  IOAS and using a 'struct page \*' for CPU based access. Such users require an
+  isolation granularity smaller than what an iommu domain can afford. They must
+  manually enforce the IOAS constraints on DMA buffers before those buffers can
+  be accessed by mdev. Though no kernel API for an external driver to bind a
+  mdev, the datastructure and algorithms are ready for such usage.
+
+iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
+mapped to memory pages, composed of:
+
+- struct io_pagetable holding the IOVA map
+- struct iopt_areas representing populated portions of IOVA
+- struct iopt_pages representing the storage of PFNs
+- struct iommu_domain representing the IO page table in the IOMMU
+- struct iopt_pages_access representing in-kernel users of PFNs
+- struct xarray pinned_pfns holding a list of pages pinned by
+   in-kernel Users
+
+Each iopt_pages represents a logical linear array of full PFNs.  The PFNs are
+ultimately derived from userspace VAs via an mm_struct. Once they have been
+pinned the PFN is stored in an iommu_domain's IOPTEs or inside the pinned_pages
+xarray if they are being "software accessed".
+
+PFNs have to be copied between all combinations of storage locations, depending
+on what domains are present and what kinds of in-kernel "software access" users
+exist. The mechanism ensures that a page is pinned only once.
+
+An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
+list of iommu_domains that mirror the IOVA to PFN map.
+
+Multiple io_pagetable's, through their iopt_area's, can share a single
+iopt_pages which avoids multi-pinning and double accounting of page consumption.
+
+iommufd_ioas is sharable between subsystems, e.g. VFIO and VDPA, as long as
+devices managed by different subsystems are bound to a same iommufd.
+
+IOMMUFD User API
+================
+
+.. kernel-doc:: include/uapi/linux/iommufd.h
+
+IOMMUFD Kernel API
+==================
+
+The IOMMUFD kAPI is device-centric with group-related tricks managed behind the
+scene. This allows the external driver calling such kAPI to implement a simple
+device-centric uAPI for connecting its device to an iommufd, instead of
+explicitly imposing the group semantics in its uAPI (as VFIO does).
+
+.. kernel-doc:: drivers/iommu/iommufd/device.c
+   :export:
+
+VFIO and IOMMUFD
+----------------
+
+Connecting a VFIO device to iommufd can be done in two approaches.
+
+First is a VFIO compatible way by directly implementing the /dev/vfio/vfio
+container IOCTLs by mapping them into io_pagetable operations. Doing so allows
+the use of iommufd in legacy VFIO applications by symlinking /dev/vfio/vfio to
+/dev/iommufd or extending VFIO to SET_CONTAINER using an iommufd instead of a
+container fd.
+
+The second approach directly extends VFIO to support a new set of device-centric
+user API based on aforementioned IOMMUFD kernel API. It requires userspace
+change but better matches the IOMMUFD API semantics and easier to support new
+iommufd features when comparing it to the first approach.
+
+Currently both approaches are still work-in-progress.
+
+There are still a few gaps to be resolved to catch up with VFIO type1, as
+documented in iommufd_vfio_check_extension().
+
+Future TODOs
+============
+
+Currently IOMMUFD supports only kernel-managed I/O page table, similar to VFIO
+type1. New features on the radar include:
+
+ - Binding iommu_domain's to PASID/SSID
+ - Userspace page tables, for ARM, x86 and S390
+ - Kernel bypass'd invalidation of user page tables
+ - Re-use of the KVM page table in the IOMMU
+ - Dirty page tracking in the IOMMU
+ - Runtime Increase/Decrease of IOPTE size
+ - PRI support with faults resolved in userspace
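
As a usage illustration of the IOAS pieces of this uAPI, a minimal
userspace sketch; the struct and ioctl names are taken from
include/uapi/linux/iommufd.h in this series, and details may differ in
later revisions:

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  int main(void)
  {
          void *buf = aligned_alloc(4096, 4096);
          int fd = open("/dev/iommu", O_RDWR);

          /* Allocate an IOAS (IOMMUFD_OBJ_IOAS) */
          struct iommu_ioas_alloc alloc_cmd = { .size = sizeof(alloc_cmd) };
          if (fd < 0 || !buf || ioctl(fd, IOMMU_IOAS_ALLOC, &alloc_cmd))
                  return 1;

          /* Map the buffer; the kernel chooses the IOVA because
           * IOMMU_IOAS_MAP_FIXED_IOVA is not set */
          struct iommu_ioas_map map_cmd = {
                  .size = sizeof(map_cmd),
                  .flags = IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
                  .ioas_id = alloc_cmd.out_ioas_id,
                  .user_va = (uintptr_t)buf,
                  .length = 4096,
          };
          if (ioctl(fd, IOMMU_IOAS_MAP, &map_cmd))
                  return 1;

          /* map_cmd.iova now holds the chosen IOVA; a device bound to
           * this iommufd and attached to the IOAS could DMA to it */
          return 0;
  }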