
[v4,04/17] iommufd: Document overview of iommufd

Message ID 4-v4-0de2f6c78ed0+9d1-iommufd_jgg@nvidia.com (mailing list archive)
State Superseded
Series IOMMUFD Generic interface

Commit Message

Jason Gunthorpe Nov. 8, 2022, 12:48 a.m. UTC
From: Kevin Tian <kevin.tian@intel.com>

Add iommufd into the documentation tree, and supply initial documentation.
Much of this is linked from code comments by kdoc.

Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 Documentation/userspace-api/index.rst   |   1 +
 Documentation/userspace-api/iommufd.rst | 222 ++++++++++++++++++++++++
 2 files changed, 223 insertions(+)
 create mode 100644 Documentation/userspace-api/iommufd.rst

Comments

Bagas Sanjaya Nov. 8, 2022, 3:45 a.m. UTC | #1
On Mon, Nov 07, 2022 at 08:48:57PM -0400, Jason Gunthorpe wrote:
> From: Kevin Tian <kevin.tian@intel.com>
> 
> Add iommufd into the documentation tree, and supply initial documentation.
> Much of this is linked from code comments by kdoc.
> 

The documentation LGTM, thanks.

Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
Jason Gunthorpe Nov. 8, 2022, 5:10 p.m. UTC | #2
> +IOMMUFD User API
> +================
> +
> +.. kernel-doc:: include/uapi/linux/iommufd.h

I noticed this isn't working

It needs this patch:
  https://lore.kernel.org/r/0-v1-c80e152ce63b+12-kdoc_export_ns_jgg@nvidia.com

And also some updating to capture kdocs for all the exported symbols:

diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
index 64a135f3055adc..ffc5f4bc65492e 100644
--- a/Documentation/userspace-api/iommufd.rst
+++ b/Documentation/userspace-api/iommufd.rst
@@ -186,6 +186,9 @@ explicitly imposing the group semantics in its uAPI as VFIO does.
 .. kernel-doc:: drivers/iommu/iommufd/device.c
    :export:
 
+.. kernel-doc:: drivers/iommu/iommufd/main.c
+   :export:
+
 VFIO and IOMMUFD
 ----------------
 
diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
index dc3058e063d8de..8f4a0e11c51bae 100644
--- a/drivers/iommu/iommufd/device.c
+++ b/drivers/iommu/iommufd/device.c
@@ -107,6 +107,14 @@ struct iommufd_device *iommufd_device_bind(struct iommufd_ctx *ictx,
 }
 EXPORT_SYMBOL_NS_GPL(iommufd_device_bind, IOMMUFD);
 
+/**
+ * iommufd_device_unbind - Undo iommufd_device_bind()
+ * @idev: Device returned by iommufd_device_bind()
+ *
+ * Release the device from iommufd control. The DMA ownership will return back
+ * to unowned with blocked DMA. This invalidates the iommufd_device pointer,
+ * other APIs that consume it must not be called concurrently.
+ */
 void iommufd_device_unbind(struct iommufd_device *idev)
 {
 	bool was_destroyed;
@@ -372,6 +380,13 @@ int iommufd_device_attach(struct iommufd_device *idev, u32 *pt_id,
 }
 EXPORT_SYMBOL_NS_GPL(iommufd_device_attach, IOMMUFD);
 
+/**
+ * iommufd_device_detach - Disconnect a device to an iommu_domain
+ * @idev: device to detach
+ *
+ * Undoes iommufd_device_attach(). This disconnects the idev from the previously
+ * attached pt_id. The device returns back to a blocked DMA translation.
+ */
 void iommufd_device_detach(struct iommufd_device *idev)
 {
 	struct iommufd_hw_pagetable *hwpt = idev->hwpt;
@@ -412,6 +427,19 @@ void iommufd_access_destroy_object(struct iommufd_object *obj)
 	refcount_dec(&access->ioas->obj.users);
 }
 
+/**
+ * iommufd_access_create - Create an iommufd_access
+ * @ictx: iommufd file descriptor
+ * @ioas_id: ID for a IOMMUFD_OBJ_IOAS
+ * @ops: Driver's ops to associate with the access
+ * @data: Opaque data to pass into ops functions
+ *
+ * An iommufd_access allows a driver to read/write to the IOAS without using
+ * DMA. The underlying CPU memory can be accessed using the
+ * iommufd_access_pin_pages() or iommufd_access_rw() functions.
+ *
+ * The provided ops are required to use iommufd_access_pin_pages().
+ */
 struct iommufd_access *
 iommufd_access_create(struct iommufd_ctx *ictx, u32 ioas_id,
 		      const struct iommufd_access_ops *ops, void *data)
@@ -461,6 +489,12 @@ iommufd_access_create(struct iommufd_ctx *ictx, u32 ioas_id,
 }
 EXPORT_SYMBOL_NS_GPL(iommufd_access_create, IOMMUFD);
 
+/**
+ * iommufd_access_destroy - Destroy an iommufd_access
+ * @access: The access to destroy
+ *
+ * The caller must stop using the access before destroying it.
+ */
 void iommufd_access_destroy(struct iommufd_access *access)
 {
 	bool was_destroyed;
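The ordering contract these kdoc comments spell out (bind before attach, detach before unbind, no concurrent use of the pointer once unbound) can be sketched as a small state model. This is an illustrative model only, not kernel code, and the class and method names are hypothetical stand-ins for the kAPI calls they mimic:

```python
# Illustrative state model of the iommufd device lifecycle contract
# described by the kdoc comments above. Hypothetical names; not kernel
# code and not the real kAPI.

class ModelDevice:
    """Tracks which lifecycle step a device is in."""

    def __init__(self):
        self.state = "unbound"

    def bind(self):
        # mirrors iommufd_device_bind(): claim DMA ownership, DMA blocked
        assert self.state == "unbound", "already bound"
        self.state = "bound"

    def attach(self):
        # mirrors iommufd_device_attach(): connect to an IOAS/iommu_domain
        assert self.state == "bound", "must bind before attach"
        self.state = "attached"

    def detach(self):
        # mirrors iommufd_device_detach(): back to blocked DMA translation
        assert self.state == "attached", "must attach before detach"
        self.state = "bound"

    def unbind(self):
        # mirrors iommufd_device_unbind(): release DMA ownership;
        # the device object must not be used after this
        assert self.state == "bound", "must detach before unbind"
        self.state = "unbound"
```

A full bind, attach, detach, unbind sequence runs cleanly; calling attach on an unbound device trips the assertion, mirroring the "must not be called concurrently / out of order" language in the kdocs.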
Bagas Sanjaya Nov. 10, 2022, 9:30 a.m. UTC | #3
On Mon, Nov 07, 2022 at 08:48:57PM -0400, Jason Gunthorpe wrote:
> From: Kevin Tian <kevin.tian@intel.com>
> 
> Add iommufd into the documentation tree, and supply initial documentation.
> Much of this is linked from code comments by kdoc.
> 

The patch also exposes htmldocs warnings as Stephen Rothwell has
reported on linux-next [1] due to the copyright comments mistaken for
kernel-doc comments, so I have applied the fixup:

---- >8 ----

diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
index 536a34d099968d..76b3761a89423e 100644
--- a/drivers/iommu/iommufd/device.c
+++ b/drivers/iommu/iommufd/device.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES
+/*
+ * Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES
  */
 #include <linux/iommufd.h>
 #include <linux/slab.h>
diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
index 1eeb326f74f005..fc4c80ec0511f4 100644
--- a/drivers/iommu/iommufd/main.c
+++ b/drivers/iommu/iommufd/main.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (C) 2021 Intel Corporation
+/*
+ * Copyright (C) 2021 Intel Corporation
  * Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES
  *
  * iommufd provides control over the IOMMU HW objects created by IOMMU kernel

Thanks.
Jonathan Corbet Nov. 10, 2022, 2:49 p.m. UTC | #4
Bagas Sanjaya <bagasdotme@gmail.com> writes:

> On Mon, Nov 07, 2022 at 08:48:57PM -0400, Jason Gunthorpe wrote:
>> From: Kevin Tian <kevin.tian@intel.com>
>> 
>> Add iommufd into the documentation tree, and supply initial documentation.
>> Much of this is linked from code comments by kdoc.
>> 
>
> The patch also exposes htmldocs warnings as Stephen Rothwell has
> reported on linux-next [1] due to the copyright comments mistaken for
> kernel-doc comments, so I have applied the fixup:
>
> ---- >8 ----
>
> diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
> index 536a34d099968d..76b3761a89423e 100644
> --- a/drivers/iommu/iommufd/device.c
> +++ b/drivers/iommu/iommufd/device.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0-only
> -/* Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES
> +/*
> + * Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES

Um ... this makes no sense at all.  If kernel-doc thought that was a
kernel-doc comment, the problem is there, not here.

<looks>

So the report you're referring to is

  https://lore.kernel.org/linux-next/20221110182938.40ce2651@canb.auug.org.au/

?  If so, this change will not fix the problem.  That error:

> drivers/iommu/iommufd/device.c:1: warning: no structured comments found
> drivers/iommu/iommufd/main.c:1: warning: no structured comments found

is caused by using .. kernel-doc:: directives to extract documentation
from files where none exists - thus "no structured comments found".

The *real* problem, methinks, is that the directives are added in patch 4
of the series, but the documentation doesn't show up until later.  So
the real fix would be to simply move this patch down.  Or just not worry
about it, since it all works out in the end and nobody will be bisecting
a docs build.

Bagas, you are *again* misadvising people.  Please stop doing that!

Thanks,

jon
Jason Gunthorpe Nov. 10, 2022, 2:54 p.m. UTC | #5
On Thu, Nov 10, 2022 at 07:49:14AM -0700, Jonathan Corbet wrote:

> The *real* problem, methinks, is that the directives are added in patch 4
> of the series, but the documentation doesn't show up until later.  So
> the real fix would be to simply move this patch down.  Or just not worry
> about it, since it all works out in the end and nobody will be bisecting
> a docs build.

That is half the problem, the other is this:

https://lore.kernel.org/r/0-v1-c80e152ce63b+12-kdoc_export_ns_jgg@nvidia.com

Since even after the whole series the EXPORT_NS functions don't parse
properly. I'm going to put this patch before the doc patch and ignore
the bisection problem.

I'd like someone to say they are happy with the perl :)
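The parsing gap being fixed here, namely that EXPORT_SYMBOL_NS_GPL() carries an extra namespace argument the plain EXPORT_SYMBOL pattern does not expect, can be sketched in a few lines. Python is used here only for illustration; the actual kernel-doc script is perl, and this is not its real regex:

```python
import re

# Sketch of a pattern that accepts all four export variants
# (EXPORT_SYMBOL, _GPL, _NS, _NS_GPL) and captures the symbol name,
# ignoring any trailing namespace argument. Illustrative only.
EXPORT_RE = re.compile(r"EXPORT_SYMBOL(?:_NS)?(?:_GPL)?\s*\(\s*(\w+)")

def exported_symbol(line):
    """Return the exported symbol name on this line, or None."""
    m = EXPORT_RE.search(line)
    return m.group(1) if m else None
```

For example, `exported_symbol("EXPORT_SYMBOL_NS_GPL(iommufd_device_bind, IOMMUFD);")` yields `iommufd_device_bind`, which is what the `:export:` directive needs in order to pick up the kdoc comment attached to the namespaced export.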

Jason
Jonathan Corbet Nov. 10, 2022, 3:10 p.m. UTC | #6
Jason Gunthorpe <jgg@nvidia.com> writes:

> On Thu, Nov 10, 2022 at 07:49:14AM -0700, Jonathan Corbet wrote:
>
>> The *real* problem, methinks, is that the directives are added in patch 4
>> of the series, but the documentation doesn't show up until later.  So
>> the real fix would be to simply move this patch down.  Or just not worry
>> about it, since it all works out in the end and nobody will be bisecting
>> a docs build.
>
> That is half the problem, the other is this:
>
> https://lore.kernel.org/r/0-v1-c80e152ce63b+12-kdoc_export_ns_jgg@nvidia.com
>
> Since even after the whole series the EXPORT_NS functions don't parse
> properly. I'm going to put this patch before the doc patch and ignore
> the bisection problem.
>
> I'd like someone to say they are happy with the perl :)

I'm not happy with *any* perl! :)

I've been sitting on that patch because I was under the impression
another version was coming - was that wrong?

Thanks,

jon
Jason Gunthorpe Nov. 10, 2022, 3:23 p.m. UTC | #7
On Thu, Nov 10, 2022 at 08:10:19AM -0700, Jonathan Corbet wrote:
> Jason Gunthorpe <jgg@nvidia.com> writes:
> 
> > On Thu, Nov 10, 2022 at 07:49:14AM -0700, Jonathan Corbet wrote:
> >
> >> The *real* problem, methinks, is that the directives are added in patch 4
> >> of the series, but the documentation doesn't show up until later.  So
> >> the real fix would be to simply move this patch down.  Or just not worry
> >> about it, since it all works out in the end and nobody will be bisecting
> >> a docs build.
> >
> > That is half the problem, the other is this:
> >
> > https://lore.kernel.org/r/0-v1-c80e152ce63b+12-kdoc_export_ns_jgg@nvidia.com
> >
> > Since even after the whole series the EXPORT_NS functions don't parse
> > properly. I'm going to put this patch before the doc patch and ignore
> > the bisection problem.
> >
> > I'd like someone to say they are happy with the perl :)
> 
> I'm not happy with *any* perl! :)
> 
> I've been sitting on that patch because I was under the impression
> another version was coming - was that wrong?

I can resend it with the single regex if that is the preference - it
is not quite as exacting as the first version. I have to test it, is
all.

Jason
Jonathan Corbet Nov. 10, 2022, 3:28 p.m. UTC | #8
Jason Gunthorpe <jgg@nvidia.com> writes:

> On Thu, Nov 10, 2022 at 08:10:19AM -0700, Jonathan Corbet wrote:
>> Jason Gunthorpe <jgg@nvidia.com> writes:
>> 
>> > On Thu, Nov 10, 2022 at 07:49:14AM -0700, Jonathan Corbet wrote:
>> >
>> >> The *real* problem, methinks, is that the directives are added in patch 4
>> >> of the series, but the documentation doesn't show up until later.  So
>> >> the real fix would be to simply move this patch down.  Or just not worry
>> >> about it, since it all works out in the end and nobody will be bisecting
>> >> a docs build.
>> >
>> > That is half the problem, the other is this:
>> >
>> > https://lore.kernel.org/r/0-v1-c80e152ce63b+12-kdoc_export_ns_jgg@nvidia.com
>> >
>> > Since even after the whole series the EXPORT_NS functions don't parse
>> > properly. I'm going to put this patch before the doc patch and ignore
>> > the bisection problem.
>> >
>> > I'd like someone to say they are happy with the perl :)
>> 
>> I'm not happy with *any* perl! :)
>> 
>> I've been sitting on that patch because I was under the impression
>> another version was coming - was that wrong?
>
> I can resend it with the single regex if that is the preference - it
> is not quite as exacting as the first version. I have to test it is
> all.

Single is nicer but it's not worth a great deal of angst; nothing we do
is going to turn kernel-doc into a thing of beauty :)

Thanks,

jon
Jason Gunthorpe Nov. 10, 2022, 3:29 p.m. UTC | #9
On Thu, Nov 10, 2022 at 08:28:44AM -0700, Jonathan Corbet wrote:
> Jason Gunthorpe <jgg@nvidia.com> writes:
> 
> > On Thu, Nov 10, 2022 at 08:10:19AM -0700, Jonathan Corbet wrote:
> >> Jason Gunthorpe <jgg@nvidia.com> writes:
> >> 
> >> > On Thu, Nov 10, 2022 at 07:49:14AM -0700, Jonathan Corbet wrote:
> >> >
> >> >> The *real* problem, methinks, is that the directives are added in patch 4
> >> >> of the series, but the documentation doesn't show up until later.  So
> >> >> the real fix would be to simply move this patch down.  Or just not worry
> >> >> about it, since it all works out in the end and nobody will be bisecting
> >> >> a docs build.
> >> >
> >> > That is half the problem, the other is this:
> >> >
> >> > https://lore.kernel.org/r/0-v1-c80e152ce63b+12-kdoc_export_ns_jgg@nvidia.com
> >> >
> >> > Since even after the whole series the EXPORT_NS functions don't parse
> >> > properly. I'm going to put this patch before the doc patch and ignore
> >> > the bisection problem.
> >> >
> >> > I'd like someone to say they are happy with the perl :)
> >> 
> >> I'm not happy with *any* perl! :)
> >> 
> >> I've been sitting on that patch because I was under the impression
> >> another version was coming - was that wrong?
> >
> > I can resend it with the single regex if that is the preference - it
> > is not quite as exacting as the first version. I have to test it is
> > all.
> 
> Single is nicer but it's not worth a great deal of angst; nothing we do
> is going to turn kernel-doc into a thing of beauty :)

I will leave it be then because it is a bit tricky to tell if the new
regex breaks anything, and the first three attempts to create it
didn't work at all...

Thanks,
Jason
Jonathan Corbet Nov. 10, 2022, 3:52 p.m. UTC | #10
Jason Gunthorpe <jgg@nvidia.com> writes:

>> Single is nicer but it's not worth a great deal of angst; nothing we do
>> is going to turn kernel-doc into a thing of beauty :)
>
> I will leave it be then because it is a bit tricky to tell if the new
> regex breaks anything, and the first three attempts to create it
> didn't work at all...

That's fine.  If you want to keep it as part of your series feel free to
add:

Acked-by: Jonathan Corbet <corbet@lwn.net>

Otherwise I can carry it through the docs tree.

Thanks,

jon
Jason Gunthorpe Nov. 10, 2022, 4:54 p.m. UTC | #11
On Thu, Nov 10, 2022 at 08:52:16AM -0700, Jonathan Corbet wrote:
> Jason Gunthorpe <jgg@nvidia.com> writes:
> 
> >> Single is nicer but it's not worth a great deal of angst; nothing we do
> >> is going to turn kernel-doc into a thing of beauty :)
> >
> > I will leave it be then because it is a bit tricky to tell if the new
> > regex breaks anything, and the first three attempts to create it
> > didn't work at all...
> 
> That's fine.  If you want to keep it as part of your series feel free to
> add:
> 
> Acked-by: Jonathan Corbet <corbet@lwn.net>

Thanks, I'll keep it together since nothing else needs this right now

Jason
Bagas Sanjaya Nov. 11, 2022, 1:46 a.m. UTC | #12
On 11/10/22 21:49, Jonathan Corbet wrote:
> So the report you're referring to is
> 
>   https://lore.kernel.org/linux-next/20221110182938.40ce2651@canb.auug.org.au/
> 

Ah, I forgot to refer to that link!

> ?  If so, this change will not fix the problem.  That error:
> 
>> drivers/iommu/iommufd/device.c:1: warning: no structured comments found
>> drivers/iommu/iommufd/main.c:1: warning: no structured comments found
> 
> is caused by using .. kernel-doc:: directives to extract documentation
> from files where none exists - thus "no structured comments found".
> 

-ENOENT files :)

> The *real* problem, methinks, is that the directives are added in patch 4
> of the series, but the documentation doesn't show up until later.  So
> the real fix would be to simply move this patch down.  Or just not worry
> about it, since it all works out in the end and nobody will be bisecting
> a docs build.
> 
> Bagas, you are *again* misadvising people.  Please stop doing that!
> 

OK, thanks.
Tian, Kevin Nov. 11, 2022, 5:59 a.m. UTC | #13
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, November 9, 2022 1:10 AM
> 
> +/**
> + * iommufd_device_unbind - Undo iommufd_device_bind()
> + * @idev: Device returned by iommufd_device_bind()
> + *
> + * Release the device from iommufd control. The DMA ownership will
> return back
> + * to unowned with blocked DMA. This invalidates the iommufd_device

unowned but not blocked DMA. iommu_device_release_dma_owner()
will decide what the state will be then, e.g. attached back to the default
domain in most cases.

> +/**
> + * iommufd_device_detach - Disconnect a device to an iommu_domain
> + * @idev: device to detach
> + *
> + * Undoes iommufd_device_attach(). This disconnects the idev from the

'Undoes' -> 'Undo'
Jason Gunthorpe Nov. 14, 2022, 3:14 p.m. UTC | #14
On Fri, Nov 11, 2022 at 05:59:02AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, November 9, 2022 1:10 AM
> > 
> > +/**
> > + * iommufd_device_unbind - Undo iommufd_device_bind()
> > + * @idev: Device returned by iommufd_device_bind()
> > + *
> > + * Release the device from iommufd control. The DMA ownership will
> > return back
> > + * to unowned with blocked DMA. This invalidates the iommufd_device
> 
> unowned but not blocked DMA. iommu_device_release_dma_owner()
> will decide what will be the state then, e.g. attached back to the default
> domain in most cases.

Woops

 * Release the device from iommufd control. The DMA ownership will return back
 * to unowned with DMA controlled by the DMA API. This invalidates the
 * iommufd_device pointer, other APIs that consume it must not be called
 * concurrently.

Thanks,
Jason
Eric Auger Nov. 14, 2022, 8:50 p.m. UTC | #15
Hi,

On 11/8/22 01:48, Jason Gunthorpe wrote:
> From: Kevin Tian <kevin.tian@intel.com>
>
> Add iommufd into the documentation tree, and supply initial documentation.
> Much of this is linked from code comments by kdoc.
>
> Tested-by: Nicolin Chen <nicolinc@nvidia.com>
> Signed-off-by: Kevin Tian <kevin.tian@intel.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  Documentation/userspace-api/index.rst   |   1 +
>  Documentation/userspace-api/iommufd.rst | 222 ++++++++++++++++++++++++
>  2 files changed, 223 insertions(+)
>  create mode 100644 Documentation/userspace-api/iommufd.rst
>
> diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
> index c78da9ce0ec44e..f16337bdb8520f 100644
> --- a/Documentation/userspace-api/index.rst
> +++ b/Documentation/userspace-api/index.rst
> @@ -25,6 +25,7 @@ place where this information is gathered.
>     ebpf/index
>     ioctl/index
>     iommu
> +   iommufd
>     media/index
>     netlink/index
>     sysfs-platform_profile
> diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
> new file mode 100644
> index 00000000000000..64a135f3055adc
> --- /dev/null
> +++ b/Documentation/userspace-api/iommufd.rst
> @@ -0,0 +1,222 @@
> +.. SPDX-License-Identifier: GPL-2.0+
> +
> +=======
> +IOMMUFD
> +=======
> +
> +:Author: Jason Gunthorpe
> +:Author: Kevin Tian
> +
> +Overview
> +========
> +
> +IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
> +IO page tables from userspace using file descriptors. It intends to be general
> +and consumable by any driver that wants to expose DMA to userspace. These
> +drivers are eventually expected to deprecate any internal IOMMU logic if exists
they may already/historically implement (eg. vfio_iommu_type1.c)?
> +(e.g. vfio_iommu_type1.c).
> +
> +At minimum iommufd provides universal support of managing I/O address spaces and
> +I/O page tables for all IOMMUs, with room in the design to add non-generic
> +features to cater to specific hardware functionality.
> +
> +In this context the capital letter (IOMMUFD) refers to the subsystem while the
> +small letter (iommufd) refers to the file descriptors created via /dev/iommu for
> +use by userspace.
> +
> +Key Concepts
> +============
> +
> +User Visible Objects
> +--------------------
> +
> +Following IOMMUFD objects are exposed to userspace:
> +
> +- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS), allowing map/unmap
> +  of user space memory into ranges of I/O Virtual Address (IOVA).
> +
> +  The IOAS is a functional replacement for the VFIO container, and like the VFIO
> +  container it copies an IOVA map to a list of iommu_domains held within it.
> +
> +- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
> +  external driver.
> +
> +- IOMMUFD_OBJ_HW_PAGETABLE, representing an actual hardware I/O page table
> +  (i.e. a single struct iommu_domain) managed by the iommu driver.
> +
> +  The IOAS has a list of HW_PAGETABLES that share the same IOVA mapping and
> +  it will synchronize its mapping with each member HW_PAGETABLE.
> +
> +All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
> +
> +The diagram below shows relationship between user-visible objects and kernel
> +datastructures (external to iommufd), with numbers referred to operations
> +creating the objects and links::
> +
> +  _________________________________________________________
> + |                         iommufd                         |
> + |       [1]                                               |
> + |  _________________                                      |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |                                     |
> + | |                 |        [3]                 [2]      |
> + | |                 |    ____________         __________  |
> + | |      IOAS       |<--|            |<------|          | |
> + | |                 |   |HW_PAGETABLE|       |  DEVICE  | |
> + | |                 |   |____________|       |__________| |
> + | |                 |         |                   |       |
> + | |                 |         |                   |       |
> + | |                 |         |                   |       |
> + | |                 |         |                   |       |
> + | |                 |         |                   |       |
> + | |_________________|         |                   |       |
> + |         |                   |                   |       |
> + |_________|___________________|___________________|_______|
> +           |                   |                   |
> +           |              _____v______      _______v_____
> +           | PFN storage |            |    |             |
> +           |------------>|iommu_domain|    |struct device|
> +                         |____________|    |_____________|
> +
> +1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. An iommufd can
> +   hold multiple IOAS objects. IOAS is the most generic object and does not
> +   expose interfaces that are specific to single IOMMU drivers. All operations
> +   on the IOAS must operate equally on each of the iommu_domains inside of it.
> +
> +2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
> +   to bind a device to an iommufd. The driver is expected to implement proper a
s/proper/properly?
> +   set of ioctls to allow userspace to initiate the binding operation.
> +   Successful completion of this operation establishes the desired DMA ownership
> +   over the device. The driver must also set the driver_managed_dma flag and
> +   must not touch the device until this operation succeeds.
> +
> +3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the IOMMUFD
> +   kAPI to attach a bound device to an IOAS. Similarly the external driver uAPI
> +   allows userspace to initiate the attaching operation. If a compatible
> +   pagetable already exists then it is reused for the attachment. Otherwise a
> +   new pagetable object and iommu_domain is created. Successful completion of
> +   this operation sets up the linkages among IOAS, device and iommu_domain. Once
> +   this completes the device could do DMA.
> +
> +   Every iommu_domain inside the IOAS is also represented to userspace as a
> +   HW_PAGETABLE object.
> +
> +   .. note::
> +
> +      Future IOMMUFD updates will provide an API to create and manipulate the
> +      HW_PAGETABLE directly.
> +
> +A device can only bind to an iommufd due to DMA ownership claim and attach to at
> +most one IOAS object (no support of PASID yet).
> +
> +Currently only PCI device is allowed to use IOMMUFD.
is it still true? device_bind() now takes a struct device *

In [PATCH v4 12/17] iommufd: Add kAPI toward external drivers for
physical devices "PCI" is used at several places
but shouldn't it be removed now?

> +
> +Kernel Datastructure
> +--------------------
> +
> +User visible objects are backed by following datastructures:
> +
> +- iommufd_ioas for IOMMUFD_OBJ_IOAS.
> +- iommufd_device for IOMMUFD_OBJ_DEVICE.
> +- iommufd_hw_pagetable for IOMMUFD_OBJ_HW_PAGETABLE.
> +
> +Several terminologies when looking at these datastructures:
> +
> +- Automatic domain - refers to an iommu domain created automatically when
> +  attaching a device to an IOAS object. This is compatible to the semantics of
> +  VFIO type1.
> +
> +- Manual domain - refers to an iommu domain designated by the user as the
> +  target pagetable to be attached to by a device. Though currently there are
> +  no uAPIs to directly create such domain, the datastructure and algorithms
> +  are ready for handling that use case.
> +
> +- In-kernel user - refers to something like a VFIO mdev that is using the
> +  IOMMUFD access interface to access the IOAS. This starts by creating an
> +  iommufd_access object that is similar to the domain binding a physical device
> +  would do. The access object will then allow converting IOVA ranges into struct
> +  page * lists, or doing direct read/write to an IOVA.
> +
> +iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
> +mapped to memory pages, composed of:
> +
> +- struct io_pagetable holding the IOVA map
> +- struct iopt_areas representing populated portions of IOVA
> +- struct iopt_pages representing the storage of PFNs
> +- struct iommu_domain representing the IO page table in the IOMMU
> +- struct iopt_pages_access representing in-kernel users of PFNs
> +- struct xarray pinned_pfns holding a list of pages pinned by in-kernel users
> +
> +Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
> +ultimately derived from userspave VAs via an mm_struct. Once they have been
> +pinned the PFN is stored in IOPTEs of an iommu_domain or inside the pinned_pages
s/is/are
> +xarray if they have been pinned through an iommufd_access.
> +
> +PFN have to be copied between all combinations of storage locations, depending
> +on what domains are present and what kinds of in-kernel "software access" users
> +exists. The mechanism ensures that a page is pinned only once.
> +
> +An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
> +list of iommu_domains that mirror the IOVA to PFN map.
> +
> +Multiple io_pagetable-s, through their iopt_area-s, can share a single
> +iopt_pages which avoids multi-pinning and double accounting of page
> +consumption.
> +
> +iommufd_ioas is sharable between subsystems, e.g. VFIO and VDPA, as long as
> +devices managed by different subsystems are bound to a same iommufd.
> +
> +IOMMUFD User API
> +================
> +
> +.. kernel-doc:: include/uapi/linux/iommufd.h
> +
> +IOMMUFD Kernel API
> +==================
> +
> +The IOMMUFD kAPI is device-centric with group-related tricks managed behind the
> +scene. This allows the external drivers calling such kAPI to implement a simple
> +device-centric uAPI for connecting its device to an iommufd, instead of
> +explicitly imposing the group semantics in its uAPI as VFIO does.
> +
> +.. kernel-doc:: drivers/iommu/iommufd/device.c
> +   :export:
> +
> +VFIO and IOMMUFD
> +----------------
> +
> +Connecting a VFIO device to iommufd can be done in two ways.
> +
> +First is a VFIO compatible way by directly implementing the /dev/vfio/vfio
> +container IOCTLs by mapping them into io_pagetable operations. Doing so allows
> +the use of iommufd in legacy VFIO applications by symlinking /dev/vfio/vfio to
> +/dev/iommufd or extending VFIO to SET_CONTAINER using an iommufd instead of a
> +container fd.
> +
> +The second approach directly extends VFIO to support a new set of device-centric
> +user API based on aforementioned IOMMUFD kernel API. It requires userspace
> +change but better matches the IOMMUFD API semantics and easier to support new
> +iommufd features when comparing it to the first approach.
> +
> +Currently both approaches are still work-in-progress.
> +
> +There are still a few gaps to be resolved to catch up with VFIO type1, as
> +documented in iommufd_vfio_check_extension().
> +
> +Future TODOs
> +============
> +
> +Currently IOMMUFD supports only kernel-managed I/O page table, similar to VFIO
> +type1. New features on the radar include:
> +
> + - Binding iommu_domain's to PASID/SSID
> + - Userspace page tables, for ARM, x86 and S390
> + - Kernel bypass'd invalidation of user page tables
> + - Re-use of the KVM page table in the IOMMU
> + - Dirty page tracking in the IOMMU
> + - Runtime Increase/Decrease of IOPTE size
> + - PRI support with faults resolved in userspace
Thanks

Eric
Jason Gunthorpe Nov. 15, 2022, 12:52 a.m. UTC | #16
On Mon, Nov 14, 2022 at 09:50:56PM +0100, Eric Auger wrote:

> > +IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
> > +IO page tables from userspace using file descriptors. It intends to be general
> > +and consumable by any driver that wants to expose DMA to userspace. These
> > +drivers are eventually expected to deprecate any internal IOMMU logic if exists
> they may already/historically implement (eg. vfio_iommu_type1.c)?

Done

> > +2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
> > +   to bind a device to an iommufd. The driver is expected to implement proper a
> s/proper/properly?
> > +   set of ioctls to allow userspace to initiate the binding operation.
> > +   Successful completion of this operation establishes the desired DMA ownership
> > +   over the device. The driver must also set the driver_managed_dma flag and
> > +   must not touch the device until this operation succeeds.

I don't know what this was supposed to say, let's delete the word proper

> > +3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the IOMMUFD
> > +   kAPI to attach a bound device to an IOAS. Similarly the external driver uAPI
> > +   allows userspace to initiate the attaching operation. If a compatible
> > +   pagetable already exists then it is reused for the attachment. Otherwise a
> > +   new pagetable object and iommu_domain is created. Successful completion of
> > +   this operation sets up the linkages among IOAS, device and iommu_domain. Once
> > +   this completes the device could do DMA.
> > +
> > +   Every iommu_domain inside the IOAS is also represented to userspace as a
> > +   HW_PAGETABLE object.
> > +
> > +   .. note::
> > +
> > +      Future IOMMUFD updates will provide an API to create and manipulate the
> > +      HW_PAGETABLE directly.
> > +
> > +A device can only bind to an iommufd due to DMA ownership claim and attach to at
> > +most one IOAS object (no support of PASID yet).
> > +
> > +Currently only PCI device is allowed to use IOMMUFD.
> is it still true? device_bind() now takes a struct device *
> 
> In [PATCH v4 12/17] iommufd: Add kAPI toward external drivers for
> physical devices "PCI" is used at several places
> but shouldn't it be removed now?

Right, gone

> > +- struct io_pagetable holding the IOVA map
> > +- struct iopt_areas representing populated portions of IOVA
> > +- struct iopt_pages representing the storage of PFNs
> > +- struct iommu_domain representing the IO page table in the IOMMU
> > +- struct iopt_pages_access representing in-kernel users of PFNs
> > +- struct xarray pinned_pfns holding a list of pages pinned by in-kernel users
> > +
> > +Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
> > +ultimately derived from userspave VAs via an mm_struct. Once they have been
> > +pinned the PFN is stored in IOPTEs of an iommu_domain or inside the pinned_pages
> s/is/are

Ah it should be "Once they have been pinned the PFNs are stored in
IOPTEs" as the whole thing is refering to plural PFNs

Thanks,
Jason

Patch

diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
index c78da9ce0ec44e..f16337bdb8520f 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -25,6 +25,7 @@  place where this information is gathered.
    ebpf/index
    ioctl/index
    iommu
+   iommufd
    media/index
    netlink/index
    sysfs-platform_profile
diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
new file mode 100644
index 00000000000000..64a135f3055adc
--- /dev/null
+++ b/Documentation/userspace-api/iommufd.rst
@@ -0,0 +1,222 @@ 
+.. SPDX-License-Identifier: GPL-2.0+
+
+=======
+IOMMUFD
+=======
+
+:Author: Jason Gunthorpe
+:Author: Kevin Tian
+
+Overview
+========
+
+IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
+IO page tables from userspace using file descriptors. It intends to be general
+and consumable by any driver that wants to expose DMA to userspace. These
+drivers are eventually expected to deprecate any internal IOMMU logic they may
+already have (e.g. vfio_iommu_type1.c).
+
+At minimum iommufd provides universal support of managing I/O address spaces and
+I/O page tables for all IOMMUs, with room in the design to add non-generic
+features to cater to specific hardware functionality.
+
+In this context the capital letter (IOMMUFD) refers to the subsystem while the
+small letter (iommufd) refers to the file descriptors created via /dev/iommu for
+use by userspace.
+
+Key Concepts
+============
+
+User Visible Objects
+--------------------
+
+The following IOMMUFD objects are exposed to userspace:
+
+- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS), allowing map/unmap
+  of user space memory into ranges of I/O Virtual Address (IOVA).
+
+  The IOAS is a functional replacement for the VFIO container, and like the VFIO
+  container it copies an IOVA map to a list of iommu_domains held within it.
+
+- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
+  external driver.
+
+- IOMMUFD_OBJ_HW_PAGETABLE, representing an actual hardware I/O page table
+  (i.e. a single struct iommu_domain) managed by the iommu driver.
+
+  The IOAS has a list of HW_PAGETABLES that share the same IOVA mapping and
+  it will synchronize its mapping with each member HW_PAGETABLE.
+
+All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
+
+The diagram below shows the relationship between user-visible objects and
+kernel datastructures (external to iommufd), with numbers referring to the
+operations that create the objects and links::
+
+  _________________________________________________________
+ |                         iommufd                         |
+ |       [1]                                               |
+ |  _________________                                      |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |                                     |
+ | |                 |        [3]                 [2]      |
+ | |                 |    ____________         __________  |
+ | |      IOAS       |<--|            |<------|          | |
+ | |                 |   |HW_PAGETABLE|       |  DEVICE  | |
+ | |                 |   |____________|       |__________| |
+ | |                 |         |                   |       |
+ | |                 |         |                   |       |
+ | |                 |         |                   |       |
+ | |                 |         |                   |       |
+ | |                 |         |                   |       |
+ | |_________________|         |                   |       |
+ |         |                   |                   |       |
+ |_________|___________________|___________________|_______|
+           |                   |                   |
+           |              _____v______      _______v_____
+           | PFN storage |            |    |             |
+           |------------>|iommu_domain|    |struct device|
+                         |____________|    |_____________|
+
+1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. An iommufd can
+   hold multiple IOAS objects. IOAS is the most generic object and does not
+   expose interfaces that are specific to single IOMMU drivers. All operations
+   on the IOAS must operate equally on each of the iommu_domains inside of it.
+
+2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
+   to bind a device to an iommufd. The driver is expected to implement a set of
+   ioctls to allow userspace to initiate the binding operation. Successful
+   completion of this operation establishes the desired DMA ownership over the
+   device. The driver must also set the driver_managed_dma flag and must not
+   touch the device until this operation succeeds.
+
+3. IOMMUFD_OBJ_HW_PAGETABLE is created when an external driver calls the IOMMUFD
+   kAPI to attach a bound device to an IOAS. Similarly the external driver uAPI
+   allows userspace to initiate the attaching operation. If a compatible
+   pagetable already exists then it is reused for the attachment. Otherwise a
+   new pagetable object and iommu_domain are created. Successful completion of
+   this operation sets up the linkages among the IOAS, device and iommu_domain.
+   Once this completes the device can do DMA.
+
+   Every iommu_domain inside the IOAS is also represented to userspace as a
+   HW_PAGETABLE object.
+
+   .. note::
+
+      Future IOMMUFD updates will provide an API to create and manipulate the
+      HW_PAGETABLE directly.
+
+A device can only bind to an iommufd due to DMA ownership claim and attach to at
+most one IOAS object (no support of PASID yet).
+
+Currently only PCI devices are allowed to use IOMMUFD.
+
+Kernel Datastructure
+--------------------
+
+User-visible objects are backed by the following datastructures:
+
+- iommufd_ioas for IOMMUFD_OBJ_IOAS.
+- iommufd_device for IOMMUFD_OBJ_DEVICE.
+- iommufd_hw_pagetable for IOMMUFD_OBJ_HW_PAGETABLE.
+
+Several terminologies when looking at these datastructures:
+
+- Automatic domain - refers to an iommu domain created automatically when
+  attaching a device to an IOAS object. This is compatible with the semantics
+  of VFIO type1.
+
+- Manual domain - refers to an iommu domain designated by the user as the
+  target pagetable to be attached to by a device. Though currently there are
+  no uAPIs to directly create such a domain, the datastructure and algorithms
+  are ready for handling that use case.
+
+- In-kernel user - refers to something like a VFIO mdev that is using the
+  IOMMUFD access interface to access the IOAS. This starts by creating an
+  iommufd_access object that is similar to the domain binding a physical device
+  would do. The access object will then allow converting IOVA ranges into struct
+  page * lists, or doing direct read/write to an IOVA.
+
+iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
+mapped to memory pages, composed of:
+
+- struct io_pagetable holding the IOVA map
+- struct iopt_areas representing populated portions of IOVA
+- struct iopt_pages representing the storage of PFNs
+- struct iommu_domain representing the IO page table in the IOMMU
+- struct iopt_pages_access representing in-kernel users of PFNs
+- struct xarray pinned_pfns holding a list of pages pinned by in-kernel users
+
+Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
+ultimately derived from userspace VAs via an mm_struct. Once they have been
+pinned the PFNs are stored in IOPTEs of an iommu_domain or inside the
+pinned_pfns xarray if they have been pinned through an iommufd_access.
+
+PFNs have to be copied between all combinations of storage locations, depending
+on what domains are present and what kinds of in-kernel "software access" users
+exist. The mechanism ensures that a page is pinned only once.
+
+An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
+list of iommu_domains that mirror the IOVA to PFN map.
+
+Multiple io_pagetable-s, through their iopt_area-s, can share a single
+iopt_pages which avoids multi-pinning and double accounting of page
+consumption.
+
+An iommufd_ioas can be shared between subsystems, e.g. VFIO and VDPA, as long
+as devices managed by the different subsystems are bound to the same iommufd.
+
+IOMMUFD User API
+================
+
+.. kernel-doc:: include/uapi/linux/iommufd.h
+
+IOMMUFD Kernel API
+==================
+
+The IOMMUFD kAPI is device-centric with group-related tricks managed behind the
+scenes. This allows the external drivers calling this kAPI to implement a
+simple device-centric uAPI for connecting their device to an iommufd, instead
+of explicitly imposing the group semantics in their uAPI as VFIO does.
+
+.. kernel-doc:: drivers/iommu/iommufd/device.c
+   :export:
+
+VFIO and IOMMUFD
+----------------
+
+Connecting a VFIO device to iommufd can be done in two ways.
+
+First is a VFIO compatible way by directly implementing the /dev/vfio/vfio
+container IOCTLs by mapping them into io_pagetable operations. Doing so allows
+the use of iommufd in legacy VFIO applications by symlinking /dev/vfio/vfio to
+/dev/iommu or extending VFIO to SET_CONTAINER using an iommufd instead of a
+container fd.
+
+The second approach directly extends VFIO to support a new set of device-centric
+user APIs based on the aforementioned IOMMUFD kernel API. It requires userspace
+changes but better matches the IOMMUFD API semantics and makes it easier to
+support new iommufd features than the first approach.
+
+Currently both approaches are still work-in-progress.
+
+There are still a few gaps to be resolved to catch up with VFIO type1, as
+documented in iommufd_vfio_check_extension().
+
+Future TODOs
+============
+
+Currently IOMMUFD supports only kernel-managed I/O page tables, similar to VFIO
+type1. New features on the radar include:
+
+ - Binding iommu_domain's to PASID/SSID
+ - Userspace page tables, for ARM, x86 and S390
+ - Kernel-bypassed invalidation of user page tables
+ - Re-use of the KVM page table in the IOMMU
+ - Dirty page tracking in the IOMMU
+ - Runtime Increase/Decrease of IOPTE size
+ - PRI support with faults resolved in userspace