
[00/16] Add new DMA mapping operation for P2PDMA

Message ID 20210408170123.8788-1-logang@deltatee.com (mailing list archive)

Message

Logan Gunthorpe April 8, 2021, 5:01 p.m. UTC
Hi,

This patchset continues my work to add P2PDMA support to the common
dma map operations. This allows for creating SGLs that have both P2PDMA
and regular pages, which is a necessary step toward allowing P2PDMA pages
in userspace.

The earlier RFC[1] generated a lot of great feedback and I heard no
show-stopping objections. Thus, I've incorporated all the feedback and have
decided to post this as a proper patch series with hopes of eventually
getting it in mainline.

I'm happy to do a few more passes if anyone has any further feedback
or better ideas.

This series is based on v5.12-rc6 and a git branch can be found here:

  https://github.com/sbates130272/linux-p2pmem/  p2pdma_map_ops_v1

Thanks,

Logan

[1] https://lore.kernel.org/linux-block/20210311233142.7900-1-logang@deltatee.com/


Changes since the RFC:
 * Added comment and fixed up the pci_get_slot patch. (per Bjorn)
 * Fixed glaring sg_phys() double offset bug. (per Robin)
 * Created a new map operation (dma_map_sg_p2pdma()) with a new calling
   convention instead of modifying the calling convention of
   dma_map_sg(). (per Robin)
 * Integrated the two similar pci_p2pdma_dma_map_type() and
   pci_p2pdma_map_type() functions into one. (per Ira)
 * Reworked some of the logic in the map_sg() implementations into
   helpers in the p2pdma code. (per Christoph)
 * Dropped a bunch of unnecessary symbol exports. (per Christoph)
 * Expanded the code in dma_pci_p2pdma_supported() for clarity. (per
   Ira and Christoph)
 * Finished off using the new dma_map_sg_p2pdma() call in rdma_rw
   and removed the old pci_p2pdma_[un]map_sg(). (per Jason)

--

Logan Gunthorpe (16):
  PCI/P2PDMA: Pass gfp_mask flags to upstream_bridge_distance_warn()
  PCI/P2PDMA: Avoid pci_get_slot() which sleeps
  PCI/P2PDMA: Attempt to set map_type if it has not been set
  PCI/P2PDMA: Refactor pci_p2pdma_map_type() to take pgmap and device
  dma-mapping: Introduce dma_map_sg_p2pdma()
  lib/scatterlist: Add flag for indicating P2PDMA segments in an SGL
  PCI/P2PDMA: Make pci_p2pdma_map_type() non-static
  PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
  dma-direct: Support PCI P2PDMA pages in dma-direct map_sg
  dma-mapping: Add flags to dma_map_ops to indicate PCI P2PDMA support
  iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg
  nvme-pci: Check DMA ops when indicating support for PCI P2PDMA
  nvme-pci: Convert to using dma_map_sg_p2pdma for p2pdma pages
  nvme-rdma: Ensure dma support when using p2pdma
  RDMA/rw: use dma_map_sg_p2pdma()
  PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg()

 drivers/infiniband/core/rw.c |  50 +++-------
 drivers/iommu/dma-iommu.c    |  66 ++++++++++--
 drivers/nvme/host/core.c     |   3 +-
 drivers/nvme/host/nvme.h     |   2 +-
 drivers/nvme/host/pci.c      |  39 ++++----
 drivers/nvme/target/rdma.c   |   3 +-
 drivers/pci/Kconfig          |   2 +-
 drivers/pci/p2pdma.c         | 188 +++++++++++++++++++----------------
 include/linux/dma-map-ops.h  |   3 +
 include/linux/dma-mapping.h  |  20 ++++
 include/linux/pci-p2pdma.h   |  53 ++++++----
 include/linux/scatterlist.h  |  49 ++++++++-
 include/rdma/ib_verbs.h      |  32 ++++++
 kernel/dma/direct.c          |  25 ++++-
 kernel/dma/mapping.c         |  70 +++++++++++--
 15 files changed, 416 insertions(+), 189 deletions(-)


base-commit: e49d033bddf5b565044e2abe4241353959bc9120
--
2.20.1

Comments

Jason Gunthorpe April 27, 2021, 7:22 p.m. UTC | #1
On Thu, Apr 08, 2021 at 11:01:12AM -0600, Logan Gunthorpe wrote:
> dma_map_sg() either returns a positive number indicating the number
> of entries mapped or zero indicating that resources were not available
> to create the mapping. When zero is returned, it is always safe to retry
> the mapping later once resources have been freed.
> 
> Once P2PDMA pages are mixed into the SGL there may be pages that can
> never be successfully mapped with a given device because that device may
> not actually be able to access those pages. Thus, multiple error
> conditions will need to be distinguished to determine whether a retry
> is safe.
> 
> Introduce dma_map_sg_p2pdma[_attrs]() with a different calling
> convention from dma_map_sg(). The function will return a positive
> integer on success or a negative errno on failure.
> 
> ENOMEM will be used to indicate a resource failure and EREMOTEIO to
> indicate that a P2PDMA page is not mappable.
> 
> The __DMA_ATTR_PCI_P2PDMA attribute is introduced to inform the lower
> level implementations that P2PDMA pages are allowed and to warn if a
> caller introduces them into the regular dma_map_sg() interface.

So this new API is all about being able to return an error code
because auditing the old API is basically terrifying?

OK, but why name everything new P2PDMA? It seems nicer to give this
some generic name and have some general plan to gradually deprecate
the normal, non-error-capable dma_map_sg()?

I think that will raise fewer questions when subsystem people see the
changes, as I was wondering why RW was being moved to use what looked
like a p2pdma only API.

dma_map_sg_or_err() would have been clearer

The flag is also clearer as to the purpose if it is named
__DMA_ATTR_ERROR_ALLOWED

Jason
Jason Gunthorpe April 27, 2021, 7:28 p.m. UTC | #2
On Thu, Apr 08, 2021 at 11:01:07AM -0600, Logan Gunthorpe wrote:
> Hi,
> 
> This patchset continues my work to add P2PDMA support to the common
> dma map operations. This allows for creating SGLs that have both P2PDMA
> and regular pages which is a necessary step to allowing P2PDMA pages in
> userspace.
> 
> The earlier RFC[1] generated a lot of great feedback and I heard no show
> stopping objections. Thus, I've incorporated all the feedback and have
> decided to post this as a proper patch series with hopes of eventually
> getting it in mainline.
>
> I'm happy to do a few more passes if anyone has any further feedback
> or better ideas.

For the user of the DMA API the idea seems reasonable enough; the next
steps to integrate with pin_user_pages() seem fairly straightforward
too.

Was there no feedback on this at all?

Jason
Jason Gunthorpe April 27, 2021, 7:31 p.m. UTC | #3
On Thu, Apr 08, 2021 at 11:01:12AM -0600, Logan Gunthorpe wrote:
> +/*
> + * dma_maps_sg_attrs returns 0 on error and > 0 on success.
> + * It should never return a value < 0.
> + */

Also it is weird that a function that can never return a value < 0 is returning an int type

> +int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
> +		enum dma_data_direction dir, unsigned long attrs)
> +{
> +	int ents;
> +
> +	ents = __dma_map_sg_attrs(dev, sg, nents, dir, attrs);
>  	BUG_ON(ents < 0);

if (WARN_ON(ents < 0))
     return 0;

instead of bug on?

Also, I see only 8 users of this function. How about just fix them all
to support negative returns and use this as the p2p API instead of
adding new API?

Add the opposite logic flag, 'DMA_ATTRS_NO_ERROR' and pass it through
the other api entry callers that can't handle it?

Jason
Jason Gunthorpe April 27, 2021, 7:43 p.m. UTC | #4
On Thu, Apr 08, 2021 at 11:01:18AM -0600, Logan Gunthorpe wrote:
> When a PCI P2PDMA page is seen, set the IOVA length of the segment
> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
> apply the appropriate bus address to the segment. The IOVA is not
> created if the scatterlist only consists of P2PDMA pages.

I expect P2P to work with systems that use ATS, so we'd want to see
those systems have the IOMMU programmed with the bus address.

Is it OK like this because the other logic prohibits all PCI cases
that would lean on the IOMMU, like ATS, hairpinning through the root
port, or transiting the root complex?

If yes, the code deserves a big comment explaining this is incomplete,
and I'd want to know we can finish this to include ATS at least based
on this series.

Jason
John Hubbard April 27, 2021, 8:21 p.m. UTC | #5
On 4/27/21 12:28 PM, Jason Gunthorpe wrote:
> On Thu, Apr 08, 2021 at 11:01:07AM -0600, Logan Gunthorpe wrote:
>> Hi,
>>
>> This patchset continues my work to add P2PDMA support to the common
>> dma map operations. This allows for creating SGLs that have both P2PDMA
>> and regular pages which is a necessary step to allowing P2PDMA pages in
>> userspace.
>>
>> The earlier RFC[1] generated a lot of great feedback and I heard no show
>> stopping objections. Thus, I've incorporated all the feedback and have
>> decided to post this as a proper patch series with hopes of eventually
>> getting it in mainline.
>>
>> I'm happy to do a few more passes if anyone has any further feedback
>> or better ideas.
> 
> For the user of the DMA API the idea seems reasonable enough, the next
> steps to integrate with pin_user_pages() seem fairly straightforward
> too
> 
> Was there no feedback on this at all?
> 

oops, I meant to review this a lot sooner, because this whole p2pdma thing is
actually very interesting and important...somehow it slipped but I'll take
a look now.

thanks,
Dan Williams April 27, 2021, 8:48 p.m. UTC | #6
On Tue, Apr 27, 2021 at 1:22 PM John Hubbard <jhubbard@nvidia.com> wrote:
>
> On 4/27/21 12:28 PM, Jason Gunthorpe wrote:
> > On Thu, Apr 08, 2021 at 11:01:07AM -0600, Logan Gunthorpe wrote:
> >> Hi,
> >>
> >> This patchset continues my work to add P2PDMA support to the common
> >> dma map operations. This allows for creating SGLs that have both P2PDMA
> >> and regular pages which is a necessary step to allowing P2PDMA pages in
> >> userspace.
> >>
> >> The earlier RFC[1] generated a lot of great feedback and I heard no show
> >> stopping objections. Thus, I've incorporated all the feedback and have
> >> decided to post this as a proper patch series with hopes of eventually
> >> getting it in mainline.
> >>
> >> I'm happy to do a few more passes if anyone has any further feedback
> >> or better ideas.
> >
> > For the user of the DMA API the idea seems reasonable enough, the next
> > steps to integrate with pin_user_pages() seem fairly straightforward
> > too
> >
> > Was there no feedback on this at all?
> >
>
> oops, I meant to review this a lot sooner, because this whole p2pdma thing is
> actually very interesting and important...somehow it slipped but I'll take
> a look now.

Still in my queue as well, behind Joao's memmap consolidation series
and a recent copy_mc_to_iter() fix series from Al.
Logan Gunthorpe April 27, 2021, 10:49 p.m. UTC | #7
On 2021-04-27 1:22 p.m., Jason Gunthorpe wrote:
> On Thu, Apr 08, 2021 at 11:01:12AM -0600, Logan Gunthorpe wrote:
>> dma_map_sg() either returns a positive number indicating the number
>> of entries mapped or zero indicating that resources were not available
>> to create the mapping. When zero is returned, it is always safe to retry
>> the mapping later once resources have been freed.
>>
>> Once P2PDMA pages are mixed into the SGL there may be pages that can
>> never be successfully mapped with a given device because that device may
>> not actually be able to access those pages. Thus, multiple error
>> conditions will need to be distinguished to determine whether a retry
>> is safe.
>>
>> Introduce dma_map_sg_p2pdma[_attrs]() with a different calling
>> convention from dma_map_sg(). The function will return a positive
>> integer on success or a negative errno on failure.
>>
>> ENOMEM will be used to indicate a resource failure and EREMOTEIO to
>> indicate that a P2PDMA page is not mappable.
>>
>> The __DMA_ATTR_PCI_P2PDMA attribute is introduced to inform the lower
>> level implementations that P2PDMA pages are allowed and to warn if a
>> caller introduces them into the regular dma_map_sg() interface.
> 
> So this new API is all about being able to return an error code
> because auditing the old API is basically terrifying?
> 
> OK, but why name everything new P2PDMA? It seems nicer to give this
> some generic name and have some general plan to gradually deprecate
> the normal, non-error-capable dma_map_sg()?
> 
> I think that will raise fewer questions when subsystem people see the
> changes, as I was wondering why RW was being moved to use what looked
> like a p2pdma only API.
> 
> dma_map_sg_or_err() would have been clearer
> 
> The flag is also clearer as to the purpose if it is named
> __DMA_ATTR_ERROR_ALLOWED

I'm not opposed to these names. I can use them for v2 if there are no
other opinions.

Logan
Logan Gunthorpe April 27, 2021, 10:55 p.m. UTC | #8
On 2021-04-27 1:31 p.m., Jason Gunthorpe wrote:
> On Thu, Apr 08, 2021 at 11:01:12AM -0600, Logan Gunthorpe wrote:
>> +/*
>> + * dma_maps_sg_attrs returns 0 on error and > 0 on success.
>> + * It should never return a value < 0.
>> + */
> 
> Also it is weird that a function that can never return a value < 0 is returning an int type

Yes, Christoph mentioned in the last series that this should probably
change to an unsigned but I wasn't really sure if that change should be
a part of the P2PDMA series.

>> +int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>> +		enum dma_data_direction dir, unsigned long attrs)
>> +{
>> +	int ents;
>> +
>> +	ents = __dma_map_sg_attrs(dev, sg, nents, dir, attrs);
>>  	BUG_ON(ents < 0);
> 
> if (WARN_ON(ents < 0))
>      return 0;
> 
> instead of bug on?

It was BUG_ON in the original code, so I felt I should leave it.

> Also, I see only 8 users of this function. How about just fix them all
> to support negative returns and use this as the p2p API instead of
> adding new API?

Well there might be 8 users of dma_map_sg_attrs() but there are a very
large number of dma_map_sg(). Seems odd to me to single out the former as
requiring these changes but leave the latter.

> Add the opposite logic flag, 'DMA_ATTRS_NO_ERROR' and pass it through
> the other api entry callers that can't handle it?

I'm not that opposed to this. But it will make this series a fair bit
longer to change the 8 map_sg_attrs() usages.

Logan
Logan Gunthorpe April 27, 2021, 10:59 p.m. UTC | #9
On 2021-04-27 1:43 p.m., Jason Gunthorpe wrote:
> On Thu, Apr 08, 2021 at 11:01:18AM -0600, Logan Gunthorpe wrote:
>> When a PCI P2PDMA page is seen, set the IOVA length of the segment
>> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
>> apply the appropriate bus address to the segment. The IOVA is not
>> created if the scatterlist only consists of P2PDMA pages.
> 
> I expect P2P to work with systems that use ATS, so we'd want to see
> those systems have the IOMMU programmed with the bus address.

Oh, the paragraph you quote isn't quite as clear as it could be. The bus
address is only used in specific circumstances depending on how the
P2PDMA core code figures the addresses should be mapped (see the
documentation for upstream_bridge_distance()). The P2PDMA code
currently doesn't have any provisions for ATS (I haven't had access to
any such hardware) but I'm sure it wouldn't be too hard to add.

Logan
Jason Gunthorpe April 27, 2021, 11:01 p.m. UTC | #10
On Tue, Apr 27, 2021 at 04:55:45PM -0600, Logan Gunthorpe wrote:

> > Also, I see only 8 users of this function. How about just fix them all
> > to support negative returns and use this as the p2p API instead of
> > adding new API?
> 
> Well there might be 8 users of dma_map_sg_attrs() but there are a very
> large number of dma_map_sg(). Seems odd to me to single out the former as
> requiring these changes, but leave the latter.

At a high level I'm OK with it. dma_map_sg_attrs() is the extended
version of dma_map_sg(); it already has a different signature, so a
different return code is not out of the question.

dma_map_sg() is just the simple, easy-to-use interface that can't do
advanced stuff.

> I'm not that opposed to this. But it will make this series a fair bit
> longer to change the 8 map_sg_attrs() usages.

Yes, but not growing the DMA API further seems like a much nicer result.

Jason
John Hubbard May 2, 2021, 1:22 a.m. UTC | #11
On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
> Hi,
> 
> This patchset continues my work to add P2PDMA support to the common
> dma map operations. This allows for creating SGLs that have both P2PDMA
> and regular pages which is a necessary step to allowing P2PDMA pages in
> userspace.
> 
> The earlier RFC[1] generated a lot of great feedback and I heard no show
> stopping objections. Thus, I've incorporated all the feedback and have
> decided to post this as a proper patch series with hopes of eventually
> getting it in mainline.
> 
> I'm happy to do a few more passes if anyone has any further feedback
> or better ideas.
> 

After an initial pass through these, I think I like the approach. And I
don't have any huge structural comments or new ideas, just smaller comments
and notes.

I'll respond to each patch, but just wanted to say up front that this is
looking promising, in my opinion.


thanks,
John Hubbard May 2, 2021, 9:23 p.m. UTC | #12
On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
> dma_map_sg() either returns a positive number indicating the number
> of entries mapped or zero indicating that resources were not available
> to create the mapping. When zero is returned, it is always safe to retry
> the mapping later once resources have been freed.
> 
> Once P2PDMA pages are mixed into the SGL there may be pages that can
> never be successfully mapped with a given device because that device may
> not actually be able to access those pages. Thus, multiple error
> conditions will need to be distinguished to determine whether a retry
> is safe.
> 
> Introduce dma_map_sg_p2pdma[_attrs]() with a different calling
> convention from dma_map_sg(). The function will return a positive
> integer on success or a negative errno on failure.
> 
> ENOMEM will be used to indicate a resource failure and EREMOTEIO to
> indicate that a P2PDMA page is not mappable.
> 
> The __DMA_ATTR_PCI_P2PDMA attribute is introduced to inform the lower
> level implementations that P2PDMA pages are allowed and to warn if a
> caller introduces them into the regular dma_map_sg() interface.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---
>   include/linux/dma-mapping.h | 15 +++++++++++
>   kernel/dma/mapping.c        | 52 ++++++++++++++++++++++++++++++++-----
>   2 files changed, 61 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index 2a984cb4d1e0..50b8f586cf59 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -60,6 +60,12 @@
>    * at least read-only at lesser-privileged levels).
>    */
>   #define DMA_ATTR_PRIVILEGED		(1UL << 9)
> +/*
> + * __DMA_ATTR_PCI_P2PDMA: This should not be used directly, use
> + * dma_map_sg_p2pdma() instead. Used internally to indicate that the
> + * caller is using the dma_map_sg_p2pdma() interface.
> + */
> +#define __DMA_ATTR_PCI_P2PDMA		(1UL << 10)
>

As mentioned near the top of this file,
Documentation/core-api/dma-attributes.rst also needs to be updated for
this new item.


>   /*
>    * A dma_addr_t can hold any valid DMA or bus address for the platform.  It can
> @@ -107,6 +113,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>   		enum dma_data_direction dir, unsigned long attrs);
>   int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>   		enum dma_data_direction dir, unsigned long attrs);
> +int dma_map_sg_p2pdma_attrs(struct device *dev, struct scatterlist *sg,
> +		int nents, enum dma_data_direction dir, unsigned long attrs);
>   void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
>   				      int nents, enum dma_data_direction dir,
>   				      unsigned long attrs);
> @@ -160,6 +168,12 @@ static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
>   {
>   	return 0;
>   }
> +static inline int dma_map_sg_p2pdma_attrs(struct device *dev,
> +		struct scatterlist *sg, int nents, enum dma_data_direction dir,
> +		unsigned long attrs)
> +{
> +	return 0;
> +}
>   static inline void dma_unmap_sg_attrs(struct device *dev,
>   		struct scatterlist *sg, int nents, enum dma_data_direction dir,
>   		unsigned long attrs)
> @@ -392,6 +406,7 @@ static inline void dma_sync_sgtable_for_device(struct device *dev,
>   #define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, 0)
>   #define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, 0)
>   #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, 0)
> +#define dma_map_sg_p2pdma(d, s, n, r) dma_map_sg_p2pdma_attrs(d, s, n, r, 0)

This hunk is fine, of course.

But, about pre-existing issues: note to self, or to anyone: send a patch to turn
these into inline functions. The macro redirection here is not adding value, but
it does make things just a little bit worse.


>   #define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, 0)
>   #define dma_map_page(d, p, o, s, r) dma_map_page_attrs(d, p, o, s, r, 0)
>   #define dma_unmap_page(d, a, s, r) dma_unmap_page_attrs(d, a, s, r, 0)
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index b6a633679933..923089c4267b 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -177,12 +177,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>   }
>   EXPORT_SYMBOL(dma_unmap_page_attrs);
>   
> -/*
> - * dma_maps_sg_attrs returns 0 on error and > 0 on success.
> - * It should never return a value < 0.
> - */

It would be better to leave the comment in place, given the non-standard
return values. However, looking around here, it would be better if we go
with the standard -ERRNO for error, and >0 for success.

There are pre-existing BUG_ON() and WARN_ON_ONCE() items that are partly
an attempt to compensate for not being able to return proper -ERRNO
codes. For example, this:

	    BUG_ON(!valid_dma_direction(dir));

...arguably should be more like this:

         if (WARN_ON_ONCE(!valid_dma_direction(dir)))
                 return -EINVAL;


> -int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
> -		enum dma_data_direction dir, unsigned long attrs)
> +static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
> +		int nents, enum dma_data_direction dir, unsigned long attrs)
>   {
>   	const struct dma_map_ops *ops = get_dma_ops(dev);
>   	int ents;
> @@ -197,6 +193,20 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>   		ents = dma_direct_map_sg(dev, sg, nents, dir, attrs);
>   	else
>   		ents = ops->map_sg(dev, sg, nents, dir, attrs);
> +
> +	return ents;
> +}
> +
> +/*
> + * dma_maps_sg_attrs returns 0 on error and > 0 on success.
> + * It should never return a value < 0.
> + */
> +int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
> +		enum dma_data_direction dir, unsigned long attrs)
> +{
> +	int ents;

Pre-existing note, feel free to ignore: the ents and nents used together
in the same routines are way too close to each other in naming. Maybe
using "requested_nents" or "nents_arg" for the incoming value would
help.

> +
> +	ents = __dma_map_sg_attrs(dev, sg, nents, dir, attrs);
>   	BUG_ON(ents < 0);
>   	debug_dma_map_sg(dev, sg, nents, ents, dir);
>   
> @@ -204,6 +214,36 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>   }
>   EXPORT_SYMBOL(dma_map_sg_attrs);
>   
> +/*
> + * like dma_map_sg_attrs, but returns a negative errno on error (and > 0
> + * on success). This function must be used if PCI P2PDMA pages might
> + * be in the scatterlist.

Let's turn this into a kernel doc comment block, seeing as how it clearly
wants to be--you're almost there already. You've even reinvented @Return,
below. :)

> + *
> + * On error this function may return:
> + *    -ENOMEM indicating that there was not enough resources available and
> + *      the transfer may be retried later
> + *    -EREMOTEIO indicating that P2PDMA pages were included but cannot
> + *      be mapped by the specified device, retries will always fail
> + *
> + * The scatterlist should be unmapped with the regular dma_unmap_sg[_attrs]().

How about:

"The scatterlist should be unmapped via dma_unmap_sg[_attrs]()."

> + */
> +int dma_map_sg_p2pdma_attrs(struct device *dev, struct scatterlist *sg,
> +		int nents, enum dma_data_direction dir, unsigned long attrs)
> +{
> +	int ents;
> +
> +	ents = __dma_map_sg_attrs(dev, sg, nents, dir,
> +				  attrs | __DMA_ATTR_PCI_P2PDMA);
> +	if (!ents)
> +		ents = -ENOMEM;
> +
> +	if (ents > 0)
> +		debug_dma_map_sg(dev, sg, nents, ents, dir);
> +
> +	return ents;
> +}
> +EXPORT_SYMBOL_GPL(dma_map_sg_p2pdma_attrs);
> +
>   void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
>   				      int nents, enum dma_data_direction dir,
>   				      unsigned long attrs)
> 

thanks,
John Hubbard May 3, 2021, 1:14 a.m. UTC | #13
On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
> When a PCI P2PDMA page is seen, set the IOVA length of the segment
> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
> apply the appropriate bus address to the segment. The IOVA is not
> created if the scatterlist only consists of P2PDMA pages.
> 
> Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
> indicate bus address segments. On unmap, P2PDMA segments are skipped
> over when determining the start and end IOVA addresses.
> 
> With this change, the flags variable in the dma_map_ops is
> set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
> P2PDMA pages.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---
>   drivers/iommu/dma-iommu.c | 66 ++++++++++++++++++++++++++++++++++-----
>   1 file changed, 58 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index af765c813cc8..ef49635f9819 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -20,6 +20,7 @@
>   #include <linux/mm.h>
>   #include <linux/mutex.h>
>   #include <linux/pci.h>
> +#include <linux/pci-p2pdma.h>
>   #include <linux/swiotlb.h>
>   #include <linux/scatterlist.h>
>   #include <linux/vmalloc.h>
> @@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
>   		sg_dma_address(s) = DMA_MAPPING_ERROR;
>   		sg_dma_len(s) = 0;
>   
> +		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {

Newbie question: I'm in the dark as to why the !s_iova_len check is
there; can you please enlighten me?

> +			if (i > 0)
> +				cur = sg_next(cur);
> +
> +			pci_p2pdma_map_bus_segment(s, cur);
> +			count++;
> +			cur_len = 0;
> +			continue;
> +		}
> +

This is really an if/else condition. And arguably, it would be better
to split out two subroutines, and call one or the other depending on
the result of is_pci_p2pdma_page(), instead of this "continue" approach.

>   		/*
>   		 * Now fill in the real DMA data. If...
>   		 * - there is a valid output segment to append to
> @@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   	struct iova_domain *iovad = &cookie->iovad;
>   	struct scatterlist *s, *prev = NULL;
>   	int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
> +	struct dev_pagemap *pgmap = NULL;
> +	enum pci_p2pdma_map_type map_type;
>   	dma_addr_t iova;
>   	size_t iova_len = 0;
>   	unsigned long mask = dma_get_seg_boundary(dev);
> -	int i;
> +	int i, ret = 0;
>   
>   	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>   	    iommu_deferred_attach(dev, domain))
> @@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   		s_length = iova_align(iovad, s_length + s_iova_off);
>   		s->length = s_length;
>   
> +		if (is_pci_p2pdma_page(sg_page(s))) {
> +			if (sg_page(s)->pgmap != pgmap) {
> +				pgmap = sg_page(s)->pgmap;
> +				map_type = pci_p2pdma_map_type(pgmap, dev,
> +							       attrs);
> +			}
> +
> +			switch (map_type) {
> +			case PCI_P2PDMA_MAP_BUS_ADDR:
> +				/*
> +				 * A zero length will be ignored by
> +				 * iommu_map_sg() and then can be detected
> +				 * in __finalise_sg() to actually map the
> +				 * bus address.
> +				 */
> +				s->length = 0;
> +				continue;
> +			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> +				break;
> +			default:
> +				ret = -EREMOTEIO;
> +				goto out_restore_sg;
> +			}
> +		}
> +
>   		/*
>   		 * Due to the alignment of our single IOVA allocation, we can
>   		 * depend on these assumptions about the segment boundary mask:
> @@ -1015,6 +1053,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   		prev = s;
>   	}
>   
> +	if (!iova_len)
> +		return __finalise_sg(dev, sg, nents, 0);
> +

ohhh, we're really slicing up this function pretty severely, what with the
continue and the early out and several other control flow changes. I think
it would be better to spend some time factoring this function into two
cases, now that you're adding a second case for PCI P2PDMA. Roughly,
two subroutines would do it.

As it is, this leaves behind a routine that is extremely hard to mentally
verify as correct.


thanks,
Logan Gunthorpe May 3, 2021, 4:38 p.m. UTC | #14
On 2021-05-02 3:23 p.m., John Hubbard wrote:
> On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
>> dma_map_sg() either returns a positive number indicating the number
>> of entries mapped or zero indicating that resources were not available
>> to create the mapping. When zero is returned, it is always safe to retry
>> the mapping later once resources have been freed.
>>
>> Once P2PDMA pages are mixed into the SGL there may be pages that can
>> never be successfully mapped with a given device because that device may
>> not actually be able to access those pages. Thus, multiple error
>> conditions will need to be distinguished to determine whether a retry
>> is safe.
>>
>> Introduce dma_map_sg_p2pdma[_attrs]() with a different calling
>> convention from dma_map_sg(). The function will return a positive
>> integer on success or a negative errno on failure.
>>
>> ENOMEM will be used to indicate a resource failure and EREMOTEIO to
>> indicate that a P2PDMA page is not mappable.
>>
>> The __DMA_ATTR_PCI_P2PDMA attribute is introduced to inform the lower
>> level implementations that P2PDMA pages are allowed and to warn if a
>> caller introduces them into the regular dma_map_sg() interface.
>>
>> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
>> ---
>>   include/linux/dma-mapping.h | 15 +++++++++++
>>   kernel/dma/mapping.c        | 52 ++++++++++++++++++++++++++++++++-----
>>   2 files changed, 61 insertions(+), 6 deletions(-)
>>
>> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
>> index 2a984cb4d1e0..50b8f586cf59 100644
>> --- a/include/linux/dma-mapping.h
>> +++ b/include/linux/dma-mapping.h
>> @@ -60,6 +60,12 @@
>>    * at least read-only at lesser-privileged levels).
>>    */
>>   #define DMA_ATTR_PRIVILEGED		(1UL << 9)
>> +/*
>> + * __DMA_ATTR_PCI_P2PDMA: This should not be used directly, use
>> + * dma_map_sg_p2pdma() instead. Used internally to indicate that the
>> + * caller is using the dma_map_sg_p2pdma() interface.
>> + */
>> +#define __DMA_ATTR_PCI_P2PDMA		(1UL << 10)
>>
> 
> As mentioned near the top of this file,
> Documentation/core-api/dma-attributes.rst also needs to be updated, for
> this new item.

As this attribute is not meant to be used by anyone outside the dma
functions, I don't think it should be documented there. (That's why it
has a double underscore prefix).

>>   /*
>>    * A dma_addr_t can hold any valid DMA or bus address for the platform.  It can
>> @@ -107,6 +113,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>>   		enum dma_data_direction dir, unsigned long attrs);
>>   int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>>   		enum dma_data_direction dir, unsigned long attrs);
>> +int dma_map_sg_p2pdma_attrs(struct device *dev, struct scatterlist *sg,
>> +		int nents, enum dma_data_direction dir, unsigned long attrs);
>>   void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
>>   				      int nents, enum dma_data_direction dir,
>>   				      unsigned long attrs);
>> @@ -160,6 +168,12 @@ static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
>>   {
>>   	return 0;
>>   }
>> +static inline int dma_map_sg_p2pdma_attrs(struct device *dev,
>> +		struct scatterlist *sg, int nents, enum dma_data_direction dir,
>> +		unsigned long attrs)
>> +{
>> +	return 0;
>> +}
>>   static inline void dma_unmap_sg_attrs(struct device *dev,
>>   		struct scatterlist *sg, int nents, enum dma_data_direction dir,
>>   		unsigned long attrs)
>> @@ -392,6 +406,7 @@ static inline void dma_sync_sgtable_for_device(struct device *dev,
>>   #define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, 0)
>>   #define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, 0)
>>   #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, 0)
>> +#define dma_map_sg_p2pdma(d, s, n, r) dma_map_sg_p2pdma_attrs(d, s, n, r, 0)
> 
> This hunk is fine, of course.
> 
> But, about pre-existing issues: note to self, or to anyone: send a patch to turn
> these into inline functions. The macro redirection here is not adding value, but
> it does make things just a little bit worse.
> 
> 
>>   #define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, 0)
>>   #define dma_map_page(d, p, o, s, r) dma_map_page_attrs(d, p, o, s, r, 0)
>>   #define dma_unmap_page(d, a, s, r) dma_unmap_page_attrs(d, a, s, r, 0)
>> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
>> index b6a633679933..923089c4267b 100644
>> --- a/kernel/dma/mapping.c
>> +++ b/kernel/dma/mapping.c
>> @@ -177,12 +177,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>>   }
>>   EXPORT_SYMBOL(dma_unmap_page_attrs);
>>   
>> -/*
>> - * dma_maps_sg_attrs returns 0 on error and > 0 on success.
>> - * It should never return a value < 0.
>> - */
> 
> It would be better to leave the comment in place, given the non-standard
> return values. However, looking around here, it would be better if we go
> with the standard -ERRNO for error, and >0 for sucess.

The comment is actually left in place. The diff just makes it look like
it was removed. It is added back lower down in the diff.

> There are pre-existing BUG_ON() and WARN_ON_ONCE() items that are partly
> an attempt to compensate for not being able to return proper -ERRNO
> codes. For example, this:
> 
> 	    BUG_ON(!valid_dma_direction(dir));
> 
> ...arguably should be more like this:
> 
>          if(WARN_ON_ONCE(!valid_dma_direction(dir)))
>                  return -EINVAL;

Yes, but you'll have to see the discussion in the RFC. The complaint was
that the calling convention for dma_map_sg() is not expected to return
anything other than 0 or the number of entries mapped. It can't return a
negative error code. That's why BUG_ON(ents < 0) is in the existing
code. That's also why this series introduces the new dma_map_sg_p2pdma()
function. (Though, Jason has made some suggestions to further change this).
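
To make the difference between the two conventions concrete, here is a
toy user-space model (not kernel code; the names are invented for
illustration) of the return-value semantics being discussed:

```c
#include <assert.h>
#include <errno.h>

/* Toy model of dma_map_sg(): returns the number of mapped entries,
 * or 0 on failure. A 0 return says nothing about whether a retry
 * could ever succeed. */
static int toy_map_sg(int nents, int have_resources)
{
	return have_resources ? nents : 0;
}

/* Toy model of the proposed dma_map_sg_p2pdma(): returns nents on
 * success, or a negative errno that distinguishes a transient
 * resource failure from a permanently unmappable P2PDMA page. */
static int toy_map_sg_p2pdma(int nents, int have_resources, int p2p_mappable)
{
	if (!p2p_mappable)
		return -EREMOTEIO;	/* retries will always fail */
	if (!have_resources)
		return -ENOMEM;		/* may be retried later */
	return nents;
}
```

A caller of the old interface can only retry blindly on 0; a caller of
the new one can give up immediately on -EREMOTEIO.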

> 
>> -int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>> -		enum dma_data_direction dir, unsigned long attrs)
>> +static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
>> +		int nents, enum dma_data_direction dir, unsigned long attrs)
>>   {
>>   	const struct dma_map_ops *ops = get_dma_ops(dev);
>>   	int ents;
>> @@ -197,6 +193,20 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>>   		ents = dma_direct_map_sg(dev, sg, nents, dir, attrs);
>>   	else
>>   		ents = ops->map_sg(dev, sg, nents, dir, attrs);
>> +
>> +	return ents;
>> +}
>> +
>> +/*
>> + * dma_maps_sg_attrs returns 0 on error and > 0 on success.
>> + * It should never return a value < 0.
>> + */
>> +int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>> +		enum dma_data_direction dir, unsigned long attrs)
>> +{
>> +	int ents;
> 
> Pre-existing note, feel free to ignore: the ents and nents in the same
> routines together, are way too close to the each other in naming. Maybe
> using "requested_nents", or "nents_arg", for the incoming value, would
> help.

Ok, will change.

>> +
>> +	ents = __dma_map_sg_attrs(dev, sg, nents, dir, attrs);
>>   	BUG_ON(ents < 0);
>>   	debug_dma_map_sg(dev, sg, nents, ents, dir);
>>   
>> @@ -204,6 +214,36 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>>   }
>>   EXPORT_SYMBOL(dma_map_sg_attrs);
>>   
>> +/*
>> + * like dma_map_sg_attrs, but returns a negative errno on error (and > 0
>> + * on success). This function must be used if PCI P2PDMA pages might
>> + * be in the scatterlist.
> 
> Let's turn this into a kernel doc comment block, seeing as how it clearly
> wants to be--you're almost there already. You've even reinvented @Return,
> below. :)

Just trying to follow the convention in this file. But I can make it a
kernel-doc comment.

>> + *
>> + * On error this function may return:
>> + *    -ENOMEM indicating that there was not enough resources available and
>> + *      the transfer may be retried later
>> + *    -EREMOTEIO indicating that P2PDMA pages were included but cannot
>> + *      be mapped by the specified device, retries will always fail
>> + *
>> + * The scatterlist should be unmapped with the regular dma_unmap_sg[_attrs]().
> 
> How about:
> 
> "The scatterlist should be unmapped via dma_unmap_sg[_attrs]()."

Ok
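
For reference, a kernel-doc version of that comment could look
something like the sketch below (exact wording and parameter
descriptions are up to the author):

```c
/**
 * dma_map_sg_p2pdma_attrs - map a scatterlist that may contain
 *	PCI P2PDMA pages for DMA
 * @dev: device to map for
 * @sg: scatterlist to map
 * @nents: number of entries in @sg
 * @dir: DMA transfer direction
 * @attrs: DMA mapping attributes
 *
 * Like dma_map_sg_attrs(), but must be used if PCI P2PDMA pages might
 * be in the scatterlist. The scatterlist should be unmapped with the
 * regular dma_unmap_sg[_attrs]().
 *
 * Return: the number of mapped entries (> 0) on success, -ENOMEM if
 * resources were not available (the transfer may be retried later), or
 * -EREMOTEIO if P2PDMA pages were included but cannot be mapped by the
 * specified device (retries will always fail).
 */
```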

Logan
Christoph Hellwig May 3, 2021, 6:28 p.m. UTC | #15
On Tue, Apr 27, 2021 at 08:01:13PM -0300, Jason Gunthorpe wrote:
> At a high level I'm OK with it. dma_map_sg_attrs() is the extra
> extended version of dma_map_sg(), it already has a different
> signature, a different return code is not out of the question.
> 
> dma_map_sg() is just the simple easy to use interface that can't do
> advanced stuff.
> 
> > I'm not that opposed to this. But it will make this series a fair bit
> > longer to change the 8 map_sg_attrs() usages.
> 
> Yes, but the result seems much nicer to not grow the DMA API further.

We already have a mapping function that can return errors:
dma_map_sgtable.

I think it might make more sense to piggy back on that, as the sg_table
abstraction is pretty useful basically everywhere that we deal with
scatterlists anyway.

In the hopefully no too long run I plan to get rid of scatterlists in
at least NVMe and other high performance devices anyway.
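
dma_map_sgtable() already uses a 0-on-success/negative-errno
convention, with the mapped entry count recorded in the sg_table
itself rather than in the return value, which is what makes it a
natural fit here. A toy user-space model of that shape (invented
names; not the kernel implementation):

```c
#include <errno.h>

/* Minimal stand-in for struct sg_table: the mapping call returns
 * 0 or a negative errno, and stores the mapped entry count in the
 * table instead of the return value. */
struct toy_sg_table {
	int orig_nents;		/* entries submitted for mapping */
	int nents;		/* entries actually mapped */
};

static int toy_map_sgtable(struct toy_sg_table *sgt, int have_resources)
{
	if (!have_resources)
		return -ENOMEM;
	sgt->nents = sgt->orig_nents;	/* pretend a 1:1 mapping */
	return 0;
}
```

Because the return value only carries success or an errno, extending
it with new error codes such as -EREMOTEIO needs no change to the
calling convention.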
Logan Gunthorpe May 3, 2021, 6:31 p.m. UTC | #16
On 2021-05-03 12:28 p.m., Christoph Hellwig wrote:
> On Tue, Apr 27, 2021 at 08:01:13PM -0300, Jason Gunthorpe wrote:
>> At a high level I'm OK with it. dma_map_sg_attrs() is the extra
>> extended version of dma_map_sg(), it already has a different
>> signature, a different return code is not out of the question.
>>
>> dma_map_sg() is just the simple easy to use interface that can't do
>> advanced stuff.
>>
>>> I'm not that opposed to this. But it will make this series a fair bit
>>> longer to change the 8 map_sg_attrs() usages.
>>
>> Yes, but the result seems much nicer to not grow the DMA API further.
> 
> We already have a mapping function that can return errors:
> dma_map_sgtable.
> 
> I think it might make more sense to piggy back on that, as the sg_table
> abstraction is pretty useful basically everywhere that we deal with
> scatterlists anyway.

Oh, I didn't even realize that existed. I'll use dma_map_sgtable() for v2.

Thanks,

Logan
Logan Gunthorpe May 6, 2021, 11:59 p.m. UTC | #17
Sorry, I think I missed responding to this one so here are the answers:

On 2021-05-02 7:14 p.m., John Hubbard wrote:
> On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
>> When a PCI P2PDMA page is seen, set the IOVA length of the segment
>> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
>> apply the appropriate bus address to the segment. The IOVA is not
>> created if the scatterlist only consists of P2PDMA pages.
>>
>> Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
>> indicate bus address segments. On unmap, P2PDMA segments are skipped
>> over when determining the start and end IOVA addresses.
>>
>> With this change, the flags variable in the dma_map_ops is
>> set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
>> P2PDMA pages.
>>
>> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
>> ---
>>   drivers/iommu/dma-iommu.c | 66 ++++++++++++++++++++++++++++++++++-----
>>   1 file changed, 58 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index af765c813cc8..ef49635f9819 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -20,6 +20,7 @@
>>   #include <linux/mm.h>
>>   #include <linux/mutex.h>
>>   #include <linux/pci.h>
>> +#include <linux/pci-p2pdma.h>
>>   #include <linux/swiotlb.h>
>>   #include <linux/scatterlist.h>
>>   #include <linux/vmalloc.h>
>> @@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev,
>> struct scatterlist *sg, int nents,
>>           sg_dma_address(s) = DMA_MAPPING_ERROR;
>>           sg_dma_len(s) = 0;
>>   +        if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
> 
> Newbie question: I'm in the dark as to why the !s_iova_len check is there,
> can you please enlighten me?

The loop in iommu_dma_map_sg() will decide what to do with P2PDMA pages.
If it is to map it with the bus address it will set s_iova_len to zero
so that no space is allocated in the IOVA. If it is to map it through
the host bridge, then it will leave s_iova_len alone and create the
appropriate mapping with the CPU physical address.

This condition notices that s_iova_len was set to zero and fills in a SG
segment with the PCI bus address for that region.


> 
>> +            if (i > 0)
>> +                cur = sg_next(cur);
>> +
>> +            pci_p2pdma_map_bus_segment(s, cur);
>> +            count++;
>> +            cur_len = 0;
>> +            continue;
>> +        }
>> +
> 
> This is really an if/else condition. And arguably, it would be better
> to split out two subroutines, and call one or the other depending on
> the result of if is_pci_p2pdma_page(), instead of this "continue" approach.

I really disagree here. Putting the exceptional condition in its own if
statement and leaving the normal case un-indented is easier to read and
understand. It also saves an extra level of indentation in code that is
already starting to look a little squished.


>>           /*
>>            * Now fill in the real DMA data. If...
>>            * - there is a valid output segment to append to
>> @@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev,
>> struct scatterlist *sg,
>>       struct iova_domain *iovad = &cookie->iovad;
>>       struct scatterlist *s, *prev = NULL;
>>       int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
>> +    struct dev_pagemap *pgmap = NULL;
>> +    enum pci_p2pdma_map_type map_type;
>>       dma_addr_t iova;
>>       size_t iova_len = 0;
>>       unsigned long mask = dma_get_seg_boundary(dev);
>> -    int i;
>> +    int i, ret = 0;
>>         if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>>           iommu_deferred_attach(dev, domain))
>> @@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev,
>> struct scatterlist *sg,
>>           s_length = iova_align(iovad, s_length + s_iova_off);
>>           s->length = s_length;
>>   +        if (is_pci_p2pdma_page(sg_page(s))) {
>> +            if (sg_page(s)->pgmap != pgmap) {
>> +                pgmap = sg_page(s)->pgmap;
>> +                map_type = pci_p2pdma_map_type(pgmap, dev,
>> +                                   attrs);
>> +            }
>> +
>> +            switch (map_type) {
>> +            case PCI_P2PDMA_MAP_BUS_ADDR:
>> +                /*
>> +                 * A zero length will be ignored by
>> +                 * iommu_map_sg() and then can be detected
>> +                 * in __finalise_sg() to actually map the
>> +                 * bus address.
>> +                 */
>> +                s->length = 0;
>> +                continue;
>> +            case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
>> +                break;
>> +            default:
>> +                ret = -EREMOTEIO;
>> +                goto out_restore_sg;
>> +            }
>> +        }
>> +
>>           /*
>>            * Due to the alignment of our single IOVA allocation, we can
>>            * depend on these assumptions about the segment boundary mask:
>> @@ -1015,6 +1053,9 @@ static int iommu_dma_map_sg(struct device *dev,
>> struct scatterlist *sg,
>>           prev = s;
>>       }
>>   +    if (!iova_len)
>> +        return __finalise_sg(dev, sg, nents, 0);
>> +
> 
> ohhh, we're really slicing up this function pretty severely, what with the
> continue and the early out and several other control flow changes. I think
> it would be better to spend some time factoring this function into two
> cases, now that you're adding a second case for PCI P2PDMA. Roughly,
> two subroutines would do it.

I don't see how we can factor this into two cases. The SGL may contain
normal pages or P2PDMA pages or a mix of both and we have to create an
IOVA area for all the regions that map the CPU physical address (ie
normal pages and some P2PDMA pages) then also insert segments for any
PCI bus address.

> As it is, this leaves behind a routine that is extremely hard to mentally
> verify as correct.

Yes, this is tricky code, but not that incomprehensible. Most of the
difficulty is in understanding how it works before adding the P2PDMA bits.

There are two loops: one to prepare the IOVA region and another to fill
in the SGL. We have to add cases in both loops to skip the segments that
need to be mapped with the bus address in the first loop, and insert the
dma SGL segments in the second loop.
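
A stripped-down user-space model of that two-pass scheme may help
(invented names; the real code lives in iommu_dma_map_sg() and
__finalise_sg()): the first pass zeroes the length of bus-address
segments so they consume no IOVA space, and the second pass detects
those zero lengths and fills in the bus address instead.

```c
#define NSEGS 4

struct toy_seg {
	int is_p2p;		/* maps via a PCI bus address */
	unsigned len;		/* input length; zeroed to mark p2p */
	unsigned long dma_addr;	/* output DMA address */
};

/* Pass 1: total up the IOVA space needed, zeroing the length of
 * p2p segments so they are skipped by the IOVA allocation. */
static unsigned prepare(struct toy_seg *segs, unsigned *saved_lens)
{
	unsigned iova_len = 0;
	int i;

	for (i = 0; i < NSEGS; i++) {
		if (segs[i].is_p2p) {
			saved_lens[i] = segs[i].len;	/* remember it */
			segs[i].len = 0;
		} else {
			iova_len += segs[i].len;
		}
	}
	return iova_len;
}

/* Pass 2: assign IOVA offsets to normal segments, and a (fake) bus
 * address to the zero-length p2p segments, restoring their lengths. */
static void finalise(struct toy_seg *segs, const unsigned *saved_lens,
		     unsigned long iova_base, unsigned long bus_base)
{
	unsigned long off = 0;
	int i;

	for (i = 0; i < NSEGS; i++) {
		if (segs[i].is_p2p && !segs[i].len) {
			segs[i].dma_addr = bus_base;
			segs[i].len = saved_lens[i];
		} else {
			segs[i].dma_addr = iova_base + off;
			off += segs[i].len;
		}
	}
}
```

Note how the p2p segment consumes no IOVA space: the offsets of the
normal segments around it are unaffected.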

Logan
Donald Dutile May 11, 2021, 4:05 p.m. UTC | #18
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
> Hi,
>
> This patchset continues my work to to add P2PDMA support to the common
> dma map operations. This allows for creating SGLs that have both P2PDMA
> and regular pages which is a necessary step to allowing P2PDMA pages in
> userspace.
>
> The earlier RFC[1] generated a lot of great feedback and I heard no show
> stopping objections. Thus, I've incorporated all the feedback and have
> decided to post this as a proper patch series with hopes of eventually
> getting it in mainline.
>
> I'm happy to do a few more passes if anyone has any further feedback
> or better ideas.
>
> This series is based on v5.12-rc6 and a git branch can be found here:
>
>    https://github.com/sbates130272/linux-p2pmem/  p2pdma_map_ops_v1
>
> Thanks,
>
> Logan
>
> [1] https://lore.kernel.org/linux-block/20210311233142.7900-1-logang@deltatee.com/
>
>
> Changes since the RFC:
>   * Added comment and fixed up the pci_get_slot patch. (per Bjorn)
>   * Fixed glaring sg_phys() double offset bug. (per Robin)
>   * Created a new map operation (dma_map_sg_p2pdma()) with a new calling
>     convention instead of modifying the calling convention of
>     dma_map_sg(). (per Robin)
>   * Integrated the two similar pci_p2pdma_dma_map_type() and
>     pci_p2pdma_map_type() functions into one (per Ira)
>   * Reworked some of the logic in the map_sg() implementations into
>     helpers in the p2pdma code. (per Christoph)
>   * Dropped a bunch of unnecessary symbol exports (per Christoph)
>   * Expanded the code in dma_pci_p2pdma_supported() for clarity. (per
>     Ira and Christoph)
>   * Finished off using the new dma_map_sg_p2pdma() call in rdma_rw
>     and removed the old pci_p2pdma_[un]map_sg(). (per Jason)
>
> --
>
> Logan Gunthorpe (16):
>    PCI/P2PDMA: Pass gfp_mask flags to upstream_bridge_distance_warn()
>    PCI/P2PDMA: Avoid pci_get_slot() which sleeps
>    PCI/P2PDMA: Attempt to set map_type if it has not been set
>    PCI/P2PDMA: Refactor pci_p2pdma_map_type() to take pagmap and device
>    dma-mapping: Introduce dma_map_sg_p2pdma()
>    lib/scatterlist: Add flag for indicating P2PDMA segments in an SGL
>    PCI/P2PDMA: Make pci_p2pdma_map_type() non-static
>    PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
>    dma-direct: Support PCI P2PDMA pages in dma-direct map_sg
>    dma-mapping: Add flags to dma_map_ops to indicate PCI P2PDMA support
>    iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg
>    nvme-pci: Check DMA ops when indicating support for PCI P2PDMA
>    nvme-pci: Convert to using dma_map_sg_p2pdma for p2pdma pages
>    nvme-rdma: Ensure dma support when using p2pdma
>    RDMA/rw: use dma_map_sg_p2pdma()
>    PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg()
>
>   drivers/infiniband/core/rw.c |  50 +++-------
>   drivers/iommu/dma-iommu.c    |  66 ++++++++++--
>   drivers/nvme/host/core.c     |   3 +-
>   drivers/nvme/host/nvme.h     |   2 +-
>   drivers/nvme/host/pci.c      |  39 ++++----
>   drivers/nvme/target/rdma.c   |   3 +-
>   drivers/pci/Kconfig          |   2 +-
>   drivers/pci/p2pdma.c         | 188 +++++++++++++++++++----------------
>   include/linux/dma-map-ops.h  |   3 +
>   include/linux/dma-mapping.h  |  20 ++++
>   include/linux/pci-p2pdma.h   |  53 ++++++----
>   include/linux/scatterlist.h  |  49 ++++++++-
>   include/rdma/ib_verbs.h      |  32 ++++++
>   kernel/dma/direct.c          |  25 ++++-
>   kernel/dma/mapping.c         |  70 +++++++++++--
>   15 files changed, 416 insertions(+), 189 deletions(-)
>
>
> base-commit: e49d033bddf5b565044e2abe4241353959bc9120
> --
> 2.20.1
>
Apologies for the delay in providing feedback; climbing out of several deep trenches at the mother ship :-/

Replying to some directly, and indirectly (mostly through JohnH's replies).

General comments:
1) nits in 1,2,3,5;
    4: I agree w/JohnH & JasonG -- seems like it needs a device-layer that gets to a bus-layer, but I'm wearing my 'broader than PCI' hat in this review; I see a (classic) ChristophH refactoring and cleanup in this area, and I wonder if we ought to clean it up now, since CH has done so much to make the dma-mapping system easier to add to, modify and review thanks to the broad arch (& bus) cleanup that has been done.  If that delays it too much, then add a TODO to do so.
2) 6: yes! let's not worry about, or even bother supporting, 32-bit anything wrt p2pdma.
3) 7:nit
4) 8: ok;
5) 9: ditto to JohnH's feedback on added / clearer comment & code flow (if-else).
6) 10: nits; q: should p2pdma mapping go through dma-ops so it is generalized for future interconnects (CXL, GenZ)?
7) 11: It says it is supporting p2pdma in dma-iommu's map_sg, but it seems like it is just leveraging shared code and short-circuiting IOMMU use.
8) 12-14: didn't review; letting the block/nvme/direct-io folks cover this space
9) 15: Looking to JasonG to sanitize
10) 16: cleanup; a-ok.

- DonD
Donald Dutile May 11, 2021, 4:05 p.m. UTC | #19
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
> dma_map_sg() either returns a positive number indicating the number
> of entries mapped or zero indicating that resources were not available
> to create the mapping. When zero is returned, it is always safe to retry
> the mapping later once resources have been freed.
>
> Once P2PDMA pages are mixed into the SGL there may be pages that may
> never be successfully mapped with a given device because that device may
> not actually be able to access those pages. Thus, multiple error
> conditions will need to be distinguished to determine weather a retry
s/weather/whether/
> is safe.
>
> Introduce dma_map_sg_p2pdma[_attrs]() with a different calling
> convention from dma_map_sg(). The function will return a positive
> integer on success or a negative errno on failure.
>
> ENOMEM will be used to indicate a resource failure and EREMOTEIO to
> indicate that a P2PDMA page is not mappable.
>
> The __DMA_ATTR_PCI_P2PDMA attribute is introduced to inform the lower
> level implementations that P2PDMA pages are allowed and to warn if a
> caller introduces them into the regular dma_map_sg() interface.
>
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
John caught any other comments I had (and more).
-dd

> ---
>   include/linux/dma-mapping.h | 15 +++++++++++
>   kernel/dma/mapping.c        | 52 ++++++++++++++++++++++++++++++++-----
>   2 files changed, 61 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index 2a984cb4d1e0..50b8f586cf59 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -60,6 +60,12 @@
>    * at least read-only at lesser-privileged levels).
>    */
>   #define DMA_ATTR_PRIVILEGED		(1UL << 9)
> +/*
> + * __DMA_ATTR_PCI_P2PDMA: This should not be used directly, use
> + * dma_map_sg_p2pdma() instead. Used internally to indicate that the
> + * caller is using the dma_map_sg_p2pdma() interface.
> + */
> +#define __DMA_ATTR_PCI_P2PDMA		(1UL << 10)
>   
>   /*
>    * A dma_addr_t can hold any valid DMA or bus address for the platform.  It can
> @@ -107,6 +113,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>   		enum dma_data_direction dir, unsigned long attrs);
>   int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>   		enum dma_data_direction dir, unsigned long attrs);
> +int dma_map_sg_p2pdma_attrs(struct device *dev, struct scatterlist *sg,
> +		int nents, enum dma_data_direction dir, unsigned long attrs);
>   void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
>   				      int nents, enum dma_data_direction dir,
>   				      unsigned long attrs);
> @@ -160,6 +168,12 @@ static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
>   {
>   	return 0;
>   }
> +static inline int dma_map_sg_p2pdma_attrs(struct device *dev,
> +		struct scatterlist *sg, int nents, enum dma_data_direction dir,
> +		unsigned long attrs)
> +{
> +	return 0;
> +}
>   static inline void dma_unmap_sg_attrs(struct device *dev,
>   		struct scatterlist *sg, int nents, enum dma_data_direction dir,
>   		unsigned long attrs)
> @@ -392,6 +406,7 @@ static inline void dma_sync_sgtable_for_device(struct device *dev,
>   #define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, 0)
>   #define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, 0)
>   #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, 0)
> +#define dma_map_sg_p2pdma(d, s, n, r) dma_map_sg_p2pdma_attrs(d, s, n, r, 0)
>   #define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, 0)
>   #define dma_map_page(d, p, o, s, r) dma_map_page_attrs(d, p, o, s, r, 0)
>   #define dma_unmap_page(d, a, s, r) dma_unmap_page_attrs(d, a, s, r, 0)
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index b6a633679933..923089c4267b 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -177,12 +177,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>   }
>   EXPORT_SYMBOL(dma_unmap_page_attrs);
>   
> -/*
> - * dma_maps_sg_attrs returns 0 on error and > 0 on success.
> - * It should never return a value < 0.
> - */
> -int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
> -		enum dma_data_direction dir, unsigned long attrs)
> +static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
> +		int nents, enum dma_data_direction dir, unsigned long attrs)
>   {
>   	const struct dma_map_ops *ops = get_dma_ops(dev);
>   	int ents;
> @@ -197,6 +193,20 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>   		ents = dma_direct_map_sg(dev, sg, nents, dir, attrs);
>   	else
>   		ents = ops->map_sg(dev, sg, nents, dir, attrs);
> +
> +	return ents;
> +}
> +
> +/*
> + * dma_maps_sg_attrs returns 0 on error and > 0 on success.
> + * It should never return a value < 0.
> + */
> +int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
> +		enum dma_data_direction dir, unsigned long attrs)
> +{
> +	int ents;
> +
> +	ents = __dma_map_sg_attrs(dev, sg, nents, dir, attrs);
>   	BUG_ON(ents < 0);
>   	debug_dma_map_sg(dev, sg, nents, ents, dir);
>   
> @@ -204,6 +214,36 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>   }
>   EXPORT_SYMBOL(dma_map_sg_attrs);
>   
> +/*
> + * like dma_map_sg_attrs, but returns a negative errno on error (and > 0
> + * on success). This function must be used if PCI P2PDMA pages might
> + * be in the scatterlist.
> + *
> + * On error this function may return:
> + *    -ENOMEM indicating that there was not enough resources available and
> + *      the transfer may be retried later
> + *    -EREMOTEIO indicating that P2PDMA pages were included but cannot
> + *      be mapped by the specified device, retries will always fail
> + *
> + * The scatterlist should be unmapped with the regular dma_unmap_sg[_attrs]().
> + */
> +int dma_map_sg_p2pdma_attrs(struct device *dev, struct scatterlist *sg,
> +		int nents, enum dma_data_direction dir, unsigned long attrs)
> +{
> +	int ents;
> +
> +	ents = __dma_map_sg_attrs(dev, sg, nents, dir,
> +				  attrs | __DMA_ATTR_PCI_P2PDMA);
> +	if (!ents)
> +		ents = -ENOMEM;
> +
> +	if (ents > 0)
> +		debug_dma_map_sg(dev, sg, nents, ents, dir);
> +
> +	return ents;
> +}
> +EXPORT_SYMBOL_GPL(dma_map_sg_p2pdma_attrs);
> +
>   void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
>   				      int nents, enum dma_data_direction dir,
>   				      unsigned long attrs)
Donald Dutile May 11, 2021, 4:06 p.m. UTC | #20
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
> When a PCI P2PDMA page is seen, set the IOVA length of the segment
> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
> apply the appropriate bus address to the segment. The IOVA is not
> created if the scatterlist only consists of P2PDMA pages.
>
> Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
> indicate bus address segments. On unmap, P2PDMA segments are skipped
> over when determining the start and end IOVA addresses.
>
> With this change, the flags variable in the dma_map_ops is
> set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
> P2PDMA pages.
>
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
So, this code prevents use of p2pdma using an IOMMU, which wasn't checked and
short-circuited by other checks to use dma-direct?

So my overall comment to this code & related comments is that it should be sprinkled
with notes like "doesn't support IOMMU" and / or "TODO" when/if IOMMU is to be supported.
Or, if IOMMU-based p2pdma isn't supported in these routines directly, where/how they will be supported?

> ---
>   drivers/iommu/dma-iommu.c | 66 ++++++++++++++++++++++++++++++++++-----
>   1 file changed, 58 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index af765c813cc8..ef49635f9819 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -20,6 +20,7 @@
>   #include <linux/mm.h>
>   #include <linux/mutex.h>
>   #include <linux/pci.h>
> +#include <linux/pci-p2pdma.h>
>   #include <linux/swiotlb.h>
>   #include <linux/scatterlist.h>
>   #include <linux/vmalloc.h>
> @@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
>   		sg_dma_address(s) = DMA_MAPPING_ERROR;
>   		sg_dma_len(s) = 0;
>   
> +		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
> +			if (i > 0)
> +				cur = sg_next(cur);
> +
> +			pci_p2pdma_map_bus_segment(s, cur);
> +			count++;
> +			cur_len = 0;
> +			continue;
> +		}
> +
>   		/*
>   		 * Now fill in the real DMA data. If...
>   		 * - there is a valid output segment to append to
> @@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   	struct iova_domain *iovad = &cookie->iovad;
>   	struct scatterlist *s, *prev = NULL;
>   	int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
> +	struct dev_pagemap *pgmap = NULL;
> +	enum pci_p2pdma_map_type map_type;
>   	dma_addr_t iova;
>   	size_t iova_len = 0;
>   	unsigned long mask = dma_get_seg_boundary(dev);
> -	int i;
> +	int i, ret = 0;
>   
>   	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>   	    iommu_deferred_attach(dev, domain))
> @@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   		s_length = iova_align(iovad, s_length + s_iova_off);
>   		s->length = s_length;
>   
> +		if (is_pci_p2pdma_page(sg_page(s))) {
> +			if (sg_page(s)->pgmap != pgmap) {
> +				pgmap = sg_page(s)->pgmap;
> +				map_type = pci_p2pdma_map_type(pgmap, dev,
> +							       attrs);
> +			}
> +
> +			switch (map_type) {
> +			case PCI_P2PDMA_MAP_BUS_ADDR:
> +				/*
> +				 * A zero length will be ignored by
> +				 * iommu_map_sg() and then can be detected
> +				 * in __finalise_sg() to actually map the
> +				 * bus address.
> +				 */
> +				s->length = 0;
> +				continue;

> +			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> +				break;
So, this 'short-circuits' the use of the IOMMU, silently?
This seems ripe for users who enable the IOMMU for security reasons and also enable p2pdma,
not realizing that the combination isn't as secure as it appears to be.
If my understanding is wrong, please point me to the documentation or code that corrects this misunderstanding.  I could have missed a warning when both are enabled in a past patch set.
Thanks.
--dd
> +			default:
> +				ret = -EREMOTEIO;
> +				goto out_restore_sg;
> +			}
> +		}
> +
>   		/*
>   		 * Due to the alignment of our single IOVA allocation, we can
>   		 * depend on these assumptions about the segment boundary mask:
> @@ -1015,6 +1053,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   		prev = s;
>   	}
>   
> +	if (!iova_len)
> +		return __finalise_sg(dev, sg, nents, 0);
> +
>   	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
>   	if (!iova)
>   		goto out_restore_sg;
> @@ -1032,13 +1073,13 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
>   out_restore_sg:
>   	__invalidate_sg(sg, nents);
> -	return 0;
> +	return ret;
>   }
>   
>   static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>   		int nents, enum dma_data_direction dir, unsigned long attrs)
>   {
> -	dma_addr_t start, end;
> +	dma_addr_t end, start = DMA_MAPPING_ERROR;
>   	struct scatterlist *tmp;
>   	int i;
>   
> @@ -1054,14 +1095,22 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>   	 * The scatterlist segments are mapped into a single
>   	 * contiguous IOVA allocation, so this is incredibly easy.
>   	 */
> -	start = sg_dma_address(sg);
> -	for_each_sg(sg_next(sg), tmp, nents - 1, i) {
> +	for_each_sg(sg, tmp, nents, i) {
> +		if (sg_is_pci_p2pdma(tmp)) {
> +			sg_unmark_pci_p2pdma(tmp);
> +			continue;
> +		}
>   		if (sg_dma_len(tmp) == 0)
>   			break;
> -		sg = tmp;
> +
> +		if (start == DMA_MAPPING_ERROR)
> +			start = sg_dma_address(tmp);
> +
> +		end = sg_dma_address(tmp) + sg_dma_len(tmp);
>   	}
> -	end = sg_dma_address(sg) + sg_dma_len(sg);
> -	__iommu_dma_unmap(dev, start, end - start);
> +
> +	if (start != DMA_MAPPING_ERROR)
> +		__iommu_dma_unmap(dev, start, end - start);
>   }
>   
Overall, carving this out into a separate, refactored dma-ops-based p2pdma function seems cleaner and less complicated than fiddling with the generic dma-iommu code, but I'm guessing you tried that and it was too complicated to do?
--dd

>   static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
> @@ -1254,6 +1303,7 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
>   }
>   
>   static const struct dma_map_ops iommu_dma_ops = {
> +	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
wait, it's a const that's always turned on?
shouldn't the define for this flag be 0 for non-p2pdma configs?

>   	.alloc			= iommu_dma_alloc,
>   	.free			= iommu_dma_free,
>   	.alloc_pages		= dma_common_alloc_pages,
Logan Gunthorpe May 11, 2021, 4:35 p.m. UTC | #21
On 2021-05-11 10:06 a.m., Don Dutile wrote:
> On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
>> When a PCI P2PDMA page is seen, set the IOVA length of the segment
>> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
>> apply the appropriate bus address to the segment. The IOVA is not
>> created if the scatterlist only consists of P2PDMA pages.
>>
>> Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
>> indicate bus address segments. On unmap, P2PDMA segments are skipped
>> over when determining the start and end IOVA addresses.
>>
>> With this change, the flags variable in the dma_map_ops is
>> set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
>> P2PDMA pages.
>>
>> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> So, this code prevents use of p2pdma using an IOMMU, which wasn't checked and
> short-circuited by other checks to use dma-direct?

No, not at all. This patch is adding support for p2pdma pages for IOMMUs
that use the dma-iommu abstraction. Other arch specific IOMMUs that
don't use the dma-iommu abstraction are left unsupported. Support would
need to be added to them, or better yet; they should be ported to dma-iommu.

> 
> So my overall comment to this code & related comments is that it should be sprinkled
> with notes like "doesn't support IOMMU" and / or "TODO" when/if IOMMU is to be supported.
> Or, if IOMMU-based p2pdma isn't supported in these routines directly, where/how they will be supported?
> 
>> ---
>>   drivers/iommu/dma-iommu.c | 66 ++++++++++++++++++++++++++++++++++-----
>>   1 file changed, 58 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index af765c813cc8..ef49635f9819 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -20,6 +20,7 @@
>>   #include <linux/mm.h>
>>   #include <linux/mutex.h>
>>   #include <linux/pci.h>
>> +#include <linux/pci-p2pdma.h>
>>   #include <linux/swiotlb.h>
>>   #include <linux/scatterlist.h>
>>   #include <linux/vmalloc.h>
>> @@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
>>   		sg_dma_address(s) = DMA_MAPPING_ERROR;
>>   		sg_dma_len(s) = 0;
>>   
>> +		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
>> +			if (i > 0)
>> +				cur = sg_next(cur);
>> +
>> +			pci_p2pdma_map_bus_segment(s, cur);
>> +			count++;
>> +			cur_len = 0;
>> +			continue;
>> +		}
>> +
>>   		/*
>>   		 * Now fill in the real DMA data. If...
>>   		 * - there is a valid output segment to append to
>> @@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>   	struct iova_domain *iovad = &cookie->iovad;
>>   	struct scatterlist *s, *prev = NULL;
>>   	int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
>> +	struct dev_pagemap *pgmap = NULL;
>> +	enum pci_p2pdma_map_type map_type;
>>   	dma_addr_t iova;
>>   	size_t iova_len = 0;
>>   	unsigned long mask = dma_get_seg_boundary(dev);
>> -	int i;
>> +	int i, ret = 0;
>>   
>>   	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>>   	    iommu_deferred_attach(dev, domain))
>> @@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>   		s_length = iova_align(iovad, s_length + s_iova_off);
>>   		s->length = s_length;
>>   
>> +		if (is_pci_p2pdma_page(sg_page(s))) {
>> +			if (sg_page(s)->pgmap != pgmap) {
>> +				pgmap = sg_page(s)->pgmap;
>> +				map_type = pci_p2pdma_map_type(pgmap, dev,
>> +							       attrs);
>> +			}
>> +
>> +			switch (map_type) {
>> +			case PCI_P2PDMA_MAP_BUS_ADDR:
>> +				/*
>> +				 * A zero length will be ignored by
>> +				 * iommu_map_sg() and then can be detected
>> +				 * in __finalise_sg() to actually map the
>> +				 * bus address.
>> +				 */
>> +				s->length = 0;
>> +				continue;
> 
>> +			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
>> +				break;
> So, this 'short-circuits' the use of the IOMMU, silently?
> This seems ripe for users to enable IOMMU for secure computing reasons, and using/enabling p2pdma,
> and not realizing that it isn't as secure as 1+1=2  appears to be.
> If my understanding is wrong, please point me to the Documentation or code that corrects this mis-understanding.  I could have missed a warning when both are enabled in a past patch set.


Yes, you've misunderstood this. Part of this dovetails with your comment
about the documentation for PCI_P2PDMA_MAP_THRU_HOST_BRIDGE.

This does not short circuit the IOMMU in any way. THRU_HOST_BRIDGE mode
means the TLPs for this transaction will hit the CPU/HOST BRIDGE and
thus the IOMMU has to be involved. In this case the IOMMU is programmed
with the physical address of the memory (which is normal) and everything
works.

One could argue the PCI_P2PDMA_MAP_BUS_ADDR is short circuiting the
IOMMU by using PCI bus address in the DMA transaction. But this requires
the user to do special setup with the ACS bits ahead of time (not part
of this series).

For the user to use the BUS_ADDR with an IOMMU, they need to
specifically disable the ACS redirect bits on specific PCI switch bridge
ports using a kernel command line option. When they do this, the IOMMU
code will put those devices in the same IOMMU group thus making it
impossible for the user to use devices that can do P2PDMA transactions
together in different security domains.

This was all hashed out in the original P2PDMA patchset and does make sense.
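For concreteness, the kernel command line mechanism referred to above is the existing `pci=disable_acs_redir=` parameter; the device addresses below are placeholders for the actual switch downstream ports in a given system:

```
# Disable ACS P2P Request Redirect on two switch downstream ports so
# that TLPs between the devices below them are routed directly:
pci=disable_acs_redir=0000:01:00.0;0000:02:00.0
```

With redirect disabled on those ports, the IOMMU grouping code places the affected devices into the same IOMMU group, as described above, so they cannot end up in different security domains.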

>> +			default:
>> +				ret = -EREMOTEIO;
>> +				goto out_restore_sg;
>> +			}
>> +		}
>> +
>>   		/*
>>   		 * Due to the alignment of our single IOVA allocation, we can
>>   		 * depend on these assumptions about the segment boundary mask:
>> @@ -1015,6 +1053,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>   		prev = s;
>>   	}
>>   
>> +	if (!iova_len)
>> +		return __finalise_sg(dev, sg, nents, 0);
>> +
>>   	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
>>   	if (!iova)
>>   		goto out_restore_sg;
>> @@ -1032,13 +1073,13 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>   	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
>>   out_restore_sg:
>>   	__invalidate_sg(sg, nents);
>> -	return 0;
>> +	return ret;
>>   }
>>   
>>   static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>>   		int nents, enum dma_data_direction dir, unsigned long attrs)
>>   {
>> -	dma_addr_t start, end;
>> +	dma_addr_t end, start = DMA_MAPPING_ERROR;
>>   	struct scatterlist *tmp;
>>   	int i;
>>   
>> @@ -1054,14 +1095,22 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>>   	 * The scatterlist segments are mapped into a single
>>   	 * contiguous IOVA allocation, so this is incredibly easy.
>>   	 */
>> -	start = sg_dma_address(sg);
>> -	for_each_sg(sg_next(sg), tmp, nents - 1, i) {
>> +	for_each_sg(sg, tmp, nents, i) {
>> +		if (sg_is_pci_p2pdma(tmp)) {
>> +			sg_unmark_pci_p2pdma(tmp);
>> +			continue;
>> +		}
>>   		if (sg_dma_len(tmp) == 0)
>>   			break;
>> -		sg = tmp;
>> +
>> +		if (start == DMA_MAPPING_ERROR)
>> +			start = sg_dma_address(tmp);
>> +
>> +		end = sg_dma_address(tmp) + sg_dma_len(tmp);
>>   	}
>> -	end = sg_dma_address(sg) + sg_dma_len(sg);
>> -	__iommu_dma_unmap(dev, start, end - start);
>> +
>> +	if (start != DMA_MAPPING_ERROR)
>> +		__iommu_dma_unmap(dev, start, end - start);
>>   }
>>   
> overall, a dma-ops-based p2pdma function, with the handling carved out and refactored separately so it stays cleaner, seems less complicated than fiddling with the generic dma-iommu code, but I'm guessing you tried that and it was too complicated to do?

I don't think you've understood this code correctly. What it does can't
be done in the dma-ops.

>>   static const struct dma_map_ops iommu_dma_ops = {
>> +	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
> wait, it's a const that's always turned on?
> shouldn't the define for this flag be 0 for non-p2pdma configs?

All this flag is saying is that iommu_dma_map_sg() has support for
handling P2PDMA pages. Yes, this is a const. The point is to reject
P2PDMA mappings for map_sg implementations that have not done the above
work (i.e. arm_iommu_map_sg).

Hopefully, more of the arch-specific implementations will convert to the
generic dma-iommu code in time but those that don't simply won't support
P2PDMA until they do (or add their own support).

Logan