
[v4,01/38] dma-mapping: add generic helpers for mapping sgtable objects

Message ID 20200512090058.14910-1-m.szyprowski@samsung.com (mailing list archive)
State New, archived
Series: DRM: fix struct sg_table nents vs. orig_nents misuse

Commit Message

Marek Szyprowski May 12, 2020, 9 a.m. UTC
struct sg_table is a common structure used for describing a memory
buffer. It consists of a scatterlist with memory pages and DMA addresses
(the sgl entry), as well as the number of scatterlist entries: CPU pages
(the orig_nents entry) and DMA-mapped pages (the nents entry).

It turned out to be a common mistake to misuse the nents and orig_nents
entries, calling DMA-mapping functions with the wrong number of entries
or ignoring the number of mapped entries returned by dma_map_sg().

To avoid such issues, let's introduce common wrappers operating directly
on struct sg_table objects, which take care of the proper use of the
nents and orig_nents entries.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
For more information, see '[PATCH v4 00/38] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread:
https://lore.kernel.org/dri-devel/20200512085710.14688-1-m.szyprowski@samsung.com/T/
---
 include/linux/dma-mapping.h | 79 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)
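
As a quick illustration of the bug class this series addresses, here is a
minimal sketch (not part of the patch; the device and a populated sg_table
are assumed to exist elsewhere) contrasting the typical misuse with the new
wrapper:

/* WRONG: dma_map_sg() may coalesce entries, so the mapped-entry count it
 * returns can be smaller than orig_nents. Ignoring it leaves sgt->nents
 * stale, and code that later walks the mapped list using orig_nents reads
 * past the valid DMA entries.
 */
static int map_buffer_broken(struct device *dev, struct sg_table *sgt)
{
	dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
	return 0;
}

/* With the wrapper, orig_nents is used for the mapping call and the
 * returned count lands in sgt->nents, keeping both fields consistent.
 */
static int map_buffer_fixed(struct device *dev, struct sg_table *sgt)
{
	return dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
}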

Comments

Christoph Hellwig May 12, 2020, 12:18 p.m. UTC | #1
On Tue, May 12, 2020 at 11:00:21AM +0200, Marek Szyprowski wrote:
> struct sg_table is a common structure used for describing a memory
> buffer. It consists of a scatterlist with memory pages and DMA addresses
> (sgl entry), as well as the number of scatterlist entries: CPU pages
> (orig_nents entry) and DMA mapped pages (nents entry).
> 
> It turned out that it was a common mistake to misuse nents and orig_nents
> entries, calling DMA-mapping functions with a wrong number of entries or
> ignoring the number of mapped entries returned by the dma_map_sg
> function.
> 
> To avoid such issues, lets introduce a common wrappers operating directly
> on the struct sg_table objects, which take care of the proper use of
> the nents and orig_nents entries.
> 
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> ---
> For more information, see '[PATCH v4 00/38] DRM: fix struct sg_table nents
> vs. orig_nents misuse' thread:
> https://lore.kernel.org/dri-devel/20200512085710.14688-1-m.szyprowski@samsung.com/T/
> ---
>  include/linux/dma-mapping.h | 79 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 79 insertions(+)
> 
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index b43116a..88f01cc 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -609,6 +609,85 @@ static inline void dma_sync_single_range_for_device(struct device *dev,
>  	return dma_sync_single_for_device(dev, addr + offset, size, dir);
>  }
>  
> +/**
> + * dma_map_sgtable - Map the given buffer for the DMA operations
> + * @dev:	The device to perform a DMA operation
> + * @sgt:	The sg_table object describing the buffer
> + * @dir:	DMA direction
> + * @attrs:	Optional DMA attributes for the map operation
> + *
> + * Maps a buffer described by a scatterlist stored in the given sg_table
> + * object for the @dir DMA operation by the @dev device. After success
> + * the ownership for the buffer is transferred to the DMA domain. One has
> + * to call dma_sync_sgtable_for_cpu() or dma_unmap_sgtable() to move the
> + * ownership of the buffer back to the CPU domain before touching the
> + * buffer by the CPU.
> + * Returns 0 on success or -EINVAL on error during mapping the buffer.
> + */
> +static inline int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
> +		enum dma_data_direction dir, unsigned long attrs)
> +{
> +	int n = dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
> +
> +	if (n > 0) {
> +		sgt->nents = n;
> +		return 0;
> +	}
> +	return -EINVAL;

Nit: code tends to be a tad easier to read if the exceptional
condition is inside the branch block, so:

	if (n <= 0)
		return -EINVAL;
	sgt->nents = n;
	return 0;

Otherwise this looks good to me:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Marek Szyprowski May 12, 2020, 1 p.m. UTC | #2
Hi Christoph,

On 12.05.2020 14:18, Christoph Hellwig wrote:
> On Tue, May 12, 2020 at 11:00:21AM +0200, Marek Szyprowski wrote:
>> struct sg_table is a common structure used for describing a memory
>> buffer. It consists of a scatterlist with memory pages and DMA addresses
>> (sgl entry), as well as the number of scatterlist entries: CPU pages
>> (orig_nents entry) and DMA mapped pages (nents entry).
>>
>> It turned out that it was a common mistake to misuse nents and orig_nents
>> entries, calling DMA-mapping functions with a wrong number of entries or
>> ignoring the number of mapped entries returned by the dma_map_sg
>> function.
>>
>> To avoid such issues, lets introduce a common wrappers operating directly
>> on the struct sg_table objects, which take care of the proper use of
>> the nents and orig_nents entries.
>>
>> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
>> ---
>> For more information, see '[PATCH v4 00/38] DRM: fix struct sg_table nents
>> vs. orig_nents misuse' thread:
>> https://lore.kernel.org/dri-devel/20200512085710.14688-1-m.szyprowski@samsung.com/T/
>> ---
>>   include/linux/dma-mapping.h | 79 +++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 79 insertions(+)
>>
>> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
>> index b43116a..88f01cc 100644
>> --- a/include/linux/dma-mapping.h
>> +++ b/include/linux/dma-mapping.h
>> @@ -609,6 +609,85 @@ static inline void dma_sync_single_range_for_device(struct device *dev,
>>   	return dma_sync_single_for_device(dev, addr + offset, size, dir);
>>   }
>>   
>> +/**
>> + * dma_map_sgtable - Map the given buffer for the DMA operations
>> + * @dev:	The device to perform a DMA operation
>> + * @sgt:	The sg_table object describing the buffer
>> + * @dir:	DMA direction
>> + * @attrs:	Optional DMA attributes for the map operation
>> + *
>> + * Maps a buffer described by a scatterlist stored in the given sg_table
>> + * object for the @dir DMA operation by the @dev device. After success
>> + * the ownership for the buffer is transferred to the DMA domain. One has
>> + * to call dma_sync_sgtable_for_cpu() or dma_unmap_sgtable() to move the
>> + * ownership of the buffer back to the CPU domain before touching the
>> + * buffer by the CPU.
>> + * Returns 0 on success or -EINVAL on error during mapping the buffer.
>> + */
>> +static inline int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
>> +		enum dma_data_direction dir, unsigned long attrs)
>> +{
>> +	int n = dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
>> +
>> +	if (n > 0) {
>> +		sgt->nents = n;
>> +		return 0;
>> +	}
>> +	return -EINVAL;
> Nit: code tends to be a tad easier to read if the exceptional
> condition is inside the branch block, so:
>
> 	if (n <= 0)
> 		return -EINVAL;
> 	sgt->nents = n;
> 	return 0;
>
Indeed this version looks much better. I will resend it in a few minutes.
> Otherwise this looks good to me:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
>
Best regards
Christoph Hellwig May 12, 2020, 1:09 p.m. UTC | #3
On Tue, May 12, 2020 at 03:00:31PM +0200, Marek Szyprowski wrote:
> > 	if (n <= 0)
> > 		return -EINVAL;
> > 	sgt->nents = n;
> > 	return 0;
> >
> Indeed this version looks much better. I will resend it in a few minutes.

I could also just fix it up when applying the patch.
Marek Szyprowski May 12, 2020, 1:19 p.m. UTC | #4
Hi Christoph,

On 12.05.2020 15:09, Christoph Hellwig wrote:
> On Tue, May 12, 2020 at 03:00:31PM +0200, Marek Szyprowski wrote:
>>> 	if (n <= 0)
>>> 		return -EINVAL;
>>> 	sgt->nents = n;
>>> 	return 0;
>>>
>> Indeed this version looks much better. I will resend it in a few minutes.
> I could also just fix it up when applying the patch.
Fine for me.

Best regards
Robin Murphy May 13, 2020, 1:23 p.m. UTC | #5
On 2020-05-12 10:00 am, Marek Szyprowski wrote:
> struct sg_table is a common structure used for describing a memory
> buffer. It consists of a scatterlist with memory pages and DMA addresses
> (sgl entry), as well as the number of scatterlist entries: CPU pages
> (orig_nents entry) and DMA mapped pages (nents entry).
> 
> It turned out that it was a common mistake to misuse nents and orig_nents
> entries, calling DMA-mapping functions with a wrong number of entries or
> ignoring the number of mapped entries returned by the dma_map_sg
> function.
> 
> To avoid such issues, lets introduce a common wrappers operating directly

Nit: "let's"

> on the struct sg_table objects, which take care of the proper use of
> the nents and orig_nents entries.

A few more documentation nitpicks below, but either way the 
implementation itself (modulo Christoph's fixup) looks good;

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> ---
> For more information, see '[PATCH v4 00/38] DRM: fix struct sg_table nents
> vs. orig_nents misuse' thread:
> https://lore.kernel.org/dri-devel/20200512085710.14688-1-m.szyprowski@samsung.com/T/
> ---
>   include/linux/dma-mapping.h | 79 +++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 79 insertions(+)
> 
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index b43116a..88f01cc 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -609,6 +609,85 @@ static inline void dma_sync_single_range_for_device(struct device *dev,
>   	return dma_sync_single_for_device(dev, addr + offset, size, dir);
>   }
>   
> +/**
> + * dma_map_sgtable - Map the given buffer for the DMA operations

Either "for DMA operations", "for the DMA operation", or "for a DMA 
operation", depending on the exact context. Or at that point, perhaps 
just "for DMA".

> + * @dev:	The device to perform a DMA operation

That doesn't quite parse, maybe "the device performing the DMA 
operation", or "the device for which to perform the DMA operation", 
depending on whether "DMA operation" means the mapping or the actual 
hardware access?

> + * @sgt:	The sg_table object describing the buffer
> + * @dir:	DMA direction
> + * @attrs:	Optional DMA attributes for the map operation
> + *
> + * Maps a buffer described by a scatterlist stored in the given sg_table
> + * object for the @dir DMA operation by the @dev device. After success
> + * the ownership for the buffer is transferred to the DMA domain. One has
> + * to call dma_sync_sgtable_for_cpu() or dma_unmap_sgtable() to move the
> + * ownership of the buffer back to the CPU domain before touching the
> + * buffer by the CPU.
> + * Returns 0 on success or -EINVAL on error during mapping the buffer.

Maybe make that a proper "Return:" section?

> + */
> +static inline int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
> +		enum dma_data_direction dir, unsigned long attrs)
> +{
> +	int n = dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
> +
> +	if (n > 0) {
> +		sgt->nents = n;
> +		return 0;
> +	}
> +	return -EINVAL;
> +}
> +
> +/**
> + * dma_unmap_sgtable - Unmap the given buffer for the DMA operations
> + * @dev:	The device to perform a DMA operation

Same two points as before.

> + * @sgt:	The sg_table object describing the buffer
> + * @dir:	DMA direction
> + * @attrs:	Optional DMA attributes for the map operation

Presumably "the unmap operation", although it *is* true that some 
attributes are expected to match those originally passed to 
dma_map_sgtable()... not sure if kerneldoc can stretch to that level 
of detail concisely ;)

> + *
> + * Unmaps a buffer described by a scatterlist stored in the given sg_table
> + * object for the @dir DMA operation by the @dev device. After this function
> + * the ownership of the buffer is transferred back to the CPU domain.
> + */
> +static inline void dma_unmap_sgtable(struct device *dev, struct sg_table *sgt,
> +		enum dma_data_direction dir, unsigned long attrs)
> +{
> +	dma_unmap_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
> +}
> +
> +/**
> + * dma_sync_sgtable_for_cpu - Synchronize the given buffer for the CPU access

s/the CPU/CPU/

> + * @dev:	The device to perform a DMA operation

As before.

> + * @sgt:	The sg_table object describing the buffer
> + * @dir:	DMA direction
> + *
> + * Performs the needed cache synchronization and moves the ownership of the
> + * buffer back to the CPU domain, so it is safe to perform any access to it
> + * by the CPU. Before doing any further DMA operations, one has to transfer
> + * the ownership of the buffer back to the DMA domain by calling the
> + * dma_sync_sgtable_for_device().
> + */
> +static inline void dma_sync_sgtable_for_cpu(struct device *dev,
> +		struct sg_table *sgt, enum dma_data_direction dir)
> +{
> +	dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, dir);
> +}
> +
> +/**
> + * dma_sync_sgtable_for_device - Synchronize the given buffer for the DMA

That one doesn't even parse as a sentence.

> + * @dev:	The device to perform a DMA operation

As before.

But of course, many thanks for taking the effort to add such complete 
documentation in the first place :)

Cheers,
Robin.

> + * @sgt:	The sg_table object describing the buffer
> + * @dir:	DMA direction
> + *
> + * Performs the needed cache synchronization and moves the ownership of the
> + * buffer back to the DMA domain, so it is safe to perform the DMA operation.
> + * Once finished, one has to call dma_sync_sgtable_for_cpu() or
> + * dma_unmap_sgtable().
> + */
> +static inline void dma_sync_sgtable_for_device(struct device *dev,
> +		struct sg_table *sgt, enum dma_data_direction dir)
> +{
> +	dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents, dir);
> +}
> +
>   #define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, 0)
>   #define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, 0)
>   #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, 0)
>

Patch

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index b43116a..88f01cc 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -609,6 +609,85 @@  static inline void dma_sync_single_range_for_device(struct device *dev,
 	return dma_sync_single_for_device(dev, addr + offset, size, dir);
 }
 
+/**
+ * dma_map_sgtable - Map the given buffer for the DMA operations
+ * @dev:	The device to perform a DMA operation
+ * @sgt:	The sg_table object describing the buffer
+ * @dir:	DMA direction
+ * @attrs:	Optional DMA attributes for the map operation
+ *
+ * Maps a buffer described by a scatterlist stored in the given sg_table
+ * object for the @dir DMA operation by the @dev device. After success
+ * the ownership for the buffer is transferred to the DMA domain. One has
+ * to call dma_sync_sgtable_for_cpu() or dma_unmap_sgtable() to move the
+ * ownership of the buffer back to the CPU domain before touching the
+ * buffer by the CPU.
+ * Returns 0 on success or -EINVAL on error during mapping the buffer.
+ */
+static inline int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+	int n = dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
+
+	if (n > 0) {
+		sgt->nents = n;
+		return 0;
+	}
+	return -EINVAL;
+}
+
+/**
+ * dma_unmap_sgtable - Unmap the given buffer for the DMA operations
+ * @dev:	The device to perform a DMA operation
+ * @sgt:	The sg_table object describing the buffer
+ * @dir:	DMA direction
+ * @attrs:	Optional DMA attributes for the map operation
+ *
+ * Unmaps a buffer described by a scatterlist stored in the given sg_table
+ * object for the @dir DMA operation by the @dev device. After this function
+ * the ownership of the buffer is transferred back to the CPU domain.
+ */
+static inline void dma_unmap_sgtable(struct device *dev, struct sg_table *sgt,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+	dma_unmap_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
+}
+
+/**
+ * dma_sync_sgtable_for_cpu - Synchronize the given buffer for the CPU access
+ * @dev:	The device to perform a DMA operation
+ * @sgt:	The sg_table object describing the buffer
+ * @dir:	DMA direction
+ *
+ * Performs the needed cache synchronization and moves the ownership of the
+ * buffer back to the CPU domain, so it is safe to perform any access to it
+ * by the CPU. Before doing any further DMA operations, one has to transfer
+ * the ownership of the buffer back to the DMA domain by calling the
+ * dma_sync_sgtable_for_device().
+ */
+static inline void dma_sync_sgtable_for_cpu(struct device *dev,
+		struct sg_table *sgt, enum dma_data_direction dir)
+{
+	dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, dir);
+}
+
+/**
+ * dma_sync_sgtable_for_device - Synchronize the given buffer for the DMA
+ * @dev:	The device to perform a DMA operation
+ * @sgt:	The sg_table object describing the buffer
+ * @dir:	DMA direction
+ *
+ * Performs the needed cache synchronization and moves the ownership of the
+ * buffer back to the DMA domain, so it is safe to perform the DMA operation.
+ * Once finished, one has to call dma_sync_sgtable_for_cpu() or
+ * dma_unmap_sgtable().
+ */
+static inline void dma_sync_sgtable_for_device(struct device *dev,
+		struct sg_table *sgt, enum dma_data_direction dir)
+{
+	dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents, dir);
+}
+
 #define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, 0)
 #define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, 0)
 #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, 0)
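
For reference, here is a hedged usage sketch (assumed driver-side code, not
taken from the thread) showing one full ownership round-trip with the new
wrappers, following the kerneldoc rules above:

static int do_transfer(struct device *dev, struct sg_table *sgt)
{
	int ret;

	/* CPU -> DMA domain: map the whole scatterlist. */
	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret)
		return ret;	/* buffer remains CPU-owned on failure */

	/* ... kick off the device; the DMA domain now owns the buffer ... */

	/* DMA -> CPU domain: required before the CPU touches the buffer. */
	dma_sync_sgtable_for_cpu(dev, sgt, DMA_BIDIRECTIONAL);
	/* ... CPU inspects or updates the buffer ... */

	/* CPU -> DMA domain again before any further device access. */
	dma_sync_sgtable_for_device(dev, sgt, DMA_BIDIRECTIONAL);
	/* ... more DMA ... */

	/* Final handover back to the CPU; attrs should match the map call. */
	dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	return 0;
}

Any code that walks the DMA-mapped list afterwards should iterate sgt->nents
entries (e.g. for_each_sg(sgt->sgl, sg, sgt->nents, i)), while the CPU-page
view of the same table still spans orig_nents entries.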