
[1/1] docs: dma: correct dma_set_mask() sample code

Message ID 20240328154827.809286-1-Frank.Li@nxp.com (mailing list archive)
State Superseded
Series [1/1] docs: dma: correct dma_set_mask() sample code

Commit Message

Frank Li March 28, 2024, 3:48 p.m. UTC
There is a bunch of code in drivers like:

       if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
               dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

Actually this is wrong: if dma_set_mask_and_coherent(64) fails,
dma_set_mask_and_coherent(32) will fail for the same reason.

And dma_set_mask_and_coherent(64) never returns failure.

According to the definition of dma_set_mask(), the mask indicates the width
of address that the device's DMA can access. If it can access a 64-bit
address, it can inherently access a 32-bit address as well. So only the
biggest address width needs to be set.

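For a device that really is 64-bit capable, a minimal sketch of the
recommended pattern looks like the fragment below (illustrative only: the
mydev_setup_dma() helper and the support_64bit flag are made-up names, not
from any real driver):

#include <linux/dma-mapping.h>

/* Illustrative sketch: set only the widest mask the hardware supports
 * and handle the (unlikely) failure once, instead of falling back from
 * 64-bit to 32-bit.
 */
static int mydev_setup_dma(struct device *dev, bool support_64bit)
{
	int ret;

	ret = dma_set_mask_and_coherent(dev, support_64bit ?
			DMA_BIT_MASK(64) : DMA_BIT_MASK(32));
	if (ret)
		dev_warn(dev, "mydev: No suitable DMA available\n");

	return ret;
}
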
See the following dma_set_mask() code fragment:

int dma_set_mask(struct device *dev, u64 mask)
{
	mask = (dma_addr_t)mask;

	if (!dev->dma_mask || !dma_supported(dev, mask))
		return -EIO;

	arch_dma_set_mask(dev, mask);
	*dev->dma_mask = mask;
	return 0;
}

dma_supported() will call dma_direct_supported() or the IOMMU's
dma_supported() callback function.
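
Roughly, the dispatch looks like this (a simplified sketch of
kernel/dma/mapping.c, not verbatim; details vary by kernel version):

static int dma_supported(struct device *dev, u64 mask)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	/* No dma_map_ops installed: use the direct mapping path. */
	if (!ops)
		return dma_direct_supported(dev, mask);
	/* Ops present but no restriction: report supported. */
	if (!ops->dma_supported)
		return 1;
	/* Otherwise ask the IOMMU's callback. */
	return ops->dma_supported(dev, mask);
}

For the direct-mapping case, dma_direct_supported() does: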

int dma_direct_supported(struct device *dev, u64 mask)
{
	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;

	/*
	 * Because 32-bit DMA masks are so common we expect every architecture
	 * to be able to satisfy them - either by not supporting more physical
	 * memory, or by providing a ZONE_DMA32.  If neither is the case, the
	 * architecture needs to use an IOMMU instead of the direct mapping.
	 */
	if (mask >= DMA_BIT_MASK(32))
		return 1;

	...
}

The IOMMU's dma_supported() actually checks the minimum DMA capability that
the IOMMU requires of the device.

An example:

static int sba_dma_supported(struct device *dev, u64 mask)
{
	...
	/*
	 * check if mask is >= than the current max IO Virt Address
	 * The max IO Virt address will *always* < 30 bits.
	 */
	return((int)(mask >= (ioc->ibase - 1 +
			(ioc->pdir_size / sizeof(u64) * IOVP_SIZE) )));
}

A return value of 1 means supported; 0 means unsupported.

Correct the document to make it clearer and provide correct sample code.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 Documentation/core-api/dma-api-howto.rst | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

Comments

Randy Dunlap March 28, 2024, 6:44 p.m. UTC | #1
Hi,

I have some text corrections. No idea if the updates
are correct or not.


On 3/28/24 08:48, Frank Li wrote:
> There are bunch of codes in driver like
> 
>        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
>                dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))
> 
> Actaully it is wrong because if dma_set_mask_and_coherent(64) failure,

  Actually                                                      fails,

> dma_set_mask_and_coherent(32) will be failure by the same reason.

                                will fail for the same reason.

> 
> And dma_set_mask_and_coherent(64) never return failure.

                                          returns failure.

> 
> According to defination of dma_set_mask(), it indicate the width of address

            to the definition                   indicates

> that device DMA can access. If it can access 64bit address, it must access

                                               64-bit

> 32bit address inherently. So only need set biggest address width.

  32-bit

> 
> See below code fragment:
> 
> dma_set_mask(mask)
> {
> 	mask = (dma_addr_t)mask;
> 
> 	if (!dev->dma_mask || !dma_supported(dev, mask))
> 		return -EIO;
> 
> 	arch_dma_set_mask(dev, mask);
> 	*dev->dma_mask = mask;
> 	return 0;
> }
> 
> dma_supported() will call dma_direct_supported or iommux's dma_supported
> call back function.
> 
> int dma_direct_supported(struct device *dev, u64 mask)
> {
> 	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;
> 
> 	/*
> 	 * Because 32-bit DMA masks are so common we expect every architecture
> 	 * to be able to satisfy them - either by not supporting more physical
> 	 * memory, or by providing a ZONE_DMA32.  If neither is the case, the
> 	 * architecture needs to use an IOMMU instead of the direct mapping.
> 	 */
> 	if (mask >= DMA_BIT_MASK(32))
> 		return 1;
> 
> 	...
> }
> 
> The iommux's dma_supported() actual means iommu require devices's minimized

                               actually           requires
or just drop "actual"

> dma capatiblity.

      capability.

> 
> An example:
> 
> static int sba_dma_supported( struct device *dev, u64 mask)()
> {
> 	...
> 	 * check if mask is >= than the current max IO Virt Address
>          * The max IO Virt address will *always* < 30 bits.
>          */
>         return((int)(mask >= (ioc->ibase - 1 +
>                         (ioc->pdir_size / sizeof(u64) * IOVP_SIZE) )));
> 	...
> }
> 
> 1 means supported. 0 means unsupported.
> 
> Correct document to make it more clear and provide correct sample code.
> 
> Signed-off-by: Frank Li <Frank.Li@nxp.com>
> ---
>  Documentation/core-api/dma-api-howto.rst | 24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/core-api/dma-api-howto.rst b/Documentation/core-api/dma-api-howto.rst
> index e8a55f9d61dbc..7871d3b906104 100644
> --- a/Documentation/core-api/dma-api-howto.rst
> +++ b/Documentation/core-api/dma-api-howto.rst
> @@ -203,13 +203,33 @@ setting the DMA mask fails.  In this manner, if a user of your driver reports
>  that performance is bad or that the device is not even detected, you can ask
>  them for the kernel messages to find out exactly why.
>  
> -The standard 64-bit addressing device would do something like this::
> +The 24-bit addressing device would do something like this::
>  
> -	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
> +	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24))) {
>  		dev_warn(dev, "mydev: No suitable DMA available\n");
>  		goto ignore_this_device;
>  	}
>  
> +The standard 64-bit addressing device would do something like this::
> +
> +	dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))

                                                        ;

> +
> +dma_set_mask_and_coherence never return fail when DMA_BIT_MASK(64). Typical

   dma_set_mask_and_coherent        returns failure when DMA_BIT_MASK(64) <does what?>. Typical
?

> +error code like::
> +
> +	/* Wrong code */
> +	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
> +		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))

                                                                ;

> +
> +dma_set_mask_and_coherence() will never return failure when bigger then 32.

   dma_set_mask_and_coherent()

> +So typical code like::
> +
> +	/* Recommented code */

           Recommended

> +	if (support_64bit)
> +		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
> +	else
> +		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
> +
>  If the device only supports 32-bit addressing for descriptors in the
>  coherent allocations, but supports full 64-bits for streaming mappings
>  it would look like this::

Patch

diff --git a/Documentation/core-api/dma-api-howto.rst b/Documentation/core-api/dma-api-howto.rst
index e8a55f9d61dbc..7871d3b906104 100644
--- a/Documentation/core-api/dma-api-howto.rst
+++ b/Documentation/core-api/dma-api-howto.rst
@@ -203,13 +203,33 @@  setting the DMA mask fails.  In this manner, if a user of your driver reports
 that performance is bad or that the device is not even detected, you can ask
 them for the kernel messages to find out exactly why.
 
-The standard 64-bit addressing device would do something like this::
+The 24-bit addressing device would do something like this::
 
-	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
+	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24))) {
 		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
 
+The standard 64-bit addressing device would do something like this::
+
+	dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))
+
+dma_set_mask_and_coherence never return fail when DMA_BIT_MASK(64). Typical
+error code like::
+
+	/* Wrong code */
+	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
+		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))
+
+dma_set_mask_and_coherence() will never return failure when bigger then 32.
+So typical code like::
+
+	/* Recommented code */
+	if (support_64bit)
+		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	else
+		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+
 If the device only supports 32-bit addressing for descriptors in the
 coherent allocations, but supports full 64-bits for streaming mappings
 it would look like this::