Message ID | 20231016054755.915155-8-hch@lst.de (mailing list archive) |
---|---|
State | Superseded |
Headers | show |
Series | [01/12] riscv: RISCV_NONSTANDARD_CACHE_OPS shouldn't depend on RISCV_DMA_NONCOHERENT | expand |
Context | Check | Description |
---|---|---|
conchuod/vmtest-fixes-PR | fail | merge-conflict |
On 16/10/2023 6:47 am, Christoph Hellwig wrote:
> The logic in dma_direct_alloc when to use the atomic pool vs remapping
> grew a bit unreadable.  Consolidate it into a single check, and clean
> up the set_uncached vs remap logic a bit as well.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  kernel/dma/direct.c | 25 ++++++++++---------------
>  1 file changed, 10 insertions(+), 15 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index ec410af1d8a14e..1327d04fa32a25 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -234,27 +234,22 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>                                          dma_handle);
>
>                  /*
> -                 * Otherwise remap if the architecture is asking for it.  But
> -                 * given that remapping memory is a blocking operation we'll
> -                 * instead have to dip into the atomic pools.
> +                 * Otherwise we require the architecture to either be able to
> +                 * mark arbitrary parts of the kernel direct mapping uncached,
> +                 * or remapped it uncached.
>                   */
> +                set_uncached = IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED);
>                  remap = IS_ENABLED(CONFIG_DMA_DIRECT_REMAP);
> -                if (remap) {
> -                        if (dma_direct_use_pool(dev, gfp))
> -                                return dma_direct_alloc_from_pool(dev, size,
> -                                                dma_handle, gfp);
> -                } else {
> -                        if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED))
> -                                return NULL;
> -                        set_uncached = true;
> -                }
> +                if (!set_uncached && !remap)
> +                        return NULL;
>          }
>
>          /*
> -         * Decrypting memory may block, so allocate the memory from the atomic
> -         * pools if we can't block.
> +         * Remapping or decrypting memory may block, allocate the memory from
> +         * the atomic pools instead if we aren't allowed block.
>           */
> -        if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
> +        if ((remap || force_dma_unencrypted(dev)) &&
> +            dma_direct_use_pool(dev, gfp))
>                  return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
>          /* we always manually zero the memory once we are done */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ec410af1d8a14e..1327d04fa32a25 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -234,27 +234,22 @@ void *dma_direct_alloc(struct device *dev, size_t size,
                                         dma_handle);

                 /*
-                 * Otherwise remap if the architecture is asking for it.  But
-                 * given that remapping memory is a blocking operation we'll
-                 * instead have to dip into the atomic pools.
+                 * Otherwise we require the architecture to either be able to
+                 * mark arbitrary parts of the kernel direct mapping uncached,
+                 * or remapped it uncached.
                  */
+                set_uncached = IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED);
                 remap = IS_ENABLED(CONFIG_DMA_DIRECT_REMAP);
-                if (remap) {
-                        if (dma_direct_use_pool(dev, gfp))
-                                return dma_direct_alloc_from_pool(dev, size,
-                                                dma_handle, gfp);
-                } else {
-                        if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED))
-                                return NULL;
-                        set_uncached = true;
-                }
+                if (!set_uncached && !remap)
+                        return NULL;
         }

         /*
-         * Decrypting memory may block, so allocate the memory from the atomic
-         * pools if we can't block.
+         * Remapping or decrypting memory may block, allocate the memory from
+         * the atomic pools instead if we aren't allowed block.
          */
-        if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
+        if ((remap || force_dma_unencrypted(dev)) &&
+            dma_direct_use_pool(dev, gfp))
                 return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);

         /* we always manually zero the memory once we are done */
The logic in dma_direct_alloc when to use the atomic pool vs remapping
grew a bit unreadable.  Consolidate it into a single check, and clean
up the set_uncached vs remap logic a bit as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/direct.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)
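For reference, here is roughly how the affected part of dma_direct_alloc() reads once this patch is applied. It is a sketch reconstructed from the hunk above, not the exact file contents: the declarations, the coherent fast path, the arch_dma_alloc()/DMA_GLOBAL_POOL fallbacks and everything after the pool check are abbreviated as comments. The point to note is that the non-coherent branch no longer allocates from the atomic pool itself; the single dma_direct_use_pool() check now covers both the remap and the force_dma_unencrypted() cases.

void *dma_direct_alloc(struct device *dev, size_t size,
                dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
{
        bool remap = false, set_uncached = false;

        /* ... attrs handling and the coherent/arch-specific fast paths elided ... */

        if (!dev_is_dma_coherent(dev)) {
                /* ... arch_dma_alloc() and DMA_GLOBAL_POOL fallbacks elided ... */

                /*
                 * Otherwise we require the architecture to either be able to
                 * mark arbitrary parts of the kernel direct mapping uncached,
                 * or remapped it uncached.
                 */
                set_uncached = IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED);
                remap = IS_ENABLED(CONFIG_DMA_DIRECT_REMAP);
                if (!set_uncached && !remap)
                        return NULL;
        }

        /*
         * Remapping or decrypting memory may block, allocate the memory from
         * the atomic pools instead if we aren't allowed block.
         */
        if ((remap || force_dma_unencrypted(dev)) &&
            dma_direct_use_pool(dev, gfp))
                return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);

        /* we always manually zero the memory once we are done */
        /* ... page allocation and the set_uncached/remap handling elided ... */
}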