[PATCHv4 10/12] dmapool: don't memset on free twice

Message ID 20230126215125.4069751-11-kbusch@meta.com (mailing list archive)
State New
Series dmapool enhancements

Commit Message

Keith Busch Jan. 26, 2023, 9:51 p.m. UTC
From: Keith Busch <kbusch@kernel.org>

If debug is enabled, dmapool will poison the freed range anyway, so
there is no need to clear it to 0 immediately before writing over it.
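
For illustration, here is a minimal sketch of where the block now gets
scrubbed on free. It assumes the DMAPOOL_DEBUG poison value
POOL_POISON_FREED from mm/dmapool.c; the wrapper function itself is
hypothetical and not part of the patch:

	static void scrub_block_on_free(struct dma_pool *pool, void *vaddr)
	{
	#ifdef DMAPOOL_DEBUG
		/*
		 * Debug builds poison the whole block, so a prior memset(0)
		 * would be overwritten immediately and is wasted work.
		 */
		memset(vaddr, POOL_POISON_FREED, pool->size);
	#else
		/*
		 * Non-debug builds still honor init_on_free by zeroing the
		 * block here, in the pool_page_err() stub, instead of in
		 * dma_pool_free() itself.
		 */
		if (want_init_on_free())
			memset(vaddr, 0, pool->size);
	#endif
	}

Either way the block is written exactly once on free.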

Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 mm/dmapool.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Patch

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 4dea2a0dbd336..21e6d362c7264 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -160,6 +160,8 @@  static void pool_check_block(struct dma_pool *pool, void *retval,
 static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
 			  void *vaddr, dma_addr_t dma)
 {
+	if (want_init_on_free())
+		memset(vaddr, 0, pool->size);
 	return false;
 }
 
@@ -441,8 +443,6 @@  void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 		return;
 	}
 
-	if (want_init_on_free())
-		memset(vaddr, 0, pool->size);
 	if (pool_page_err(pool, page, vaddr, dma)) {
 		spin_unlock_irqrestore(&pool->lock, flags);
 		return;