
mm/dmapool.c: add lock protection in dma_pool_destroy

Message ID 20200722090516.28829-1-qiang.zhang@windriver.com (mailing list archive)
State New, archived
Series: mm/dmapool.c: add lock protection in dma_pool_destroy

Commit Message

Zhang, Qiang July 22, 2020, 9:05 a.m. UTC
From: Zhang Qiang <qiang.zhang@windriver.com>

When dma_pool_destroy() traverses the "pool->page_list" linked list,
other paths may operate on the list concurrently and corrupt it.
Take "pool->lock" while walking the list to protect it.

Signed-off-by: Zhang Qiang <qiang.zhang@windriver.com>
---
 mm/dmapool.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
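
For context, the interleaving the commit message worries about looks
roughly like this (a sketch; the racing caller is hypothetical, but
dma_pool_free() really does walk pool->page_list under pool->lock via
pool_find_page()):

	/*
	 * CPU 0                              CPU 1
	 * -----                              -----
	 * dma_pool_destroy(pool)
	 *   walks pool->page_list            dma_pool_free(pool, vaddr, dma)
	 *   with no lock held                  spin_lock_irqsave(&pool->lock, ..)
	 *   list_del(&page->page_list)         pool_find_page() walks
	 *   kfree(page)                         pool->page_list and may read
	 *                                       the entry CPU 0 just freed
	 */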

Comments

Matthew Wilcox July 22, 2020, 11:27 a.m. UTC | #1
On Wed, Jul 22, 2020 at 05:05:16PM +0800, qiang.zhang@windriver.com wrote:
> When dma_pool_destroy() traverses the "pool->page_list" linked list,
> other paths may operate on the list concurrently and corrupt it.
> Take "pool->lock" while walking the list to protect it.

The pool is being destroyed.  If somebody else is trying to allocate from
it while it's in the middle of being destroyed, there is a larger problem
to solve, and it can't be solved in the dmapool code.
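
To restate the objection in code: dma_pool_destroy() ends with
kfree(pool), so no lock taken inside dmapool can protect a caller that
races with it. A hypothetical buggy usage (names invented for
illustration):

	struct dma_pool *pool;	/* created earlier with dma_pool_create() */
	dma_addr_t dma_handle;
	void *buf;

	/* Thread A: driver teardown */
	dma_pool_destroy(pool);			/* ends with kfree(pool) */

	/* Thread B: racing with A, e.g. a late completion handler */
	buf = dma_pool_alloc(pool, GFP_ATOMIC, &dma_handle);
	/*
	 * Even if every list walk in dma_pool_destroy() were locked,
	 * thread B dereferences 'pool' (and spins on pool->lock) after
	 * kfree(pool) -- a use-after-free.  Only the caller can ensure
	 * the pool outlives all alloc/free calls against it.
	 */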

Patch

diff --git a/mm/dmapool.c b/mm/dmapool.c
index f9fb9bbd733e..f7375b25af6c 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -267,6 +267,9 @@ static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
 void dma_pool_destroy(struct dma_pool *pool)
 {
 	bool empty = false;
+	LIST_HEAD(discard);
 +	struct dma_page *page, *h;
+	unsigned long flags;
 
 	if (unlikely(!pool))
 		return;
@@ -281,8 +284,8 @@ void dma_pool_destroy(struct dma_pool *pool)
 		device_remove_file(pool->dev, &dev_attr_pools);
 	mutex_unlock(&pools_reg_lock);
 
+	spin_lock_irqsave(&pool->lock, flags);
 	while (!list_empty(&pool->page_list)) {
-		struct dma_page *page;
 		page = list_entry(pool->page_list.next,
 				  struct dma_page, page_list);
 		if (is_page_busy(page)) {
@@ -297,8 +300,12 @@ void dma_pool_destroy(struct dma_pool *pool)
 			list_del(&page->page_list);
 			kfree(page);
 		} else
-			pool_free_page(pool, page);
+			list_move(&page->page_list, &discard);
 	}
+	spin_unlock_irqrestore(&pool->lock, flags);
+
+	list_for_each_entry_safe(page, h, &discard, page_list)
+		pool_free_page(pool, page);
 
 	kfree(pool);
 }
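
A structural note on the patch itself: non-busy pages are only moved to
the local "discard" list while pool->lock is held, and actually freed
after the unlock. That ordering is presumably deliberate:
pool_free_page() ends in dma_free_coherent(), which must not be called
with interrupts disabled, as they are inside spin_lock_irqsave(). The
same "unlink under the lock, free after unlock" pattern in isolation (a
sketch ignoring the is_page_busy() case, not the patch itself):

	LIST_HEAD(discard);
	struct dma_page *page, *tmp;
	unsigned long flags;

	spin_lock_irqsave(&pool->lock, flags);
	list_splice_init(&pool->page_list, &discard);	/* unlink under lock */
	spin_unlock_irqrestore(&pool->lock, flags);

	/* Free with the lock dropped and interrupts enabled again. */
	list_for_each_entry_safe(page, tmp, &discard, page_list)
		pool_free_page(pool, page);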