diff mbox series

[v6,3/3] net/xdp: convert put_page() to put_user_page*()

Message ID 20190804214042.4564-4-jhubbard@nvidia.com (mailing list archive)
State New, archived
Series mm/gup: add make_dirty arg to put_user_pages_dirty_lock()

Commit Message

john.hubbard@gmail.com Aug. 4, 2019, 9:40 p.m. UTC
From: John Hubbard <jhubbard@nvidia.com>

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Acked-by: Björn Töpel <bjorn.topel@intel.com>
Cc: Magnus Karlsson <magnus.karlsson@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
 net/xdp/xdp_umem.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)


diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index 83de74ca729a..17c4b3d3dc34 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -166,14 +166,7 @@ void xdp_umem_clear_dev(struct xdp_umem *umem)
 
 static void xdp_umem_unpin_pages(struct xdp_umem *umem)
 {
-	unsigned int i;
-
-	for (i = 0; i < umem->npgs; i++) {
-		struct page *page = umem->pgs[i];
-
-		set_page_dirty_lock(page);
-		put_page(page);
-	}
+	put_user_pages_dirty_lock(umem->pgs, umem->npgs, true);
 
 	umem->pgs = NULL;
 }