From patchwork Sat Mar 8 14:54:58 2025
X-Patchwork-Submitter: Toke Høiland-Jørgensen <toke@redhat.com>
X-Patchwork-Id: 14007549
From: Toke Høiland-Jørgensen <toke@redhat.com>
To: Andrew Morton, Jesper Dangaard Brouer, Ilias Apalodimas, "David S. Miller"
Cc: Yunsheng Lin, Toke Høiland-Jørgensen, Yonglong Liu, Mina Almasry, Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman, linux-mm@kvack.org, netdev@vger.kernel.org
Subject: [RFC PATCH net-next] page_pool: Track DMA-mapped pages and unmap them when destroying the pool
Date: Sat, 8 Mar 2025 15:54:58 +0100
Message-ID: <20250308145500.14046-1-toke@redhat.com>
X-Mailer: git-send-email 2.48.1

When enabling DMA mapping in page_pool, pages are kept DMA-mapped until
they are released from the pool, to avoid the overhead of re-mapping them
every time they are used. This causes problems when a device is torn down,
because the page pool can't unmap the pages until they are returned to the
pool. If pages are still outstanding when the device is torn down, this
leads to resource leaks and/or crashes, because page_pool will attempt to
unmap against a DMA device that no longer exists on the subsequent page
return.

To fix this, implement simple tracking of outstanding DMA-mapped pages in
page_pool using an xarray. This was first suggested by Mina [0], and turns
out to be fairly straightforward: we simply store pointers to the pages
directly in the xarray with xa_alloc() when they are first DMA-mapped, and
remove them from the array on unmap.
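As a condensed illustration of that map-time side (this is not the patch
code -- the xarray and helper names below are invented for the sketch, and
the real changes are in the diff at the end of this mail):

/* Sketch: store the page pointer with xa_alloc() when the page is
 * DMA-mapped; the returned ID is what later lets us find and drop the
 * entry without walking the whole array.
 */
#include <linux/mm.h>
#include <linux/xarray.h>

static DEFINE_XARRAY_FLAGS(tracked_pages, XA_FLAGS_ALLOC1);

static int track_dma_mapped_page(struct page *page, u32 *id, gfp_t gfp)
{
	/* XA_FLAGS_ALLOC1 starts IDs at 1, so 0 can mean "not tracked" */
	return xa_alloc(&tracked_pages, id, page, XA_LIMIT(1, UINT_MAX), gfp);
}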
Then, when a page pool is torn down, it can simply walk the xarray and
unmap all pages still present there before returning, which also allows us
to get rid of the get/put_device() calls in page_pool. Using xa_cmpxchg(),
no additional synchronisation is needed, as a page will only ever be
unmapped once.

To avoid having to walk the entire xarray on unmap to find the page
reference, we stash the ID assigned by xa_alloc() into the page structure
itself, in the field previously called '_pp_mapping_pad' in the page_pool
struct inside struct page. This field overlaps with the page->mapping
pointer, which may turn out to be problematic, so an alternative is
probably needed. Sticking the ID into some of the upper bits of
page->pp_magic may work as an alternative, but that requires further
investigation. Using the 'mapping' field works well enough as a
demonstration for this RFC, though.

Since all the tracking is performed on DMA map/unmap, no additional code is
needed in the fast path, meaning the performance overhead of this tracking
is negligible.

The extra memory needed to track the pages is neatly encapsulated inside
the xarray, which uses the 'struct xa_node' structure to track items. This
structure is 576 bytes long, with slots for 64 items, meaning that a full
node incurs only 9 bytes (576 / 64) of overhead per slot it tracks (in
practice it probably won't be this efficient, but in any case the overhead
should be acceptable).

[0] https://lore.kernel.org/all/CAHS8izPg7B5DwKfSuzz-iOop_YRbk3Sd6Y4rX7KBG9DcVJcyWg@mail.gmail.com/

Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
Reported-by: Yonglong Liu
Suggested-by: Mina Almasry
Reviewed-by: Jesper Dangaard Brouer
Tested-by: Jesper Dangaard Brouer
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
---
This is an alternative to Yunsheng's series. Yunsheng requested that I send
this as an RFC so we can better discuss the two approaches; see some
initial discussion in [1], also regarding where to store the ID, as alluded
to above.

-Toke

[1] https://lore.kernel.org/r/40b33879-509a-4c4a-873b-b5d3573b6e14@gmail.com
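Purely as an illustration of the unmap and teardown side described in the
commit message above (continuing the invented-name sketch from earlier in
this mail; the real implementation is in the diff below):

/* Sketch: xa_cmpxchg() swaps the entry out exactly once, so whoever wins
 * the exchange is the only caller that goes on to DMA-unmap the page. At
 * teardown, anything still in the array was never returned to the pool
 * and is released by walking the array.
 */
static bool untrack_dma_mapped_page(struct page *page, u32 id)
{
	return xa_cmpxchg(&tracked_pages, id, page, NULL, 0) == page;
}

static void release_remaining_pages(void (*release)(struct page *page))
{
	struct page *page;
	unsigned long id;

	xa_for_each(&tracked_pages, id, page)
		release(page);
	xa_destroy(&tracked_pages);
}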
 include/linux/mm_types.h      |  2 +-
 include/net/page_pool/types.h |  2 ++
 net/core/netmem_priv.h        | 17 +++++++++++++
 net/core/page_pool.c          | 46 +++++++++++++++++++++++++++++------
 4 files changed, 58 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0234f14f2aa6..d2c7a7b04bea 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -121,7 +121,7 @@ struct page {
 			 */
 			unsigned long pp_magic;
 			struct page_pool *pp;
-			unsigned long _pp_mapping_pad;
+			unsigned long pp_dma_index;
 			unsigned long dma_addr;
 			atomic_long_t pp_ref_count;
 		};
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 36eb57d73abc..13597a77aa36 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -221,6 +221,8 @@ struct page_pool {
 	void *mp_priv;
 	const struct memory_provider_ops *mp_ops;
 
+	struct xarray dma_mapped;
+
 #ifdef CONFIG_PAGE_POOL_STATS
 	/* recycle stats are per-cpu to avoid locking */
 	struct page_pool_recycle_stats __percpu *recycle_stats;
diff --git a/net/core/netmem_priv.h b/net/core/netmem_priv.h
index 7eadb8393e00..59679406a7b7 100644
--- a/net/core/netmem_priv.h
+++ b/net/core/netmem_priv.h
@@ -28,4 +28,21 @@ static inline void netmem_set_dma_addr(netmem_ref netmem,
 {
 	__netmem_clear_lsb(netmem)->dma_addr = dma_addr;
 }
+
+static inline unsigned long netmem_get_dma_index(netmem_ref netmem)
+{
+	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
+		return 0;
+
+	return netmem_to_page(netmem)->pp_dma_index;
+}
+
+static inline void netmem_set_dma_index(netmem_ref netmem,
+					unsigned long id)
+{
+	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
+		return;
+
+	netmem_to_page(netmem)->pp_dma_index = id;
+}
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index acef1fcd8ddc..d5530f29bf62 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -226,6 +226,8 @@ static int page_pool_init(struct page_pool *pool,
 			return -EINVAL;
 
 		pool->dma_map = true;
+
+		xa_init_flags(&pool->dma_mapped, XA_FLAGS_ALLOC1);
 	}
 
 	if (pool->slow.flags & PP_FLAG_DMA_SYNC_DEV) {
@@ -275,9 +277,6 @@ static int page_pool_init(struct page_pool *pool,
 	/* Driver calling page_pool_create() also call page_pool_destroy() */
 	refcount_set(&pool->user_cnt, 1);
 
-	if (pool->dma_map)
-		get_device(pool->p.dev);
-
 	if (pool->slow.flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM) {
 		/* We rely on rtnl_lock()ing to make sure netdev_rx_queue
 		 * configuration doesn't change while we're initializing
@@ -325,7 +324,7 @@ static void page_pool_uninit(struct page_pool *pool)
 	ptr_ring_cleanup(&pool->ring, NULL);
 
 	if (pool->dma_map)
-		put_device(pool->p.dev);
+		xa_destroy(&pool->dma_mapped);
 
 #ifdef CONFIG_PAGE_POOL_STATS
 	if (!pool->system)
@@ -470,9 +469,11 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
 		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
 }
 
-static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
+static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem, gfp_t gfp)
 {
 	dma_addr_t dma;
+	int err;
+	u32 id;
 
 	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
 	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
@@ -486,9 +487,19 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
 	if (dma_mapping_error(pool->p.dev, dma))
 		return false;
 
+	if (in_softirq())
+		err = xa_alloc(&pool->dma_mapped, &id, netmem_to_page(netmem),
+			       XA_LIMIT(1, UINT_MAX), gfp);
+	else
+		err = xa_alloc_bh(&pool->dma_mapped, &id, netmem_to_page(netmem),
+				  XA_LIMIT(1, UINT_MAX), gfp);
+	if (err)
+		goto unmap_failed;
+
 	if (page_pool_set_dma_addr_netmem(netmem, dma))
 		goto unmap_failed;
 
+	netmem_set_dma_index(netmem, id);
 	page_pool_dma_sync_for_device(pool, netmem, pool->p.max_len);
 
 	return true;
@@ -511,7 +522,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 	if (unlikely(!page))
 		return NULL;
 
-	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page)))) {
+	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page), gfp))) {
 		put_page(page);
 		return NULL;
 	}
@@ -557,7 +568,7 @@ static noinline netmem_ref __page_pool_alloc_pages_slow(struct page_pool *pool,
 	 */
 	for (i = 0; i < nr_pages; i++) {
 		netmem = pool->alloc.cache[i];
-		if (dma_map && unlikely(!page_pool_dma_map(pool, netmem))) {
+		if (dma_map && unlikely(!page_pool_dma_map(pool, netmem, gfp))) {
 			put_page(netmem_to_page(netmem));
 			continue;
 		}
@@ -659,6 +670,8 @@ void page_pool_clear_pp_info(netmem_ref netmem)
 static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
 							  netmem_ref netmem)
 {
+	struct page *old, *page = netmem_to_page(netmem);
+	unsigned long id;
 	dma_addr_t dma;
 
 	if (!pool->dma_map)
@@ -667,6 +680,17 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
 		 */
 		return;
 
+	id = netmem_get_dma_index(netmem);
+	if (!id)
+		return;
+
+	if (in_softirq())
+		old = xa_cmpxchg(&pool->dma_mapped, id, page, NULL, 0);
+	else
+		old = xa_cmpxchg_bh(&pool->dma_mapped, id, page, NULL, 0);
+	if (old != page)
+		return;
+
 	dma = page_pool_get_dma_addr_netmem(netmem);
 
 	/* When page is unmapped, it cannot be returned to our pool */
@@ -674,6 +698,7 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
 	page_pool_set_dma_addr_netmem(netmem, 0);
+	netmem_set_dma_index(netmem, 0);
 }
 
 /* Disconnects a page (from a page_pool). API users can have a need
@@ -1083,8 +1108,13 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 
 static void page_pool_scrub(struct page_pool *pool)
 {
+	unsigned long id;
+	void *ptr;
+
 	page_pool_empty_alloc_cache_once(pool);
-	pool->destroy_cnt++;
+	if (!pool->destroy_cnt++)
+		xa_for_each(&pool->dma_mapped, id, ptr)
+			__page_pool_release_page_dma(pool, page_to_netmem(ptr));
 
 	/* No more consumers should exist, but producers could still
 	 * be in-flight.