Message ID | 20250326-page-pool-track-dma-v3-2-8e464016e0ac@redhat.com (mailing list archive)
---|---
State | Superseded
Series | Fix late DMA unmap crash for page pool
On 26/03/2025 09.18, Toke Høiland-Jørgensen wrote:
> Change the single-bit boolean for dma_sync into a full-width bool, so we
> can read it as volatile with READ_ONCE(). A subsequent patch will add
> writing with WRITE_ONCE() on teardown.
>
> Reviewed-by: Mina Almasry <almasrymina@google.com>
> Tested-by: Yonglong Liu <liuyonglong@huawei.com>
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
> ---
>  include/net/page_pool/types.h | 6 +++---
>  net/core/page_pool.c          | 2 +-
>  2 files changed, 4 insertions(+), 4 deletions(-)

LGTM

Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
From: Toke Høiland-Jørgensen <toke@redhat.com>
Date: Wed, 26 Mar 2025 09:18:39 +0100

> Change the single-bit boolean for dma_sync into a full-width bool, so we
> can read it as volatile with READ_ONCE(). A subsequent patch will add
> writing with WRITE_ONCE() on teardown.

Don't we have something like READ_ONCE(), but for one bit? Like atomic-load-blah?

> Reviewed-by: Mina Almasry <almasrymina@google.com>
> Tested-by: Yonglong Liu <liuyonglong@huawei.com>
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
> ---
>  include/net/page_pool/types.h | 6 +++---
>  net/core/page_pool.c          | 2 +-
>  2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index df0d3c1608929605224feb26173135ff37951ef8..d6c93150384fbc4579bb0d0afb357ebb26c564a3 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -173,10 +173,10 @@ struct page_pool {
>  	int cpuid;
>  	u32 pages_state_hold_cnt;
>
> -	bool has_init_callback:1;	/* slow::init_callback is set */
> +	bool dma_sync;			/* Perform DMA sync for device */
> +	bool dma_sync_for_cpu:1;	/* Perform DMA sync for cpu */
>  	bool dma_map:1;			/* Perform DMA mapping */
> -	bool dma_sync:1;		/* Perform DMA sync for device */
> -	bool dma_sync_for_cpu:1;	/* Perform DMA sync for cpu */
> +	bool has_init_callback:1;	/* slow::init_callback is set */
>  #ifdef CONFIG_PAGE_POOL_STATS
>  	bool system:1;			/* This is a global percpu pool */
>  #endif
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index acef1fcd8ddcfd1853a6f2055c1f1820ab248e8d..fb32768a97765aacc7f1103bfee38000c988b0de 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -466,7 +466,7 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
>  					      netmem_ref netmem,
>  					      u32 dma_sync_size)
>  {
> -	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
> +	if (READ_ONCE(pool->dma_sync) && dma_dev_need_sync(pool->p.dev))
>  		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
>  }

Thanks,
Olek