Message ID: 20220520031548.175582-2-wangkefeng.wang@huawei.com (mailing list archive)
State: New, archived
Series: arm64: Fix kcsan test_barrier fail and panic
On Fri, May 20, 2022 at 11:15AM +0800, Kefeng Wang wrote:
> The memory barrier dma_mb() is introduced by commit a76a37777f2c
> ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> which is used to ensure that prior (both reads and writes) accesses to
> memory by a CPU are ordered w.r.t. a subsequent MMIO write.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  Documentation/memory-barriers.txt | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index b12df9137e1c..1eabcc0e4eca 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1894,10 +1894,13 @@ There are some more advanced barrier functions:
>
>   (*) dma_wmb();
>   (*) dma_rmb();
> + (*) dma_mb();
>
>      These are for use with consistent memory to guarantee the ordering
>      of writes or reads of shared memory accessible to both the CPU and a
> -    DMA capable device.
> +    DMA capable device, in the case of ensure the prior (both reads and
> +    writes) accesses to memory by a CPU are ordered w.r.t. a subsequent
> +    MMIO write, dma_mb().
>

I think this is out of place; this explanation here is not yet
elaborating on either. Elaboration on dma_mb() should go where
dma_rmb() and dma_wmb() are explained.

Something like this:

------ >8 ------

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index b12df9137e1c..fb322b6cce70 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
 
  (*) dma_wmb();
  (*) dma_rmb();
+ (*) dma_mb();
 
     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
@@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
     The dma_rmb() allows us guarantee the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
-    can see it now has ownership. Note that, when using writel(), a prior
-    wmb() is not needed to guarantee that the cache coherent memory writes
-    have completed before writing to the MMIO region. The cheaper
-    writel_relaxed() does not provide this guarantee and must not be used
-    here.
+    can see it now has ownership. The dma_mb() implies both a dma_rmb() and a
+    dma_wmb(). Note that, when using writel(), a prior wmb() is not needed to
+    guarantee that the cache coherent memory writes have completed before
+    writing to the MMIO region. The cheaper writel_relaxed() does not provide
+    this guarantee and must not be used here.
 
     See the subsection "Kernel I/O barrier effects" for more information on
     relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for

------ >8 ------

Also, now that you're making dma_mb() part of the official API, it might
need a generic definition in include/asm-generic/barrier.h, because
as-is it's only available in arm64 builds.

Thoughts?

Thanks,
-- Marco
On 2022/5/20 18:08, Marco Elver wrote:
> On Fri, May 20, 2022 at 11:15AM +0800, Kefeng Wang wrote:
>> The memory barrier dma_mb() is introduced by commit a76a37777f2c
>> ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
>> which is used to ensure that prior (both reads and writes) accesses to
>> memory by a CPU are ordered w.r.t. a subsequent MMIO write.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>  Documentation/memory-barriers.txt | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
>> index b12df9137e1c..1eabcc0e4eca 100644
>> --- a/Documentation/memory-barriers.txt
>> +++ b/Documentation/memory-barriers.txt
>> @@ -1894,10 +1894,13 @@ There are some more advanced barrier functions:
>>
>>   (*) dma_wmb();
>>   (*) dma_rmb();
>> + (*) dma_mb();
>>
>>      These are for use with consistent memory to guarantee the ordering
>>      of writes or reads of shared memory accessible to both the CPU and a
>> -    DMA capable device.
>> +    DMA capable device, in the case of ensure the prior (both reads and
>> +    writes) accesses to memory by a CPU are ordered w.r.t. a subsequent
>> +    MMIO write, dma_mb().
>>
> I think this is out of place; this explanation here is not yet
> elaborating on either. Elaboration on dma_mb() should go where
> dma_rmb() and dma_wmb() are explained.
>
> Something like this:
>
> ------ >8 ------
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index b12df9137e1c..fb322b6cce70 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
>
>   (*) dma_wmb();
>   (*) dma_rmb();
> + (*) dma_mb();
>
>      These are for use with consistent memory to guarantee the ordering
>      of writes or reads of shared memory accessible to both the CPU and a
> @@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
>      The dma_rmb() allows us guarantee the device has released ownership
>      before we read the data from the descriptor, and the dma_wmb() allows
>      us to guarantee the data is written to the descriptor before the device
> -    can see it now has ownership. Note that, when using writel(), a prior
> -    wmb() is not needed to guarantee that the cache coherent memory writes
> -    have completed before writing to the MMIO region. The cheaper
> -    writel_relaxed() does not provide this guarantee and must not be used
> -    here.
> +    can see it now has ownership. The dma_mb() implies both a dma_rmb() and a
> +    dma_wmb(). Note that, when using writel(), a prior wmb() is not needed to
> +    guarantee that the cache coherent memory writes have completed before
> +    writing to the MMIO region. The cheaper writel_relaxed() does not provide
> +    this guarantee and must not be used here.
>
>      See the subsection "Kernel I/O barrier effects" for more information on
>      relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
>
> ------ >8 ------

Thanks, will use above explanation.

> Also, now that you're making dma_mb() part of the official API, it might
> need a generic definition in include/asm-generic/barrier.h, because
> as-is it's only available in arm64 builds.

Ok, it's good to add the dma_mb() and __dma_mb definition with a separate
patch into include/asm-generic/barrier.h.

> Thoughts?
>
> Thanks,
> -- Marco
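For reference, the generic fallback Kefeng agrees to add could mirror the existing dma_rmb()/dma_wmb() fallbacks in include/asm-generic/barrier.h. The fragment below is a hypothetical sketch of the shape such a patch might take, not the actual follow-up change:

```c
/* Hypothetical sketch for include/asm-generic/barrier.h: architectures
 * that do not provide their own dma_mb() (at this point only arm64 does)
 * would fall back to the full barrier mb(), just as dma_rmb() and
 * dma_wmb() fall back to rmb() and wmb() when not overridden. */
#ifndef dma_mb
#define dma_mb()	mb()
#endif
```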
The memory barrier dma_mb() is introduced by commit a76a37777f2c
("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
which is used to ensure that prior (both reads and writes) accesses to
memory by a CPU are ordered w.r.t. a subsequent MMIO write.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 Documentation/memory-barriers.txt | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index b12df9137e1c..1eabcc0e4eca 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1894,10 +1894,13 @@ There are some more advanced barrier functions:
 
  (*) dma_wmb();
  (*) dma_rmb();
+ (*) dma_mb();
 
     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
-    DMA capable device.
+    DMA capable device, in the case of ensure the prior (both reads and
+    writes) accesses to memory by a CPU are ordered w.r.t. a subsequent
+    MMIO write, dma_mb().
 
  For example, consider a device driver that shares memory with a device
  and uses a descriptor status value to indicate if the descriptor belongs