From patchwork Mon May 23 11:26:40 2022
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 12858888
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
Date: Mon, 23 May 2022 19:26:40 +0800
Message-ID: <20220523112641.170060-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220523112641.170060-1-wangkefeng.wang@huawei.com>
References: <20220523112641.170060-1-wangkefeng.wang@huawei.com>

The memory barrier dma_mb() was introduced by commit a76a37777f2c
("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
and is used to ensure that prior accesses (both reads and writes) to
memory by a CPU are ordered w.r.t. a subsequent MMIO write.
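For illustration, the intended pattern is: a driver checks and fills a
descriptor in coherent (consistent) memory, then kicks the device with a
relaxed MMIO write, relying on dma_mb() to order the prior reads and
writes against the doorbell. A minimal sketch, not part of this patch
(the my_desc layout, MY_DESC_FREE value, doorbell register and
my_ring_doorbell() helper are all hypothetical):

    #include <asm/barrier.h>
    #include <linux/compiler.h>
    #include <linux/io.h>
    #include <linux/types.h>

    #define MY_DESC_FREE	0	/* hypothetical "device done" status */

    struct my_desc {			/* hypothetical descriptor layout */
            u64 addr;
            u32 len;
            u32 status;
    };

    static void my_ring_doorbell(struct my_desc *desc, void __iomem *db,
                                 dma_addr_t buf, u32 len)
    {
            /* Read: check that the device has released ownership. */
            if (READ_ONCE(desc->status) != MY_DESC_FREE)
                    return;

            /* Writes: fill the descriptor in coherent memory. */
            desc->addr = buf;
            desc->len  = len;

            /*
             * Order the prior read of desc->status and the writes to
             * desc->addr/desc->len w.r.t. the subsequent MMIO write.
             * dma_wmb() would only order the writes; a plain writel()
             * would make the explicit barrier unnecessary.
             */
            dma_mb();

            writel_relaxed(1, db);
    }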
Reviewed-by: Arnd Bergmann # for asm-generic
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 Documentation/memory-barriers.txt | 11 ++++++-----
 include/asm-generic/barrier.h     |  8 ++++++++
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index b12df9137e1c..832b5d36e279 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
 
  (*) dma_wmb();
  (*) dma_rmb();
+ (*) dma_mb();
 
      These are for use with consistent memory to guarantee the ordering
      of writes or reads of shared memory accessible to both the CPU and a
@@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
      The dma_rmb() allows us guarantee the device has released ownership
      before we read the data from the descriptor, and the dma_wmb() allows
      us to guarantee the data is written to the descriptor before the device
-     can see it now has ownership. Note that, when using writel(), a prior
-     wmb() is not needed to guarantee that the cache coherent memory writes
-     have completed before writing to the MMIO region. The cheaper
-     writel_relaxed() does not provide this guarantee and must not be used
-     here.
+     can see it now has ownership. The dma_mb() implies both a dma_rmb() and
+     a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
+     to guarantee that the cache coherent memory writes have completed before
+     writing to the MMIO region. The cheaper writel_relaxed() does not provide
+     this guarantee and must not be used here.
 
      See the subsection "Kernel I/O barrier effects" for more information on
      relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index fd7e8fbaeef1..961f4d88f9ef 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -38,6 +38,10 @@
 #define wmb()	do { kcsan_wmb(); __wmb(); } while (0)
 #endif
 
+#ifdef __dma_mb
+#define dma_mb()	do { kcsan_mb(); __dma_mb(); } while (0)
+#endif
+
 #ifdef __dma_rmb
 #define dma_rmb()	do { kcsan_rmb(); __dma_rmb(); } while (0)
 #endif
@@ -65,6 +69,10 @@
 #define wmb()	mb()
 #endif
 
+#ifndef dma_mb
+#define dma_mb()	mb()
+#endif
+
 #ifndef dma_rmb
 #define dma_rmb()	rmb()
 #endif
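With the generic fallbacks above, every architecture now gets dma_mb()
(falling back to a full mb()), and an architecture with a cheaper
primitive only needs to define __dma_mb() before including
asm-generic/barrier.h. An illustrative arm64-style hook is sketched
below; the actual arm64 wiring is done elsewhere in this series, not in
this patch (on arm64, commit a76a37777f2c defined dma_mb() as an
outer-shareable dmb(osh)):

    /* arch/arm64/include/asm/barrier.h (illustrative sketch) */
    #define __dma_mb()	dmb(osh)	/* full barrier, outer-shareable domain */

    /* ... at the end of the file ... */
    #include <asm-generic/barrier.h>	/* maps dma_mb() onto __dma_mb() */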