From patchwork Mon Apr 21 18:03:10 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Santosh Shilimkar
X-Patchwork-Id: 4025821
X-Original-To: patchwork-linux-arm@patchwork.kernel.org
From: Santosh Shilimkar
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH] ARM: mm: dma: Update coherent streaming apis with missing memory barrier
Date: Mon, 21 Apr 2014 14:03:10 -0400
Message-ID: <1398103390-31968-1-git-send-email-santosh.shilimkar@ti.com>
X-Mailer: git-send-email 1.7.9.5
Cc: Nicolas Pitre, Russell King, Arnd Bergmann, Catalin Marinas,
 Will Deacon, Santosh Shilimkar, Marek Szyprowski

The ARM coherent CPU DMA map APIs are assumed to be no-ops on cache-coherent
machines. While that is true as far as cache maintenance goes, one still needs
to ensure that no outstanding writes are pending in the CPU write buffers. To
take care of that, we need at least a memory barrier to commit those writes to
main memory. This patch fixes those cases. Without such a fix, device drivers
end up being patched individually to work around the synchronisation issues.
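The ordering problem described above can be sketched in a userspace analogue
(a minimal sketch; all names here are illustrative, not taken from the kernel
patch): a producer fills a descriptor, issues a full barrier standing in for
wmb(), and only then sets the "doorbell" flag, so a consumer that observes the
flag also observes the descriptor contents.

```c
#include <stdint.h>

/* Userspace sketch of the wmb()-before-doorbell pattern. The struct and
 * function names are assumptions for illustration only. */
struct desc {
	uint64_t addr;
	uint32_t len;
	volatile uint32_t ready;
};

static void publish(struct desc *d, uint64_t addr, uint32_t len)
{
	d->addr = addr;
	d->len  = len;
	__sync_synchronize();	/* analogue of wmb(): commit the data first */
	d->ready = 1;		/* analogue of ringing the device doorbell */
}
```

On a coherent mapping no cache maintenance is needed, which is why the map
path looked like a no-op; but without the barrier, the device could observe
ready == 1 while addr/len are still sitting in the CPU write buffer.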
Cc: Russell King
Cc: Marek Szyprowski
Cc: Will Deacon
Cc: Catalin Marinas
Cc: Nicolas Pitre
Cc: Arnd Bergmann
Signed-off-by: Santosh Shilimkar
---
 arch/arm/mm/dma-mapping.c | 57 ++++++++++++++++++++++++++++++---------------
 1 file changed, 38 insertions(+), 19 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 5260f43..7c9f55d 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -83,6 +83,8 @@ static dma_addr_t arm_coherent_dma_map_page(struct device *dev, struct page *pag
 	     unsigned long offset, size_t size, enum dma_data_direction dir,
 	     struct dma_attrs *attrs)
 {
+	/* Drain cpu write buffer to main memory */
+	wmb();
 	return pfn_to_dma(dev, page_to_pfn(page)) + offset;
 }
 
@@ -117,6 +119,13 @@ static void arm_dma_sync_single_for_cpu(struct device *dev,
 	__dma_page_dev_to_cpu(page, offset, size, dir);
 }
 
+static void arm_coherent_dma_sync_single_for_cpu(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	/* Drain cpu write buffer to main memory */
+	wmb();
+}
+
 static void arm_dma_sync_single_for_device(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
@@ -125,6 +134,33 @@ static void arm_dma_sync_single_for_device(struct device *dev,
 	__dma_page_cpu_to_dev(page, offset, size, dir);
 }
 
+/**
+ * arm_dma_sync_sg_for_cpu
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @sg: list of buffers
+ * @nents: number of buffers to map (returned from dma_map_sg)
+ * @dir: DMA transfer direction (same as was passed to dma_map_sg)
+ */
+void arm_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
+			int nents, enum dma_data_direction dir)
+{
+	struct dma_map_ops *ops = get_dma_ops(dev);
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i)
+		ops->sync_single_for_cpu(dev, sg_dma_address(s), s->length,
+					 dir);
+}
+
+void arm_coherent_dma_sync_sg_for_cpu(struct device *dev,
+				      struct scatterlist *sg,
+				      int nents, enum dma_data_direction dir)
+{
+	/* Drain cpu write buffer to main memory */
+	wmb();
+}
+
 struct dma_map_ops arm_dma_ops = {
 	.alloc			= arm_dma_alloc,
 	.free			= arm_dma_free,
@@ -154,6 +190,8 @@ struct dma_map_ops arm_coherent_dma_ops = {
 	.get_sgtable		= arm_dma_get_sgtable,
 	.map_page		= arm_coherent_dma_map_page,
 	.map_sg			= arm_dma_map_sg,
+	.sync_single_for_cpu	= arm_coherent_dma_sync_single_for_cpu,
+	.sync_sg_for_cpu	= arm_coherent_dma_sync_sg_for_cpu,
 	.set_dma_mask		= arm_dma_set_mask,
 };
 EXPORT_SYMBOL(arm_coherent_dma_ops);
@@ -994,25 +1032,6 @@ void arm_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
 }
 
 /**
- * arm_dma_sync_sg_for_cpu
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @sg: list of buffers
- * @nents: number of buffers to map (returned from dma_map_sg)
- * @dir: DMA transfer direction (same as was passed to dma_map_sg)
- */
-void arm_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
-			int nents, enum dma_data_direction dir)
-{
-	struct dma_map_ops *ops = get_dma_ops(dev);
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i)
-		ops->sync_single_for_cpu(dev, sg_dma_address(s), s->length,
-					 dir);
-}
-
-/**
  * arm_dma_sync_sg_for_device
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
  * @sg: list of buffers
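The relocated arm_dma_sync_sg_for_cpu is just a per-entry dispatch loop over
the scatterlist. Its shape can be modelled in plain userspace C (a sketch with
made-up names, not the kernel API):

```c
/* Userspace model of a scatterlist sync loop: one per-entry sync call,
 * mirroring how arm_dma_sync_sg_for_cpu forwards each segment to
 * ops->sync_single_for_cpu. All names here are illustrative. */
struct fake_sg {
	unsigned long dma_address;
	unsigned int length;
};

static unsigned long synced_bytes;

static void fake_sync_single_for_cpu(unsigned long addr, unsigned int len)
{
	(void)addr;
	synced_bytes += len;	/* stand-in for per-segment maintenance */
}

static void fake_sync_sg_for_cpu(struct fake_sg *sg, int nents)
{
	for (int i = 0; i < nents; i++)
		fake_sync_single_for_cpu(sg[i].dma_address, sg[i].length);
}
```

This per-segment structure is why the coherent variant can remain a single
wmb(): the barrier orders all prior writes at once, so there is nothing to do
per entry.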