From patchwork Wed Jun 5 11:11:48 2019
X-Patchwork-Submitter: Yoshihiro Shimoda
X-Patchwork-Id: 10976637
From: Yoshihiro Shimoda
To: ulf.hansson@linaro.org, wsa+renesas@sang-engineering.com, hch@lst.de,
 m.szyprowski@samsung.com, robin.murphy@arm.com, joro@8bytes.org
Cc: linux-mmc@vger.kernel.org, iommu@lists.linux-foundation.org,
 linux-renesas-soc@vger.kernel.org, Yoshihiro Shimoda
Subject: [RFC PATCH v5 2/8] iommu/dma: move iommu_dma_unmap_sg() place
Date: Wed, 5 Jun 2019 20:11:48 +0900
Message-Id: <1559733114-4221-3-git-send-email-yoshihiro.shimoda.uh@renesas.com>
In-Reply-To: <1559733114-4221-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>
References: <1559733114-4221-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>
X-Mailing-List: linux-mmc@vger.kernel.org

iommu_dma_map_sg() will use the unmap function in the future. To avoid
a forward declaration, this patch moves the function earlier in the file.
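
As a minimal standalone sketch of the motivation (the names below are
purely illustrative and not taken from dma-iommu.c): defining a static
helper above its caller lets the compiler see its signature, so no
separate forward declaration is needed.

#include <stdio.h>

/* Defined before its caller, so no forward declaration such as
 * "static void unmap_helper(int);" is required. */
static void unmap_helper(int token)
{
	printf("unmap %d\n", token);
}

static void map_helper(int token)
{
	/* The map path calls the unmap helper, as the commit message
	 * describes for iommu_dma_map_sg()/iommu_dma_unmap_sg(). */
	unmap_helper(token);
}

int main(void)
{
	map_helper(42);
	return 0;
}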
Signed-off-by: Yoshihiro Shimoda
---
 drivers/iommu/dma-iommu.c | 48 +++++++++++++++++++++++------------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 0dee374..034caae 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -730,6 +730,30 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 	__iommu_dma_unmap(dev, dma_handle, size);
 }
 
+static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir, unsigned long attrs)
+{
+	dma_addr_t start, end;
+	struct scatterlist *tmp;
+	int i;
+
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		iommu_dma_sync_sg_for_cpu(dev, sg, nents, dir);
+
+	/*
+	 * The scatterlist segments are mapped into a single
+	 * contiguous IOVA allocation, so this is incredibly easy.
+	 */
+	start = sg_dma_address(sg);
+	for_each_sg(sg_next(sg), tmp, nents - 1, i) {
+		if (sg_dma_len(tmp) == 0)
+			break;
+		sg = tmp;
+	}
+	end = sg_dma_address(sg) + sg_dma_len(sg);
+	__iommu_dma_unmap(dev, start, end - start);
+}
+
 /*
  * Prepare a successfully-mapped scatterlist to give back to the caller.
  *
@@ -887,30 +911,6 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	return 0;
 }
 
-static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	dma_addr_t start, end;
-	struct scatterlist *tmp;
-	int i;
-
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		iommu_dma_sync_sg_for_cpu(dev, sg, nents, dir);
-
-	/*
-	 * The scatterlist segments are mapped into a single
-	 * contiguous IOVA allocation, so this is incredibly easy.
-	 */
-	start = sg_dma_address(sg);
-	for_each_sg(sg_next(sg), tmp, nents - 1, i) {
-		if (sg_dma_len(tmp) == 0)
-			break;
-		sg = tmp;
-	}
-	end = sg_dma_address(sg) + sg_dma_len(sg);
-	__iommu_dma_unmap(dev, start, end - start);
-}
-
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
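
For readers unfamiliar with the moved function, the userspace sketch
below mimics the range calculation it performs: the segments share one
contiguous IOVA allocation, so the code walks forward until a
zero-length entry and then unmaps [start, end). The struct and sample
values are assumptions for illustration only, not kernel code.

#include <stdint.h>
#include <stdio.h>

struct seg {
	uint64_t dma_address;
	unsigned int dma_len;
};

int main(void)
{
	/* Segments as the map path would leave them: one contiguous
	 * IOVA run, with a trailing zero-length entry marking the end. */
	struct seg segs[] = {
		{ 0x1000, 0x1000 },
		{ 0x2000, 0x2000 },
		{ 0x4000, 0x1000 },
		{ 0,      0      },	/* unused entry: dma_len == 0 */
	};
	int nents = sizeof(segs) / sizeof(segs[0]);
	uint64_t start = segs[0].dma_address;
	int last = 0;

	/* Walk forward until a zero-length entry, as the patch does with
	 * for_each_sg(); the last non-empty segment ends the range. */
	for (int i = 1; i < nents; i++) {
		if (segs[i].dma_len == 0)
			break;
		last = i;
	}

	uint64_t end = segs[last].dma_address + segs[last].dma_len;
	printf("unmap IOVA range [0x%llx, 0x%llx), length 0x%llx\n",
	       (unsigned long long)start, (unsigned long long)end,
	       (unsigned long long)(end - start));
	return 0;
}

With the sample values the sketch reports an unmap of [0x1000, 0x5000),
mirroring how the kernel loop starts at sg_next(sg) and visits at most
nents - 1 further entries.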