From patchwork Thu Jun 20 10:58:43 2013
From: Hiroshi Doyu <hdoyu@nvidia.com>
X-Patchwork-Submitter: Hiroshi DOYU
X-Patchwork-Id: 2754631
To: linux-arm-kernel@lists.infradead.org
Cc: linux-tegra@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
    iommu@lists.linux-foundation.org, Hiroshi Doyu <hdoyu@nvidia.com>
Subject: [PATCH 2/3] ARM: dma-mapping: Pass DMA attrs as IOMMU prot
Date: Thu, 20 Jun 2013 13:58:43 +0300
Message-ID: <1371725924-13957-3-git-send-email-hdoyu@nvidia.com>
X-Mailer: git-send-email 1.8.1.5

Pass the DMA attributes down as the IOMMU prot argument, so that they can be
processed in the backend
IOMMU implementation. For example, DMA_ATTR_READ_ONLY can be translated into
the corresponding protection setting of each IOMMU H/W implementation.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
---
 arch/arm/mm/dma-mapping.c | 34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 4152ed6..cbc6768 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1254,7 +1254,8 @@ err:
  */
 static dma_addr_t
 ____iommu_create_mapping(struct device *dev, dma_addr_t *req,
-                         struct page **pages, size_t size)
+                         struct page **pages, size_t size,
+                         struct dma_attrs *attrs)
 {
         struct dma_iommu_mapping *mapping = dev->archdata.mapping;
         unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
@@ -1280,7 +1281,7 @@ ____iommu_create_mapping(struct device *dev, dma_addr_t *req,
                         break;
 
                 len = (j - i) << PAGE_SHIFT;
-                ret = iommu_map(mapping->domain, iova, phys, len, 0);
+                ret = iommu_map(mapping->domain, iova, phys, len, (int)attrs);
                 if (ret < 0)
                         goto fail;
                 iova += len;
@@ -1294,9 +1295,10 @@ fail:
 }
 
 static dma_addr_t
-__iommu_create_mapping(struct device *dev, struct page **pages, size_t size)
+__iommu_create_mapping(struct device *dev, struct page **pages, size_t size,
+                       struct dma_attrs *attrs)
 {
-        return ____iommu_create_mapping(dev, NULL, pages, size);
+        return ____iommu_create_mapping(dev, NULL, pages, size, attrs);
 }
 
 static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size)
@@ -1332,7 +1334,7 @@ static struct page **__iommu_get_pages(void *cpu_addr, struct dma_attrs *attrs)
 }
 
 static void *__iommu_alloc_atomic(struct device *dev, size_t size,
-                                  dma_addr_t *handle)
+                                  dma_addr_t *handle, struct dma_attrs *attrs)
 {
         struct page *page;
         void *addr;
@@ -1341,7 +1343,7 @@ static void *__iommu_alloc_atomic(struct device *dev, size_t size,
         if (!addr)
                 return NULL;
 
-        *handle = __iommu_create_mapping(dev, &page, size);
+        *handle = __iommu_create_mapping(dev, &page, size, attrs);
         if (*handle == DMA_ERROR_CODE)
                 goto err_mapping;
 
@@ -1378,17 +1380,20 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
         size = PAGE_ALIGN(size);
 
         if (gfp & GFP_ATOMIC)
-                return __iommu_alloc_atomic(dev, size, handle);
+
+                return __iommu_alloc_atomic(dev, size, handle, attrs);
 
         pages = __iommu_alloc_buffer(dev, size, gfp);
         if (!pages)
                 return NULL;
 
         if (*handle == DMA_ERROR_CODE)
-                *handle = __iommu_create_mapping(dev, pages, size);
+                *handle = __iommu_create_mapping(dev, pages, size, attrs);
         else
-                *handle = ____iommu_create_mapping(dev, handle, pages, size);
+                *handle = ____iommu_create_mapping(dev, handle, pages, size,
+                                                   attrs);
+        *handle = __iommu_create_mapping(dev, pages, size, attrs);
 
         if (*handle == DMA_ERROR_CODE)
                 goto err_buffer;
@@ -1513,7 +1518,7 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 
 skip_cmaint:
         count = size >> PAGE_SHIFT;
-        ret = iommu_map_sg(mapping->domain, iova_base, sg, count, 0);
+        ret = iommu_map_sg(mapping->domain, iova_base, sg, count, (int)attrs);
         if (WARN_ON(ret < 0))
                 goto fail;
 
@@ -1716,7 +1721,8 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
         if (dma_addr == DMA_ERROR_CODE)
                 return dma_addr;
 
-        ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, 0);
+        ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len,
+                        (int)attrs);
         if (ret < 0)
                 goto fail;
 
@@ -1756,7 +1762,8 @@ static dma_addr_t arm_iommu_map_page_at(struct device *dev, struct page *page,
         if (!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
                 __dma_page_cpu_to_dev(page, offset, size, dir);
 
-        ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, 0);
+        ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len,
+                        (int)attrs);
         if (ret < 0)
                 return DMA_ERROR_CODE;
 
@@ -1778,7 +1785,8 @@ static dma_addr_t arm_iommu_map_pages(struct device *dev, struct page **pages,
                         __dma_page_cpu_to_dev(pages[i], 0, PAGE_SIZE, dir);
         }
 
-        ret = iommu_map_pages(mapping->domain, dma_handle, pages, count, 0);
+        ret = iommu_map_pages(mapping->domain, dma_handle, pages, count,
+                              (int)attrs);
         if (ret < 0)
                 return DMA_ERROR_CODE;
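
For reference, below is a minimal sketch of how an IOMMU backend driver could
consume the attributes that now arrive through the prot argument of
iommu_map(). It is an illustration only, not part of this series: the
example_* names and EXAMPLE_PTE_* bits are made up, and DMA_ATTR_READ_ONLY is
assumed to be defined elsewhere in this series. It also relies on a pointer
fitting into an int, which holds on 32-bit ARM but would need a different
mechanism elsewhere.

#include <linux/bitops.h>
#include <linux/dma-attrs.h>
#include <linux/iommu.h>
#include <linux/types.h>

/* Hypothetical H/W page table bits, for illustration only. */
#define EXAMPLE_PTE_VALID       BIT(0)
#define EXAMPLE_PTE_WRITABLE    BIT(1)

/* Stub standing in for the driver's real page-table update. */
static int example_set_pte(struct iommu_domain *domain, unsigned long iova,
                           size_t size, u32 pte)
{
        /* program the H/W page tables here */
        return 0;
}

/* Possible ->map() callback of a hypothetical IOMMU driver. */
static int example_iommu_map(struct iommu_domain *domain, unsigned long iova,
                             phys_addr_t paddr, size_t size, int prot)
{
        /* The arm_iommu_* code above passes (int)attrs as prot. */
        struct dma_attrs *attrs = (struct dma_attrs *)prot;
        u32 pte = (u32)paddr | EXAMPLE_PTE_VALID | EXAMPLE_PTE_WRITABLE;

        /* DMA_ATTR_READ_ONLY (assumed from this series) drops write access. */
        if (attrs && dma_get_attr(DMA_ATTR_READ_ONLY, attrs))
                pte &= ~EXAMPLE_PTE_WRITABLE;

        return example_set_pte(domain, iova, size, pte);
}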