From patchwork Wed Jun 1 13:30:12 2011
X-Patchwork-Submitter: Laurent Pinchart
X-Patchwork-Id: 840172
From: Laurent Pinchart
To: linux-omap@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 2/2] omap3: iovmm: Support non page-aligned buffers in iommu_vmap
Date: Wed, 1 Jun 2011 15:30:12 +0200
Message-Id: <1306935012-12406-2-git-send-email-laurent.pinchart@ideasonboard.com>
In-Reply-To: <20110601131744.GH11352@atomide.com>
References: <20110601131744.GH11352@atomide.com>

The IOMMU virtual memory mapping API requires page-aligned buffers.
There's no hardware reason behind such a restriction. Remove it by
rounding the address of the first page entry down, and adding the
offset back to the IOMMU virtual address.
Signed-off-by: Laurent Pinchart
Acked-by: Hiroshi DOYU
---
 arch/arm/plat-omap/iovmm.c |   32 ++++++++++++++++++++++++--------
 1 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/arch/arm/plat-omap/iovmm.c b/arch/arm/plat-omap/iovmm.c
index b82cef4..fa5ae98 100644
--- a/arch/arm/plat-omap/iovmm.c
+++ b/arch/arm/plat-omap/iovmm.c
@@ -60,6 +60,15 @@ static struct kmem_cache *iovm_area_cachep;
 
+/* return the offset of the first scatterlist entry in a sg table */
+static unsigned int sgtable_offset(const struct sg_table *sgt)
+{
+	if (!sgt || !sgt->nents)
+		return 0;
+
+	return sgt->sgl->offset;
+}
+
 /* return total bytes of sg buffers */
 static size_t sgtable_len(const struct sg_table *sgt)
 {
@@ -72,11 +81,17 @@ static size_t sgtable_len(const struct sg_table *sgt)
 	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
 		size_t bytes;
 
-		bytes = sg_dma_len(sg);
+		bytes = sg_dma_len(sg) + sg->offset;
 
 		if (!iopgsz_ok(bytes)) {
-			pr_err("%s: sg[%d] not iommu pagesize(%x)\n",
-			       __func__, i, bytes);
+			pr_err("%s: sg[%d] not iommu pagesize(%u %u)\n",
+			       __func__, i, bytes, sg->offset);
+			return 0;
+		}
+
+		if (i && sg->offset) {
+			pr_err("%s: sg[%d] offset not allowed in internal "
+			       "entries\n", __func__, i);
 			return 0;
 		}
 
@@ -207,8 +222,8 @@ static void *vmap_sg(const struct sg_table *sgt)
 		u32 pa;
 		int err;
 
-		pa = sg_phys(sg);
-		bytes = sg_dma_len(sg);
+		pa = sg_phys(sg) - sg->offset;
+		bytes = sg_dma_len(sg) + sg->offset;
 
 		BUG_ON(bytes != PAGE_SIZE);
 
@@ -485,8 +500,8 @@ static int map_iovm_area(struct iommu *obj, struct iovm_struct *new,
 		size_t bytes;
 		struct iotlb_entry e;
 
-		pa = sg_phys(sg);
-		bytes = sg_dma_len(sg);
+		pa = sg_phys(sg) - sg->offset;
+		bytes = sg_dma_len(sg) + sg->offset;
 
 		flags &= ~IOVMF_PGSZ_MASK;
 		pgsz = bytes_to_iopgsz(bytes);
@@ -666,7 +681,7 @@ u32 iommu_vmap(struct iommu *obj, u32 da, const struct sg_table *sgt,
 	if (IS_ERR_VALUE(da))
 		vunmap_sg(va);
 
-	return da;
+	return da + sgtable_offset(sgt);
 }
 EXPORT_SYMBOL_GPL(iommu_vmap);
 
@@ -685,6 +700,7 @@ struct sg_table *iommu_vunmap(struct iommu *obj, u32 da)
 	 * 'sgt' is allocated before 'iommu_vmalloc()' is called.
 	 * Just returns 'sgt' to the caller to free
 	 */
+	da &= PAGE_MASK;
 	sgt = unmap_vm_area(obj, da, vunmap_sg, IOVMF_DISCONT | IOVMF_MMIO);
 	if (!sgt)
 		dev_dbg(obj->dev, "%s: No sgt\n", __func__);
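
For reviewers who want to see the intent in isolation, here is a minimal
user-space sketch of the address arithmetic described above: map the page
that contains a non page-aligned buffer, hand the caller a device address
that still points at the buffer, and mask the offset off again on unmap.
map_page()/unmap_page(), the addresses and the SKETCH_PAGE_* definitions are
hypothetical stand-ins for illustration only, not part of the iovmm API.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SKETCH_PAGE_SIZE 4096u
#define SKETCH_PAGE_MASK (~(SKETCH_PAGE_SIZE - 1))

/* Pretend to program one IOMMU page table entry for the given da/pa pair. */
static void map_page(uint32_t da, uint32_t pa)
{
	printf("map   da 0x%08" PRIx32 " -> pa 0x%08" PRIx32 "\n", da, pa);
}

static void unmap_page(uint32_t da)
{
	printf("unmap da 0x%08" PRIx32 "\n", da);
}

int main(void)
{
	uint32_t pa = 0x80001234;			/* buffer, not page aligned */
	uint32_t offset = pa & ~SKETCH_PAGE_MASK;	/* 0x234 */
	uint32_t da = 0x10000000;			/* start of the IOMMU area */

	/* Map the whole page the buffer lives in (round pa down)... */
	map_page(da, pa & SKETCH_PAGE_MASK);

	/* ...but give the caller a da that points at the buffer itself. */
	uint32_t da_user = da + offset;
	printf("caller sees da 0x%08" PRIx32 "\n", da_user);

	/* On unmap the offset is dropped again, as iommu_vunmap() now does. */
	unmap_page(da_user & SKETCH_PAGE_MASK);

	return 0;
}

Note that only the first scatterlist entry may carry an offset: sgtable_len()
above rejects an offset on any internal entry, since a hole in the middle of
the mapped area could not be represented by a contiguous IOMMU virtual range.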