From patchwork Thu Jun 3 12:53:07 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Laurent Pinchart
X-Patchwork-Id: 104086
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by demeter.kernel.org (8.14.3/8.14.3) with ESMTP id o53CohsO000565
	for ; Thu, 3 Jun 2010 12:50:43 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752573Ab0FCMul (ORCPT ); Thu, 3 Jun 2010 08:50:41 -0400
Received: from perceval.irobotique.be ([92.243.18.41]:34064 "EHLO
	perceval.irobotique.be" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752492Ab0FCMul (ORCPT ); Thu, 3 Jun 2010 08:50:41 -0400
Received: from localhost.localdomain
	(115.110-65-87.adsl-dyn.isp.belgacom.be [87.65.110.115])
	by perceval.irobotique.be (Postfix) with ESMTPSA id 885AD361D1;
	Thu, 3 Jun 2010 12:49:57 +0000 (UTC)
From: Laurent Pinchart
To: linux-omap@vger.kernel.org
Cc: hiroshi.doyu@nokia.com
Subject: [PATCH] iovmm: Support non page-aligned buffers in iommu_vmap
Date: Thu, 3 Jun 2010 14:53:07 +0200
Message-Id: <1275569587-32463-1-git-send-email-laurent.pinchart@ideasonboard.com>
X-Mailer: git-send-email 1.6.4.4
Sender: linux-omap-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-omap@vger.kernel.org
X-Greylist: IP, sender and recipient auto-whitelisted, not delayed by
	milter-greylist-4.2.3 (demeter.kernel.org [140.211.167.41]);
	Thu, 03 Jun 2010 12:50:44 +0000 (UTC)

diff --git a/arch/arm/plat-omap/iovmm.c b/arch/arm/plat-omap/iovmm.c
index 663e2d2..7d63f98 100644
--- a/arch/arm/plat-omap/iovmm.c
+++ b/arch/arm/plat-omap/iovmm.c
@@ -59,6 +59,15 @@ static struct kmem_cache *iovm_area_cachep;
 
+/* return the offset of the first scatterlist entry in a sg table */
+static unsigned int sgtable_offset(const struct sg_table *sgt)
+{
+	if (!sgt || !sgt->nents)
+		return 0;
+
+	return sgt->sgl->offset;
+}
+
 /* return total bytes of sg buffers */
 static size_t sgtable_len(const struct sg_table *sgt)
 {
@@ -71,11 +80,17 @@ static size_t sgtable_len(const struct sg_table *sgt)
 	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
 		size_t bytes;
 
-		bytes = sg_dma_len(sg);
+		bytes = sg_dma_len(sg) + sg->offset;
 
 		if (!iopgsz_ok(bytes)) {
-			pr_err("%s: sg[%d] not iommu pagesize(%x)\n",
-			       __func__, i, bytes);
+			pr_err("%s: sg[%d] not iommu pagesize(%u %u)\n",
+			       __func__, i, bytes, sg->offset);
+			return 0;
+		}
+
+		if (i && sg->offset) {
+			pr_err("%s: sg[%d] offset not allowed in internal "
+			       "entries\n", __func__, i);
 			return 0;
 		}
 
@@ -197,8 +212,8 @@ static void *vmap_sg(const struct sg_table *sgt)
 		u32 pa;
 		int err;
 
-		pa = sg_phys(sg);
-		bytes = sg_dma_len(sg);
+		pa = sg_phys(sg) - sg->offset;
+		bytes = sg_dma_len(sg) + sg->offset;
 
 		BUG_ON(bytes != PAGE_SIZE);
 
@@ -467,8 +482,8 @@ static int map_iovm_area(struct iommu *obj, struct iovm_struct *new,
 		size_t bytes;
 		struct iotlb_entry e;
 
-		pa = sg_phys(sg);
-		bytes = sg_dma_len(sg);
+		pa = sg_phys(sg) - sg->offset;
+		bytes = sg_dma_len(sg) + sg->offset;
 
 		flags &= ~IOVMF_PGSZ_MASK;
 		pgsz = bytes_to_iopgsz(bytes);
@@ -649,7 +664,7 @@ u32 iommu_vmap(struct iommu *obj, u32 da, const struct sg_table *sgt,
 	if (IS_ERR_VALUE(da))
 		vunmap_sg(va);
 
-	return da;
+	return da + sgtable_offset(sgt);
 }
 EXPORT_SYMBOL_GPL(iommu_vmap);
 
@@ -668,6 +683,7 @@ struct sg_table *iommu_vunmap(struct iommu *obj, u32 da)
 	 * 'sgt' is allocated before 'iommu_vmalloc()' is called.
	 * Just returns 'sgt' to the caller to free
	 */
+	da &= PAGE_MASK;
 	sgt = unmap_vm_area(obj, da, vunmap_sg, IOVMF_DISCONT | IOVMF_MMIO);
 	if (!sgt)
 		dev_dbg(obj->dev, "%s: No sgt\n", __func__);
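
For reference, a minimal caller-side sketch of what this change enables, assuming the plat-omap iommu/iovmm headers of this era, a single-entry scatterlist whose length plus offset still adds up to a supported IOMMU page size (as sgtable_len() checks above), and zero da/flags arguments; the helper names and error handling are illustrative only and not part of the patch. The point is that iommu_vmap() now returns a device address carrying the same sub-page offset as the first scatterlist entry, and iommu_vunmap() masks that offset off again before looking up the mapped area.

/* Hypothetical caller-side sketch, not part of the patch. */
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <plat/iommu.h>
#include <plat/iovmm.h>

static u32 map_unaligned_buffer(struct iommu *obj, void *buf, size_t len,
				struct sg_table *sgt)
{
	u32 da;

	if (sg_alloc_table(sgt, 1, GFP_KERNEL))
		return 0;

	/* buf need not be page aligned; sg->offset records its offset */
	sg_set_buf(sgt->sgl, buf, len);

	/* da = 0 lets iovmm pick the device address; flags = 0 assumed */
	da = iommu_vmap(obj, 0, sgt, 0);
	if (IS_ERR_VALUE(da)) {
		sg_free_table(sgt);
		return 0;
	}

	/* da now carries buf's offset within its first page */
	return da;
}

static void unmap_unaligned_buffer(struct iommu *obj, u32 da)
{
	struct sg_table *sgt;

	/* iommu_vunmap() masks the sub-page offset off before the lookup */
	sgt = iommu_vunmap(obj, da);
	if (sgt)
		sg_free_table(sgt);
}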