From patchwork Fri Sep 2 17:32:30 2011
X-Patchwork-Submitter: Ohad Ben Cohen
X-Patchwork-Id: 1122772
From: Ohad Ben-Cohen
To:
Cc: Hiroshi DOYU, Laurent Pinchart, Joerg Roedel, David Woodhouse,
	David Brown, Arnd Bergmann, Ohad Ben-Cohen
Subject: [PATCH 1/7] iommu/omap-iovmm: support non page-aligned buffers in
	iommu_vmap
Date: Fri, 2 Sep 2011 20:32:30 +0300
Message-Id: <1314984756-4400-2-git-send-email-ohad@wizery.com>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1314984756-4400-1-git-send-email-ohad@wizery.com>
References: <1314984756-4400-1-git-send-email-ohad@wizery.com>
X-Mailing-List: linux-omap@vger.kernel.org

From: Laurent Pinchart

omap_iovmm requires page-aligned buffers, and that sometimes causes
omap3isp failures (i.e. whenever the buffer passed from userspace is
not page-aligned).

Remove this limitation by rounding the address of the first page entry
down, and adding the offset back to the device address.
Signed-off-by: Laurent Pinchart
Acked-by: Hiroshi DOYU
[ohad@wizery.com: rebased, but tested only with aligned buffers]
[ohad@wizery.com: slightly edited the commit log]
Signed-off-by: Ohad Ben-Cohen
---
 drivers/iommu/omap-iovmm.c |   36 ++++++++++++++++++++++++++----------
 1 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/omap-iovmm.c b/drivers/iommu/omap-iovmm.c
index 5e7f97d..39bdb92 100644
--- a/drivers/iommu/omap-iovmm.c
+++ b/drivers/iommu/omap-iovmm.c
@@ -27,6 +27,15 @@
 
 static struct kmem_cache *iovm_area_cachep;
 
+/* return the offset of the first scatterlist entry in a sg table */
+static unsigned int sgtable_offset(const struct sg_table *sgt)
+{
+	if (!sgt || !sgt->nents)
+		return 0;
+
+	return sgt->sgl->offset;
+}
+
 /* return total bytes of sg buffers */
 static size_t sgtable_len(const struct sg_table *sgt)
 {
@@ -39,11 +48,17 @@ static size_t sgtable_len(const struct sg_table *sgt)
 	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
 		size_t bytes;
 
-		bytes = sg->length;
+		bytes = sg->length + sg->offset;
 
 		if (!iopgsz_ok(bytes)) {
-			pr_err("%s: sg[%d] not iommu pagesize(%x)\n",
-			       __func__, i, bytes);
+			pr_err("%s: sg[%d] not iommu pagesize(%u %u)\n",
+			       __func__, i, bytes, sg->offset);
+			return 0;
+		}
+
+		if (i && sg->offset) {
+			pr_err("%s: sg[%d] offset not allowed in internal "
+					"entries\n", __func__, i);
 			return 0;
 		}
 
@@ -164,8 +179,8 @@ static void *vmap_sg(const struct sg_table *sgt)
 		u32 pa;
 		int err;
 
-		pa = sg_phys(sg);
-		bytes = sg->length;
+		pa = sg_phys(sg) - sg->offset;
+		bytes = sg->length + sg->offset;
 
 		BUG_ON(bytes != PAGE_SIZE);
 
@@ -405,8 +420,8 @@ static int map_iovm_area(struct iommu_domain *domain, struct iovm_struct *new,
 		u32 pa;
 		size_t bytes;
 
-		pa = sg_phys(sg);
-		bytes = sg->length;
+		pa = sg_phys(sg) - sg->offset;
+		bytes = sg->length + sg->offset;
 
 		flags &= ~IOVMF_PGSZ_MASK;
 
@@ -432,7 +447,7 @@ err_out:
 	for_each_sg(sgt->sgl, sg, i, j) {
 		size_t bytes;
 
-		bytes = sg->length;
+		bytes = sg->length + sg->offset;
 		order = get_order(bytes);
 
 		/* ignore failures.. we're already handling one */
@@ -461,7 +476,7 @@ static void unmap_iovm_area(struct iommu_domain *domain, struct omap_iommu *obj,
 		size_t bytes;
 		int order;
 
-		bytes = sg->length;
+		bytes = sg->length + sg->offset;
 		order = get_order(bytes);
 
 		err = iommu_unmap(domain, start, order);
@@ -600,7 +615,7 @@ u32 omap_iommu_vmap(struct iommu_domain *domain, struct omap_iommu *obj, u32 da,
 	if (IS_ERR_VALUE(da))
 		vunmap_sg(va);
 
-	return da;
+	return da + sgtable_offset(sgt);
 }
 EXPORT_SYMBOL_GPL(omap_iommu_vmap);
 
@@ -620,6 +635,7 @@ omap_iommu_vunmap(struct iommu_domain *domain, struct omap_iommu *obj, u32 da)
 	 * 'sgt' is allocated before 'omap_iommu_vmalloc()' is called.
 	 * Just returns 'sgt' to the caller to free
 	 */
+	da &= PAGE_MASK;
 	sgt = unmap_vm_area(domain, obj, da, vunmap_sg,
 				IOVMF_DISCONT | IOVMF_MMIO);
 	if (!sgt)
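
[Not part of the patch: a minimal standalone sketch of the alignment
arithmetic applied above. PAGE_SIZE, the physical address, the device
address and all variable names are made-up example values; only the
round-down / offset-restore steps mirror what map_iovm_area(),
omap_iommu_vmap() and omap_iommu_vunmap() now do.]

#include <stdio.h>

/* Mirror the kernel's 4 KiB page constants for the example. */
#define PAGE_SIZE	4096u
#define PAGE_MASK	(~(PAGE_SIZE - 1u))

int main(void)
{
	/* Hypothetical first scatterlist entry: the data starts 0x123
	 * bytes into its page, i.e. the buffer is not page-aligned. */
	unsigned int phys   = 0x8000a123;		/* sg_phys(sg) */
	unsigned int offset = phys & ~PAGE_MASK;	/* sg->offset == 0x123 */
	unsigned int length = PAGE_SIZE - offset;	/* sg->length */

	/* map_iovm_area()/vmap_sg(): round the physical address down to
	 * the page boundary and grow the mapping by the same amount, so
	 * the whole page holding the data gets mapped. */
	unsigned int pa    = phys - offset;		/* 0x8000a000 */
	unsigned int bytes = length + offset;		/* PAGE_SIZE, iopgsz_ok */

	/* omap_iommu_vmap(): the mapping is created at a page-aligned
	 * device address 'da'; adding the offset back gives the caller
	 * an address that points at the first data byte. */
	unsigned int da          = 0x10000000;		/* made-up device address */
	unsigned int da_returned = da + offset;		/* da + sgtable_offset(sgt) */

	/* omap_iommu_vunmap(): 'da &= PAGE_MASK' masks the offset off
	 * again to recover the address the mapping was created at. */
	unsigned int da_unmapped = da_returned & PAGE_MASK;	/* == da */

	printf("pa=0x%08x bytes=0x%x da=0x%08x unmapped at 0x%08x\n",
	       pa, bytes, da_returned, da_unmapped);
	return 0;
}

Built with any C compiler, this prints
pa=0x8000a000 bytes=0x1000 da=0x10000123 unmapped at 0x10000000.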