From patchwork Mon Dec 14 23:17:39 2015
Subject: [for-4.4-rc6 PATCH] scatterlist: fix sg_phys() masking
From: Dan Williams
To: axboe@fb.com
Cc: Russell King, Vitaly Lavrov, Joerg Roedel, x86@kernel.org,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org, Andrew Morton,
 David Woodhouse, Christoph Hellwig, linux-arm-kernel@lists.infradead.org
Date: Mon, 14 Dec 2015 15:17:39 -0800
Message-ID: <20151214231739.10377.11843.stgit@dwillia2-desk3.jf.intel.com>
User-Agent: StGit/0.17.1-9-g687f

commit db0fa0cb0157 "scatterlist: use sg_phys()" did replacements of the
form:

    phys_addr_t phys = page_to_phys(sg_page(s));
    phys_addr_t phys = sg_phys(s) & PAGE_MASK;

However, this breaks platforms where sizeof(phys_addr_t) >
sizeof(unsigned long).  Since PAGE_MASK is an unsigned long, it
inadvertently masks off the high bits returned by sg_phys().  Convert to
PHYSICAL_PAGE_MASK in these cases, which will do the proper sign
extension.  As caught by the kbuild robot, a generic fallback definition
of PHYSICAL_PAGE_MASK is needed for several archs.
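To make the failure mode concrete, here is a minimal standalone C sketch
(an illustration for this discussion, not part of the patch): the
kernel_ulong_t / kernel_slong_t typedefs and the local PAGE_SHIFT /
PAGE_MASK definitions are stand-ins that model a 32-bit unsigned/signed
long next to a 64-bit phys_addr_t, as on ARM LPAE or x86 PAE.

/*
 * Standalone sketch (not kernel code): reproduces the masking bug on
 * any 64-bit host by modelling a 32-bit unsigned long alongside a
 * 64-bit physical address type.
 */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;		/* 64-bit physical addresses */
typedef uint32_t kernel_ulong_t;	/* models a 32-bit unsigned long */
typedef int32_t  kernel_slong_t;	/* models a 32-bit signed long */

#define PAGE_SHIFT	12
#define PAGE_MASK	((kernel_ulong_t)~((1u << PAGE_SHIFT) - 1))	/* 0xfffff000 */

int main(void)
{
	phys_addr_t phys = 0x123456789000ULL;	/* page-aligned, above 4GB */

	/* The unsigned 32-bit mask is zero-extended: bits 32+ are cleared. */
	printf("& PAGE_MASK:          %#" PRIx64 "\n", phys & PAGE_MASK);

	/* The signed cast sign-extends to ...fffff000: bits 32+ survive. */
	printf("& (signed long) mask: %#" PRIx64 "\n",
	       phys & (phys_addr_t)(kernel_slong_t)PAGE_MASK);

	return 0;
}

The first printf loses everything above bit 31 (0x56789000), while the
sign-extended mask preserves the full address (0x123456789000), which is
the behaviour the PHYSICAL_PAGE_MASK fallback below provides.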
Cc: Jens Axboe
Cc: Joerg Roedel
Cc: Christoph Hellwig
Cc: Russell King
Cc: David Woodhouse
Cc: Andrew Morton
Reported-by: Vitaly Lavrov
Signed-off-by: Dan Williams
---
 arch/arm/mm/dma-mapping.c                    |    2 +-
 drivers/iommu/intel-iommu.c                  |    2 +-
 drivers/staging/android/ion/ion_chunk_heap.c |    4 ++--
 include/linux/mm.h                           |   12 ++++++++++++
 4 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index e62400e5fb99..3eec6cbe9995 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1521,7 +1521,7 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 		return -ENOMEM;
 
 	for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
-		phys_addr_t phys = sg_phys(s) & PAGE_MASK;
+		phys_addr_t phys = sg_phys(s) & PHYSICAL_PAGE_MASK;
 		unsigned int len = PAGE_ALIGN(s->offset + s->length);
 
 		if (!is_coherent &&
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index f1042daef9ad..f3bc16c5ac70 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2159,7 +2159,7 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 			sg_res = aligned_nrpages(sg->offset, sg->length);
 			sg->dma_address = ((dma_addr_t)iov_pfn << VTD_PAGE_SHIFT) + sg->offset;
 			sg->dma_length = sg->length;
-			pteval = (sg_phys(sg) & PAGE_MASK) | prot;
+			pteval = (sg_phys(sg) & PHYSICAL_PAGE_MASK) | prot;
 			phys_pfn = pteval >> VTD_PAGE_SHIFT;
 		}
 
diff --git a/drivers/staging/android/ion/ion_chunk_heap.c b/drivers/staging/android/ion/ion_chunk_heap.c
index 195c41d7bd53..b5bb0aa26a3f 100644
--- a/drivers/staging/android/ion/ion_chunk_heap.c
+++ b/drivers/staging/android/ion/ion_chunk_heap.c
@@ -81,7 +81,7 @@ static int ion_chunk_heap_allocate(struct ion_heap *heap,
 err:
 	sg = table->sgl;
 	for (i -= 1; i >= 0; i--) {
-		gen_pool_free(chunk_heap->pool, sg_phys(sg) & PAGE_MASK,
+		gen_pool_free(chunk_heap->pool, sg_phys(sg) & PHYSICAL_PAGE_MASK,
 			      sg->length);
 		sg = sg_next(sg);
 	}
@@ -109,7 +109,7 @@ static void ion_chunk_heap_free(struct ion_buffer *buffer)
 				       DMA_BIDIRECTIONAL);
 
 	for_each_sg(table->sgl, sg, table->nents, i) {
-		gen_pool_free(chunk_heap->pool, sg_phys(sg) & PAGE_MASK,
+		gen_pool_free(chunk_heap->pool, sg_phys(sg) & PHYSICAL_PAGE_MASK,
 			      sg->length);
 	}
 	chunk_heap->allocated -= allocated_size;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 00bad7793788..877ca73946a4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -90,6 +90,18 @@ extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
 /* test whether an address (unsigned long or pointer) is aligned to PAGE_SIZE */
 #define PAGE_ALIGNED(addr)	IS_ALIGNED((unsigned long)addr, PAGE_SIZE)
 
+#ifndef PHYSICAL_PAGE_MASK
+/*
+ * Cast *PAGE_MASK to a signed type so that it is sign-extended if
+ * virtual addresses are 32-bits but physical addresses are larger (ie,
+ * 32-bit PAE).
+ *
+ * An arch may redefine this to mask out values outside the max
+ * address-width of the cpu.
+ */
+#define PHYSICAL_PAGE_MASK ((signed long) PAGE_MASK)
+#endif
+
 /*
  * Linux kernel virtual memory manager primitives.
  * The idea being to have a "virtual" mm in the same way