From patchwork Wed Aug 9 20:07:53 2017
From: Tycho Andersen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
    Marco Benatto, Juerg Haefliger, Tycho Andersen
Date: Wed, 9 Aug 2017 14:07:53 -0600
Message-Id: <20170809200755.11234-9-tycho@docker.com>
In-Reply-To: <20170809200755.11234-1-tycho@docker.com>
References: <20170809200755.11234-1-tycho@docker.com>
Subject: [kernel-hardening] [PATCH v5 08/10] arm64/mm: Add support for XPFO to swiotlb

From: Juerg Haefliger

Pages that are unmapped by XPFO need to be mapped before the
__dma_{map,unmap}_area() operations and unmapped again afterwards (to
restore the original state) to prevent fatal page faults.
Signed-off-by: Juerg Haefliger
Signed-off-by: Tycho Andersen
---
 arch/arm64/include/asm/cacheflush.h | 11 +++++++++
 arch/arm64/mm/dma-mapping.c         | 32 +++++++++++++-------------
 arch/arm64/mm/xpfo.c                | 45 +++++++++++++++++++++++++++++++++++++
 3 files changed, 72 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index d74a284abdc2..b6a462e3b2f9 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -93,6 +93,17 @@ extern void __dma_map_area(const void *, size_t, int);
 extern void __dma_unmap_area(const void *, size_t, int);
 extern void __dma_flush_area(const void *, size_t);
 
+#ifdef CONFIG_XPFO
+#include <linux/xpfo.h>
+#define _dma_map_area(addr, size, dir) \
+	xpfo_dma_map_unmap_area(true, addr, size, dir)
+#define _dma_unmap_area(addr, size, dir) \
+	xpfo_dma_map_unmap_area(false, addr, size, dir)
+#else
+#define _dma_map_area(addr, size, dir) __dma_map_area(addr, size, dir)
+#define _dma_unmap_area(addr, size, dir) __dma_unmap_area(addr, size, dir)
+#endif
+
 /*
  * Copy user data from/to a page which is mapped into a different
  * processes address space. Really, we want to allow our "user
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index f27d4dd04384..a79f200786ab 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -204,7 +204,7 @@ static dma_addr_t __swiotlb_map_page(struct device *dev, struct page *page,
 	dev_addr = swiotlb_map_page(dev, page, offset, size, dir, attrs);
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
-		__dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 
 	return dev_addr;
 }
@@ -216,7 +216,7 @@ static void __swiotlb_unmap_page(struct device *dev, dma_addr_t dev_addr,
 {
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
-		__dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 	swiotlb_unmap_page(dev, dev_addr, size, dir, attrs);
 }
 
@@ -231,8 +231,8 @@ static int __swiotlb_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
 		for_each_sg(sgl, sg, ret, i)
-			__dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-				       sg->length, dir);
+			_dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+				      sg->length, dir);
 
 	return ret;
 }
@@ -248,8 +248,8 @@ static void __swiotlb_unmap_sg_attrs(struct device *dev,
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-					 sg->length, dir);
+			_dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+					sg->length, dir);
 	swiotlb_unmap_sg_attrs(dev, sgl, nelems, dir, attrs);
 }
 
@@ -258,7 +258,7 @@ static void __swiotlb_sync_single_for_cpu(struct device *dev,
 					  enum dma_data_direction dir)
 {
 	if (!is_device_dma_coherent(dev))
-		__dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 	swiotlb_sync_single_for_cpu(dev, dev_addr, size, dir);
 }
 
@@ -268,7 +268,7 @@ static void __swiotlb_sync_single_for_device(struct device *dev,
 {
 	swiotlb_sync_single_for_device(dev, dev_addr, size, dir);
 	if (!is_device_dma_coherent(dev))
-		__dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 }
 
 static void __swiotlb_sync_sg_for_cpu(struct device *dev,
@@ -280,8 +280,8 @@ static void __swiotlb_sync_sg_for_cpu(struct device *dev,
 
 	if (!is_device_dma_coherent(dev))
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-					 sg->length, dir);
+			_dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+					sg->length, dir);
 	swiotlb_sync_sg_for_cpu(dev, sgl, nelems, dir);
 }
 
@@ -295,8 +295,8 @@ static void __swiotlb_sync_sg_for_device(struct device *dev,
 	swiotlb_sync_sg_for_device(dev, sgl, nelems, dir);
 	if (!is_device_dma_coherent(dev))
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-				       sg->length, dir);
+			_dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+				      sg->length, dir);
 }
 
 static int __swiotlb_mmap_pfn(struct vm_area_struct *vma,
@@ -758,7 +758,7 @@ static void __iommu_sync_single_for_cpu(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_domain_for_dev(dev), dev_addr);
-	__dma_unmap_area(phys_to_virt(phys), size, dir);
+	_dma_unmap_area(phys_to_virt(phys), size, dir);
 }
 
 static void __iommu_sync_single_for_device(struct device *dev,
@@ -771,7 +771,7 @@ static void __iommu_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_domain_for_dev(dev), dev_addr);
-	__dma_map_area(phys_to_virt(phys), size, dir);
+	_dma_map_area(phys_to_virt(phys), size, dir);
 }
 
 static dma_addr_t __iommu_map_page(struct device *dev, struct page *page,
@@ -811,7 +811,7 @@ static void __iommu_sync_sg_for_cpu(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i)
-		__dma_unmap_area(sg_virt(sg), sg->length, dir);
+		_dma_unmap_area(sg_virt(sg), sg->length, dir);
 }
 
 static void __iommu_sync_sg_for_device(struct device *dev,
@@ -825,7 +825,7 @@ static void __iommu_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i)
-		__dma_map_area(sg_virt(sg), sg->length, dir);
+		_dma_map_area(sg_virt(sg), sg->length, dir);
 }
 
 static int __iommu_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
index de03a652d48a..c4deb2b720cf 100644
--- a/arch/arm64/mm/xpfo.c
+++ b/arch/arm64/mm/xpfo.c
@@ -11,8 +11,10 @@
  * the Free Software Foundation.
  */
 
+#include <linux/highmem.h>
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/xpfo.h>
 
 #include <asm/tlbflush.h>
 
@@ -62,3 +64,46 @@ inline void xpfo_flush_kernel_page(struct page *page, int order)
 
 	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
 }
+
+inline void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
+				    int dir)
+{
+	unsigned long flags;
+	struct page *page = virt_to_page(addr);
+
+	/*
+	 * +2 here because we really want
+	 * ceil(size / PAGE_SIZE), not floor(), and one extra in case things are
+	 * not page aligned
+	 */
+	int i, possible_pages = size / PAGE_SIZE + 2;
+	void *buf[possible_pages];
+
+	memset(buf, 0, sizeof(void *) * possible_pages);
+
+	local_irq_save(flags);
+
+	/* Map the first page */
+	if (xpfo_page_is_unmapped(page))
+		buf[0] = kmap_atomic(page);
+
+	/* Map the remaining pages */
+	for (i = 1; i < possible_pages; i++) {
+		if (page_to_virt(page + i) >= addr + size)
+			break;
+
+		if (xpfo_page_is_unmapped(page + i))
+			buf[i] = kmap_atomic(page + i);
+	}
+
+	if (map)
+		__dma_map_area(addr, size, dir);
+	else
+		__dma_unmap_area(addr, size, dir);
+
+	for (i = 0; i < possible_pages; i++)
+		if (buf[i])
+			kunmap_atomic(buf[i]);
+
+	local_irq_restore(flags);
+}
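A worked example of the possible_pages arithmetic in xpfo_dma_map_unmap_area()
above (illustration only, not part of the patch): with PAGE_SIZE = 4096, a
6000-byte buffer that starts 3000 bytes into a page covers bytes [3000, 9000)
of the first page's range and therefore touches three pages, while
size / PAGE_SIZE is only 1; the +2 gives possible_pages = 3, which is exactly
enough. If the same buffer were page aligned it would touch only two pages,
and the page_to_virt(page + i) >= addr + size check stops the loop before the
spare slot is used.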
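For context, a minimal sketch (not part of the patch) of the kind of
streaming-DMA caller whose path this change affects; mydev_send(), dev, buf
and len are hypothetical names. On arm64 with CONFIG_XPFO, the cache
maintenance performed under dma_map_single()/dma_unmap_single() now goes
through _dma_map_area()/_dma_unmap_area(), so the buffer is temporarily
mapped even if XPFO has removed its pages from the kernel's direct map:

#include <linux/dma-mapping.h>

/* Hypothetical driver helper: stream one buffer to a device. */
static int mydev_send(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* CPU cache maintenance for the buffer happens in here. */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... program the device with "handle" and wait for completion ... */

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
	return 0;
}

Nothing changes for such callers; the temporary mapping and unmapping of
XPFO-unmapped pages happens entirely inside the helpers added by this patch.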