From patchwork Thu Sep 7 17:36:06 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9942593
From: Tycho Andersen <tycho@docker.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
 Marco Benatto, Juerg Haefliger,
 linux-arm-kernel@lists.infradead.org, Tycho Andersen
Date: Thu, 7 Sep 2017 11:36:06 -0600
Message-Id: <20170907173609.22696-9-tycho@docker.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170907173609.22696-1-tycho@docker.com>
References: <20170907173609.22696-1-tycho@docker.com>
Subject: [kernel-hardening] [PATCH v6 08/11] arm64/mm: Add support for XPFO to swiotlb

From: Juerg Haefliger

Pages that are unmapped by XPFO need to be mapped before and unmapped
again after (to restore the original state) the __dma_{map,unmap}_area()
operations to prevent fatal page faults.
v6: * use the hoisted out temporary mapping code instead

CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Juerg Haefliger
Signed-off-by: Tycho Andersen
---
 arch/arm64/include/asm/cacheflush.h | 11 +++++++++++
 arch/arm64/mm/dma-mapping.c         | 32 ++++++++++++++++----------------
 arch/arm64/mm/xpfo.c                | 18 ++++++++++++++++++
 include/linux/xpfo.h                |  2 ++
 4 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index d74a284abdc2..b6a462e3b2f9 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -93,6 +93,17 @@ extern void __dma_map_area(const void *, size_t, int);
 extern void __dma_unmap_area(const void *, size_t, int);
 extern void __dma_flush_area(const void *, size_t);
 
+#ifdef CONFIG_XPFO
+#include <linux/xpfo.h>
+#define _dma_map_area(addr, size, dir) \
+	xpfo_dma_map_unmap_area(true, addr, size, dir)
+#define _dma_unmap_area(addr, size, dir) \
+	xpfo_dma_map_unmap_area(false, addr, size, dir)
+#else
+#define _dma_map_area(addr, size, dir) __dma_map_area(addr, size, dir)
+#define _dma_unmap_area(addr, size, dir) __dma_unmap_area(addr, size, dir)
+#endif
+
 /*
  * Copy user data from/to a page which is mapped into a different
  * processes address space. Really, we want to allow our "user
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index f27d4dd04384..a79f200786ab 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -204,7 +204,7 @@ static dma_addr_t __swiotlb_map_page(struct device *dev, struct page *page,
 	dev_addr = swiotlb_map_page(dev, page, offset, size, dir, attrs);
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
-		__dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 
 	return dev_addr;
 }
@@ -216,7 +216,7 @@ static void __swiotlb_unmap_page(struct device *dev, dma_addr_t dev_addr,
 {
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
-		__dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 	swiotlb_unmap_page(dev, dev_addr, size, dir, attrs);
 }
@@ -231,8 +231,8 @@ static int __swiotlb_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
 		for_each_sg(sgl, sg, ret, i)
-			__dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-				       sg->length, dir);
+			_dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+				      sg->length, dir);
 
 	return ret;
 }
@@ -248,8 +248,8 @@ static void __swiotlb_unmap_sg_attrs(struct device *dev,
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-					 sg->length, dir);
+			_dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+					sg->length, dir);
 	swiotlb_unmap_sg_attrs(dev, sgl, nelems, dir, attrs);
 }
@@ -258,7 +258,7 @@ static void __swiotlb_sync_single_for_cpu(struct device *dev,
 					  enum dma_data_direction dir)
 {
 	if (!is_device_dma_coherent(dev))
-		__dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 	swiotlb_sync_single_for_cpu(dev, dev_addr, size, dir);
 }
@@ -268,7 +268,7 @@ static void __swiotlb_sync_single_for_device(struct device *dev,
 {
 	swiotlb_sync_single_for_device(dev, dev_addr, size, dir);
 	if (!is_device_dma_coherent(dev))
-		__dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 }
 
 static void __swiotlb_sync_sg_for_cpu(struct device *dev,
@@ -280,8 +280,8 @@ static void __swiotlb_sync_sg_for_cpu(struct device *dev,
 
 	if (!is_device_dma_coherent(dev))
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-					 sg->length, dir);
+			_dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+					sg->length, dir);
 	swiotlb_sync_sg_for_cpu(dev, sgl, nelems, dir);
 }
@@ -295,8 +295,8 @@ static void __swiotlb_sync_sg_for_device(struct device *dev,
 	swiotlb_sync_sg_for_device(dev, sgl, nelems, dir);
 	if (!is_device_dma_coherent(dev))
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-				       sg->length, dir);
+			_dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+				      sg->length, dir);
 }
 
 static int __swiotlb_mmap_pfn(struct vm_area_struct *vma,
@@ -758,7 +758,7 @@ static void __iommu_sync_single_for_cpu(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_domain_for_dev(dev), dev_addr);
-	__dma_unmap_area(phys_to_virt(phys), size, dir);
+	_dma_unmap_area(phys_to_virt(phys), size, dir);
 }
 
 static void __iommu_sync_single_for_device(struct device *dev,
@@ -771,7 +771,7 @@ static void __iommu_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_domain_for_dev(dev), dev_addr);
-	__dma_map_area(phys_to_virt(phys), size, dir);
+	_dma_map_area(phys_to_virt(phys), size, dir);
 }
 
 static dma_addr_t __iommu_map_page(struct device *dev, struct page *page,
@@ -811,7 +811,7 @@ static void __iommu_sync_sg_for_cpu(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i)
-		__dma_unmap_area(sg_virt(sg), sg->length, dir);
+		_dma_unmap_area(sg_virt(sg), sg->length, dir);
 }
 
 static void __iommu_sync_sg_for_device(struct device *dev,
@@ -825,7 +825,7 @@ static void __iommu_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i)
-		__dma_map_area(sg_virt(sg), sg->length, dir);
+		_dma_map_area(sg_virt(sg), sg->length, dir);
 }
 
 static int __iommu_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
index 678e2be848eb..342a9ccb93c1 100644
--- a/arch/arm64/mm/xpfo.c
+++ b/arch/arm64/mm/xpfo.c
@@ -11,8 +11,10 @@
  * the Free Software Foundation.
  */
 
+#include <linux/dma-mapping.h>
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/xpfo.h>
 
 #include <asm/tlbflush.h>
@@ -56,3 +58,19 @@ inline void xpfo_flush_kernel_tlb(struct page *page, int order)
 
 	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
 }
+
+void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
+			     enum dma_data_direction dir)
+{
+	unsigned long num_pages = XPFO_NUM_PAGES(addr, size);
+	void *mapping[num_pages];
+
+	xpfo_temp_map(addr, size, mapping, sizeof(mapping[0]) * num_pages);
+
+	if (map)
+		__dma_map_area(addr, size, dir);
+	else
+		__dma_unmap_area(addr, size, dir);
+
+	xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
+}
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 304b104ec637..d37a06c9d62c 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -18,6 +18,8 @@
 
 #ifdef CONFIG_XPFO
 
+#include <linux/dma-mapping.h>
+
 extern struct page_ext_operations page_xpfo_ops;
 
 void set_kpte(void *kaddr, struct page *page, pgprot_t prot);