From patchwork Fri May 11 13:55:47 2018
From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Cc: Robin Murphy, iommu@lists.linux-foundation.org, Will Deacon,
 Christoph Hellwig, Yang Li
Subject: [PATCH v3 3/3] arm64: Force swiotlb bounce buffering for
 non-coherent DMA with large CWG
Date: Fri, 11 May 2018 14:55:47 +0100
Message-Id: <20180511135547.34521-4-catalin.marinas@arm.com>
In-Reply-To: <20180511135547.34521-1-catalin.marinas@arm.com>
References: <20180511135547.34521-1-catalin.marinas@arm.com>

On systems with a Cache Writeback Granule (CTR_EL0.CWG) greater than
ARCH_DMA_MINALIGN, DMA cache
maintenance on sub-CWG ranges is not safe, leading to data corruption. If
such a configuration is detected, the kernel will force swiotlb bounce
buffering for all non-coherent devices.

Cc: Will Deacon
Cc: Robin Murphy
Cc: Christoph Hellwig
Signed-off-by: Catalin Marinas
---
 arch/arm64/Kconfig                  |  1 +
 arch/arm64/include/asm/dma-direct.h | 43 +++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/dma-mapping.c         | 17 +++++++++++++++
 arch/arm64/mm/init.c                |  3 ++-
 4 files changed, 63 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/dma-direct.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4938f6d..ef56b2478205 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -17,6 +17,7 @@ config ARM64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
+	select ARCH_HAS_PHYS_TO_DMA
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_SG_CHAIN
 	select ARCH_HAS_STRICT_KERNEL_RWX

diff --git a/arch/arm64/include/asm/dma-direct.h b/arch/arm64/include/asm/dma-direct.h
new file mode 100644
index 000000000000..0c18a4d56702
--- /dev/null
+++ b/arch/arm64/include/asm/dma-direct.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_DMA_DIRECT_H
+#define __ASM_DMA_DIRECT_H
+
+#include <linux/dma-mapping.h>
+#include <linux/swiotlb.h>
+
+#include <asm/cache.h>
+
+DECLARE_STATIC_KEY_FALSE(swiotlb_noncoherent_bounce);
+
+static inline dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr)
+{
+	dma_addr_t dev_addr = (dma_addr_t)paddr;
+
+	return dev_addr - ((dma_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
+}
+
+static inline phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dev_addr)
+{
+	phys_addr_t paddr = (phys_addr_t)dev_addr;
+
+	return paddr + ((phys_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	if (!dev->dma_mask)
+		return false;
+
+	/*
+	 * Force swiotlb buffer bouncing when ARCH_DMA_MINALIGN < CWG.
+	 * The swiotlb bounce buffers are aligned to (1 << IO_TLB_SHIFT).
+	 */
+	if (static_branch_unlikely(&swiotlb_noncoherent_bounce) &&
+	    !is_device_dma_coherent(dev) &&
+	    !is_swiotlb_buffer(__dma_to_phys(dev, addr)))
+		return false;
+
+	return addr + size - 1 <= *dev->dma_mask;
+}
+
+#endif /* __ASM_DMA_DIRECT_H */

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index a96ec0181818..1e9dac8684ca 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -33,6 +33,7 @@
 #include <asm/cacheflush.h>

 static int swiotlb __ro_after_init;
+DEFINE_STATIC_KEY_FALSE(swiotlb_noncoherent_bounce);

 static pgprot_t __get_dma_pgprot(unsigned long attrs, pgprot_t prot,
 				 bool coherent)
@@ -504,6 +505,14 @@ static int __init arm64_dma_init(void)
 	    max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
 		swiotlb = 1;

+	if (WARN_TAINT(ARCH_DMA_MINALIGN < cache_line_size(),
+		       TAINT_CPU_OUT_OF_SPEC,
+		       "ARCH_DMA_MINALIGN smaller than CTR_EL0.CWG (%d < %d)",
+		       ARCH_DMA_MINALIGN, cache_line_size())) {
+		swiotlb = 1;
+		static_branch_enable(&swiotlb_noncoherent_bounce);
+	}
+
 	return atomic_pool_init();
 }
 arch_initcall(arm64_dma_init);
@@ -882,6 +891,14 @@ static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 			const struct iommu_ops *iommu, bool coherent)
 {
+	/*
+	 * Enable swiotlb for buffer bouncing if ARCH_DMA_MINALIGN < CWG.
+	 * dma_capable() forces the actual bounce if the device is
+	 * non-coherent.
+	 */
+	if (static_branch_unlikely(&swiotlb_noncoherent_bounce) && !coherent)
+		iommu = NULL;
+
 	if (!dev->dma_ops)
 		dev->dma_ops = &arm64_swiotlb_dma_ops;

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9f3c47acf8ff..664acf177799 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -586,7 +586,8 @@ static void __init free_unused_memmap(void)
 void __init mem_init(void)
 {
 	if (swiotlb_force == SWIOTLB_FORCE ||
-	    max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
+	    max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT) ||
+	    ARCH_DMA_MINALIGN < cache_line_size())
 		swiotlb_init(1);
 	else
 		swiotlb_force = SWIOTLB_NO_FORCE;