From patchwork Mon Mar 27 12:13:03 2023
From: Arnd Bergmann
To: linux-kernel@vger.kernel.org
Cc: Arnd Bergmann, Vineet Gupta, Russell King, Neil Armstrong,
    Linus Walleij, Catalin Marinas, Will Deacon, Guo Ren, Brian Cain,
    Geert Uytterhoeven, Michal Simek, Thomas Bogendoerfer, Dinh Nguyen,
    Stafford Horne, Helge Deller, Michael Ellerman, Christophe Leroy,
    Paul Walmsley, Palmer Dabbelt, Rich Felker,
    John Paul Adrian Glaubitz, "David S. Miller", Max Filippov,
    Christoph Hellwig, Robin Murphy, Lad Prabhakar, Conor Dooley,
    linux-snps-arc@lists.infradead.org,
    linux-arm-kernel@lists.infradead.org, linux-oxnas@groups.io,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
    linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-xtensa@linux-xtensa.org
Subject: [PATCH 07/21] powerpc: dma-mapping: always clean cache in _for_device() op
Date: Mon, 27 Mar 2023 14:13:03 +0200
Message-Id: <20230327121317.4081816-8-arnd@kernel.org>
In-Reply-To: <20230327121317.4081816-1-arnd@kernel.org>
References: <20230327121317.4081816-1-arnd@kernel.org>

From: Arnd Bergmann

The powerpc implementation of arch_sync_dma_for_device() is unique in
that it sometimes performs a full flush for the
arch_sync_dma_for_device(paddr, size, DMA_FROM_DEVICE) operation when
the address is unaligned, but otherwise invalidates the caches.
Since the _for_cpu() counterpart has to invalidate the cache anyway in
order to avoid stale data from prefetching, this operation only really
needs to ensure that there are no dirty cache lines, which can be done
using either invalidation or cleaning the cache, but not necessarily
both. Most architectures traditionally go for invalidation here, but as
Will Deacon points out, this can leak old data to user space if a DMA
is started but the device ends up not actually filling the entire
buffer; see the link below.

The same argument applies to DMA_BIDIRECTIONAL transfers. Using a
cache-clean operation is the safe choice here, followed by invalidating
the cache after the DMA to get rid of stale data that was prefetched
before the completion of the DMA.

Link: https://lore.kernel.org/all/20220606152150.GA31568@willie-the-truck/
Signed-off-by: Arnd Bergmann
---
 arch/powerpc/mm/dma-noncoherent.c | 21 +--------------------
 1 file changed, 1 insertion(+), 20 deletions(-)

diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
index e108cacf877f..00e59a4faa2b 100644
--- a/arch/powerpc/mm/dma-noncoherent.c
+++ b/arch/powerpc/mm/dma-noncoherent.c
@@ -104,26 +104,7 @@ static void __dma_phys_op(phys_addr_t paddr, size_t size, enum dma_cache_op op)
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
-	switch (direction) {
-	case DMA_NONE:
-		BUG();
-	case DMA_FROM_DEVICE:
-		/*
-		 * invalidate only when cache-line aligned otherwise there is
-		 * the potential for discarding uncommitted data from the cache
-		 */
-		if ((start | end) & (L1_CACHE_BYTES - 1))
-			__dma_phys_op(start, end, DMA_CACHE_FLUSH);
-		else
-			__dma_phys_op(start, end, DMA_CACHE_INVAL);
-		break;
-	case DMA_TO_DEVICE:	/* writeback only */
-		__dma_phys_op(start, end, DMA_CACHE_CLEAN);
-		break;
-	case DMA_BIDIRECTIONAL:	/* writeback and invalidate */
-		__dma_phys_op(start, end, DMA_CACHE_FLUSH);
-		break;
-	}
+	__dma_phys_op(start, end, DMA_CACHE_CLEAN);
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
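
For readers less familiar with streaming-DMA cache maintenance, the
pattern argued for in the commit message is sketched below as a small
stand-alone user-space C program, not kernel code. The helpers
cache_clean() and cache_inval() are hypothetical stand-ins for the real
dcache primitives; only the clean-before / invalidate-after split is
meant to match what the patch implements.

#include <stdio.h>
#include <stddef.h>

enum dma_data_direction {
	DMA_BIDIRECTIONAL,
	DMA_TO_DEVICE,
	DMA_FROM_DEVICE,
	DMA_NONE,
};

/* hypothetical primitive: write back dirty lines, keep them valid */
static void cache_clean(unsigned long paddr, size_t size)
{
	printf("clean %#lx + %zu\n", paddr, size);
}

/* hypothetical primitive: discard lines without writing them back */
static void cache_inval(unsigned long paddr, size_t size)
{
	printf("inval %#lx + %zu\n", paddr, size);
}

/*
 * Before the DMA: make sure no dirty line can later be written back
 * over the buffer. Cleaning is safe for every direction and, unlike
 * invalidation, cannot expose stale memory contents if the device
 * writes less than the full buffer.
 */
static void sync_for_device(unsigned long paddr, size_t size,
			    enum dma_data_direction dir)
{
	(void)dir;		/* same operation for all directions */
	cache_clean(paddr, size);
}

/*
 * After the DMA: discard anything the CPU may have prefetched while
 * the device owned the buffer, so subsequent reads see the DMA'd data.
 */
static void sync_for_cpu(unsigned long paddr, size_t size,
			 enum dma_data_direction dir)
{
	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
		cache_inval(paddr, size);
}

int main(void)
{
	sync_for_device(0x1000, 256, DMA_FROM_DEVICE);	/* clean */
	/* ... device writes the buffer ... */
	sync_for_cpu(0x1000, 256, DMA_FROM_DEVICE);	/* inval */
	return 0;
}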