From patchwork Mon Aug 14 20:28:19 2023
X-Patchwork-Submitter: "Lad, Prabhakar"
X-Patchwork-Id: 13353232
From: Prabhakar
To: Arnd Bergmann, Christoph Hellwig, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley, Anup Patel, Andrew Jones, Jisheng Zhang, linux-kernel@vger.kernel.org
Cc: Geert Uytterhoeven, Samuel Holland, linux-riscv@lists.infradead.org, linux-renesas-soc@vger.kernel.org, Lad Prabhakar, Palmer Dabbelt
Subject: [(subset) PATCH v2 1/3] riscv: dma-mapping: only invalidate after DMA, not flush
Date: Mon, 14 Aug 2023 21:28:19 +0100
Message-Id: <20230814202821.78120-2-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20230814202821.78120-1-prabhakar.mahadev-lad.rj@bp.renesas.com>
References: <20230814202821.78120-1-prabhakar.mahadev-lad.rj@bp.renesas.com>

From: Arnd Bergmann

No other architecture intentionally writes back dirty cache lines into
a buffer that a device has just finished writing into. If the cache is
clean, this has no effect at all, but if a cacheline in the buffer has
actually been written by the CPU, there is a driver bug that is likely
made worse by overwriting that buffer.
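The direction-to-operation mapping this patch establishes for the CPU side can be sketched outside the kernel. The following is a hypothetical, stand-alone model: the enum mimics include/linux/dma-direction.h, and cmo_for_cpu_after_dma() is an illustrative stand-in for the ALT_CMO_OP dispatch in arch_sync_dma_for_cpu(), not a real kernel function.

```c
#include <assert.h>
#include <string.h>

/* Values mirror include/linux/dma-direction.h */
enum dma_data_direction {
	DMA_BIDIRECTIONAL = 0,
	DMA_TO_DEVICE = 1,
	DMA_FROM_DEVICE = 2,
	DMA_NONE = 3,
};

/*
 * Illustrative stand-in for the post-patch arch_sync_dma_for_cpu()
 * switch: after the device has written the buffer, the CPU side only
 * needs an invalidate (discard possibly-stale lines). Using "flush"
 * (write back + invalidate) here could push a dirty CPU line on top of
 * data the device just produced.
 */
static const char *cmo_for_cpu_after_dma(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_FROM_DEVICE:
	case DMA_BIDIRECTIONAL:
		return "inval";	/* was "flush" before this patch */
	default:
		return "none";	/* DMA_TO_DEVICE needs no post-DMA op */
	}
}
```

With this model, the patch is exactly the change of the FROM_DEVICE/BIDIRECTIONAL result from "flush" to "inval".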
Signed-off-by: Arnd Bergmann
Reviewed-by: Conor Dooley
Reviewed-by: Lad Prabhakar
Acked-by: Palmer Dabbelt
Signed-off-by: Lad Prabhakar
---
v1->v2
* Fixed typo drive->driver
* Included RB and ACKs
---
 arch/riscv/mm/dma-noncoherent.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index d51a75864e53..94614cf61cdd 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -42,7 +42,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
 		break;
 	case DMA_FROM_DEVICE:
 	case DMA_BIDIRECTIONAL:
-		ALT_CMO_OP(flush, vaddr, size, riscv_cbom_block_size);
+		ALT_CMO_OP(inval, vaddr, size, riscv_cbom_block_size);
 		break;
 	default:
 		break;

From patchwork Mon Aug 14 20:28:20 2023
X-Patchwork-Submitter: "Lad, Prabhakar"
X-Patchwork-Id: 13353233
From: Prabhakar
To: Arnd Bergmann, Christoph Hellwig, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley, Anup Patel, Andrew Jones, Jisheng Zhang, linux-kernel@vger.kernel.org
Cc: Geert Uytterhoeven, Samuel Holland, linux-riscv@lists.infradead.org, linux-renesas-soc@vger.kernel.org, Lad Prabhakar, Palmer Dabbelt, Guo Ren
Subject: [(subset) PATCH v2 2/3] riscv: dma-mapping: skip invalidation before bidirectional DMA
Date: Mon, 14 Aug 2023 21:28:20 +0100
Message-Id: <20230814202821.78120-3-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20230814202821.78120-1-prabhakar.mahadev-lad.rj@bp.renesas.com>
References: <20230814202821.78120-1-prabhakar.mahadev-lad.rj@bp.renesas.com>
From: Arnd Bergmann

For a DMA_BIDIRECTIONAL transfer, the caches have to be cleaned first
to let the device see data written by the CPU, and invalidated after
the transfer to let the CPU see data written by the device. riscv also
invalidates the caches before the transfer, which does not appear to
serve any purpose.

Signed-off-by: Arnd Bergmann
Reviewed-by: Conor Dooley
Reviewed-by: Lad Prabhakar
Acked-by: Palmer Dabbelt
Acked-by: Guo Ren
Signed-off-by: Lad Prabhakar
---
v1->v2
* Included RB and ACKs
---
 arch/riscv/mm/dma-noncoherent.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index 94614cf61cdd..fc6377a64c8d 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -25,7 +25,7 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
 		break;
 	case DMA_BIDIRECTIONAL:
-		ALT_CMO_OP(flush, vaddr, size, riscv_cbom_block_size);
+		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
 		break;
 	default:
 		break;

From patchwork Mon Aug 14 20:28:21 2023
X-Patchwork-Submitter: "Lad, Prabhakar"
X-Patchwork-Id: 13353234
From: Prabhakar
To: Arnd Bergmann, Christoph Hellwig, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley, Anup Patel, Andrew Jones, Jisheng Zhang, linux-kernel@vger.kernel.org
Cc: Geert Uytterhoeven, Samuel Holland, linux-riscv@lists.infradead.org, linux-renesas-soc@vger.kernel.org, Lad Prabhakar
Subject: [(subset) PATCH v2 3/3] riscv: dma-mapping: replace custom code with generic implementation
Date: Mon, 14 Aug 2023 21:28:21 +0100
Message-Id: <20230814202821.78120-4-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20230814202821.78120-1-prabhakar.mahadev-lad.rj@bp.renesas.com>
References: <20230814202821.78120-1-prabhakar.mahadev-lad.rj@bp.renesas.com>
From: Arnd Bergmann

Now that all of these have consistent behavior, replace them with a
single shared implementation of arch_sync_dma_for_device() and
arch_sync_dma_for_cpu() and three parameters to pick how they should
operate:

- If the CPU has speculative prefetching, then the cache has to be
  invalidated after a transfer from the device. On the rarer CPUs
  without prefetching, this can be skipped, with all cache management
  happening before the transfer. This flag can be runtime detected, but
  is usually fixed per architecture.

- Some architectures currently clean the caches before DMA from a
  device, while others invalidate them. There has not been a conclusion
  regarding whether we should change all architectures to use clean
  instead, so this adds an architecture specific flag that we can
  change later on.

For the function naming, I picked 'wback' over 'clean', and 'wback_inv'
over 'flush', to avoid any ambiguity of what the helper functions are
supposed to do.

Moving the global functions into a header file is usually a bad idea as
it prevents the header from being included more than once, but it helps
keep the behavior as close as possible to the previous state, including
the possibility of inlining most of it into these functions where that
was done before. This also helps keep the global namespace clean, by
hiding the new arch_dma_cache{_wback,_inv,_wback_inv} from device
drivers that might use them incorrectly.
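The combined effect of the two flags on the for_device side can be modeled in isolation. This is a rough, hypothetical sketch: the stub flags are hard-coded to the values the riscv implementation in this patch returns, CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU is assumed enabled, and op_for_device() merely names which of the wback/inv/wback_inv helpers the shared arch_sync_dma_for_device() would invoke.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum dma_data_direction {
	DMA_BIDIRECTIONAL = 0,
	DMA_TO_DEVICE = 1,
	DMA_FROM_DEVICE = 2,
	DMA_NONE = 3,
};

/* riscv answers "true" for both behavior flags. */
static bool arch_sync_dma_clean_before_fromdevice(void) { return true; }
static bool arch_sync_dma_cpu_needs_post_dma_flush(void) { return true; }

/*
 * Hypothetical model of the shared arch_sync_dma_for_device() dispatch:
 * returns the name of the cache helper that would run before the
 * transfer.
 */
static const char *op_for_device(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_TO_DEVICE:
		return "wback";
	case DMA_FROM_DEVICE:
		if (!arch_sync_dma_clean_before_fromdevice())
			return "inv";
		/* fallthrough to the BIDIRECTIONAL handling */
	case DMA_BIDIRECTIONAL:
		/*
		 * If the post-DMA invalidate happens in for_cpu (CPUs with
		 * speculative prefetch), a clean before the transfer is
		 * enough; otherwise everything must be done up front.
		 */
		return arch_sync_dma_cpu_needs_post_dma_flush() ?
			"wback" : "wback_inv";
	default:
		return "none";
	}
}
```

With the riscv flag values, every pre-DMA operation reduces to a clean ("wback"), matching the two earlier patches in this series.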
Signed-off-by: Arnd Bergmann
Reviewed-by: Lad Prabhakar
Tested-by: Lad Prabhakar
[PL: Dropped other archs, updated commit message and fixed checkpatch issues]
Signed-off-by: Lad Prabhakar
---
v1->v2
* Updated commit message
* Fixed checkpatch issues
* Dropped other archs
---
 arch/riscv/mm/dma-noncoherent.c |  50 +++++++-------
 include/linux/dma-sync.h        | 113 ++++++++++++++++++++++++++++++++
 2 files changed, 136 insertions(+), 27 deletions(-)
 create mode 100644 include/linux/dma-sync.h

diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index fc6377a64c8d..b6a1e9cc9339 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -12,43 +12,39 @@
 static bool noncoherent_supported __ro_after_init;
 
-void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
+static inline void arch_dma_cache_wback(phys_addr_t paddr, size_t size)
 {
 	void *vaddr = phys_to_virt(paddr);
 
-	switch (dir) {
-	case DMA_TO_DEVICE:
-		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
-		break;
-	case DMA_FROM_DEVICE:
-		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
-		break;
-	case DMA_BIDIRECTIONAL:
-		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
-		break;
-	default:
-		break;
-	}
+	ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
 }
 
-void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
+static inline void arch_dma_cache_inv(phys_addr_t paddr, size_t size)
 {
 	void *vaddr = phys_to_virt(paddr);
 
-	switch (dir) {
-	case DMA_TO_DEVICE:
-		break;
-	case DMA_FROM_DEVICE:
-	case DMA_BIDIRECTIONAL:
-		ALT_CMO_OP(inval, vaddr, size, riscv_cbom_block_size);
-		break;
-	default:
-		break;
-	}
+	ALT_CMO_OP(inval, vaddr, size, riscv_cbom_block_size);
 }
 
+static inline void arch_dma_cache_wback_inv(phys_addr_t paddr, size_t size)
+{
+	void *vaddr = phys_to_virt(paddr);
+
+	ALT_CMO_OP(flush, vaddr, size, riscv_cbom_block_size);
+}
+
+static inline bool arch_sync_dma_clean_before_fromdevice(void)
+{
+	return true;
+}
+
+static inline bool arch_sync_dma_cpu_needs_post_dma_flush(void)
+{
+	return true;
+}
+
+#include <linux/dma-sync.h>
+
 void arch_dma_prep_coherent(struct page *page, size_t size)
 {
 	void *flush_addr = page_address(page);
diff --git a/include/linux/dma-sync.h b/include/linux/dma-sync.h
new file mode 100644
index 000000000000..be23e8dda2e2
--- /dev/null
+++ b/include/linux/dma-sync.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Cache operations depending on function and direction argument, inspired by
+ * https://lore.kernel.org/lkml/20180518175004.GF17671@n2100.armlinux.org.uk
+ * "dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20]
+ * dma-mapping: provide a generic dma-noncoherent implementation)"
+ *
+ *          |   map          ==  for_device    |   unmap     ==  for_cpu
+ *          |--------------------------------------------------------------
+ * TO_DEV   |   writeback        writeback     |   none          none
+ * FROM_DEV |   invalidate       invalidate    |   invalidate*   invalidate*
+ * BIDIR    |   writeback        writeback     |   invalidate    invalidate
+ *
+ *     [*] needed for CPU speculative prefetches
+ *
+ * NOTE: we don't check the validity of direction argument as it is done in
+ * upper layer functions (in include/linux/dma-mapping.h)
+ *
+ * This file can be included by arch/.../kernel/dma-noncoherent.c to provide
+ * the respective high-level operations without having to expose the
+ * cache management ops to drivers.
+ */
+
+#ifndef __DMA_SYNC_H__
+#define __DMA_SYNC_H__
+
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+		enum dma_data_direction dir)
+{
+	switch (dir) {
+	case DMA_TO_DEVICE:
+		/*
+		 * This may be an empty function on write-through caches,
+		 * and it might invalidate the cache if an architecture has
+		 * a write-back cache but no way to write it back without
+		 * invalidating
+		 */
+		arch_dma_cache_wback(paddr, size);
+		break;
+
+	case DMA_FROM_DEVICE:
+		/*
+		 * FIXME: this should be handled the same across all
+		 * architectures, see
+		 * https://lore.kernel.org/all/20220606152150.GA31568@willie-the-truck/
+		 */
+		if (!arch_sync_dma_clean_before_fromdevice()) {
+			arch_dma_cache_inv(paddr, size);
+			break;
+		}
+		fallthrough;
+
+	case DMA_BIDIRECTIONAL:
+		/* Skip the invalidate here if it's done later */
+		if (IS_ENABLED(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) &&
+		    arch_sync_dma_cpu_needs_post_dma_flush())
+			arch_dma_cache_wback(paddr, size);
+		else
+			arch_dma_cache_wback_inv(paddr, size);
+		break;
+
+	default:
+		break;
+	}
+}
+
+#ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU
+/*
+ * Mark the D-cache clean for these pages to avoid extra flushing.
+ */
+static void arch_dma_mark_dcache_clean(phys_addr_t paddr, size_t size)
+{
+#ifdef CONFIG_ARCH_DMA_MARK_DCACHE_CLEAN
+	unsigned long pfn = PFN_UP(paddr);
+	unsigned long off = paddr & (PAGE_SIZE - 1);
+	size_t left = size;
+
+	if (off)
+		left -= PAGE_SIZE - off;
+
+	while (left >= PAGE_SIZE) {
+		struct page *page = pfn_to_page(pfn++);
+
+		set_bit(PG_dcache_clean, &page->flags);
+		left -= PAGE_SIZE;
+	}
+#endif
+}
+
+void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+		enum dma_data_direction dir)
+{
+	switch (dir) {
+	case DMA_TO_DEVICE:
+		break;
+
+	case DMA_FROM_DEVICE:
+	case DMA_BIDIRECTIONAL:
+		/* FROM_DEVICE invalidate needed if speculative CPU prefetch only */
+		if (arch_sync_dma_cpu_needs_post_dma_flush())
+			arch_dma_cache_inv(paddr, size);
+
+		if (size > PAGE_SIZE)
+			arch_dma_mark_dcache_clean(paddr, size);
+		break;
+
+	default:
+		break;
+	}
+}
+#endif
+#endif /* __DMA_SYNC_H__ */
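The page-walk arithmetic in arch_dma_mark_dcache_clean() only marks pages that lie entirely inside the buffer: a partial leading page is skipped via the `off` adjustment, and a partial trailing page falls out of the `left >= PAGE_SIZE` loop condition. A hypothetical stand-alone model (PFN and struct page handling reduced to counting; note the real function is only reached for size > PAGE_SIZE):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/*
 * Model of arch_dma_mark_dcache_clean(): returns how many whole pages
 * within [paddr, paddr + size) would get PG_dcache_clean set. The real
 * code starts at PFN_UP(paddr) (the first page boundary at or above
 * paddr) and marks one page per remaining full PAGE_SIZE chunk.
 */
static size_t pages_marked_clean(unsigned long paddr, size_t size)
{
	unsigned long off = paddr & (PAGE_SIZE - 1);
	size_t left = size;
	size_t pages = 0;

	if (off)
		left -= PAGE_SIZE - off;	/* skip the partial first page */

	while (left >= PAGE_SIZE) {
		pages++;			/* set_bit(PG_dcache_clean, ...) */
		left -= PAGE_SIZE;
	}
	return pages;
}
```

For a page-aligned three-page buffer all three pages are marked; shift the start by a few hundred bytes and both the clipped first page and the now-partial last page are left alone.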