From patchwork Wed Aug 16 23:23:36 2023
X-Patchwork-Submitter: "Lad, Prabhakar"
X-Patchwork-Id: 13355774
From: Prabhakar
X-Google-Original-From: Prabhakar
To: Arnd Bergmann, Christoph Hellwig, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley, Anup Patel, Andrew Jones, Jisheng Zhang, linux-kernel@vger.kernel.org
Cc: Geert Uytterhoeven, Samuel Holland, linux-riscv@lists.infradead.org, linux-renesas-soc@vger.kernel.org, Prabhakar, Lad Prabhakar
Subject: [PATCH v3 3/3] riscv: dma-mapping: switch over to generic implementation
Date: Thu, 17 Aug 2023 00:23:36 +0100
Message-Id: <20230816232336.164413-4-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20230816232336.164413-1-prabhakar.mahadev-lad.rj@bp.renesas.com>
References: <20230816232336.164413-1-prabhakar.mahadev-lad.rj@bp.renesas.com>

From: Lad Prabhakar

Add helper functions for cache wback/inval/wback+inval and use them in the
arch_sync_dma_for_device()/arch_sync_dma_for_cpu() functions. The proposed
changes are in preparation for switching over to the generic implementation.

Reorganization of the code is based on the patch (Link[0]) from Arnd. For
now I have dropped the CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU check, as this will
be enabled by default upon selection of RISCV_DMA_NONCOHERENT, and also
dropped arch_dma_mark_dcache_clean().
Link[0]: https://lore.kernel.org/all/20230327121317.4081816-22-arnd@kernel.org/

Signed-off-by: Arnd Bergmann
Signed-off-by: Lad Prabhakar
---
Hi Arnd,

I have kept the changes minimal here as compared to your original patch
and dropped the arch_dma_mark_dcache_clean() function and config checks
for now, as we are currently implementing this for riscv only.

Cheers,
Prabhakar

v3:
* New patch
---
 arch/riscv/mm/dma-noncoherent.c | 60 ++++++++++++++++++++++++++++-----
 1 file changed, 51 insertions(+), 9 deletions(-)

diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index fc6377a64c8d..06b8fea58e20 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -12,21 +12,61 @@
 
 static bool noncoherent_supported __ro_after_init;
 
-void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
-			      enum dma_data_direction dir)
+static inline void arch_dma_cache_wback(phys_addr_t paddr, size_t size)
+{
+	void *vaddr = phys_to_virt(paddr);
+
+	ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
+}
+
+static inline void arch_dma_cache_inv(phys_addr_t paddr, size_t size)
+{
+	void *vaddr = phys_to_virt(paddr);
+
+	ALT_CMO_OP(inval, vaddr, size, riscv_cbom_block_size);
+}
+
+static inline void arch_dma_cache_wback_inv(phys_addr_t paddr, size_t size)
 {
 	void *vaddr = phys_to_virt(paddr);
 
+	ALT_CMO_OP(flush, vaddr, size, riscv_cbom_block_size);
+}
+
+static inline bool arch_sync_dma_clean_before_fromdevice(void)
+{
+	return true;
+}
+
+static inline bool arch_sync_dma_cpu_needs_post_dma_flush(void)
+{
+	return true;
+}
+
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+			      enum dma_data_direction dir)
+{
 	switch (dir) {
 	case DMA_TO_DEVICE:
-		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
+		arch_dma_cache_wback(paddr, size);
 		break;
+
 	case DMA_FROM_DEVICE:
-		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
-		break;
+		if (!arch_sync_dma_clean_before_fromdevice()) {
+			arch_dma_cache_inv(paddr, size);
+			break;
+		}
+		fallthrough;
+
 	case DMA_BIDIRECTIONAL:
-		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
+		/* Skip the invalidate here if it's done later */
+		if (IS_ENABLED(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) &&
+		    arch_sync_dma_cpu_needs_post_dma_flush())
+			arch_dma_cache_wback(paddr, size);
+		else
+			arch_dma_cache_wback_inv(paddr, size);
 		break;
+
 	default:
 		break;
 	}
@@ -35,15 +75,17 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
 			   enum dma_data_direction dir)
 {
-	void *vaddr = phys_to_virt(paddr);
-
 	switch (dir) {
 	case DMA_TO_DEVICE:
 		break;
+
 	case DMA_FROM_DEVICE:
 	case DMA_BIDIRECTIONAL:
-		ALT_CMO_OP(inval, vaddr, size, riscv_cbom_block_size);
+		/* FROM_DEVICE invalidate needed if speculative CPU prefetch only */
+		if (arch_sync_dma_cpu_needs_post_dma_flush())
+			arch_dma_cache_inv(paddr, size);
 		break;
+
 	default:
 		break;
 	}