From patchwork Tue Aug 22 09:57:06 2017
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 9914719
From: Anup Patel
To: Vinod Koul, Dan Williams
Cc: Florian Fainelli, Scott Branden, Ray Jui,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    dmaengine@vger.kernel.org, bcm-kernel-feedback-list@broadcom.com,
    Anup Patel
Subject: [PATCH v3 17/17] dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
Date: Tue, 22 Aug 2017 15:27:06 +0530
Message-Id: <1503395827-19428-18-git-send-email-anup.patel@broadcom.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1503395827-19428-1-git-send-email-anup.patel@broadcom.com>
References: <1503395827-19428-1-git-send-email-anup.patel@broadcom.com>
X-Mailing-List: dmaengine@vger.kernel.org

The SBA_REQUEST_STATE_COMPLETED state was added to track sba_requests
which have completed but cannot be freed yet because the underlying
Async Tx descriptor has not been ACKed by the DMA client.

Instead, we can free an sba_request even when its Async Tx descriptor
is not yet ACKed, and have sba_alloc_request() ensure that it only
allocates sba_requests whose Async Tx descriptors are already ACKed.

This alternate approach makes the SBA_REQUEST_STATE_COMPLETED state
redundant, hence this patch removes it.

Signed-off-by: Anup Patel
Reviewed-by: Ray Jui
Reviewed-by: Scott Branden
---
 drivers/dma/bcm-sba-raid.c | 63 +++++++++++++---------------------------------
 1 file changed, 17 insertions(+), 46 deletions(-)

diff --git a/drivers/dma/bcm-sba-raid.c b/drivers/dma/bcm-sba-raid.c
index 0c41136..5b1595f 100644
--- a/drivers/dma/bcm-sba-raid.c
+++ b/drivers/dma/bcm-sba-raid.c
@@ -99,8 +99,7 @@ enum sba_request_flags {
 	SBA_REQUEST_STATE_ALLOCED	= 0x002,
 	SBA_REQUEST_STATE_PENDING	= 0x004,
 	SBA_REQUEST_STATE_ACTIVE	= 0x008,
-	SBA_REQUEST_STATE_COMPLETED	= 0x010,
-	SBA_REQUEST_STATE_ABORTED	= 0x020,
+	SBA_REQUEST_STATE_ABORTED	= 0x010,
 	SBA_REQUEST_STATE_MASK		= 0x0ff,
 	SBA_REQUEST_FENCE		= 0x100,
 };
@@ -160,7 +159,6 @@ struct sba_device {
 	struct list_head reqs_alloc_list;
 	struct list_head reqs_pending_list;
 	struct list_head reqs_active_list;
-	struct list_head reqs_completed_list;
 	struct list_head reqs_aborted_list;
 	struct list_head reqs_free_list;
 	/* DebugFS directory entries */
@@ -212,17 +210,21 @@ static void sba_peek_mchans(struct sba_device *sba)
 
 static struct sba_request *sba_alloc_request(struct sba_device *sba)
 {
+	bool found = false;
 	unsigned long flags;
 	struct sba_request *req = NULL;
 
 	spin_lock_irqsave(&sba->reqs_lock, flags);
-	req = list_first_entry_or_null(&sba->reqs_free_list,
-				       struct sba_request, node);
-	if (req)
-		list_move_tail(&req->node, &sba->reqs_alloc_list);
+	list_for_each_entry(req, &sba->reqs_free_list, node) {
+		if (async_tx_test_ack(&req->tx)) {
+			list_move_tail(&req->node, &sba->reqs_alloc_list);
+			found = true;
+			break;
+		}
+	}
 	spin_unlock_irqrestore(&sba->reqs_lock, flags);
 
-	if (!req) {
+	if (!found) {
 		/*
 		 * We have no more free requests so, we peek
 		 * mailbox channels hoping few active requests
@@ -297,18 +299,6 @@ static void _sba_free_request(struct sba_device *sba,
 		sba->reqs_fence = false;
 }
 
-/* Note: Must be called with sba->reqs_lock held */
-static void _sba_complete_request(struct sba_device *sba,
-				  struct sba_request *req)
-{
-	lockdep_assert_held(&sba->reqs_lock);
-	req->flags &= ~SBA_REQUEST_STATE_MASK;
-	req->flags |= SBA_REQUEST_STATE_COMPLETED;
-	list_move_tail(&req->node, &sba->reqs_completed_list);
-	if (list_empty(&sba->reqs_active_list))
-		sba->reqs_fence = false;
-}
-
 static void sba_free_chained_requests(struct sba_request *req)
 {
 	unsigned long flags;
@@ -350,10 +340,6 @@ static void sba_cleanup_nonpending_requests(struct sba_device *sba)
 	list_for_each_entry_safe(req, req1, &sba->reqs_alloc_list, node)
 		_sba_free_request(sba, req);
 
-	/* Freeup all completed request */
-	list_for_each_entry_safe(req, req1, &sba->reqs_completed_list, node)
-		_sba_free_request(sba, req);
-
 	/* Set all active requests as aborted */
 	list_for_each_entry_safe(req, req1, &sba->reqs_active_list, node)
 		_sba_abort_request(sba, req);
@@ -472,20 +458,8 @@ static void sba_process_received_request(struct sba_device *sba,
 			_sba_free_request(sba, nreq);
 		INIT_LIST_HEAD(&first->next);
 
-		/* The client is allowed to attach dependent operations
-		 * until 'ack' is set
-		 */
-		if (!async_tx_test_ack(tx))
-			_sba_complete_request(sba, first);
-		else
-			_sba_free_request(sba, first);
-
-		/* Cleanup completed requests */
-		list_for_each_entry_safe(req, nreq,
-					 &sba->reqs_completed_list, node) {
-			if (async_tx_test_ack(&req->tx))
-				_sba_free_request(sba, req);
-		}
+		/* Free the first request */
+		_sba_free_request(sba, first);
 
 		/* Process pending requests */
 		_sba_process_pending_requests(sba);
@@ -499,13 +473,14 @@ static void sba_write_stats_in_seqfile(struct sba_device *sba,
 {
 	unsigned long flags;
 	struct sba_request *req;
-	u32 free_count = 0, alloced_count = 0, pending_count = 0;
-	u32 active_count = 0, aborted_count = 0, completed_count = 0;
+	u32 free_count = 0, alloced_count = 0;
+	u32 pending_count = 0, active_count = 0, aborted_count = 0;
 
 	spin_lock_irqsave(&sba->reqs_lock, flags);
 
 	list_for_each_entry(req, &sba->reqs_free_list, node)
-		free_count++;
+		if (async_tx_test_ack(&req->tx))
+			free_count++;
 
 	list_for_each_entry(req, &sba->reqs_alloc_list, node)
 		alloced_count++;
@@ -519,9 +494,6 @@ static void sba_write_stats_in_seqfile(struct sba_device *sba,
 	list_for_each_entry(req, &sba->reqs_aborted_list, node)
 		aborted_count++;
 
-	list_for_each_entry(req, &sba->reqs_completed_list, node)
-		completed_count++;
-
 	spin_unlock_irqrestore(&sba->reqs_lock, flags);
 
 	seq_printf(file, "maximum requests = %d\n", sba->max_req);
@@ -530,7 +502,6 @@ static void sba_write_stats_in_seqfile(struct sba_device *sba,
 	seq_printf(file, "pending requests = %d\n", pending_count);
 	seq_printf(file, "active requests = %d\n", active_count);
 	seq_printf(file, "aborted requests = %d\n", aborted_count);
-	seq_printf(file, "completed requests = %d\n", completed_count);
 }
 
 /* ====== DMAENGINE callbacks ===== */
@@ -1537,7 +1508,6 @@ static int sba_prealloc_channel_resources(struct sba_device *sba)
 	INIT_LIST_HEAD(&sba->reqs_alloc_list);
 	INIT_LIST_HEAD(&sba->reqs_pending_list);
 	INIT_LIST_HEAD(&sba->reqs_active_list);
-	INIT_LIST_HEAD(&sba->reqs_completed_list);
 	INIT_LIST_HEAD(&sba->reqs_aborted_list);
 	INIT_LIST_HEAD(&sba->reqs_free_list);
 
@@ -1565,6 +1535,7 @@ static int sba_prealloc_channel_resources(struct sba_device *sba)
 		}
 		memset(&req->msg, 0, sizeof(req->msg));
 		dma_async_tx_descriptor_init(&req->tx, &sba->dma_chan);
+		async_tx_ack(&req->tx);
 		req->tx.tx_submit = sba_tx_submit;
 		req->tx.phys = sba->resp_dma_base + i * sba->hw_resp_size;
 		list_add_tail(&req->node, &sba->reqs_free_list);
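
For readers who want the gist of the new allocation policy without walking the
full diff, the idea can be modeled outside the kernel. What follows is a
minimal, standalone userspace C sketch, not driver code: struct sketch_request,
sketch_alloc() and sketch_free() are made-up names for illustration, and the
real driver's locking, list primitives and mailbox peeking are deliberately
omitted. It only shows how checking the ACK bit at allocation time lets the
completion path free a request unconditionally, with no "completed" state.

/*
 * Userspace model of the allocation policy this patch introduces:
 * a completed request goes straight back to the free pool, and the
 * allocator skips any free entry whose descriptor has not been
 * ACKed yet (async_tx_test_ack() in the driver).
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_REQS 4

struct sketch_request {
	bool in_use;	/* handed out to a client, i.e. not in the free pool */
	bool acked;	/* models async_tx_test_ack(&req->tx) */
	int id;
};

static struct sketch_request pool[NUM_REQS];

/* Mirrors the new loop in sba_alloc_request(): only hand out a request
 * whose previous descriptor has already been ACKed. */
static struct sketch_request *sketch_alloc(void)
{
	for (int i = 0; i < NUM_REQS; i++) {
		if (!pool[i].in_use && pool[i].acked) {
			pool[i].in_use = true;
			pool[i].acked = false;	/* new descriptor not ACKed yet */
			return &pool[i];
		}
	}
	return NULL;	/* nothing both free and ACKed */
}

/* Completion path: free unconditionally, no "completed" parking list. */
static void sketch_free(struct sketch_request *req)
{
	req->in_use = false;
}

int main(void)
{
	for (int i = 0; i < NUM_REQS; i++) {
		pool[i].id = i;
		pool[i].acked = true;	/* like async_tx_ack() at prealloc time */
	}

	struct sketch_request *a = sketch_alloc();
	printf("allocated request %d\n", a->id);

	/* The request completes before the client ACKs it: it is freed
	 * anyway, and the allocator simply skips it for now. */
	sketch_free(a);
	struct sketch_request *b = sketch_alloc();
	printf("next allocation got request %d (request 0 skipped)\n", b->id);

	/* Once the client ACKs, request 0 becomes allocatable again. */
	pool[0].acked = true;
	printf("after ACK, allocation got request %d\n", sketch_alloc()->id);
	return 0;
}

Compiled with, for example, "gcc -std=c99 sketch.c" (the file name is
arbitrary), the output shows the freed-but-unACKed request 0 being skipped
until it is ACKed, which is exactly why the separate COMPLETED list is no
longer needed.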