From patchwork Tue Aug 1 10:37:47 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 9874213
From: Anup Patel
To: Rob Herring, Mark Rutland, Vinod Koul, Dan Williams
Subject: [PATCH v2 03/16] dmaengine: bcm-sba-raid: Common flags for
 sba_request state and fence
Date: Tue, 1 Aug 2017 16:07:47 +0530
Message-Id: <1501583880-32072-4-git-send-email-anup.patel@broadcom.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1501583880-32072-1-git-send-email-anup.patel@broadcom.com>
References: <1501583880-32072-1-git-send-email-anup.patel@broadcom.com>
Cc: devicetree@vger.kernel.org, Florian Fainelli, Anup Patel,
 Scott Branden, Ray Jui, linux-kernel@vger.kernel.org,
 bcm-kernel-feedback-list@broadcom.com, dmaengine@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org

This patch merges the sba_request state and fence fields into a common
sba_request flags field. In future, sba_request flags can also be
extended as required.

Signed-off-by: Anup Patel
---
 drivers/dma/bcm-sba-raid.c | 66 ++++++++++++++++++++++++++--------------------
 1 file changed, 38 insertions(+), 28 deletions(-)

diff --git a/drivers/dma/bcm-sba-raid.c b/drivers/dma/bcm-sba-raid.c
index f81d5ac..6fa3df1 100644
--- a/drivers/dma/bcm-sba-raid.c
+++ b/drivers/dma/bcm-sba-raid.c
@@ -91,22 +91,23 @@
 
 /* ===== Driver data structures ===== */
 
-enum sba_request_state {
-	SBA_REQUEST_STATE_FREE = 1,
-	SBA_REQUEST_STATE_ALLOCED = 2,
-	SBA_REQUEST_STATE_PENDING = 3,
-	SBA_REQUEST_STATE_ACTIVE = 4,
-	SBA_REQUEST_STATE_RECEIVED = 5,
-	SBA_REQUEST_STATE_COMPLETED = 6,
-	SBA_REQUEST_STATE_ABORTED = 7,
+enum sba_request_flags {
+	SBA_REQUEST_STATE_FREE = 0x001,
+	SBA_REQUEST_STATE_ALLOCED = 0x002,
+	SBA_REQUEST_STATE_PENDING = 0x004,
+	SBA_REQUEST_STATE_ACTIVE = 0x008,
+	SBA_REQUEST_STATE_RECEIVED = 0x010,
+	SBA_REQUEST_STATE_COMPLETED = 0x020,
+	SBA_REQUEST_STATE_ABORTED = 0x040,
+	SBA_REQUEST_STATE_MASK = 0x0ff,
+	SBA_REQUEST_FENCE = 0x100,
 };
 
 struct sba_request {
 	/* Global state */
 	struct list_head node;
 	struct sba_device *sba;
-	enum sba_request_state state;
-	bool fence;
+	u32 flags;
 	/* Chained requests management */
 	struct sba_request *first;
 	struct list_head next;
@@ -217,8 +218,7 @@ static struct sba_request *sba_alloc_request(struct sba_device *sba)
 	if (!req)
 		return NULL;
 
-	req->state = SBA_REQUEST_STATE_ALLOCED;
-	req->fence = false;
+	req->flags = SBA_REQUEST_STATE_ALLOCED;
 	req->first = req;
 	INIT_LIST_HEAD(&req->next);
 	req->next_count = 1;
@@ -234,7 +234,8 @@ static void _sba_pending_request(struct sba_device *sba,
 				 struct sba_request *req)
 {
 	lockdep_assert_held(&sba->reqs_lock);
-	req->state = SBA_REQUEST_STATE_PENDING;
+	req->flags &= ~SBA_REQUEST_STATE_MASK;
+	req->flags |= SBA_REQUEST_STATE_PENDING;
 	list_move_tail(&req->node, &sba->reqs_pending_list);
 	if (list_empty(&sba->reqs_active_list))
 		sba->reqs_fence = false;
@@ -249,9 +250,10 @@ static bool _sba_active_request(struct sba_device *sba,
 		sba->reqs_fence = false;
 	if (sba->reqs_fence)
 		return false;
-	req->state = SBA_REQUEST_STATE_ACTIVE;
+	req->flags &= ~SBA_REQUEST_STATE_MASK;
+	req->flags |= SBA_REQUEST_STATE_ACTIVE;
 	list_move_tail(&req->node, &sba->reqs_active_list);
-	if (req->fence)
+	if (req->flags & SBA_REQUEST_FENCE)
 		sba->reqs_fence = true;
 	return true;
 }
@@ -261,7 +263,8 @@ static void _sba_abort_request(struct sba_device *sba,
 			       struct sba_request *req)
 {
 	lockdep_assert_held(&sba->reqs_lock);
-	req->state = SBA_REQUEST_STATE_ABORTED;
+	req->flags &= ~SBA_REQUEST_STATE_MASK;
+	req->flags |= SBA_REQUEST_STATE_ABORTED;
 	list_move_tail(&req->node, &sba->reqs_aborted_list);
 	if (list_empty(&sba->reqs_active_list))
 		sba->reqs_fence = false;
@@ -272,7 +275,8 @@ static void _sba_free_request(struct sba_device *sba,
 			      struct sba_request *req)
 {
 	lockdep_assert_held(&sba->reqs_lock);
-	req->state = SBA_REQUEST_STATE_FREE;
+	req->flags &= ~SBA_REQUEST_STATE_MASK;
+	req->flags |= SBA_REQUEST_STATE_FREE;
 	list_move_tail(&req->node, &sba->reqs_free_list);
 	if (list_empty(&sba->reqs_active_list))
 		sba->reqs_fence = false;
@@ -285,7 +289,8 @@ static void sba_received_request(struct sba_request *req)
 	struct sba_device *sba = req->sba;
 
 	spin_lock_irqsave(&sba->reqs_lock, flags);
-	req->state = SBA_REQUEST_STATE_RECEIVED;
+	req->flags &= ~SBA_REQUEST_STATE_MASK;
+	req->flags |= SBA_REQUEST_STATE_RECEIVED;
 	list_move_tail(&req->node, &sba->reqs_received_list);
 	spin_unlock_irqrestore(&sba->reqs_lock, flags);
 }
@@ -298,10 +303,12 @@ static void sba_complete_chained_requests(struct sba_request *req)
 
 	spin_lock_irqsave(&sba->reqs_lock, flags);
 
-	req->state = SBA_REQUEST_STATE_COMPLETED;
+	req->flags &= ~SBA_REQUEST_STATE_MASK;
+	req->flags |= SBA_REQUEST_STATE_COMPLETED;
 	list_move_tail(&req->node, &sba->reqs_completed_list);
 	list_for_each_entry(nreq, &req->next, next) {
-		nreq->state = SBA_REQUEST_STATE_COMPLETED;
+		nreq->flags &= ~SBA_REQUEST_STATE_MASK;
+		nreq->flags |= SBA_REQUEST_STATE_COMPLETED;
 		list_move_tail(&nreq->node, &sba->reqs_completed_list);
 	}
 
 	if (list_empty(&sba->reqs_active_list))
@@ -576,7 +583,7 @@ sba_prep_dma_interrupt(struct dma_chan *dchan, unsigned long flags)
 	 * Force fence so that no requests are submitted
 	 * until DMA callback for this request is invoked.
 	 */
-	req->fence = true;
+	req->flags |= SBA_REQUEST_FENCE;
 
 	/* Fillup request message */
 	sba_fillup_interrupt_msg(req, req->cmds, &req->msg);
@@ -659,7 +666,8 @@ sba_prep_dma_memcpy_req(struct sba_device *sba,
 	req = sba_alloc_request(sba);
 	if (!req)
 		return NULL;
-	req->fence = (flags & DMA_PREP_FENCE) ? true : false;
+	if (flags & DMA_PREP_FENCE)
+		req->flags |= SBA_REQUEST_FENCE;
 
 	/* Fillup request message */
 	sba_fillup_memcpy_msg(req, req->cmds, &req->msg,
@@ -796,7 +804,8 @@ sba_prep_dma_xor_req(struct sba_device *sba,
 	req = sba_alloc_request(sba);
 	if (!req)
 		return NULL;
-	req->fence = (flags & DMA_PREP_FENCE) ? true : false;
+	if (flags & DMA_PREP_FENCE)
+		req->flags |= SBA_REQUEST_FENCE;
 
 	/* Fillup request message */
 	sba_fillup_xor_msg(req, req->cmds, &req->msg,
@@ -1005,7 +1014,8 @@ sba_prep_dma_pq_req(struct sba_device *sba, dma_addr_t off,
 	req = sba_alloc_request(sba);
 	if (!req)
 		return NULL;
-	req->fence = (flags & DMA_PREP_FENCE) ? true : false;
+	if (flags & DMA_PREP_FENCE)
+		req->flags |= SBA_REQUEST_FENCE;
 
 	/* Fillup request messages */
 	sba_fillup_pq_msg(req, dmaf_continue(flags),
@@ -1258,7 +1268,8 @@ sba_prep_dma_pq_single_req(struct sba_device *sba, dma_addr_t off,
 	req = sba_alloc_request(sba);
 	if (!req)
 		return NULL;
-	req->fence = (flags & DMA_PREP_FENCE) ? true : false;
+	if (flags & DMA_PREP_FENCE)
+		req->flags |= SBA_REQUEST_FENCE;
 
 	/* Fillup request messages */
 	sba_fillup_pq_single_msg(req, dmaf_continue(flags),
@@ -1425,7 +1436,7 @@ static void sba_receive_message(struct mbox_client *cl, void *msg)
 	req = req->first;
 
 	/* Update request */
-	if (req->state == SBA_REQUEST_STATE_RECEIVED)
+	if (req->flags & SBA_REQUEST_STATE_RECEIVED)
 		sba_dma_tx_actions(req);
 	else
 		sba_free_chained_requests(req);
@@ -1488,11 +1499,10 @@ static int sba_prealloc_channel_resources(struct sba_device *sba)
 		req = &sba->reqs[i];
 		INIT_LIST_HEAD(&req->node);
 		req->sba = sba;
-		req->state = SBA_REQUEST_STATE_FREE;
+		req->flags = SBA_REQUEST_STATE_FREE;
 		INIT_LIST_HEAD(&req->next);
 		req->next_count = 1;
 		atomic_set(&req->next_pending_count, 0);
-		req->fence = false;
 		req->resp = sba->resp_base + p;
 		req->resp_dma = sba->resp_dma_base + p;
 		p += sba->hw_resp_size;