From patchwork Thu Feb 15 14:52:00 2018
X-Patchwork-Submitter: Mikko Perttunen
X-Patchwork-Id: 10221461
From: Mikko Perttunen
Subject: [PATCH v3 1/7] firmware: tegra: Simplify channel management
Date: Thu, 15 Feb 2018 16:52:00 +0200
Message-ID: <20180215145206.24775-2-mperttunen@nvidia.com>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20180215145206.24775-1-mperttunen@nvidia.com>
References: <20180215145206.24775-1-mperttunen@nvidia.com>
Cc: devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, Mikko Perttunen, talho@nvidia.com, linux-tegra@vger.kernel.org, linux-arm-kernel@lists.infradead.org

The Tegra194 BPMP only implements 5 channels (4 to BPMP, 1 to CCPLEX), and
they are not placed contiguously in memory. The current channel management
in the BPMP driver does not support this.

Simplify and refactor the channel management such that only one atomic
transmit channel and one receive channel are supported, and channels are
not required to be placed contiguously in memory. The same configuration
also works on T186 so we end up with less code.

Signed-off-by: Mikko Perttunen
---
 drivers/firmware/tegra/bpmp.c | 142 +++++++++++++++++++-----------------------
 include/soc/tegra/bpmp.h      |   4 +-
 2 files changed, 66 insertions(+), 80 deletions(-)

diff --git a/drivers/firmware/tegra/bpmp.c b/drivers/firmware/tegra/bpmp.c
index a7f461f2e650..81bc2dce8626 100644
--- a/drivers/firmware/tegra/bpmp.c
+++ b/drivers/firmware/tegra/bpmp.c
@@ -70,57 +70,20 @@ void tegra_bpmp_put(struct tegra_bpmp *bpmp)
 }
 EXPORT_SYMBOL_GPL(tegra_bpmp_put);
 
-static int tegra_bpmp_channel_get_index(struct tegra_bpmp_channel *channel)
-{
-	return channel - channel->bpmp->channels;
-}
-
 static int
 tegra_bpmp_channel_get_thread_index(struct tegra_bpmp_channel *channel)
 {
 	struct tegra_bpmp *bpmp = channel->bpmp;
-	unsigned int offset, count;
+	unsigned int count;
 	int index;
 
-	offset = bpmp->soc->channels.thread.offset;
 	count = bpmp->soc->channels.thread.count;
 
-	index = tegra_bpmp_channel_get_index(channel);
-	if (index < 0)
-		return index;
-
-	if (index < offset || index >= offset + count)
+	index = channel - channel->bpmp->threaded_channels;
+	if (index < 0 || index >= count)
 		return -EINVAL;
 
-	return index - offset;
-}
-
-static struct tegra_bpmp_channel *
-tegra_bpmp_channel_get_thread(struct tegra_bpmp *bpmp, unsigned int index)
-{
-	unsigned int offset = bpmp->soc->channels.thread.offset;
-	unsigned int count = bpmp->soc->channels.thread.count;
-
-	if (index >= count)
-		return NULL;
-
-	return &bpmp->channels[offset + index];
-}
-
-static struct tegra_bpmp_channel *
-tegra_bpmp_channel_get_tx(struct tegra_bpmp *bpmp)
-{
-	unsigned int offset = bpmp->soc->channels.cpu_tx.offset;
-
-	return &bpmp->channels[offset + smp_processor_id()];
-}
-
-static struct tegra_bpmp_channel *
-tegra_bpmp_channel_get_rx(struct tegra_bpmp *bpmp)
-{
-	unsigned int offset = bpmp->soc->channels.cpu_rx.offset;
-
-	return &bpmp->channels[offset];
+	return index;
 }
 
 static bool tegra_bpmp_message_valid(const struct tegra_bpmp_message *msg)
@@ -271,11 +234,7 @@ tegra_bpmp_write_threaded(struct tegra_bpmp *bpmp, unsigned int mrq,
 		goto unlock;
 	}
 
-	channel = tegra_bpmp_channel_get_thread(bpmp, index);
-	if (!channel) {
-		err = -EINVAL;
-		goto unlock;
-	}
+	channel = &bpmp->threaded_channels[index];
 
 	if (!tegra_bpmp_master_free(channel)) {
 		err = -EBUSY;
@@ -328,12 +287,18 @@ int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
 	if (!tegra_bpmp_message_valid(msg))
 		return -EINVAL;
 
-	channel = tegra_bpmp_channel_get_tx(bpmp);
+	channel = bpmp->tx_channel;
+
+	spin_lock(&bpmp->atomic_tx_lock);
 
 	err = tegra_bpmp_channel_write(channel, msg->mrq, MSG_ACK,
 				       msg->tx.data, msg->tx.size);
-	if (err < 0)
+	if (err < 0) {
+		spin_unlock(&bpmp->atomic_tx_lock);
 		return err;
+	}
+
+	spin_unlock(&bpmp->atomic_tx_lock);
 
 	err = mbox_send_message(bpmp->mbox.channel, NULL);
 	if (err < 0)
@@ -607,7 +572,7 @@ static void tegra_bpmp_handle_rx(struct mbox_client *client, void *data)
 	unsigned int i, count;
 	unsigned long *busy;
 
-	channel = tegra_bpmp_channel_get_rx(bpmp);
+	channel = bpmp->rx_channel;
 	count = bpmp->soc->channels.thread.count;
 	busy = bpmp->threaded.busy;
@@ -619,9 +584,7 @@ static void tegra_bpmp_handle_rx(struct mbox_client *client, void *data)
 	for_each_set_bit(i, busy, count) {
 		struct tegra_bpmp_channel *channel;
 
-		channel = tegra_bpmp_channel_get_thread(bpmp, i);
-		if (!channel)
-			continue;
+		channel = &bpmp->threaded_channels[i];
 
 		if (tegra_bpmp_master_acked(channel)) {
 			tegra_bpmp_channel_signal(channel);
@@ -698,7 +661,6 @@ static void tegra_bpmp_channel_cleanup(struct tegra_bpmp_channel *channel)
 
 static int tegra_bpmp_probe(struct platform_device *pdev)
 {
-	struct tegra_bpmp_channel *channel;
 	struct tegra_bpmp *bpmp;
 	unsigned int i;
 	char tag[32];
@@ -758,24 +720,45 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 		goto free_rx;
 	}
 
-	bpmp->num_channels = bpmp->soc->channels.cpu_tx.count +
-			     bpmp->soc->channels.thread.count +
-			     bpmp->soc->channels.cpu_rx.count;
+	spin_lock_init(&bpmp->atomic_tx_lock);
+	bpmp->tx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->tx_channel),
+					GFP_KERNEL);
+	if (!bpmp->tx_channel) {
+		err = -ENOMEM;
+		goto free_rx;
+	}
 
-	bpmp->channels = devm_kcalloc(&pdev->dev, bpmp->num_channels,
-				      sizeof(*channel), GFP_KERNEL);
-	if (!bpmp->channels) {
+	bpmp->rx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->rx_channel),
+					GFP_KERNEL);
+	if (!bpmp->rx_channel) {
 		err = -ENOMEM;
 		goto free_rx;
 	}
 
-	/* message channel initialization */
-	for (i = 0; i < bpmp->num_channels; i++) {
-		struct tegra_bpmp_channel *channel = &bpmp->channels[i];
+	bpmp->threaded_channels = devm_kcalloc(&pdev->dev, bpmp->threaded.count,
+					       sizeof(*bpmp->threaded_channels),
+					       GFP_KERNEL);
+	if (!bpmp->threaded_channels) {
+		err = -ENOMEM;
+		goto free_rx;
+	}
 
-		err = tegra_bpmp_channel_init(channel, bpmp, i);
+	err = tegra_bpmp_channel_init(bpmp->tx_channel, bpmp,
+				      bpmp->soc->channels.cpu_tx.offset);
+	if (err < 0)
+		goto free_rx;
+
+	err = tegra_bpmp_channel_init(bpmp->rx_channel, bpmp,
+				      bpmp->soc->channels.cpu_rx.offset);
+	if (err < 0)
+		goto cleanup_tx_channel;
+
+	for (i = 0; i < bpmp->threaded.count; i++) {
+		err = tegra_bpmp_channel_init(
+			&bpmp->threaded_channels[i], bpmp,
+			bpmp->soc->channels.thread.offset + i);
 		if (err < 0)
-			goto cleanup_channels;
+			goto cleanup_threaded_channels;
 	}
 
 	/* mbox registration */
@@ -788,15 +771,14 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 	if (IS_ERR(bpmp->mbox.channel)) {
 		err = PTR_ERR(bpmp->mbox.channel);
 		dev_err(&pdev->dev, "failed to get HSP mailbox: %d\n", err);
-		goto cleanup_channels;
+		goto cleanup_threaded_channels;
 	}
 
 	/* reset message channels */
-	for (i = 0; i < bpmp->num_channels; i++) {
-		struct tegra_bpmp_channel *channel = &bpmp->channels[i];
-
-		tegra_bpmp_channel_reset(channel);
-	}
+	tegra_bpmp_channel_reset(bpmp->tx_channel);
+	tegra_bpmp_channel_reset(bpmp->rx_channel);
+	for (i = 0; i < bpmp->threaded.count; i++)
+		tegra_bpmp_channel_reset(&bpmp->threaded_channels[i]);
 
 	err = tegra_bpmp_request_mrq(bpmp, MRQ_PING,
 				     tegra_bpmp_mrq_handle_ping, bpmp);
@@ -845,9 +827,15 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 	tegra_bpmp_free_mrq(bpmp, MRQ_PING, bpmp);
 free_mbox:
 	mbox_free_channel(bpmp->mbox.channel);
-cleanup_channels:
-	while (i--)
-		tegra_bpmp_channel_cleanup(&bpmp->channels[i]);
+cleanup_threaded_channels:
+	for (i = 0; i < bpmp->threaded.count; i++) {
+		if (bpmp->threaded_channels[i].bpmp)
+			tegra_bpmp_channel_cleanup(&bpmp->threaded_channels[i]);
+	}
+
+	tegra_bpmp_channel_cleanup(bpmp->rx_channel);
+cleanup_tx_channel:
+	tegra_bpmp_channel_cleanup(bpmp->tx_channel);
 free_rx:
 	gen_pool_free(bpmp->rx.pool, (unsigned long)bpmp->rx.virt, 4096);
 free_tx:
@@ -858,18 +846,16 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 static const struct tegra_bpmp_soc tegra186_soc = {
 	.channels = {
 		.cpu_tx = {
-			.offset = 0,
-			.count = 6,
+			.offset = 3,
 			.timeout = 60 * USEC_PER_SEC,
 		},
 		.thread = {
-			.offset = 6,
-			.count = 7,
+			.offset = 0,
+			.count = 3,
 			.timeout = 600 * USEC_PER_SEC,
 		},
 		.cpu_rx = {
 			.offset = 13,
-			.count = 1,
 			.timeout = 0,
 		},
 	},
diff --git a/include/soc/tegra/bpmp.h b/include/soc/tegra/bpmp.h
index aeae4466dd25..e69e4c4d80ae 100644
--- a/include/soc/tegra/bpmp.h
+++ b/include/soc/tegra/bpmp.h
@@ -75,8 +75,8 @@ struct tegra_bpmp {
 		struct mbox_chan *channel;
 	} mbox;
 
-	struct tegra_bpmp_channel *channels;
-	unsigned int num_channels;
+	spinlock_t atomic_tx_lock;
+	struct tegra_bpmp_channel *tx_channel, *rx_channel, *threaded_channels;
 
 	struct {
 		unsigned long *allocated;
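For context, the sketch below shows how a caller issues an MRQ over the single
transmit channel after this change; it is not part of the patch. It is modeled
on the driver's own MRQ_PING path, and the mrq_ping_request/mrq_ping_response
layout is assumed from the BPMP ABI header rather than taken from this diff.

/*
 * Example only (not part of this patch): issuing an MRQ via the single
 * bpmp->tx_channel. The request/response structs are assumed from the
 * BPMP ABI header (soc/tegra/bpmp-abi.h).
 */
#include <soc/tegra/bpmp.h>
#include <soc/tegra/bpmp-abi.h>

static int example_ping(struct tegra_bpmp *bpmp)
{
	struct mrq_ping_request request = { .challenge = 1 };
	struct mrq_ping_response response;
	struct tegra_bpmp_message msg = {
		.mrq = MRQ_PING,
		.tx = {
			.data = &request,
			.size = sizeof(request),
		},
		.rx = {
			.data = &response,
			.size = sizeof(response),
		},
	};

	/*
	 * With this patch, the atomic path takes bpmp->atomic_tx_lock and
	 * writes to the single bpmp->tx_channel instead of selecting a
	 * per-CPU channel with smp_processor_id().
	 */
	return tegra_bpmp_transfer_atomic(bpmp, &msg);
}

Because all atomic transfers are now serialized on atomic_tx_lock, callers no
longer depend on the per-CPU channel layout that Tegra194 does not provide.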