From patchwork Mon Dec 20 19:56:36 2021
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12696873
Subject: [PATCH v8 01/11] firmware: arm_scmi: Add configurable polling mode for transports
Date: Mon, 20 Dec 2021 19:56:36 +0000
Message-Id: <20211220195646.44498-2-cristian.marussi@arm.com>
In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com>

SCMI communications along TX channels can optionally be provided with a
completion interrupt; when such an interrupt is not available, command
transactions must rely on polling, where the SCMI core takes care to
repeatedly evaluate the
transport-specific .poll_done() function, if available, to determine if and
when a request has fully completed or timed out.

Such a mechanism is already present and works on a per-transfer basis: SCMI
protocols can indeed enable hdr.poll_completion on specific commands ahead
of each transfer and cause that transaction to be handled with polling.

Introduce a couple of flags to be able to enforce such polling behaviour
globally at will:

 - scmi_desc.force_polling: to statically switch the whole transport to
   polling mode.

 - scmi_chan_info.no_completion_irq: to switch a single channel dynamically
   to polling mode if, at runtime, it is determined that no completion
   interrupt is available for that channel.

Signed-off-by: Cristian Marussi
---
v7 --> v8
- removed internal polling_enabled flag, using macros
v5 --> v6
- removed check on replies received by IRQs when xfer was requested as
  poll_completion (not all transports can suppress IRQs on an xfer basis)
v4 --> v5
- make force_polling const
- introduce polling_enabled flag to simplify checks on do_xfer
v3 --> v4:
- renamed .needs_polling flag to .no_completion_irq
- refactored error path when polling needed but not supported
---
 drivers/firmware/arm_scmi/common.h |  8 ++++++++
 drivers/firmware/arm_scmi/driver.c | 32 ++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 6438b5248c24..652e5d95ee65 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -339,11 +339,16 @@ void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);
  * @dev: Reference to device in the SCMI hierarchy corresponding to this
  *	 channel
  * @handle: Pointer to SCMI entity handle
+ * @no_completion_irq: Flag to indicate that this channel has no completion
+ *		       interrupt mechanism for synchronous commands.
+ *		       This can be dynamically set by transports at run-time
+ *		       inside their provided .chan_setup().
  * @transport_info: Transport layer related information
  */
 struct scmi_chan_info {
 	struct device *dev;
 	struct scmi_handle *handle;
+	bool no_completion_irq;
 	void *transport_info;
 };

@@ -402,6 +407,8 @@ struct scmi_device *scmi_child_dev_find(struct device *parent,
  *	     be pending simultaneously in the system. May be overridden by the
  *	     get_max_msg op.
  * @max_msg_size: Maximum size of data per message that can be handled.
+ * @force_polling: Flag to force this whole transport to use SCMI core polling
+ *		   mechanism instead of completion interrupts even if available.
  */
 struct scmi_desc {
 	int (*transport_init)(void);
@@ -410,6 +417,7 @@ struct scmi_desc {
 	int max_rx_timeout_ms;
 	int max_msg;
 	int max_msg_size;
+	const bool force_polling;
 };

 #ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index 476b91845e40..cb9fc12503f2 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -36,6 +36,23 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/scmi.h>

+#define IS_POLLING_REQUIRED(__c, __i)					\
+	((__c)->no_completion_irq || (__i)->desc->force_polling)	\
+
+#define IS_TRANSPORT_POLLING_CAPABLE(__i)				\
+	((__i)->desc->ops->poll_done)
+
+#define IS_POLLING_ENABLED(__c, __i)					\
+({									\
+	bool __ret;							\
+	typeof(__c) c_ = __c;						\
+	typeof(__i) i_ = __i;						\
+									\
+	__ret = (IS_POLLING_REQUIRED(c_, i_) &&				\
+		 IS_TRANSPORT_POLLING_CAPABLE(i_));			\
+	__ret;								\
+})
+
 enum scmi_error_codes {
 	SCMI_SUCCESS = 0,	/* Success */
 	SCMI_ERR_SUPPORT = -1,	/* Not supported */
@@ -817,6 +834,7 @@ static int do_xfer(const struct scmi_protocol_handle *ph,
 	struct device *dev = info->dev;
 	struct scmi_chan_info *cinfo;

+	/* Check for polling request on custom command xfers at first */
 	if (xfer->hdr.poll_completion && !info->desc->ops->poll_done) {
 		dev_warn_once(dev,
 			      "Polling mode is not supported by transport.\n");
@@ -827,6 +845,10 @@ static int do_xfer(const struct scmi_protocol_handle *ph,
 	if (unlikely(!cinfo))
 		return -EINVAL;

+	/* True ONLY if also supported by transport. */
+	if (IS_POLLING_ENABLED(cinfo, info))
+		xfer->hdr.poll_completion = true;
+
 	/*
	 * Initialise protocol id now from protocol handle to avoid it being
	 * overridden by mistake (or malice) by the protocol code mangling with
@@ -1527,6 +1549,16 @@ static int scmi_chan_setup(struct scmi_info *info, struct device *dev,
 	if (ret)
 		return ret;

+	if (tx && IS_POLLING_REQUIRED(cinfo, info)) {
+		if (IS_TRANSPORT_POLLING_CAPABLE(info))
+			dev_info(dev,
+				 "Enabled polling mode TX channel - prot_id:%d\n",
+				 prot_id);
+		else
+			dev_warn(dev,
+				 "Polling mode NOT supported by transport.\n");
+	}
+
 idr_alloc:
 	ret = idr_alloc(idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL);
 	if (ret != prot_id) {
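As an illustration of how a transport could consume these two new flags, here
is a minimal hypothetical sketch; the xyz_* identifiers and the "xyz-tx-done"
IRQ name are invented for the example, only the flag usage mirrors the patch:

#include <linux/of_irq.h>

#include "common.h"

/*
 * Hypothetical transport channel setup: when no TX-done interrupt can be
 * found for this channel, flag it so the core switches that channel to
 * polling mode at runtime.
 */
static int xyz_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
			  bool tx)
{
	int irq = of_irq_get_byname(dev->of_node, "xyz-tx-done"); /* invented */

	if (irq < 0)
		cinfo->no_completion_irq = true;

	return 0;
}

static const struct scmi_transport_ops scmi_xyz_ops = {
	.chan_setup = xyz_chan_setup,
	/* a .poll_done op must also be provided for polling to be usable */
};

/* Alternatively, polling can be forced statically for the whole transport. */
const struct scmi_desc scmi_xyz_desc = {
	.ops = &scmi_xyz_ops,
	.max_rx_timeout_ms = 30,
	.max_msg = 20,
	.max_msg_size = 128,
	.force_polling = true,
};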
From patchwork Mon Dec 20 19:56:37 2021
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12696874
Subject: [PATCH v8 02/11] firmware: arm_scmi: Make smc transport use common completions
Date: Mon, 20 Dec 2021 19:56:37 +0000
Message-Id: <20211220195646.44498-3-cristian.marussi@arm.com>
In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com>

When a completion IRQ is available, use it and delegate command completion
handling to the core SCMI completion mechanism. If no completion IRQ is
available, revert to polling, using the core common polling machinery.

Reviewed-by: Florian Fainelli
Signed-off-by: Cristian Marussi
---
v6 --> v7
- removed spurious blank line removal
v4 --> v5
- removed RFC tag
v3 --> v4
- renamed usage of .needs_polling to .no_completion_irq
---
 drivers/firmware/arm_scmi/smc.c | 39 +++++++++++++++++----------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c
index 4effecc3bb46..d6c6ad9f6bab 100644
--- a/drivers/firmware/arm_scmi/smc.c
+++ b/drivers/firmware/arm_scmi/smc.c
@@ -25,8 +25,6 @@
  * @shmem: Transmit/Receive shared memory area
  * @shmem_lock: Lock to protect access to Tx/Rx shared memory area
  * @func_id: smc/hvc call function id
- * @irq: Optional; employed when platforms indicates msg completion by intr.
- * @tx_complete: Optional, employed only when irq is valid.
  */

 struct scmi_smc {
@@ -34,15 +32,14 @@ struct scmi_smc {
 	struct scmi_shared_mem __iomem *shmem;
 	struct mutex shmem_lock;
 	u32 func_id;
-	int irq;
-	struct completion tx_complete;
 };

 static irqreturn_t smc_msg_done_isr(int irq, void *data)
 {
 	struct scmi_smc *scmi_info = data;

-	complete(&scmi_info->tx_complete);
+	scmi_rx_callback(scmi_info->cinfo,
+			 shmem_read_header(scmi_info->shmem), NULL);

 	return IRQ_HANDLED;
 }
@@ -111,8 +108,8 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
 			dev_err(dev, "failed to setup SCMI smc irq\n");
 			return ret;
 		}
-		init_completion(&scmi_info->tx_complete);
-		scmi_info->irq = irq;
+	} else {
+		cinfo->no_completion_irq = true;
 	}

 	scmi_info->func_id = func_id;
@@ -142,26 +139,22 @@ static int smc_send_message(struct scmi_chan_info *cinfo,
 	struct scmi_smc *scmi_info = cinfo->transport_info;
 	struct arm_smccc_res res;

+	/*
+	 * Channel lock will be released only once response has been
+	 * surely fully retrieved, so after .mark_txdone()
+	 */
 	mutex_lock(&scmi_info->shmem_lock);

 	shmem_tx_prepare(scmi_info->shmem, xfer);

-	if (scmi_info->irq)
-		reinit_completion(&scmi_info->tx_complete);
-
 	arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res);

-	if (scmi_info->irq)
-		wait_for_completion(&scmi_info->tx_complete);
-
-	scmi_rx_callback(scmi_info->cinfo,
-			 shmem_read_header(scmi_info->shmem), NULL);
-
-	mutex_unlock(&scmi_info->shmem_lock);
-
 	/* Only SMCCC_RET_NOT_SUPPORTED is valid error code */
-	if (res.a0)
+	if (res.a0) {
+		mutex_unlock(&scmi_info->shmem_lock);
 		return -EOPNOTSUPP;
+	}
+
 	return 0;
 }

@@ -173,6 +166,13 @@ static void smc_fetch_response(struct scmi_chan_info *cinfo,
 	shmem_fetch_response(scmi_info->shmem, xfer);
 }

+static void smc_mark_txdone(struct scmi_chan_info *cinfo, int ret)
+{
+	struct scmi_smc *scmi_info = cinfo->transport_info;
+
+	mutex_unlock(&scmi_info->shmem_lock);
+}
+
 static bool
 smc_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
 {
@@ -186,6 +186,7 @@ static const struct scmi_transport_ops scmi_smc_ops = {
 	.chan_setup = smc_chan_setup,
 	.chan_free = smc_chan_free,
 	.send_message = smc_send_message,
+	.mark_txdone = smc_mark_txdone,
 	.fetch_response = smc_fetch_response,
 	.poll_done = smc_poll_done,
 };
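Taken together with patch 01, the per-command flow the core now drives on this
transport can be summarized as follows; this is an illustrative outline, not
code from the series:

/*
 * Illustrative per-command sequence after this change:
 *
 *   .send_message()   ->  lock shmem, shmem_tx_prepare(), SMC/HVC call
 *   IRQ available     ->  smc_msg_done_isr() -> scmi_rx_callback(), which
 *                         completes the xfer via the core completion
 *   no IRQ            ->  core busy-waits on .poll_done() until the
 *                         platform marks the channel free in shmem
 *   .fetch_response() ->  read the reply back from shmem
 *   .mark_txdone()    ->  unlock shmem: only now may the next command go,
 *                         guaranteeing the response was fully retrieved
 */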
From patchwork Mon Dec 20 19:56:38 2021
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12696875
Subject: [PATCH v8 03/11] firmware: arm_scmi: Add sync_cmds_completed_on_ret transport flag
Date: Mon, 20 Dec 2021 19:56:38 +0000
Message-Id: <20211220195646.44498-4-cristian.marussi@arm.com>
In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com>

Add a flag to let the transport signal to the core that its handling of
synchronous commands implies that, after .send_message has returned
successfully, the requested command can be assumed to be fully and
completely executed on the SCMI platform side, so that any possible
response value is already immediately available to be retrieved by
.fetch_response: in other words, the polling phase can be skipped in such
a case and the response values accessed straight away.

Note that all of the above applies only when polling mode of operation was
selected by the core: if instead a completion IRQ was found to be
available, the normal response processing path based on completions will
still be followed.

Signed-off-by: Cristian Marussi
---
v7 --> v8
- renaming to sync_cmds_completed_on_ret
- removed polling_capable flag, using macros
v5 --> v6
- added polling_capable helper flag
v4 --> v5
- removed RFC tag
- consider sync_cmds_atomic_replies flag when deciding if polling is to be
  supported and .poll_done() is not provided.
- reviewed commit message
---
 drivers/firmware/arm_scmi/common.h |  8 ++++++
 drivers/firmware/arm_scmi/driver.c | 40 ++++++++++++++++++++++--------
 2 files changed, 38 insertions(+), 10 deletions(-)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 652e5d95ee65..24b1d1ac5f12 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -409,6 +409,13 @@ struct scmi_device *scmi_child_dev_find(struct device *parent,
  * @max_msg_size: Maximum size of data per message that can be handled.
  * @force_polling: Flag to force this whole transport to use SCMI core polling
  *		   mechanism instead of completion interrupts even if available.
+ * @sync_cmds_completed_on_ret: Flag to indicate that the transport assures
+ *				synchronous-command messages are atomically
+ *				completed on .send_message: no need to poll
+ *				actively waiting for a response.
+ *				Used by core internally only when polling is
+ *				selected as a waiting for reply method: i.e.
+ *				if a completion irq was found use that anyway.
  */
 struct scmi_desc {
 	int (*transport_init)(void);
@@ -418,6 +425,7 @@ struct scmi_desc {
 	int max_msg;
 	int max_msg_size;
 	const bool force_polling;
+	const bool sync_cmds_completed_on_ret;
 };

 #ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index cb9fc12503f2..d4dd42ebc5e8 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -40,7 +40,14 @@
 	((__c)->no_completion_irq || (__i)->desc->force_polling)	\

 #define IS_TRANSPORT_POLLING_CAPABLE(__i)				\
-	((__i)->desc->ops->poll_done)
+({									\
+	bool __ret;							\
+	typeof(__i) i_ = __i;						\
+									\
+	__ret = ((i_)->desc->ops->poll_done ||				\
+		 (i_)->desc->sync_cmds_completed_on_ret);		\
+	__ret;								\
+})

 #define IS_POLLING_ENABLED(__c, __i)					\
 ({									\
@@ -780,10 +787,28 @@ static int scmi_wait_for_message_response(struct scmi_chan_info *cinfo,
 				      xfer->hdr.poll_completion);

 	if (xfer->hdr.poll_completion) {
-		ktime_t stop = ktime_add_ms(ktime_get(), timeout_ms);
+		/*
+		 * Real polling is needed only if transport has NOT declared
+		 * itself to support synchronous commands replies.
+		 */
+		if (!info->desc->sync_cmds_completed_on_ret) {
+			/*
+			 * Poll on xfer using transport provided .poll_done();
+			 * assumes no completion interrupt was available.
+			 */
+			ktime_t stop = ktime_add_ms(ktime_get(), timeout_ms);
+
+			spin_until_cond(scmi_xfer_done_no_timeout(cinfo,
+								  xfer, stop));
+			if (ktime_after(ktime_get(), stop)) {
+				dev_err(dev,
+					"timed out in resp(caller: %pS) - polling\n",
+					(void *)_RET_IP_);
+				ret = -ETIMEDOUT;
+			}
+		}

-		spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer, stop));
-		if (ktime_before(ktime_get(), stop)) {
+		if (!ret) {
 			unsigned long flags;

 			/*
@@ -796,11 +821,6 @@ static int scmi_wait_for_message_response(struct scmi_chan_info *cinfo,
 				xfer->state = SCMI_XFER_RESP_OK;
 			}
 			spin_unlock_irqrestore(&xfer->lock, flags);
-		} else {
-			dev_err(dev,
-				"timed out in resp(caller: %pS) - polling\n",
-				(void *)_RET_IP_);
-			ret = -ETIMEDOUT;
 		}
 	} else {
 		/* And we wait for the response.
		 */
@@ -835,7 +855,7 @@ static int do_xfer(const struct scmi_protocol_handle *ph,
 	struct scmi_chan_info *cinfo;

 	/* Check for polling request on custom command xfers at first */
-	if (xfer->hdr.poll_completion && !info->desc->ops->poll_done) {
+	if (xfer->hdr.poll_completion && !IS_TRANSPORT_POLLING_CAPABLE(info)) {
 		dev_warn_once(dev,
 			      "Polling mode is not supported by transport.\n");
 		return -EINVAL;
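The resulting wait logic can be condensed into a rough sketch; this is a
simplified illustration of the decision flow, not the actual kernel code
(locking and state handling of the real scmi_wait_for_message_response()
are omitted):

/* Illustrative sketch of the response-wait decision after this patch. */
static int scmi_wait_sketch(struct scmi_info *info,
			    struct scmi_chan_info *cinfo,
			    struct scmi_xfer *xfer, unsigned int timeout_ms)
{
	int ret = 0;

	if (xfer->hdr.poll_completion) {
		if (!info->desc->sync_cmds_completed_on_ret) {
			/* Transport cannot complete on return: busy-wait. */
			ktime_t stop = ktime_add_ms(ktime_get(), timeout_ms);

			spin_until_cond(info->desc->ops->poll_done(cinfo, xfer) ||
					ktime_after(ktime_get(), stop));
			if (ktime_after(ktime_get(), stop))
				ret = -ETIMEDOUT;
		}
		/*
		 * Otherwise .send_message() returned with the command already
		 * fully executed: the response is ready to fetch right away.
		 */
		if (!ret)
			xfer->state = SCMI_XFER_RESP_OK;
	} else {
		/* Completion-IRQ path: sleep until scmi_rx_callback() fires */
		if (!wait_for_completion_timeout(&xfer->done,
						 msecs_to_jiffies(timeout_ms)))
			ret = -ETIMEDOUT;
	}

	return ret;
}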
From patchwork Mon Dec 20 19:56:39 2021
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12696876
Subject: [PATCH v8 04/11] firmware: arm_scmi: Make smc support sync_cmds_completed_on_ret
Date: Mon, 20 Dec 2021 19:56:39 +0000
Message-Id: <20211220195646.44498-5-cristian.marussi@arm.com>
In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com>

Enable sync_cmds_completed_on_ret in the SMC transport descriptor and
remove the SMC-specific .poll_done callback support, since polling is
bypassed when sync_cmds_completed_on_ret is set.

Signed-off-by: Cristian Marussi
---
v7 --> v8
- renaming to sync_cmds_completed_on_ret
v4 --> v5
- removed RFC tag
- added comment on setting flag
- remove smc_poll_done
---
 drivers/firmware/arm_scmi/smc.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c
index d6c6ad9f6bab..df0defd9f8bb 100644
--- a/drivers/firmware/arm_scmi/smc.c
+++ b/drivers/firmware/arm_scmi/smc.c
@@ -173,14 +173,6 @@ static void smc_mark_txdone(struct scmi_chan_info *cinfo, int ret)
 	mutex_unlock(&scmi_info->shmem_lock);
 }

-static bool
-smc_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
-{
-	struct scmi_smc *scmi_info = cinfo->transport_info;
-
-	return shmem_poll_done(scmi_info->shmem, xfer);
-}
-
 static const struct scmi_transport_ops scmi_smc_ops = {
 	.chan_available = smc_chan_available,
 	.chan_setup = smc_chan_setup,
@@ -188,7 +180,6 @@ static const struct scmi_transport_ops scmi_smc_ops = {
 	.send_message = smc_send_message,
 	.mark_txdone = smc_mark_txdone,
 	.fetch_response = smc_fetch_response,
-	.poll_done = smc_poll_done,
 };

 const struct scmi_desc scmi_smc_desc = {
@@ -196,4 +187,13 @@ const struct scmi_desc scmi_smc_desc = {
 	.max_rx_timeout_ms = 30,
 	.max_msg = 20,
 	.max_msg_size = 128,
+	/*
+	 * Setting .sync_cmds_completed_on_ret to true for SMC assumes that,
+	 * once the SMC instruction has completed successfully, the issued
+	 * SCMI command would have been already fully processed by the SCMI
+	 * platform firmware and so any possible response value expected
+	 * for the issued command will be immediately ready to be fetched
+	 * from the shared memory area.
+	 */
+	.sync_cmds_completed_on_ret = true,
 };
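To see why SMC can make this guarantee, consider that an SMC/HVC call traps
into higher-privilege firmware and only returns once that firmware is done.
The following is a hypothetical caller-side view, simplified from
smc_send_message(), not code from the patch:

/* Illustrative sketch: an SMC command is complete when the call returns. */
static int smc_sync_cmd_sketch(struct scmi_smc *scmi_info,
			       struct scmi_xfer *xfer)
{
	struct arm_smccc_res res;

	shmem_tx_prepare(scmi_info->shmem, xfer);

	/* Does not return until the platform has processed the command. */
	arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res);

	/*
	 * No polling needed here: the core can call .fetch_response()
	 * immediately instead of spinning on a .poll_done() op.
	 */
	return res.a0 ? -EOPNOTSUPP : 0;
}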
From patchwork Mon Dec 20 19:56:40 2021
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12696877
Subject: [PATCH v8 05/11] firmware: arm_scmi: Make optee support sync_cmds_completed_on_ret
Date: Mon, 20 Dec 2021 19:56:40 +0000
Message-Id: <20211220195646.44498-6-cristian.marussi@arm.com>
In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com>

Declare each OPTEE SCMI channel as not having a completion IRQ, so as to
enable polling mode, and then also enable the .sync_cmds_completed_on_ret
flag in the OPTEE transport descriptor, so that real polling is itself
effectively bypassed on the RX path: once the OPTEE command invocation has
successfully returned, the core will directly fetch the response from the
shared memory area.

Remove OPTEE SCMI transport specific .poll_done callback support, since
real polling is effectively bypassed when .sync_cmds_completed_on_ret is
set.

Add OPTEE SCMI transport specific .mark_txdone callback support in order
to properly handle channel locking along the TX path.

Cc: Etienne Carriere
Signed-off-by: Cristian Marussi
---
v7 -> v8
- renaming to sync_cmds_completed_on_ret
v6 --> v7
- reviewed commit message
---
 drivers/firmware/arm_scmi/optee.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/firmware/arm_scmi/optee.c b/drivers/firmware/arm_scmi/optee.c
index 175b39bcd470..f460e12be4ea 100644
--- a/drivers/firmware/arm_scmi/optee.c
+++ b/drivers/firmware/arm_scmi/optee.c
@@ -363,6 +363,9 @@ static int scmi_optee_chan_setup(struct scmi_chan_info *cinfo, struct device *de
 	if (ret)
 		goto err_close_sess;

+	/* Enable polling */
+	cinfo->no_completion_irq = true;
+
 	mutex_lock(&scmi_optee_private->mu);
 	list_add(&channel->link, &scmi_optee_private->channel_list);
 	mutex_unlock(&scmi_optee_private->mu);
@@ -423,9 +426,8 @@ static int scmi_optee_send_message(struct scmi_chan_info *cinfo,
 	shmem_tx_prepare(shmem, xfer);

 	ret = invoke_process_smt_channel(channel);
-
-	scmi_rx_callback(cinfo, shmem_read_header(shmem), NULL);
-	mutex_unlock(&channel->mu);
+	if (ret)
+		mutex_unlock(&channel->mu);

 	return ret;
 }
@@ -439,13 +441,11 @@ static void scmi_optee_fetch_response(struct scmi_chan_info *cinfo,
 	shmem_fetch_response(shmem, xfer);
 }

-static bool scmi_optee_poll_done(struct scmi_chan_info *cinfo,
-				 struct scmi_xfer *xfer)
+static void scmi_optee_mark_txdone(struct scmi_chan_info *cinfo, int ret)
 {
 	struct scmi_optee_channel *channel = cinfo->transport_info;
-	struct scmi_shared_mem *shmem = get_channel_shm(channel, xfer);

-	return shmem_poll_done(shmem, xfer);
+	mutex_unlock(&channel->mu);
 }

 static struct scmi_transport_ops scmi_optee_ops = {
@@ -454,9 +454,9 @@ static struct scmi_transport_ops scmi_optee_ops = {
 	.chan_setup = scmi_optee_chan_setup,
 	.chan_free = scmi_optee_chan_free,
 	.send_message = scmi_optee_send_message,
+	.mark_txdone = scmi_optee_mark_txdone,
 	.fetch_response = scmi_optee_fetch_response,
 	.clear_channel = scmi_optee_clear_channel,
-	.poll_done = scmi_optee_poll_done,
 };

 static int scmi_optee_ctx_match(struct tee_ioctl_version_data *ver, const void *data)
@@ -562,4 +562,5 @@ const struct scmi_desc scmi_optee_desc = {
 	.max_rx_timeout_ms = 30,
 	.max_msg = 20,
 	.max_msg_size = SCMI_OPTEE_MAX_MSG_SIZE,
+	.sync_cmds_completed_on_ret = true,
 };
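Putting the two settings together, the core's decision for every OPTEE
synchronous command can be traced through the macros introduced earlier in
the series; the trace below is an illustrative summary, not code from the
patch:

/*
 * Illustrative evaluation for an OPTEE channel after this patch:
 *
 *   IS_POLLING_REQUIRED(cinfo, info)
 *       = cinfo->no_completion_irq || desc->force_polling
 *       = true   (.chan_setup() sets no_completion_irq)
 *
 *   IS_TRANSPORT_POLLING_CAPABLE(info)
 *       = desc->ops->poll_done || desc->sync_cmds_completed_on_ret
 *       = true   (sync_cmds_completed_on_ret, even without .poll_done)
 *
 *   => IS_POLLING_ENABLED() is true, so do_xfer() sets
 *      xfer->hdr.poll_completion, and scmi_wait_for_message_response()
 *      skips the busy-wait entirely: the response is fetched right
 *      after .send_message() returns.
 */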
From patchwork Mon Dec 20 19:56:41 2021
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12696878
Subject: [PATCH v8 06/11] firmware: arm_scmi: Add support for atomic transports
Date: Mon, 20 Dec 2021 19:56:41 +0000
Message-Id: <20211220195646.44498-7-cristian.marussi@arm.com>
In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com>

An SCMI transport can be configured as .atomic_enabled in order to signal
to the SCMI core that its whole TX path is executed in atomic context and
that, when requested, polling mode should be used while waiting for
command responses.

When a specific platform configuration has properly configured such a
transport as .atomic_enabled, the SCMI core will also take care not to
sleep in the corresponding RX path while waiting for a response if that
specific command transaction was requested as atomic using polling mode.

Asynchronous commands should not be used in an atomic context, so a
warning is emitted if polling was requested for an asynchronous command.

Add also a method to check, from the SCMI drivers, if the underlying SCMI
transport is currently configured to support atomic transactions: this
will be used by upper layers to determine if atomic requests can be
supported at all on this SCMI instance.
Signed-off-by: Cristian Marussi
---
v7 --> v8
- removed usage of polling_capable, use macros
v6 --> v7
- reviewed commit message
- converted async WARN_ON to WARN_ON_ONCE
v5 --> v6
- removed atomic_capable - fully relying on transport polling capabilities
- removed polling/atomic support for delayed_response and WARN
- merged with is_transport_atomic() patch
- is_transport_atomic() now considers polling_capable and atomic_enabled
  flags
v4 --> v5
- added .atomic_enabled flag to decide whether to enable atomic mode or
  not for atomic_capable transports
- reviewed commit message
---
 drivers/firmware/arm_scmi/common.h |  4 +++
 drivers/firmware/arm_scmi/driver.c | 51 ++++++++++++++++++++++++++++--
 include/linux/scmi_protocol.h      |  8 +++++
 3 files changed, 61 insertions(+), 2 deletions(-)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 24b1d1ac5f12..01d42c2069d4 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -416,6 +416,9 @@ struct scmi_device *scmi_child_dev_find(struct device *parent,
  *				Used by core internally only when polling is
  *				selected as a waiting for reply method: i.e.
  *				if a completion irq was found use that anyway.
+ * @atomic_enabled: Flag to indicate that this transport, which is assured not
+ *		    to sleep anywhere on the TX path, can be used in atomic mode
+ *		    when requested.
  */
 struct scmi_desc {
 	int (*transport_init)(void);
@@ -426,6 +429,7 @@ struct scmi_desc {
 	int max_msg_size;
 	const bool force_polling;
 	const bool sync_cmds_completed_on_ret;
+	const bool atomic_enabled;
 };

 #ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index d4dd42ebc5e8..0406e030db1b 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -928,6 +928,20 @@ static void reset_rx_to_maxsz(const struct scmi_protocol_handle *ph,
  * @ph: Pointer to SCMI protocol handle
  * @xfer: Transfer to initiate and wait for response
  *
+ * Using asynchronous commands in atomic/polling mode should be avoided since
+ * it could cause long busy-waiting here, so ignore polling for the delayed
+ * response and WARN if it was requested for this command transaction since
+ * upper layers should refrain from issuing such kind of requests.
+ *
+ * The only other option would have been to refrain from using any
+ * asynchronous command even if made available, when an atomic transport is
+ * detected, and instead forcibly use the synchronous version (a thing that
+ * can be easily attained at the protocol layer), but this would also have
+ * led to longer stalls of the channel for synchronous commands and possibly
+ * timeouts. (In other words, there is usually a good reason if a platform
+ * provides an asynchronous version of a command, and we should prefer to
+ * use it... just not when using atomic/polling mode.)
+ *
  * Return: -ETIMEDOUT in case of no delayed response, if transmit error,
  *	   return corresponding error, else if all goes well, return 0.
  */
@@ -939,12 +953,24 @@ static int do_xfer_with_response(const struct scmi_protocol_handle *ph,

 	xfer->async_done = &async_response;

+	/*
+	 * Delayed responses should not be polled, so an async command should
+	 * not have been used when requiring an atomic/poll context; WARN and
+	 * perform instead a sleeping wait.
+	 * (Note: Async + IgnoreDelayedResponses are sent via do_xfer)
+	 */
+	WARN_ON_ONCE(xfer->hdr.poll_completion);
+
 	ret = do_xfer(ph, xfer);
 	if (!ret) {
-		if (!wait_for_completion_timeout(xfer->async_done, timeout))
+		if (!wait_for_completion_timeout(xfer->async_done, timeout)) {
+			dev_err(ph->dev,
+				"timed out in delayed resp(caller: %pS)\n",
+				(void *)_RET_IP_);
 			ret = -ETIMEDOUT;
-		else if (xfer->hdr.status)
+		} else if (xfer->hdr.status) {
 			ret = scmi_to_linux_errno(xfer->hdr.status);
+		}
 	}

 	xfer->async_done = NULL;
@@ -1378,6 +1404,22 @@ static void scmi_devm_protocol_put(struct scmi_device *sdev, u8 protocol_id)
 	WARN_ON(ret);
 }

+/**
+ * scmi_is_transport_atomic  - Method to check if underlying transport for an
+ * SCMI instance is configured as atomic.
+ *
+ * @handle: A reference to the SCMI platform instance.
+ *
+ * Return: True if transport is configured as atomic
+ */
+static bool scmi_is_transport_atomic(const struct scmi_handle *handle)
+{
+	struct scmi_info *info = handle_to_scmi_info(handle);
+
+	return info->desc->atomic_enabled &&
+		IS_TRANSPORT_POLLING_CAPABLE(info);
+}
+
 static inline
 struct scmi_handle *scmi_handle_get_from_info_unlocked(struct scmi_info *info)
 {
@@ -1915,6 +1957,7 @@ static int scmi_probe(struct platform_device *pdev)
 	handle->version = &info->version;
 	handle->devm_protocol_get = scmi_devm_protocol_get;
 	handle->devm_protocol_put = scmi_devm_protocol_put;
+	handle->is_transport_atomic = scmi_is_transport_atomic;

 	if (desc->ops->link_supplier) {
 		ret = desc->ops->link_supplier(dev);
@@ -1933,6 +1976,10 @@ static int scmi_probe(struct platform_device *pdev)
 	if (scmi_notification_init(handle))
 		dev_err(dev, "SCMI Notifications NOT available.\n");

+	if (info->desc->atomic_enabled && !IS_TRANSPORT_POLLING_CAPABLE(info))
+		dev_err(dev,
+			"Transport is not polling capable. Atomic mode not supported.\n");
+
 	/*
	 * Trigger SCMI Base protocol initialization.
	 * It's mandatory and won't be ever released/deinit until the
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 80e781c51ddc..9f895cb81818 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -612,6 +612,13 @@ struct scmi_notify_ops {
  * @devm_protocol_get: devres managed method to acquire a protocol and get specific
  *		       operations and a dedicated protocol handler
  * @devm_protocol_put: devres managed method to release a protocol
+ * @is_transport_atomic: method to check if the underlying transport for this
+ *			 instance handle is configured to support atomic
+ *			 transactions for commands.
+ *			 Some users of the SCMI stack in the upper layers could
+ *			 be interested to know if they can assume SCMI
+ *			 command transactions associated to this handle will
+ *			 never sleep and act accordingly.
  * @notify_ops: pointer to set of notifications related operations
  */
 struct scmi_handle {
@@ -622,6 +629,7 @@ struct scmi_handle {
 	(*devm_protocol_get)(struct scmi_device *sdev, u8 proto,
 			     struct scmi_protocol_handle **ph);
 	void (*devm_protocol_put)(struct scmi_device *sdev, u8 proto);
+	bool (*is_transport_atomic)(const struct scmi_handle *handle);

 	const struct scmi_notify_ops *notify_ops;
 };
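For an idea of how upper layers might consume this new handle method, here
is a hedged sketch of a hypothetical SCMI client driver probe; the
example_scmi_probe name and the clock-provider scenario in the comment are
invented for illustration:

#include <linux/scmi_protocol.h>

/*
 * Hypothetical SCMI client probe: query the handle to learn whether
 * command transactions on this instance will never sleep, and act
 * accordingly.
 */
static int example_scmi_probe(struct scmi_device *sdev)
{
	const struct scmi_handle *handle = sdev->handle;
	bool atomic_capable;

	if (!handle)
		return -ENODEV;

	atomic_capable = handle->is_transport_atomic(handle);

	/*
	 * e.g. a clock provider could register non-sleeping clk_ops when
	 * atomic_capable is true, and sleeping ones otherwise.
	 */
	dev_info(&sdev->dev, "SCMI transport is %satomic\n",
		 atomic_capable ? "" : "NOT ");

	return 0;
}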
From patchwork Mon Dec 20 19:56:42 2021
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12696879
Subject: [PATCH v8 07/11] firmware: arm_scmi: Add atomic mode support to smc transport
Date: Mon, 20 Dec 2021 19:56:42 +0000
Message-Id: <20211220195646.44498-8-cristian.marussi@arm.com>
In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com>

Add a kernel configuration option to enable SCMI SMC transport atomic mode
operation for selected SCMI transactions, and leave it disabled by
default.

Substitute mutex usages with busy-waiting and declare the smc transport as
.atomic_enabled if such a kernel configuration option is enabled.

Signed-off-by: Cristian Marussi
---
v7 --> v8
- removed ifdeffery, using IS_ENABLED
v5 --> v6
- remove usage of atomic_capable
- removed needless union
- reviewed Kconfig help
v4 --> v5
- removed RFC tag
- add CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE option
- add .atomic_enable support
- make atomic_capable dependent on
  CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE
- make also usage of mutexes vs busy-waiting dependent on
  CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE
---
 drivers/firmware/arm_scmi/Kconfig | 14 ++++++++
 drivers/firmware/arm_scmi/smc.c   | 56 +++++++++++++++++++++++++++----
 2 files changed, 64 insertions(+), 6 deletions(-)

diff --git a/drivers/firmware/arm_scmi/Kconfig b/drivers/firmware/arm_scmi/Kconfig
index 638ecec89ff1..d429326433d1 100644
--- a/drivers/firmware/arm_scmi/Kconfig
+++ b/drivers/firmware/arm_scmi/Kconfig
@@ -78,6 +78,20 @@ config ARM_SCMI_TRANSPORT_SMC
 	  If you want the ARM SCMI PROTOCOL stack to include support for a
 	  transport based on SMC, answer Y.

+config ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE
+	bool "Enable atomic mode support for SCMI SMC transport"
+	depends on ARM_SCMI_TRANSPORT_SMC
+	help
+	  Enable support of atomic operation for SCMI SMC based transport.
+
+	  If you want the SCMI SMC based transport to operate in atomic
+	  mode, avoiding any kind of sleeping behaviour for selected
+	  transactions on the TX path, answer Y.
+	  Enabling atomic mode operations allows any SCMI driver using this
+	  transport to optionally ask for atomic SCMI transactions and operate
+	  in atomic context too, at the price of using a number of busy-waiting
+	  primitives all over instead. If unsure say N.
+
 config ARM_SCMI_TRANSPORT_VIRTIO
 	bool "SCMI transport based on VirtIO"
 	depends on VIRTIO=y || VIRTIO=ARM_SCMI_PROTOCOL

diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c
index df0defd9f8bb..6c7871a40611 100644
--- a/drivers/firmware/arm_scmi/smc.c
+++ b/drivers/firmware/arm_scmi/smc.c
@@ -7,6 +7,7 @@
  */

 #include <linux/arm-smccc.h>
+#include <linux/atomic.h>
 #include <linux/device.h>
 #include <linux/err.h>
 #include <linux/interrupt.h>
@@ -14,6 +15,7 @@
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
+#include <linux/processor.h>
 #include <linux/slab.h>

 #include "common.h"

@@ -23,14 +25,20 @@
  *
  * @cinfo: SCMI channel info
  * @shmem: Transmit/Receive shared memory area
- * @shmem_lock: Lock to protect access to Tx/Rx shared memory area
+ * @shmem_lock: Lock to protect access to Tx/Rx shared memory area.
+ *		Used when NOT operating in atomic mode.
+ * @inflight: Atomic flag to protect access to Tx/Rx shared memory area.
+ *	      Used when operating in atomic mode.
 * @func_id: smc/hvc call function id
 */

struct scmi_smc {
	struct scmi_chan_info *cinfo;
	struct scmi_shared_mem __iomem *shmem;
	/* Protect access to shmem area */
	struct mutex shmem_lock;
#define INFLIGHT_NONE	MSG_TOKEN_MAX
	atomic_t inflight;
	u32 func_id;
};
@@ -54,6 +62,41 @@ static bool smc_chan_available(struct device *dev, int idx)
 	return true;
 }

+static inline void smc_channel_lock_init(struct scmi_smc *scmi_info)
+{
+	if (IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE))
+		atomic_set(&scmi_info->inflight, INFLIGHT_NONE);
+	else
+		mutex_init(&scmi_info->shmem_lock);
+}
+
+static bool smc_xfer_inflight(struct scmi_xfer *xfer, atomic_t *inflight)
+{
+	int ret;
+
+	ret = atomic_cmpxchg(inflight, INFLIGHT_NONE, xfer->hdr.seq);
+
+	return ret == INFLIGHT_NONE;
+}
+
+static inline void
+smc_channel_lock_acquire(struct scmi_smc *scmi_info,
+			 struct scmi_xfer *xfer __maybe_unused)
+{
+	if (IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE))
+		spin_until_cond(smc_xfer_inflight(xfer, &scmi_info->inflight));
+	else
+		mutex_lock(&scmi_info->shmem_lock);
+}
+
+static inline void smc_channel_lock_release(struct scmi_smc *scmi_info)
+{
+	if (IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE))
+		atomic_set(&scmi_info->inflight, INFLIGHT_NONE);
+	else
+		mutex_unlock(&scmi_info->shmem_lock);
+}
+
 static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
 			  bool tx)
 {
@@ -114,7 +157,7 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
 	scmi_info->func_id = func_id;
 	scmi_info->cinfo = cinfo;
-	mutex_init(&scmi_info->shmem_lock);
+	smc_channel_lock_init(scmi_info);
 	cinfo->transport_info = scmi_info;

 	return 0;
@@ -140,10 +183,10 @@ static int smc_send_message(struct scmi_chan_info *cinfo,
 	struct arm_smccc_res res;

 	/*
-	 * Channel lock will be released only once response has been
+	 * Channel will be released only once response has been
 	 * surely fully retrieved, so after .mark_txdone()
 	 */
-	mutex_lock(&scmi_info->shmem_lock);
+	smc_channel_lock_acquire(scmi_info, xfer);

 	shmem_tx_prepare(scmi_info->shmem, xfer);

@@ -151,7 +194,7 @@ static int smc_send_message(struct scmi_chan_info *cinfo,
 	/* Only SMCCC_RET_NOT_SUPPORTED is valid error code */
 	if (res.a0) {
-		mutex_unlock(&scmi_info->shmem_lock);
+		smc_channel_lock_release(scmi_info);
 		return -EOPNOTSUPP;
 	}

@@ -170,7 +213,7 @@ static void smc_mark_txdone(struct scmi_chan_info *cinfo, int ret)
 {
 	struct scmi_smc *scmi_info = cinfo->transport_info;

-	mutex_unlock(&scmi_info->shmem_lock);
+	smc_channel_lock_release(scmi_info);
 }

 static const struct scmi_transport_ops scmi_smc_ops = {
@@ -196,4 +239,5 @@ const struct scmi_desc scmi_smc_desc = {
 	 * from the shared memory area.
 	 */
 	.sync_cmds_completed_on_ret = true,
+	.atomic_enabled = IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE),
 };
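The busy-wait exclusion above can be read as a tiny claim/release protocol.
The following condensed view is an illustrative rewrite of the helpers added
by this patch, with the reasoning spelled out in comments; it is not code
from the series:

/*
 * Sketch of the atomic-mode channel exclusion. INFLIGHT_NONE is
 * MSG_TOKEN_MAX, a value no valid 10-bit sequence number (xfer->hdr.seq)
 * can ever take, so it is a safe "channel free" sentinel.
 */
static void atomic_claim_sketch(struct scmi_smc *scmi_info,
				struct scmi_xfer *xfer)
{
	/*
	 * Atomically swap INFLIGHT_NONE -> our token; if another xfer owns
	 * the channel, busy-wait and retry. This never sleeps, which is
	 * what makes it usable from atomic context (unlike a mutex).
	 */
	spin_until_cond(atomic_cmpxchg(&scmi_info->inflight, INFLIGHT_NONE,
				       xfer->hdr.seq) == INFLIGHT_NONE);

	/* ... transmit, then fetch the response ... */

	/* Release: a plain store suffices, since only the owner gets here. */
	atomic_set(&scmi_info->inflight, INFLIGHT_NONE);
}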
layer a reference to the xfer descriptor that is being handled. Reviewed-by: Florian Fainelli Signed-off-by: Cristian Marussi --- v6 --> v7 - use __unused macro for new param in existing transports --- drivers/firmware/arm_scmi/common.h | 3 ++- drivers/firmware/arm_scmi/driver.c | 2 +- drivers/firmware/arm_scmi/mailbox.c | 3 ++- drivers/firmware/arm_scmi/optee.c | 3 ++- drivers/firmware/arm_scmi/smc.c | 3 ++- 5 files changed, 9 insertions(+), 5 deletions(-) diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h index 01d42c2069d4..4fda84bfab42 100644 --- a/drivers/firmware/arm_scmi/common.h +++ b/drivers/firmware/arm_scmi/common.h @@ -378,7 +378,8 @@ struct scmi_transport_ops { unsigned int (*get_max_msg)(struct scmi_chan_info *base_cinfo); int (*send_message)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer); - void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret); + void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret, + struct scmi_xfer *xfer); void (*fetch_response)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer); void (*fetch_notification)(struct scmi_chan_info *cinfo, diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c index 0406e030db1b..b4bbfe89368d 100644 --- a/drivers/firmware/arm_scmi/driver.c +++ b/drivers/firmware/arm_scmi/driver.c @@ -902,7 +902,7 @@ static int do_xfer(const struct scmi_protocol_handle *ph, ret = scmi_to_linux_errno(xfer->hdr.status); if (info->desc->ops->mark_txdone) - info->desc->ops->mark_txdone(cinfo, ret); + info->desc->ops->mark_txdone(cinfo, ret, xfer); trace_scmi_xfer_end(xfer->transfer_id, xfer->hdr.id, xfer->hdr.protocol_id, xfer->hdr.seq, ret); diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c index e09eb12bf421..08ff4d110beb 100644 --- a/drivers/firmware/arm_scmi/mailbox.c +++ b/drivers/firmware/arm_scmi/mailbox.c @@ -140,7 +140,8 @@ static int mailbox_send_message(struct scmi_chan_info *cinfo, return ret; } -static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret) +static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret, + struct scmi_xfer *__unused) { struct scmi_mailbox *smbox = cinfo->transport_info; diff --git a/drivers/firmware/arm_scmi/optee.c b/drivers/firmware/arm_scmi/optee.c index f460e12be4ea..734f1eeee161 100644 --- a/drivers/firmware/arm_scmi/optee.c +++ b/drivers/firmware/arm_scmi/optee.c @@ -441,7 +441,8 @@ static void scmi_optee_fetch_response(struct scmi_chan_info *cinfo, shmem_fetch_response(shmem, xfer); } -static void scmi_optee_mark_txdone(struct scmi_chan_info *cinfo, int ret) +static void scmi_optee_mark_txdone(struct scmi_chan_info *cinfo, int ret, + struct scmi_xfer *__unused) { struct scmi_optee_channel *channel = cinfo->transport_info; diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c index 6c7871a40611..745acfdd0b3d 100644 --- a/drivers/firmware/arm_scmi/smc.c +++ b/drivers/firmware/arm_scmi/smc.c @@ -209,7 +209,8 @@ static void smc_fetch_response(struct scmi_chan_info *cinfo, shmem_fetch_response(scmi_info->shmem, xfer); } -static void smc_mark_txdone(struct scmi_chan_info *cinfo, int ret) +static void smc_mark_txdone(struct scmi_chan_info *cinfo, int ret, + struct scmi_xfer *__unused) { struct scmi_smc *scmi_info = cinfo->transport_info;
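For illustration, a transport can now key its TX-completion cleanup on the specific descriptor. The sketch below assumes a hypothetical "foo" transport (foo_channel, foo_msg and foo_release_msg are invented names, not part of this series) which binds per-message state to the xfer via xfer->priv, much as the VirtIO transport does later in this series:

/* Sketch of a hypothetical transport, not code from this series */
static void foo_mark_txdone(struct scmi_chan_info *cinfo, int ret,
			    struct scmi_xfer *xfer)
{
	struct foo_channel *chan = cinfo->transport_info;
	struct foo_msg *msg = xfer->priv;

	/* Nothing was bound to this xfer: no per-xfer state to undo */
	if (!msg)
		return;

	/* Unbind the message from the xfer before recycling its buffers */
	xfer->priv = NULL;
	foo_release_msg(chan, msg);
}

Transports that do not care about the descriptor simply ignore the extra parameter, as the mailbox, optee and smc changes above show with the __unused marker.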
From patchwork Mon Dec 20 19:56:44 2021 X-Patchwork-Submitter: Cristian Marussi X-Patchwork-Id: 12696881 From: Cristian Marussi To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: sudeep.holla@arm.com, james.quinlan@broadcom.com, Jonathan.Cameron@Huawei.com, f.fainelli@gmail.com, etienne.carriere@linaro.org, vincent.guittot@linaro.org, souvik.chakravarty@arm.com, cristian.marussi@arm.com, "Michael S. Tsirkin" , Igor Skalkin , Peter Hilber , virtualization@lists.linux-foundation.org Subject: [PATCH v8 09/11] firmware: arm_scmi: Add atomic mode support to virtio transport Date: Mon, 20 Dec 2021 19:56:44 +0000 Message-Id: <20211220195646.44498-10-cristian.marussi@arm.com> In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com> References: <20211220195646.44498-1-cristian.marussi@arm.com> Add support for the .mark_txdone and .poll_done transport operations to the SCMI VirtIO transport as prerequisites to enable atomic operations. Add a kernel configuration option to enable SCMI VirtIO transport polling and atomic mode for selected SCMI transactions, while leaving it disabled by default.
Tsirkin" Cc: Igor Skalkin Cc: Peter Hilber Cc: virtualization@lists.linux-foundation.org Signed-off-by: Cristian Marussi --- v7 --> v8 - removed ifdeffery - reviewed comments - simplified spinlocking aroung scmi_feed_vq_tx/rx - added deferred worker for TX replies to aid while polling mode is active V6 --> V7 - added a few comments about virtio polling internals - fixed missing list_del on pending_cmds_list processing - shrinked spinlocked areas in virtio_poll_done - added proper spinlocking to scmi_vio_complete_cb while scanning list of pending cmds --- drivers/firmware/arm_scmi/Kconfig | 15 ++ drivers/firmware/arm_scmi/virtio.c | 291 ++++++++++++++++++++++++++--- 2 files changed, 280 insertions(+), 26 deletions(-) diff --git a/drivers/firmware/arm_scmi/Kconfig b/drivers/firmware/arm_scmi/Kconfig index d429326433d1..7794bd41eaa0 100644 --- a/drivers/firmware/arm_scmi/Kconfig +++ b/drivers/firmware/arm_scmi/Kconfig @@ -118,6 +118,21 @@ config ARM_SCMI_TRANSPORT_VIRTIO_VERSION1_COMPLIANCE the ones implemented by kvmtool) and let the core Kernel VirtIO layer take care of the needed conversions, say N. +config ARM_SCMI_TRANSPORT_VIRTIO_ATOMIC_ENABLE + bool "Enable atomic mode for SCMI VirtIO transport" + depends on ARM_SCMI_TRANSPORT_VIRTIO + help + Enable support of atomic operation for SCMI VirtIO based transport. + + If you want the SCMI VirtIO based transport to operate in atomic + mode, avoiding any kind of sleeping behaviour for selected + transactions on the TX path, answer Y. + + Enabling atomic mode operations allows any SCMI driver using this + transport to optionally ask for atomic SCMI transactions and operate + in atomic context too, at the price of using a number of busy-waiting + primitives all over instead. If unsure say N. + endif #ARM_SCMI_PROTOCOL config ARM_SCMI_POWER_DOMAIN diff --git a/drivers/firmware/arm_scmi/virtio.c b/drivers/firmware/arm_scmi/virtio.c index fd0f6f91fc0b..f589bbcc5db9 100644 --- a/drivers/firmware/arm_scmi/virtio.c +++ b/drivers/firmware/arm_scmi/virtio.c @@ -3,8 +3,8 @@ * Virtio Transport driver for Arm System Control and Management Interface * (SCMI). * - * Copyright (C) 2020-2021 OpenSynergy. - * Copyright (C) 2021 ARM Ltd. + * Copyright (C) 2020-2022 OpenSynergy. + * Copyright (C) 2021-2022 ARM Ltd. */ /** @@ -38,6 +38,9 @@ * @vqueue: Associated virtqueue * @cinfo: SCMI Tx or Rx channel * @free_list: List of unused scmi_vio_msg, maintained for Tx channels only + * @deferred_tx_work: Worker for TX deferred replies processing + * @deferred_tx_wq: Workqueue for TX deferred replies + * @pending_cmds_list: List of pre-fetched commands queueud for later processing * @is_rx: Whether channel is an Rx channel * @ready: Whether transport user is ready to hear about channel * @max_msg: Maximum number of pending messages for this channel. @@ -49,6 +52,9 @@ struct scmi_vio_channel { struct virtqueue *vqueue; struct scmi_chan_info *cinfo; struct list_head free_list; + struct list_head pending_cmds_list; + struct work_struct deferred_tx_work; + struct workqueue_struct *deferred_tx_wq; bool is_rx; bool ready; unsigned int max_msg; @@ -65,12 +71,22 @@ struct scmi_vio_channel { * @input: SDU used for (delayed) responses and notifications * @list: List which scmi_vio_msg may be part of * @rx_len: Input SDU size in bytes, once input has been received + * @poll_idx: Last used index registered for polling purposes if this message + * transaction reply was configured for polling. 
+ * Note that since the virtqueue used index is an unsigned 16-bit value, we + * can use some out-of-scale values to signify particular conditions. + * @poll_lock: Protect access to @poll_idx. */ struct scmi_vio_msg { struct scmi_msg_payld *request; struct scmi_msg_payld *input; struct list_head list; unsigned int rx_len; +#define VIO_MSG_NOT_POLLED 0xeeeeeeeeUL +#define VIO_MSG_POLL_DONE 0xffffffffUL + unsigned int poll_idx; + /* lock to protect access to poll_idx. */ + spinlock_t poll_lock; }; /* Only one SCMI VirtIO device can possibly exist */ @@ -81,40 +97,43 @@ static bool scmi_vio_have_vq_rx(struct virtio_device *vdev) { return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS); } +/* Expect to be called with vioch->lock acquired by the caller and IRQs off */ static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch, struct scmi_vio_msg *msg, struct device *dev) { struct scatterlist sg_in; int rc; - unsigned long flags; sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE); - spin_lock_irqsave(&vioch->lock, flags); - rc = virtqueue_add_inbuf(vioch->vqueue, &sg_in, 1, msg, GFP_ATOMIC); if (rc) dev_err(dev, "failed to add to RX virtqueue (%d)\n", rc); else virtqueue_kick(vioch->vqueue); - spin_unlock_irqrestore(&vioch->lock, flags); - return rc; } +/* Expect to be called with vioch->lock acquired by the caller and IRQs off */ +static inline void scmi_vio_feed_vq_tx(struct scmi_vio_channel *vioch, + struct scmi_vio_msg *msg) +{ + spin_lock(&msg->poll_lock); + msg->poll_idx = VIO_MSG_NOT_POLLED; + spin_unlock(&msg->poll_lock); + + list_add(&msg->list, &vioch->free_list); +} + static void scmi_finalize_message(struct scmi_vio_channel *vioch, struct scmi_vio_msg *msg) { - if (vioch->is_rx) { + if (vioch->is_rx) scmi_vio_feed_vq_rx(vioch, msg, vioch->cinfo->dev); - } else { - /* Here IRQs are assumed to be already disabled by the caller */ - spin_lock(&vioch->lock); - list_add(&msg->list, &vioch->free_list); - spin_unlock(&vioch->lock); - } + else + scmi_vio_feed_vq_tx(vioch, msg); } static void scmi_vio_complete_cb(struct virtqueue *vqueue) @@ -144,6 +163,7 @@ static void scmi_vio_complete_cb(struct virtqueue *vqueue) virtqueue_disable_cb(vqueue); cb_enabled = false; } + msg = virtqueue_get_buf(vqueue, &length); if (!msg) { if (virtqueue_enable_cb(vqueue)) @@ -157,7 +177,9 @@ static void scmi_vio_complete_cb(struct virtqueue *vqueue) scmi_rx_callback(vioch->cinfo, msg_read_header(msg->input), msg); + spin_lock(&vioch->lock); scmi_finalize_message(vioch, msg); + spin_unlock(&vioch->lock); } /* @@ -176,6 +198,34 @@ static void scmi_vio_complete_cb(struct virtqueue *vqueue) spin_unlock_irqrestore(&vioch->ready_lock, ready_flags); } +static void scmi_vio_deferred_tx_worker(struct work_struct *work) +{ + unsigned long flags; + struct scmi_vio_channel *vioch; + struct scmi_vio_msg *msg, *tmp; + + vioch = container_of(work, struct scmi_vio_channel, deferred_tx_work); + + /* Process pre-fetched messages */ + spin_lock_irqsave(&vioch->lock, flags); + + /* Scan the list of possibly pre-fetched messages during polling.
*/ + list_for_each_entry_safe(msg, tmp, &vioch->pending_cmds_list, list) { + list_del(&msg->list); + + scmi_rx_callback(vioch->cinfo, + msg_read_header(msg->input), msg); + + /* Free the processed message once done */ + scmi_vio_feed_vq_tx(vioch, msg); + } + + spin_unlock_irqrestore(&vioch->lock, flags); + + /* Process possibly still pending messages */ + scmi_vio_complete_cb(vioch->vqueue); +} + static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" }; static vq_callback_t *scmi_vio_complete_callbacks[] = { @@ -244,6 +294,19 @@ static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, vioch = &((struct scmi_vio_channel *)scmi_vdev->priv)[index]; + /* Setup a deferred worker for polling. */ + if (tx && !vioch->deferred_tx_wq) { + vioch->deferred_tx_wq = + alloc_workqueue(dev_name(&scmi_vdev->dev), + WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS, + 0); + if (!vioch->deferred_tx_wq) + return -ENOMEM; + + INIT_WORK(&vioch->deferred_tx_work, + scmi_vio_deferred_tx_worker); + } + for (i = 0; i < vioch->max_msg; i++) { struct scmi_vio_msg *msg; @@ -257,6 +320,7 @@ static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, GFP_KERNEL); if (!msg->request) return -ENOMEM; + spin_lock_init(&msg->poll_lock); } msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE, @@ -264,13 +328,12 @@ static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, if (!msg->input) return -ENOMEM; - if (tx) { - spin_lock_irqsave(&vioch->lock, flags); - list_add_tail(&msg->list, &vioch->free_list); - spin_unlock_irqrestore(&vioch->lock, flags); - } else { + spin_lock_irqsave(&vioch->lock, flags); + if (tx) + scmi_vio_feed_vq_tx(vioch, msg); + else scmi_vio_feed_vq_rx(vioch, msg, cinfo->dev); - } + spin_unlock_irqrestore(&vioch->lock, flags); } spin_lock_irqsave(&vioch->lock, flags); @@ -296,6 +359,11 @@ static int virtio_chan_free(int id, void *p, void *data) vioch->ready = false; spin_unlock_irqrestore(&vioch->ready_lock, flags); + if (!vioch->is_rx && vioch->deferred_tx_wq) { + destroy_workqueue(vioch->deferred_tx_wq); + vioch->deferred_tx_wq = NULL; + } + scmi_free_channel(cinfo, data, id); spin_lock_irqsave(&vioch->lock, flags); @@ -324,7 +392,8 @@ static int virtio_send_message(struct scmi_chan_info *cinfo, } msg = list_first_entry(&vioch->free_list, typeof(*msg), list); - list_del(&msg->list); + /* Re-init element so we can discern anytime if it is still in-flight */ + list_del_init(&msg->list); msg_tx_prepare(msg->request, xfer); @@ -337,6 +406,19 @@ static int virtio_send_message(struct scmi_chan_info *cinfo, dev_err(vioch->cinfo->dev, "failed to add to TX virtqueue (%d)\n", rc); } else { + /* + * If polling was requested for this transaction: + * - retrieve last used index (will be used as polling reference) + * - bind the polled message to the xfer via .priv + */ + if (xfer->hdr.poll_completion) { + spin_lock(&msg->poll_lock); + msg->poll_idx = + virtqueue_enable_cb_prepare(vioch->vqueue); + spin_unlock(&msg->poll_lock); + /* Ensure initialized msg is visibly bound to xfer */ + smp_store_mb(xfer->priv, msg); + } virtqueue_kick(vioch->vqueue); } @@ -350,10 +432,8 @@ static void virtio_fetch_response(struct scmi_chan_info *cinfo, { struct scmi_vio_msg *msg = xfer->priv; - if (msg) { + if (msg) msg_fetch_response(msg->input, msg->rx_len, xfer); - xfer->priv = NULL; - } } static void virtio_fetch_notification(struct scmi_chan_info *cinfo, @@ -361,10 +441,165 @@ static void virtio_fetch_notification(struct scmi_chan_info *cinfo, { struct 
scmi_vio_msg *msg = xfer->priv; - if (msg) { + if (msg) msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer); - xfer->priv = NULL; +} + +/** + * virtio_mark_txdone - Mark transmission done + * + * Free only successfully completed polling transfer messages. + * + * Note that in the SCMI VirtIO transport we never explicitly release timed-out + * messages by forcibly re-adding them to the free-list, even on timeout, inside + * the TX code path; we instead let IRQ/RX callbacks eventually clean up such + * messages once, finally, a late reply is received and discarded (if ever). + * + * This approach was deemed preferable since those pending timed-out buffers are + * still effectively owned by the SCMI platform VirtIO device even after timeout + * expiration: forcibly freeing and reusing them before they had been returned + * explicitly by the SCMI platform could lead to subtle bugs due to message + * corruption. + * An SCMI platform VirtIO device which never returns message buffers is + * anyway broken and it will quickly lead to exhaustion of available messages. + * + * For this same reason, here, we take care to free only the successfully + * completed polled messages, since they won't be freed elsewhere; late replies + * to timed-out polled messages would be anyway freed by RX callbacks instead. + * + * @cinfo: SCMI channel info + * @ret: Transmission return code + * @xfer: Transfer descriptor + */ +static void virtio_mark_txdone(struct scmi_chan_info *cinfo, int ret, + struct scmi_xfer *xfer) +{ + unsigned long flags; + struct scmi_vio_channel *vioch = cinfo->transport_info; + struct scmi_vio_msg *msg = xfer->priv; + + if (!msg) + return; + + /* Ensure msg is unbound from xfer before pushing onto the free list */ + smp_store_mb(xfer->priv, NULL); + + /* Is a successfully completed polled message still to be finalized ? */ + spin_lock_irqsave(&vioch->lock, flags); + if (!ret && xfer->hdr.poll_completion && list_empty(&msg->list)) + scmi_vio_feed_vq_tx(vioch, msg); + spin_unlock_irqrestore(&vioch->lock, flags); +} + +/** + * virtio_poll_done - Provide polling support for VirtIO transport + * + * @cinfo: SCMI channel info + * @xfer: Reference to the transfer being polled for. + * + * VirtIO core provides a polling mechanism based only on last used indexes: + * this means that it is possible to poll the virtqueues waiting for something + * new to arrive from the host side but the only way to check if the freshly + * arrived buffer was what we were waiting for is to compare the newly arrived + * message descriptors with the one we are polling on. + * + * As a consequence we may dequeue a buffer different from the one we were + * poll-waiting for: if that is the case such early fetched buffers are + * then added to the @pending_cmds_list list for later processing by a + * dedicated deferred worker. + * + * So, basically, once something new is spotted we proceed to de-queue all the + * freshly received used buffers until we find the one we were polling on, or + * we have 'seemingly' emptied the virtqueue; if some buffers are still pending + * in the vqueue at the end of the polling loop (possible due to inherent races + * in virtqueue handling mechanisms), we similarly kick the deferred worker + * and let it process those, to avoid indefinitely looping in the .poll_done + * helper.
+ * + * Note that we do NOT suppress notification with VIRTQ_USED_F_NO_NOTIFY even + * when polling since such a flag is per-virtqueue and we do not want to + * suppress notifications as a whole: so, if the message we are polling for is + * delivered via the usual IRQ callbacks on another core which has IRQs on, it + * will be handled as such by scmi_rx_callback() and the polling loop in the + * SCMI Core TX path will be transparently terminated anyway. + * + * Return: True once polling has successfully completed. + */ +static bool virtio_poll_done(struct scmi_chan_info *cinfo, + struct scmi_xfer *xfer) +{ + bool pending, ret = false; + unsigned int length, any_prefetched = 0; + unsigned long flags; + struct scmi_vio_msg *next_msg, *msg = xfer->priv; + struct scmi_vio_channel *vioch = cinfo->transport_info; + + if (!msg) + return true; + + spin_lock_irqsave(&msg->poll_lock, flags); + /* Processed already by other polling loop on another CPU ? */ + if (msg->poll_idx == VIO_MSG_POLL_DONE) { + spin_unlock_irqrestore(&msg->poll_lock, flags); + return true; } + + /* Has cmdq index moved at all ? */ + pending = virtqueue_poll(vioch->vqueue, msg->poll_idx); + spin_unlock_irqrestore(&msg->poll_lock, flags); + if (!pending) + return false; + + spin_lock_irqsave(&vioch->lock, flags); + virtqueue_disable_cb(vioch->vqueue); + + /* + * If something arrived we cannot be sure, without dequeueing, if it + * was the reply to the xfer we are polling for, or, to other, even + * possibly non-polling, pending xfers: process all new messages + * till the polled-for message is found OR the vqueue is empty. + */ + while ((next_msg = virtqueue_get_buf(vioch->vqueue, &length))) { + next_msg->rx_len = length; + /* Is the message we were polling for ? */ + if (next_msg == msg) { + ret = true; + break; + } + + spin_lock(&next_msg->poll_lock); + if (next_msg->poll_idx == VIO_MSG_NOT_POLLED) { + any_prefetched++; + list_add_tail(&next_msg->list, + &vioch->pending_cmds_list); + } else { + next_msg->poll_idx = VIO_MSG_POLL_DONE; + } + spin_unlock(&next_msg->poll_lock); + } + + /* + * When the polling loop has successfully terminated, if something + * else was queued in the meantime, it will be served by a deferred + * worker OR by the normal IRQ/callback OR by other poll loops. + * + * If we are still looking for the polled reply, the polling index has + * to be updated to the current vqueue last used index.
+ */ + if (ret) { + pending = !virtqueue_enable_cb(vioch->vqueue); + } else { + spin_lock(&msg->poll_lock); + msg->poll_idx = virtqueue_enable_cb_prepare(vioch->vqueue); + pending = virtqueue_poll(vioch->vqueue, msg->poll_idx); + spin_unlock(&msg->poll_lock); + } + spin_unlock_irqrestore(&vioch->lock, flags); + + if (any_prefetched || pending) + queue_work(vioch->deferred_tx_wq, &vioch->deferred_tx_work); + + return ret; } static const struct scmi_transport_ops scmi_virtio_ops = { @@ -376,6 +611,8 @@ static const struct scmi_transport_ops scmi_virtio_ops = { .send_message = virtio_send_message, .fetch_response = virtio_fetch_response, .fetch_notification = virtio_fetch_notification, + .mark_txdone = virtio_mark_txdone, + .poll_done = virtio_poll_done, }; static int scmi_vio_probe(struct virtio_device *vdev) @@ -418,6 +655,7 @@ static int scmi_vio_probe(struct virtio_device *vdev) spin_lock_init(&channels[i].lock); spin_lock_init(&channels[i].ready_lock); INIT_LIST_HEAD(&channels[i].free_list); + INIT_LIST_HEAD(&channels[i].pending_cmds_list); channels[i].vqueue = vqs[i]; sz = virtqueue_get_vring_size(channels[i].vqueue); @@ -506,4 +744,5 @@ const struct scmi_desc scmi_virtio_desc = { .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */ .max_msg = 0, /* overridden by virtio_get_max_msg() */ .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE, + .atomic_enabled = IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_VIRTIO_ATOMIC_ENABLE), };
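For context, the SCMI core TX path consumes .poll_done() roughly as follows when hdr.poll_completion is set on an xfer; this is a simplified sketch of the core polling loop, not the exact driver.c code, with names borrowed from the surrounding structures:

/* Simplified sketch: busy-wait on .poll_done() bounded by the RX timeout */
ktime_t stop = ktime_add_ms(ktime_get(), info->desc->max_rx_timeout_ms);

spin_until_cond(info->desc->ops->poll_done(cinfo, xfer) ||
		ktime_after(ktime_get(), stop));

if (ktime_after(ktime_get(), stop)) {
	ret = -ETIMEDOUT;
} else {
	info->desc->ops->fetch_response(cinfo, xfer);
	ret = scmi_to_linux_errno(xfer->hdr.status);
}

This is also why virtio_poll_done() above must not sleep: it can be invoked repeatedly from a busy-waiting loop running in atomic context.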
From patchwork Mon Dec 20 19:56:45 2021 X-Patchwork-Submitter: Cristian Marussi X-Patchwork-Id: 12696882 From: Cristian Marussi To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: sudeep.holla@arm.com, james.quinlan@broadcom.com, Jonathan.Cameron@Huawei.com, f.fainelli@gmail.com, etienne.carriere@linaro.org, vincent.guittot@linaro.org, souvik.chakravarty@arm.com, cristian.marussi@arm.com Subject: [PATCH v8 10/11] firmware: arm_scmi: Add atomic support to clock protocol Date: Mon, 20 Dec 2021 19:56:45 +0000 Message-Id: <20211220195646.44498-11-cristian.marussi@arm.com> In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com> References: <20211220195646.44498-1-cristian.marussi@arm.com> Introduce new _atomic variants for the SCMI clock protocol enable/disable operations: when an atomic operation is required, the xfer poll_completion flag is set for that transaction. Signed-off-by: Cristian Marussi --- drivers/firmware/arm_scmi/clock.c | 22 +++++++++++++++++++--- include/linux/scmi_protocol.h | 3 +++ 2 files changed, 22 insertions(+), 3 deletions(-) diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c index 35b56c8ba0c0..72f930c0e3e2 100644 --- a/drivers/firmware/arm_scmi/clock.c +++ b/drivers/firmware/arm_scmi/clock.c @@ -273,7 +273,7 @@ static int scmi_clock_rate_set(const struct scmi_protocol_handle *ph, static int scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id, - u32 config) + u32 config, bool atomic) { int ret; struct scmi_xfer *t; @@ -284,6 +284,8 @@ scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id, if (ret) return ret; + t->hdr.poll_completion = atomic; + cfg = t->tx.buf; cfg->id = cpu_to_le32(clk_id); cfg->attributes = cpu_to_le32(config); @@ -296,12 +298,24 @@ scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id, static int scmi_clock_enable(const struct scmi_protocol_handle *ph, u32 clk_id) { - return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE); + return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE, false); } static int scmi_clock_disable(const struct scmi_protocol_handle *ph, u32 clk_id) { - return scmi_clock_config_set(ph, clk_id, 0); + return scmi_clock_config_set(ph, clk_id, 0, false); +} + +static int scmi_clock_enable_atomic(const struct scmi_protocol_handle *ph, + u32 clk_id) +{ + return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE, true); +} + +static int scmi_clock_disable_atomic(const struct scmi_protocol_handle *ph, + u32 clk_id) +{ + return scmi_clock_config_set(ph, clk_id, 0, true); } static int scmi_clock_count_get(const struct scmi_protocol_handle *ph) @@ -330,6 +344,8 @@ static const struct scmi_clk_proto_ops clk_proto_ops = { .rate_set = scmi_clock_rate_set, .enable = scmi_clock_enable, .disable = scmi_clock_disable, + .enable_atomic = scmi_clock_enable_atomic, + .disable_atomic = scmi_clock_disable_atomic, }; static int scmi_clock_protocol_init(const struct scmi_protocol_handle *ph) diff --git
a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h index 9f895cb81818..d4971c991963 100644 --- a/include/linux/scmi_protocol.h +++ b/include/linux/scmi_protocol.h @@ -82,6 +82,9 @@ struct scmi_clk_proto_ops { u64 rate); int (*enable)(const struct scmi_protocol_handle *ph, u32 clk_id); int (*disable)(const struct scmi_protocol_handle *ph, u32 clk_id); + int (*enable_atomic)(const struct scmi_protocol_handle *ph, u32 clk_id); + int (*disable_atomic)(const struct scmi_protocol_handle *ph, + u32 clk_id); }; /**
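As an illustration, a protocol user holding a scmi_clk_proto_ops pointer could then select the variant according to its calling context, since only the _atomic ops are safe where sleeping is forbidden; foo_toggle_clk and its parameters below are invented names, a sketch only:

/* Sketch: pick the polling-based variant when we cannot sleep */
static int foo_toggle_clk(const struct scmi_clk_proto_ops *ops,
			  const struct scmi_protocol_handle *ph,
			  u32 clk_id, bool enable, bool in_atomic)
{
	if (in_atomic)
		return enable ? ops->enable_atomic(ph, clk_id) :
				ops->disable_atomic(ph, clk_id);

	return enable ? ops->enable(ph, clk_id) :
			ops->disable(ph, clk_id);
}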
From patchwork Mon Dec 20 19:56:46 2021 X-Patchwork-Submitter: Cristian Marussi X-Patchwork-Id: 12696883 From: Cristian Marussi To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: sudeep.holla@arm.com, james.quinlan@broadcom.com, Jonathan.Cameron@Huawei.com, f.fainelli@gmail.com, etienne.carriere@linaro.org, vincent.guittot@linaro.org, souvik.chakravarty@arm.com, cristian.marussi@arm.com, Michael Turquette , Stephen Boyd , linux-clk@vger.kernel.org Subject: [PATCH v8 11/11] clk: scmi: Support atomic clock enable/disable API Date: Mon, 20 Dec 2021 19:56:46 +0000 Message-Id: <20211220195646.44498-12-cristian.marussi@arm.com> In-Reply-To: <20211220195646.44498-1-cristian.marussi@arm.com> References: <20211220195646.44498-1-cristian.marussi@arm.com> Also support the atomic enable/disable clk_ops, beside the bare non-atomic ones (prepare/unprepare), when the underlying SCMI transport is configured to support atomic transactions for synchronous commands. Cc: Michael Turquette Cc: Stephen Boyd Cc: linux-clk@vger.kernel.org Signed-off-by: Cristian Marussi Acked-by: Stephen Boyd --- NOTE THAT STILL THERE'S NO FINE GRAIN CONTROL OVER SELECTION OF WHICH CLOCK DEVICES CAN SUPPORT ATOMIC AND WHICH SHOULD NOT BASED ON CLOCK DEVICES ENABLE LATENCY. THIS HAS STILL TO BE ADDED IN SCMI PROTOCOL SPEC. v7 --> v8 - provide prepare/unprepare only on non-atomic transports V5 --> V6 - add concurrent availability of atomic and non-atomic reqs --- drivers/clk/clk-scmi.c | 54 +++++++++++++++++++++++++++++++++++------- 1 file changed, 45 insertions(+), 9 deletions(-) diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c index 1e357d364ca2..3da99f81ead9 100644 --- a/drivers/clk/clk-scmi.c +++ b/drivers/clk/clk-scmi.c @@ -88,21 +88,51 @@ static void scmi_clk_disable(struct clk_hw *hw) scmi_proto_clk_ops->disable(clk->ph, clk->id); } +static int scmi_clk_atomic_enable(struct clk_hw *hw) +{ + struct scmi_clk *clk = to_scmi_clk(hw); + + return scmi_proto_clk_ops->enable_atomic(clk->ph, clk->id); +} + +static void scmi_clk_atomic_disable(struct clk_hw *hw) +{ + struct scmi_clk *clk = to_scmi_clk(hw); + + scmi_proto_clk_ops->disable_atomic(clk->ph, clk->id); +} + +/* + * We can provide enable/disable atomic callbacks only if the underlying SCMI + * transport for an SCMI instance is configured to handle SCMI commands in an + * atomic manner. + * + * When no SCMI atomic transport support is available we instead provide only + * the prepare/unprepare API, as allowed by the clock framework when atomic + * calls are not available. + * + * Two distinct sets of clk_ops are provided since we could have multiple SCMI + * instances with different underlying transport quality, so they cannot be + * shared. + */ static const struct clk_ops scmi_clk_ops = { .recalc_rate = scmi_clk_recalc_rate, .round_rate = scmi_clk_round_rate, .set_rate = scmi_clk_set_rate, - /* - * We can't provide enable/disable callback as we can't perform the same - * in atomic context. Since the clock framework provides standard API - * clk_prepare_enable that helps cases using clk_enable in non-atomic - * context, it should be fine providing prepare/unprepare.
- */ .prepare = scmi_clk_enable, .unprepare = scmi_clk_disable, }; -static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk) +static const struct clk_ops scmi_atomic_clk_ops = { + .recalc_rate = scmi_clk_recalc_rate, + .round_rate = scmi_clk_round_rate, + .set_rate = scmi_clk_set_rate, + .enable = scmi_clk_atomic_enable, + .disable = scmi_clk_atomic_disable, +}; + +static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk, + const struct clk_ops *scmi_ops) { int ret; unsigned long min_rate, max_rate; @@ -110,7 +140,7 @@ static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk) struct clk_init_data init = { .flags = CLK_GET_RATE_NOCACHE, .num_parents = 0, - .ops = &scmi_clk_ops, + .ops = scmi_ops, .name = sclk->info->name, }; @@ -145,6 +175,7 @@ static int scmi_clocks_probe(struct scmi_device *sdev) struct device_node *np = dev->of_node; const struct scmi_handle *handle = sdev->handle; struct scmi_protocol_handle *ph; + const struct clk_ops *scmi_ops; if (!handle) return -ENODEV; @@ -168,6 +199,11 @@ static int scmi_clocks_probe(struct scmi_device *sdev) clk_data->num = count; hws = clk_data->hws; + if (handle->is_transport_atomic(handle)) + scmi_ops = &scmi_atomic_clk_ops; + else + scmi_ops = &scmi_clk_ops; + for (idx = 0; idx < count; idx++) { struct scmi_clk *sclk; @@ -184,7 +220,7 @@ static int scmi_clocks_probe(struct scmi_device *sdev) sclk->id = idx; sclk->ph = ph; - err = scmi_clk_ops_init(dev, sclk); + err = scmi_clk_ops_init(dev, sclk, scmi_ops); if (err) { dev_err(dev, "failed to register clock %d\n", idx); devm_kfree(dev, sclk);
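With the atomic set registered, a clock consumer can finally gate an SCMI clock from atomic context through the usual clock framework calls. A sketch of hypothetical consumer code follows (clk, lock and flags assumed to be set up elsewhere; illustrative only, since whether the atomic ops are in effect depends on the underlying transport):

/* From process context: clk_prepare() may sleep (and is a no-op here,
 * since scmi_atomic_clk_ops provides no .prepare callback) */
ret = clk_prepare(clk);
if (ret)
	return ret;

/* Later, e.g. under a spinlock or from an IRQ handler: clk_enable()
 * and clk_disable() never sleep, as the SCMI command completes by
 * polling on this transport */
spin_lock_irqsave(&lock, flags);
ret = clk_enable(clk);
if (!ret)
	clk_disable(clk);
spin_unlock_irqrestore(&lock, flags);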