From patchwork Wed Jun 8 01:16:42 2022
X-Patchwork-Submitter: Chris Lew <quic_clew@quicinc.com>
X-Patchwork-Id: 12872828
Subject: [PATCH 1/4] rpmsg: core: Add rx done hooks
From: Chris Lew <quic_clew@quicinc.com>
Date: Tue, 7 Jun 2022 18:16:42 -0700
Message-ID: <1654651005-15475-2-git-send-email-quic_clew@quicinc.com>
In-Reply-To: <1654651005-15475-1-git-send-email-quic_clew@quicinc.com>
References: <1654651005-15475-1-git-send-email-quic_clew@quicinc.com>
X-Mailing-List: linux-remoteproc@vger.kernel.org

To reduce the number of copies in the rpmsg framework, clients need to be
able to take temporary ownership of the receive buffer. Add the capability
for a client to notify the rpmsg framework and the underlying transport
when it intends to hold onto a buffer, and again when it is done with that
buffer.

An rpmsg driver that wants to keep using the received buffer after its rx
callback returns should return RPMSG_DEFER from the callback. Returning
RPMSG_HANDLED (0) instead signals the framework that the client is done
with the buffer and cleanup of its resources can proceed immediately.

Not every endpoint is able to support this operation, so clients should
first check the new rx_done state variable in the rpmsg_endpoint to see
whether the rx_done operation is available.
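To illustrate the intended flow, here is a minimal sketch of a client
(not part of the patch; the demo_* names are hypothetical) that cannot
finish processing inside the callback, so it defers the buffer and
releases it later from a work item:

#include <linux/kernel.h>
#include <linux/rpmsg.h>
#include <linux/workqueue.h>

struct demo_client {
	struct rpmsg_endpoint *ept;
	void *held_buf;
	struct work_struct work;	/* INIT_WORK(&work, demo_work_fn) at probe */
};

static int demo_rx_cb(struct rpmsg_device *rpdev, void *data, int len,
		      void *priv, u32 addr)
{
	struct demo_client *client = priv;

	/* Transports without rx_done support need the data copied out here. */
	if (!client->ept->rx_done)
		return RPMSG_HANDLED;

	/* The framework keeps @data alive until rpmsg_rx_done() is called. */
	client->held_buf = data;
	schedule_work(&client->work);

	return RPMSG_DEFER;
}

static void demo_work_fn(struct work_struct *work)
{
	struct demo_client *client = container_of(work, struct demo_client, work);

	/* ... consume client->held_buf in process context ... */

	/* Hand the buffer back so the transport can free or reuse it. */
	rpmsg_rx_done(client->ept, client->held_buf);
}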
Signed-off-by: Chris Lew <quic_clew@quicinc.com>
---
 drivers/rpmsg/rpmsg_core.c     | 20 ++++++++++++++++++++
 drivers/rpmsg/rpmsg_internal.h |  1 +
 include/linux/rpmsg.h          | 24 ++++++++++++++++++++++++
 3 files changed, 45 insertions(+)

diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
index 290c1f02da10..359be643060f 100644
--- a/drivers/rpmsg/rpmsg_core.c
+++ b/drivers/rpmsg/rpmsg_core.c
@@ -351,6 +351,26 @@ ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept)
 }
 EXPORT_SYMBOL(rpmsg_get_mtu);
 
+/**
+ * rpmsg_rx_done() - release resources related to @data from a @rx_cb
+ * @ept: the rpmsg endpoint
+ * @data: payload from a message
+ *
+ * Returns 0 on success and an appropriate error value on failure.
+ */
+int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data)
+{
+	if (WARN_ON(!ept))
+		return -EINVAL;
+	if (!ept->ops->rx_done)
+		return -ENXIO;
+	if (!ept->rx_done)
+		return -EINVAL;
+
+	return ept->ops->rx_done(ept, data);
+}
+EXPORT_SYMBOL(rpmsg_rx_done);
+
 /*
  * match a rpmsg channel with a channel info struct.
  * this is used to make sure we're not creating rpmsg devices for channels
diff --git a/drivers/rpmsg/rpmsg_internal.h b/drivers/rpmsg/rpmsg_internal.h
index a22cd4abe7d1..99cb86ce638e 100644
--- a/drivers/rpmsg/rpmsg_internal.h
+++ b/drivers/rpmsg/rpmsg_internal.h
@@ -76,6 +76,7 @@ struct rpmsg_endpoint_ops {
 	__poll_t (*poll)(struct rpmsg_endpoint *ept, struct file *filp,
 			 poll_table *wait);
 	ssize_t (*get_mtu)(struct rpmsg_endpoint *ept);
+	int (*rx_done)(struct rpmsg_endpoint *ept, void *data);
 };
 
 struct device *rpmsg_find_device(struct device *parent,
diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
index 523c98b96cb4..8e34222e8bca 100644
--- a/include/linux/rpmsg.h
+++ b/include/linux/rpmsg.h
@@ -63,6 +63,18 @@ struct rpmsg_device {
 	const struct rpmsg_device_ops *ops;
 };
 
+/**
+ * rpmsg rx callback return definitions
+ * @RPMSG_HANDLED: rpmsg user is done processing data, framework can free the
+ *                 resources related to the buffer
+ * @RPMSG_DEFER: rpmsg user is not done processing data, framework will hold
+ *               onto resources related to the buffer until rpmsg_rx_done is
+ *               called. User should check their endpoint to see if rx_done
+ *               is a supported operation.
+ */
+#define RPMSG_HANDLED	0
+#define RPMSG_DEFER	1
+
 typedef int (*rpmsg_rx_cb_t)(struct rpmsg_device *, void *, int, void *, u32);
 
 /**
@@ -71,6 +83,7 @@ typedef int (*rpmsg_rx_cb_t)(struct rpmsg_device *, void *, int, void *, u32);
  * @refcount: when this drops to zero, the ept is deallocated
  * @cb: rx callback handler
  * @cb_lock: must be taken before accessing/changing @cb
+ * @rx_done: if set, rpmsg endpoint supports rpmsg_rx_done
  * @addr: local rpmsg address
  * @priv: private data for the driver's use
  *
@@ -93,6 +106,7 @@ struct rpmsg_endpoint {
 	struct kref refcount;
 	rpmsg_rx_cb_t cb;
 	struct mutex cb_lock;
+	bool rx_done;
 	u32 addr;
 	void *priv;
 
@@ -192,6 +206,8 @@ __poll_t rpmsg_poll(struct rpmsg_endpoint *ept, struct file *filp,
 
 ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept);
 
+int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data);
+
 #else
 
 static inline int rpmsg_register_device_override(struct rpmsg_device *rpdev,
@@ -316,6 +332,14 @@ static inline ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept)
 	return -ENXIO;
 }
 
+static inline int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data)
+{
+	/* This shouldn't be possible */
+	WARN_ON(1);
+
+	return -ENXIO;
+}
+
 #endif /* IS_ENABLED(CONFIG_RPMSG) */
 
 /* use a macro to avoid include chaining to get THIS_MODULE */

From patchwork Wed Jun 8 01:16:43 2022
X-Patchwork-Submitter: Chris Lew <quic_clew@quicinc.com>
X-Patchwork-Id: 12872832
Subject: [PATCH 2/4] rpmsg: char: Add support to use rpmsg_rx_done
From: Chris Lew <quic_clew@quicinc.com>
Date: Tue, 7 Jun 2022 18:16:43 -0700
Message-ID: <1654651005-15475-3-git-send-email-quic_clew@quicinc.com>
In-Reply-To: <1654651005-15475-1-git-send-email-quic_clew@quicinc.com>
References: <1654651005-15475-1-git-send-email-quic_clew@quicinc.com>
X-Mailing-List: linux-remoteproc@vger.kernel.org
Add support to the rpmsg char driver to skip copying the payload into an
skb when the endpoint supports rpmsg_rx_done. If the endpoint supports
the rx_done operation, allocate a zero-sized skb and point its data at
the buffer handed to the rx callback. When the packet is read from the
character device, release the memory by calling rpmsg_rx_done().
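The zero-copy wrapping can be hard to follow when split across hunks, so
here is a condensed restatement of the pattern (a sketch, not part of the
patch; wrap_external_buf()/release_wrapped_skb() are hypothetical names).
The skb never owns the buffer; whoever dequeues it is responsible for
calling rpmsg_rx_done() and detaching the buffer before freeing the skb:

#include <linux/skbuff.h>

static struct sk_buff *wrap_external_buf(void *buf, int len)
{
	struct sk_buff *skb;

	/* Zero-size allocation: only the skb metadata is allocated. */
	skb = alloc_skb(0, GFP_ATOMIC);
	if (!skb)
		return NULL;

	/* Point the skb at the externally owned receive buffer. */
	skb->head = buf;
	skb->data = buf;
	skb_reset_tail_pointer(skb);
	skb_set_end_offset(skb, len);
	skb_put(skb, len);

	return skb;
}

static void release_wrapped_skb(struct sk_buff *skb)
{
	/* Detach the borrowed buffer so kfree_skb() does not free it too. */
	skb->head = NULL;
	skb->data = NULL;
	kfree_skb(skb);
}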
Signed-off-by: Chris Lew <quic_clew@quicinc.com>
Reviewed-by: Mathieu Poirier
---
 drivers/rpmsg/rpmsg_char.c | 50 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 2 deletions(-)

diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
index b6183d4f62a2..be62ddcf356c 100644
--- a/drivers/rpmsg/rpmsg_char.c
+++ b/drivers/rpmsg/rpmsg_char.c
@@ -91,8 +91,8 @@ int rpmsg_chrdev_eptdev_destroy(struct device *dev, void *data)
 }
 EXPORT_SYMBOL(rpmsg_chrdev_eptdev_destroy);
 
-static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
-			void *priv, u32 addr)
+static int rpmsg_ept_copy_cb(struct rpmsg_device *rpdev, void *buf, int len,
+			     void *priv, u32 addr)
 {
 	struct rpmsg_eptdev *eptdev = priv;
 	struct sk_buff *skb;
@@ -113,6 +113,43 @@ static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
 	return 0;
 }
 
+static int rpmsg_ept_no_copy_cb(struct rpmsg_device *rpdev, void *buf, int len,
+				void *priv, u32 addr)
+{
+	struct rpmsg_eptdev *eptdev = priv;
+	struct sk_buff *skb;
+
+	skb = alloc_skb(0, GFP_ATOMIC);
+	if (!skb)
+		return -ENOMEM;
+
+	skb->head = buf;
+	skb->data = buf;
+	skb_reset_tail_pointer(skb);
+	skb_set_end_offset(skb, len);
+	skb_put(skb, len);
+
+	spin_lock(&eptdev->queue_lock);
+	skb_queue_tail(&eptdev->queue, skb);
+	spin_unlock(&eptdev->queue_lock);
+
+	/* wake up any blocking processes, waiting for new data */
+	wake_up_interruptible(&eptdev->readq);
+
+	return RPMSG_DEFER;
+}
+
+static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
+			void *priv, u32 addr)
+{
+	struct rpmsg_eptdev *eptdev = priv;
+	rpmsg_rx_cb_t cb;
+
+	cb = (eptdev->ept->rx_done) ? rpmsg_ept_no_copy_cb : rpmsg_ept_copy_cb;
+
+	return cb(rpdev, buf, len, priv, addr);
+}
+
 static int rpmsg_eptdev_open(struct inode *inode, struct file *filp)
 {
 	struct rpmsg_eptdev *eptdev = cdev_to_eptdev(inode->i_cdev);
@@ -210,6 +247,15 @@ static ssize_t rpmsg_eptdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 	if (copy_to_iter(skb->data, use, to) != use)
 		use = -EFAULT;
 
+	if (eptdev->ept->rx_done) {
+		rpmsg_rx_done(eptdev->ept, skb->data);
+		/*
+		 * Data memory is freed by rpmsg_rx_done(), reset the skb data
+		 * pointers so kfree_skb() does not try to free a second time.
+		 */
+		skb->head = NULL;
+		skb->data = NULL;
+	}
 	kfree_skb(skb);
 
 	return use;

From patchwork Wed Jun 8 01:16:44 2022
X-Patchwork-Submitter: Chris Lew <quic_clew@quicinc.com>
X-Patchwork-Id: 12872831
Subject: [PATCH 3/4] rpmsg: glink: Try to send rx done in irq
From: Chris Lew <quic_clew@quicinc.com>
Date: Tue, 7 Jun 2022 18:16:44 -0700
Message-ID: <1654651005-15475-4-git-send-email-quic_clew@quicinc.com>
In-Reply-To: <1654651005-15475-1-git-send-email-quic_clew@quicinc.com>
References: <1654651005-15475-1-git-send-email-quic_clew@quicinc.com>
X-Mailing-List: linux-remoteproc@vger.kernel.org

Some remote processors and use cases, such as audio playback, are
sensitive to the latency of the rx done response. Try to send the rx
done command directly from irq context; if the non-blocking send fails,
fall back to deferring the rx done work as before.
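The generic shape of this fast-path/slow-path split is sketched below
with stand-in types (demo_* names are hypothetical, not glink internals).
The list_empty() check appears intended to preserve ordering: if earlier
acks are still queued for the worker, an inline send would overtake them:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct demo_ack {
	struct list_head node;
};

struct demo_channel {
	spinlock_t lock;
	struct list_head pending;	/* acks queued for the worker */
	struct work_struct work;	/* drains @pending, may block */
};

/* Non-blocking send; stub here, pretends the outgoing FIFO is full. */
static int demo_try_send_ack(struct demo_ack *ack)
{
	return -EAGAIN;
}

static void demo_ack_rx(struct demo_channel *ch, struct demo_ack *ack)
{
	int ret = -EAGAIN;

	spin_lock(&ch->lock);
	/* Only try inline when nothing is queued, to keep acks in order. */
	if (list_empty(&ch->pending))
		ret = demo_try_send_ack(ack);
	if (ret) {
		/* Fast path failed or was skipped; defer to the worker. */
		list_add_tail(&ack->node, &ch->pending);
		schedule_work(&ch->work);
	}
	spin_unlock(&ch->lock);
}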
Signed-off-by: Chris Lew <quic_clew@quicinc.com>
---
 drivers/rpmsg/qcom_glink_native.c | 60 ++++++++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 20 deletions(-)

diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
index 07586514991f..799e602113a1 100644
--- a/drivers/rpmsg/qcom_glink_native.c
+++ b/drivers/rpmsg/qcom_glink_native.c
@@ -497,12 +497,11 @@ static void qcom_glink_send_close_ack(struct qcom_glink *glink,
 	qcom_glink_tx(glink, &req, sizeof(req), NULL, 0, true);
 }
 
-static void qcom_glink_rx_done_work(struct work_struct *work)
+static int qcom_glink_send_rx_done(struct qcom_glink *glink,
+				   struct glink_channel *channel,
+				   struct glink_core_rx_intent *intent,
+				   bool wait)
 {
-	struct glink_channel *channel = container_of(work, struct glink_channel,
-						     intent_work);
-	struct qcom_glink *glink = channel->glink;
-	struct glink_core_rx_intent *intent, *tmp;
 	struct {
 		u16 id;
 		u16 lcid;
@@ -510,26 +509,41 @@
 	} __packed cmd;
 	unsigned int cid = channel->lcid;
-	unsigned int iid;
-	bool reuse;
+	unsigned int iid = intent->id;
+	bool reuse = intent->reuse;
+	int ret;
+
+	cmd.id = reuse ? RPM_CMD_RX_DONE_W_REUSE : RPM_CMD_RX_DONE;
+	cmd.lcid = cid;
+	cmd.liid = iid;
+
+	ret = qcom_glink_tx(glink, &cmd, sizeof(cmd), NULL, 0, wait);
+	if (ret)
+		return ret;
+
+	if (!reuse) {
+		kfree(intent->data);
+		kfree(intent);
+	}
+
+	return 0;
+}
+
+static void qcom_glink_rx_done_work(struct work_struct *work)
+{
+	struct glink_channel *channel = container_of(work, struct glink_channel,
+						     intent_work);
+	struct qcom_glink *glink = channel->glink;
+	struct glink_core_rx_intent *intent, *tmp;
 	unsigned long flags;
 
 	spin_lock_irqsave(&channel->intent_lock, flags);
 	list_for_each_entry_safe(intent, tmp, &channel->done_intents, node) {
 		list_del(&intent->node);
 		spin_unlock_irqrestore(&channel->intent_lock, flags);
-		iid = intent->id;
-		reuse = intent->reuse;
-		cmd.id = reuse ? RPM_CMD_RX_DONE_W_REUSE : RPM_CMD_RX_DONE;
-		cmd.lcid = cid;
-		cmd.liid = iid;
+		qcom_glink_send_rx_done(glink, channel, intent, true);
 
-		qcom_glink_tx(glink, &cmd, sizeof(cmd), NULL, 0, true);
-		if (!reuse) {
-			kfree(intent->data);
-			kfree(intent);
-		}
 		spin_lock_irqsave(&channel->intent_lock, flags);
 	}
 	spin_unlock_irqrestore(&channel->intent_lock, flags);
@@ -539,6 +553,8 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
 			       struct glink_channel *channel,
 			       struct glink_core_rx_intent *intent)
 {
+	int ret = -EAGAIN;
+
 	/* We don't send RX_DONE to intentless systems */
 	if (glink->intentless) {
 		kfree(intent->data);
@@ -555,10 +571,14 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
 
 	/* Schedule the sending of a rx_done indication */
 	spin_lock(&channel->intent_lock);
-	list_add_tail(&intent->node, &channel->done_intents);
-	spin_unlock(&channel->intent_lock);
+	if (list_empty(&channel->done_intents))
+		ret = qcom_glink_send_rx_done(glink, channel, intent, false);
 
-	schedule_work(&channel->intent_work);
+	if (ret) {
+		list_add_tail(&intent->node, &channel->done_intents);
+		schedule_work(&channel->intent_work);
+	}
+	spin_unlock(&channel->intent_lock);
 }
 
 /**

From patchwork Wed Jun 8 01:16:45 2022
X-Patchwork-Submitter: Chris Lew <quic_clew@quicinc.com>
X-Patchwork-Id: 12872830
Subject: [PATCH 4/4] rpmsg: glink: Add support for rpmsg_rx_done
From: Chris Lew <quic_clew@quicinc.com>
Date: Tue, 7 Jun 2022 18:16:45 -0700
Message-ID: <1654651005-15475-5-git-send-email-quic_clew@quicinc.com>
In-Reply-To: <1654651005-15475-1-git-send-email-quic_clew@quicinc.com>
References: <1654651005-15475-1-git-send-email-quic_clew@quicinc.com>
X-Mailing-List: linux-remoteproc@vger.kernel.org
Implement the rpmsg_rx_done hooks for glink. If a client signals that it
wants to hold onto a buffer by returning RPMSG_DEFER from its rx callback,
glink moves the corresponding intent onto a deferred-cleanup list. When
the client later calls rpmsg_rx_done(), the glink transport searches that
list for the matching buffer and releases the intent.
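One contract worth noting: the lookup below matches intents by pointer
equality against intent->data, so a client must hand back exactly the
pointer delivered to its rx callback. A hypothetical sketch (demo_hdr and
demo_release are illustrative names, not part of the patch):

#include <linux/printk.h>
#include <linux/rpmsg.h>
#include <linux/types.h>

struct demo_hdr {
	__le32 type;
	__le32 len;
};

static void demo_release(struct rpmsg_endpoint *ept, void *rx_buf)
{
	struct demo_hdr *hdr = rx_buf;
	void *payload = hdr + 1;	/* parse past the message header */

	/* ... consume payload ... */
	(void)payload;

	/* Pass back rx_buf, not payload; the defer-list match is exact. */
	if (rpmsg_rx_done(ept, rx_buf))
		pr_warn("no deferred intent matched this buffer\n");
}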
Signed-off-by: Chris Lew <quic_clew@quicinc.com>
---
 drivers/rpmsg/qcom_glink_native.c | 54 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 51 insertions(+), 3 deletions(-)

diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
index 799e602113a1..db0dcc04f393 100644
--- a/drivers/rpmsg/qcom_glink_native.c
+++ b/drivers/rpmsg/qcom_glink_native.c
@@ -146,6 +146,7 @@ enum {
 * @riids: idr of all remote intents
 * @intent_work: worker responsible for transmitting rx_done packets
 * @done_intents: list of intents that needs to be announced rx_done
+ * @defer_intents: list of intents held by the client released by rpmsg_rx_done
 * @buf: receive buffer, for gathering fragments
 * @buf_offset: write offset in @buf
 * @buf_size: size of current @buf
@@ -174,6 +175,7 @@ struct glink_channel {
 	struct idr riids;
 	struct work_struct intent_work;
 	struct list_head done_intents;
+	struct list_head defer_intents;
 
 	struct glink_core_rx_intent *buf;
 	int buf_offset;
@@ -232,6 +234,7 @@ static struct glink_channel *qcom_glink_alloc_channel(struct qcom_glink *glink,
 	init_completion(&channel->intent_req_comp);
 
 	INIT_LIST_HEAD(&channel->done_intents);
+	INIT_LIST_HEAD(&channel->defer_intents);
 	INIT_WORK(&channel->intent_work, qcom_glink_rx_done_work);
 
 	idr_init(&channel->liids);
@@ -261,6 +264,12 @@ static void qcom_glink_channel_release(struct kref *ref)
 			kfree(intent);
 		}
 	}
+	list_for_each_entry_safe(intent, tmp, &channel->defer_intents, node) {
+		if (!intent->reuse) {
+			kfree(intent->data);
+			kfree(intent);
+		}
+	}
 
 	idr_for_each_entry(&channel->liids, tmp, iid) {
 		kfree(tmp->data);
@@ -549,9 +558,10 @@ static void qcom_glink_rx_done_work(struct work_struct *work)
 	spin_unlock_irqrestore(&channel->intent_lock, flags);
 }
 
-static void qcom_glink_rx_done(struct qcom_glink *glink,
+static void __qcom_glink_rx_done(struct qcom_glink *glink,
 			       struct glink_channel *channel,
-			       struct glink_core_rx_intent *intent)
+			       struct glink_core_rx_intent *intent,
+			       bool defer)
 {
 	int ret = -EAGAIN;
 
@@ -569,6 +579,14 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
 		spin_unlock(&channel->intent_lock);
 	}
 
+	/* Move intent to defer list until client calls rpmsg_rx_done */
+	if (defer) {
+		spin_lock(&channel->intent_lock);
+		list_add_tail(&intent->node, &channel->defer_intents);
+		spin_unlock(&channel->intent_lock);
+		return;
+	}
+
 	/* Schedule the sending of a rx_done indication */
 	spin_lock(&channel->intent_lock);
 	if (list_empty(&channel->done_intents))
@@ -581,6 +599,28 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
 	spin_unlock(&channel->intent_lock);
 }
 
+static int qcom_glink_rx_done(struct rpmsg_endpoint *ept, void *data)
+{
+	struct glink_channel *channel = to_glink_channel(ept);
+	struct qcom_glink *glink = channel->glink;
+	struct glink_core_rx_intent *intent, *tmp;
+	unsigned long flags;
+
+	spin_lock_irqsave(&channel->intent_lock, flags);
+	list_for_each_entry_safe(intent, tmp, &channel->defer_intents, node) {
+		if (intent->data == data) {
+			list_del(&intent->node);
+			spin_unlock_irqrestore(&channel->intent_lock, flags);
+
+			qcom_glink_send_rx_done(glink, channel, intent, true);
+			return 0;
+		}
+	}
+	spin_unlock_irqrestore(&channel->intent_lock, flags);
+
+	return -EINVAL;
+}
+
 /**
  * qcom_glink_receive_version() - receive version/features from remote system
  *
@@ -841,6 +881,7 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
 	} __packed hdr;
 	unsigned int chunk_size;
 	unsigned int left_size;
+	bool rx_done_defer;
 	unsigned int rcid;
 	unsigned int liid;
 	int ret = 0;
@@ -935,7 +976,12 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
 		intent->offset = 0;
 		channel->buf = NULL;
 
-		qcom_glink_rx_done(glink, channel, intent);
+		if (channel->ept.rx_done && ret == RPMSG_DEFER)
+			rx_done_defer = true;
+		else
+			rx_done_defer = false;
+
+		__qcom_glink_rx_done(glink, channel, intent, rx_done_defer);
 	}
 
 advance_rx:
@@ -1212,6 +1258,7 @@ static struct rpmsg_endpoint *qcom_glink_create_ept(struct rpmsg_device *rpdev,
 	ept->cb = cb;
 	ept->priv = priv;
 	ept->ops = &glink_endpoint_ops;
+	ept->rx_done = true;
 
 	return ept;
 }
@@ -1462,6 +1509,7 @@ static const struct rpmsg_endpoint_ops glink_endpoint_ops = {
 	.sendto = qcom_glink_sendto,
 	.trysend = qcom_glink_trysend,
 	.trysendto = qcom_glink_trysendto,
+	.rx_done = qcom_glink_rx_done,
 };
 
 static void qcom_glink_rpdev_release(struct device *dev)