From patchwork Wed Sep 16 19:56:04 2020
X-Patchwork-Submitter: Hemant Kumar <hemantk@codeaurora.org>
X-Patchwork-Id: 11780689
From: Hemant Kumar <hemantk@codeaurora.org>
To: manivannan.sadhasivam@linaro.org, gregkh@linuxfoundation.org
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
    jhugo@codeaurora.org, bbhatt@codeaurora.org,
    Hemant Kumar <hemantk@codeaurora.org>
Subject: [PATCH v6 1/4] bus: mhi: core: Add helper API to return number of free TREs
Date: Wed, 16 Sep 2020 12:56:04 -0700
Message-Id: <1600286167-4432-2-git-send-email-hemantk@codeaurora.org>
In-Reply-To: <1600286167-4432-1-git-send-email-hemantk@codeaurora.org>
References: <1600286167-4432-1-git-send-email-hemantk@codeaurora.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Introduce the mhi_get_no_free_descriptors() API to return the number of TREs
(transfer ring elements) available for queueing buffers. MHI client drivers
can use this API to know beforehand whether the transfer ring is full, without
calling the queue API.

Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
---
 drivers/bus/mhi/core/main.c | 12 ++++++++++++
 include/linux/mhi.h         |  9 +++++++++
 2 files changed, 21 insertions(+)

diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 2cff5dd..0599e7d 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -258,6 +258,18 @@ int mhi_destroy_device(struct device *dev, void *data)
 	return 0;
 }
 
+int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
+				enum dma_data_direction dir)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ?
+		mhi_dev->ul_chan : mhi_dev->dl_chan;
+	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+
+	return get_nr_avail_ring_elements(mhi_cntrl, tre_ring);
+}
+EXPORT_SYMBOL_GPL(mhi_get_no_free_descriptors);
+
 void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
 {
 	struct mhi_driver *mhi_drv;
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index a35d876..6565528 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -600,6 +600,15 @@ void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl,
 void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason);
 
 /**
+ * mhi_get_no_free_descriptors - Get the number of TREs (transfer ring
+ *				 elements) available to queue buffers
+ * @mhi_dev: Device associated with the channels
+ * @dir: Direction of the channel
+ */
+int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
+				enum dma_data_direction dir);
+
+/**
  * mhi_prepare_for_power_up - Do pre-initialization before power up.
  *                            This is optional, call this before power up if
  *                            the controller does not want bus framework to
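
For illustration only (not part of this patch): a minimal sketch of how an MHI
client driver might use the new helper to back off instead of letting the queue
API fail when the transfer ring is full. The function name example_client_send()
is hypothetical; mhi_queue_buf() and MHI_EOT are the existing client APIs used
elsewhere in this series.

/*
 * Hypothetical MHI client sketch: check for a free TRE before queueing so the
 * caller can wait and retry instead of handling a queue failure.
 */
#include <linux/dma-direction.h>
#include <linux/errno.h>
#include <linux/mhi.h>

static int example_client_send(struct mhi_device *mhi_dev, void *buf,
                               size_t len)
{
        /* Ask the core how many TREs are free on the UL transfer ring */
        if (mhi_get_no_free_descriptors(mhi_dev, DMA_TO_DEVICE) <= 0)
                return -EAGAIN; /* ring full, caller may wait and retry */

        /* At least one TRE was available, queue the buffer */
        return mhi_queue_buf(mhi_dev, DMA_TO_DEVICE, buf, len, MHI_EOT);
}

Note the check is advisory: the ring state can change between the check and
mhi_queue_buf(), so callers still need to handle a queue failure.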
From patchwork Wed Sep 16 19:56:05 2020
X-Patchwork-Submitter: Hemant Kumar <hemantk@codeaurora.org>
X-Patchwork-Id: 11780693
From: Hemant Kumar <hemantk@codeaurora.org>
To: manivannan.sadhasivam@linaro.org, gregkh@linuxfoundation.org
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
    jhugo@codeaurora.org, bbhatt@codeaurora.org,
    Hemant Kumar <hemantk@codeaurora.org>
Subject: [PATCH v6 2/4] bus: mhi: core: Move MHI_MAX_MTU to external header file
Date: Wed, 16 Sep 2020 12:56:05 -0700
Message-Id: <1600286167-4432-3-git-send-email-hemantk@codeaurora.org>
In-Reply-To: <1600286167-4432-1-git-send-email-hemantk@codeaurora.org>
References: <1600286167-4432-1-git-send-email-hemantk@codeaurora.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Currently this macro is defined in the internal MHI header as a TRE length
mask. Moving it to the external header allows MHI client drivers to use it as
the upper bound for the transmit buffer size.

Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/core/internal.h | 1 -
 include/linux/mhi.h             | 3 +++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index 7989269..4abf0cf 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -453,7 +453,6 @@ enum mhi_pm_state {
 #define CMD_EL_PER_RING 128
 #define PRIMARY_CMD_RING 0
 #define MHI_DEV_WAKE_DB 127
-#define MHI_MAX_MTU 0xffff
 #define MHI_RANDOM_U32_NONZERO(bmsk) (prandom_u32_max(bmsk) + 1)
 
 enum mhi_er_type {
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 6565528..610f3b0 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -16,6 +16,9 @@
 #include
 #include
 
+/* MHI client drivers to use this as the upper bound for the tx buffer size */
+#define MHI_MAX_MTU 0xffff
+
 #define MHI_MAX_OEM_PK_HASH_SEGMENTS 16
 
 struct mhi_chan;
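
For illustration only (not part of this patch): with MHI_MAX_MTU now visible
through include/linux/mhi.h, a client driver can clamp its transmit buffer size
to the TRE limit, mirroring the min_t() clamp the UCI driver in patch 4 applies
in its probe path. The helper name example_alloc_tx_buf() is hypothetical.

/*
 * Hypothetical client sketch: bound the MTU advertised through the device id
 * table by MHI_MAX_MTU before allocating the transmit buffer.
 */
#include <linux/kernel.h>
#include <linux/mhi.h>
#include <linux/mod_devicetable.h>
#include <linux/slab.h>

static void *example_alloc_tx_buf(const struct mhi_device_id *id, size_t *mtu)
{
        /* id->driver_data carries the client's preferred MTU */
        *mtu = min_t(size_t, id->driver_data, MHI_MAX_MTU);

        return kmalloc(*mtu, GFP_KERNEL);
}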
From patchwork Wed Sep 16 19:56:06 2020
X-Patchwork-Submitter: Hemant Kumar <hemantk@codeaurora.org>
X-Patchwork-Id: 11780697
From: Hemant Kumar <hemantk@codeaurora.org>
To: manivannan.sadhasivam@linaro.org, gregkh@linuxfoundation.org
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
    jhugo@codeaurora.org, bbhatt@codeaurora.org,
    Hemant Kumar <hemantk@codeaurora.org>
Subject: [PATCH v6 3/4] docs: Add documentation for userspace client interface
Date: Wed, 16 Sep 2020 12:56:06 -0700
Message-Id: <1600286167-4432-4-git-send-email-hemantk@codeaurora.org>
In-Reply-To: <1600286167-4432-1-git-send-email-hemantk@codeaurora.org>
References: <1600286167-4432-1-git-send-email-hemantk@codeaurora.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

The MHI userspace client (UCI) driver creates a device file node that user
applications use to perform file operations. The file operations are handled
by the MHI core driver. Currently the LOOPBACK MHI channel is supported by
this driver.

Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
---
 Documentation/mhi/index.rst |  1 +
 Documentation/mhi/uci.rst   | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100644 Documentation/mhi/uci.rst

diff --git a/Documentation/mhi/index.rst b/Documentation/mhi/index.rst
index 1d8dec3..c75a371 100644
--- a/Documentation/mhi/index.rst
+++ b/Documentation/mhi/index.rst
@@ -9,6 +9,7 @@ MHI
 
    mhi
    topology
+   uci
 
 .. only:: subproject and html
 
diff --git a/Documentation/mhi/uci.rst b/Documentation/mhi/uci.rst
new file mode 100644
index 0000000..5d92939
--- /dev/null
+++ b/Documentation/mhi/uci.rst
@@ -0,0 +1,39 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================
+Userspace Client Interface (UCI)
+================================
+
+The UCI driver enables userspace clients to communicate with external MHI
+devices such as modems and WLAN adapters. It creates standard character device
+file nodes on which userspace clients can perform open, read, write, poll and
+close file operations.
+
+The device file node is created with the format::
+
+	/dev/mhi_<controller_name>_<mhi_device_name>
+
+controller_name is the name of the underlying bus used to transfer data.
+mhi_device_name is the name of the MHI channel used by the userspace MHI
+client to send or receive data using the MHI protocol.
+
+A separate character device file node is created for each channel specified
+in the MHI device id table. MHI channels are statically defined by the MHI
+specification. The driver currently supports LOOPBACK channels 0 (host to
+device) and 1 (device to host).
+
+LOOPBACK Channel
+----------------
+
+A userspace MHI client using the LOOPBACK channel opens the device file node.
+As part of the open operation, TREs are queued to the transfer ring of
+LOOPBACK channel 1 and the channel doorbell is rung. When the userspace client
+performs a write operation on the device node, the data buffer is queued as a
+TRE to the transfer ring of LOOPBACK channel 0 and the MHI core driver rings
+the channel doorbell so the MHI device can move the data over the underlying
+bus. When the userspace client performs a read operation, the same data is
+looped back to the MHI host over LOOPBACK channel 1. The LOOPBACK channel is
+used to verify the data path and data integrity between the MHI host and the
+MHI device.
+
+Other Use Cases
+---------------
+
+Getting MHI device specific diagnostics information to a userspace MHI diag
+client using DIAG channels 4 (host to device) and 5 (device to host).
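
For illustration only (not part of this patch set): a small userspace sketch of
the LOOPBACK flow described above — open the node, write a buffer to channel 0,
and read the looped-back copy from channel 1. The device node path is an
example; the actual controller name depends on the target system.

/* Userspace LOOPBACK sketch; the node name below is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const char msg[] = "loopback test";
        char rsp[sizeof(msg)] = { 0 };
        int fd = open("/dev/mhi_0000:01:00.0_LOOPBACK", O_RDWR);

        if (fd < 0)
                return 1;

        /* write() queues the buffer on LOOPBACK channel 0 (host to device),
         * read() blocks until the device loops it back on channel 1 */
        if (write(fd, msg, sizeof(msg)) == (ssize_t)sizeof(msg) &&
            read(fd, rsp, sizeof(rsp)) > 0)
                printf("looped back: %s\n", rsp);

        close(fd);
        return 0;
}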
From patchwork Wed Sep 16 19:56:07 2020
X-Patchwork-Submitter: Hemant Kumar <hemantk@codeaurora.org>
X-Patchwork-Id: 11780691
From: Hemant Kumar <hemantk@codeaurora.org>
To: manivannan.sadhasivam@linaro.org, gregkh@linuxfoundation.org
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
    jhugo@codeaurora.org, bbhatt@codeaurora.org,
    Hemant Kumar <hemantk@codeaurora.org>
Subject: [PATCH v6 4/4] bus: mhi: Add userspace client interface driver
Date: Wed, 16 Sep 2020 12:56:07 -0700
Message-Id: <1600286167-4432-5-git-send-email-hemantk@codeaurora.org>
In-Reply-To: <1600286167-4432-1-git-send-email-hemantk@codeaurora.org>
References: <1600286167-4432-1-git-send-email-hemantk@codeaurora.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

This MHI client driver allows userspace clients to transfer raw data between
an MHI device and the host using standard file operations. The device file
node is created with the format

  /dev/mhi_<controller_name>_<mhi_device_name>

Currently it supports the LOOPBACK channel.

Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
---
 drivers/bus/mhi/Kconfig  |  13 +
 drivers/bus/mhi/Makefile |   4 +
 drivers/bus/mhi/uci.c    | 657 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 674 insertions(+)
 create mode 100644 drivers/bus/mhi/uci.c

diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
index 6a217ff..8aebe8b 100644
--- a/drivers/bus/mhi/Kconfig
+++ b/drivers/bus/mhi/Kconfig
@@ -20,3 +20,16 @@ config MHI_BUS_DEBUG
 	 Enable debugfs support for use with the MHI transport. Allows
 	 reading and/or modifying some values within the MHI controller
 	 for debug and test purposes.
+
+config MHI_UCI
+	tristate "MHI UCI"
+	depends on MHI_BUS
+	help
+	  The MHI based userspace client interface driver is used for
+	  transferring raw data between host and device using standard file
+	  operations from userspace. Open, read, write, and close operations
+	  are supported by this driver. Please check mhi_uci_match_table for
+	  all supported channels that are exposed to userspace.
+
+	  To compile this driver as a module, choose M here: the module will
+	  be called mhi_uci.
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 19e6443..80feefb 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -1,2 +1,6 @@
 # core layer
 obj-y += core/
+
+# MHI client
+mhi_uci-y := uci.o
+obj-$(CONFIG_MHI_UCI) += mhi_uci.o
diff --git a/drivers/bus/mhi/uci.c b/drivers/bus/mhi/uci.c
new file mode 100644
index 0000000..d6758f2
--- /dev/null
+++ b/drivers/bus/mhi/uci.c
@@ -0,0 +1,657 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. */
+
+#include <linux/kernel.h>
+#include <linux/mhi.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/poll.h>
+
+#define DEVICE_NAME "mhi"
+#define MHI_UCI_DRIVER_NAME "mhi_uci"
+#define MAX_UCI_MINORS (128)
+
+static DEFINE_IDR(uci_idr);
+static DEFINE_MUTEX(uci_idr_mutex);
+static struct class *uci_dev_class;
+static int uci_dev_major;
+
+/**
+ * struct uci_chan - MHI channel for a uci device
+ * @wq: wait queue for reader/writer
+ * @lock: spin lock
+ * @pending: list of rx buffers userspace is waiting to read
+ * @cur_buf: current buffer userspace is reading
+ * @rx_size: size of the current rx buffer userspace is reading
+ */
+struct uci_chan {
+        wait_queue_head_t wq;
+
+        /* protects pending and cur_buf members in bh context */
+        spinlock_t lock;
+
+        struct list_head pending;
+        struct uci_buf *cur_buf;
+        size_t rx_size;
+};
+
+/**
+ * struct uci_buf - uci buffer
+ * @data: data buffer
+ * @len: length of data buffer
+ * @node: list node of the uci buffer
+ */
+struct uci_buf {
+        void *data;
+        size_t len;
+        struct list_head node;
+};
+
+/**
+ * struct uci_dev - MHI uci device
+ * @minor: uci device node minor number
+ * @mhi_dev: associated mhi device object
+ * @chan: MHI channel name
+ * @lock: mutex lock
+ * @ul_chan: uplink uci channel object
+ * @dl_chan: downlink uci channel object
+ * @mtu: max tx buffer length
+ * @actual_mtu: maximum size of incoming buffer
+ * @enabled: uci device probed
+ * @ref_count: uci_dev reference count
+ */
+struct uci_dev {
+        unsigned int minor;
+        struct mhi_device *mhi_dev;
+        const char *chan;
+
+        /* protects uci_dev struct members */
+        struct mutex lock;
+
+        struct uci_chan ul_chan;
+        struct uci_chan dl_chan;
+        size_t mtu;
+        size_t actual_mtu;
+        bool enabled;
+        struct kref ref_count;
+};
+
+static int mhi_queue_inbound(struct uci_dev *udev)
+{
+        struct mhi_device *mhi_dev = udev->mhi_dev;
+        struct device *dev = &mhi_dev->dev;
+        size_t mtu = udev->mtu;
+        size_t actual_mtu = udev->actual_mtu;
+        int nr_trbs, i, ret = -EIO;
+        void *buf;
+        struct uci_buf *uci_buf;
+
+        nr_trbs = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
+
+        for (i = 0; i < nr_trbs; i++) {
+                buf = kmalloc(mtu, GFP_KERNEL);
+                if (!buf)
+                        return -ENOMEM;
+
+                uci_buf = buf + actual_mtu;
+                uci_buf->data = buf;
+
+                dev_dbg(dev, "Allocated buf %d of %d size %ld\n", i, nr_trbs,
+                        actual_mtu);
+
+                ret = mhi_queue_buf(mhi_dev, DMA_FROM_DEVICE, buf, actual_mtu,
+                                    MHI_EOT);
+                if (ret) {
+                        kfree(buf);
+                        dev_err(dev, "Failed to queue buffer %d\n", i);
+                        return ret;
+                }
+        }
+
+        return ret;
+}
+
+static void mhi_uci_dev_release(struct kref *ref)
+{
+        struct uci_dev *udev =
+                container_of(ref, struct uci_dev, ref_count);
+
+        mutex_destroy(&udev->lock);
+
+        dev_set_drvdata(&udev->mhi_dev->dev, NULL);
+
+        kfree(udev);
+}
+
+static int mhi_uci_release(struct inode *inode, struct file *file)
+{
+        struct uci_dev *udev = file->private_data;
+        struct uci_buf *itr, *tmp;
+        struct uci_chan *uchan;
+
+        if (kref_read(&udev->ref_count) > 2)
+                goto exit_uci_release;
+
+        if (udev->enabled)
+                mhi_unprepare_from_transfer(udev->mhi_dev);
+
+        /* clean inbound channel */
+        uchan = &udev->dl_chan;
+
+        spin_lock_bh(&uchan->lock);
+        list_for_each_entry_safe(itr, tmp, &uchan->pending, node) {
+                list_del(&itr->node);
+                kfree(itr->data);
+        }
+
+        if (uchan->cur_buf)
+                kfree(uchan->cur_buf->data);
+
+        uchan->cur_buf = NULL;
+        spin_unlock_bh(&uchan->lock);
+
+exit_uci_release:
+        kref_put(&udev->ref_count, mhi_uci_dev_release);
+
+        return 0;
+}
+
+static __poll_t mhi_uci_poll(struct file *file, poll_table *wait)
+{
+        struct uci_dev *udev = file->private_data;
+        struct mhi_device *mhi_dev = udev->mhi_dev;
+        struct device *dev = &mhi_dev->dev;
+        struct uci_chan *uchan;
+        __poll_t mask = 0;
+
+        poll_wait(file, &udev->dl_chan.wq, wait);
+        poll_wait(file, &udev->ul_chan.wq, wait);
+
+        if (!udev->enabled) {
+                mask = EPOLLERR;
+        } else {
+                uchan = &udev->dl_chan;
+                spin_lock_bh(&uchan->lock);
+                if (!list_empty(&uchan->pending) || uchan->cur_buf) {
+                        dev_dbg(dev, "Client can read from node\n");
+                        mask |= EPOLLIN | EPOLLRDNORM;
+                }
+                spin_unlock_bh(&uchan->lock);
+        }
+
+        if (!udev->enabled) {
+                mask |= EPOLLERR;
+        } else if (mhi_get_no_free_descriptors(mhi_dev, DMA_TO_DEVICE) > 0) {
+                dev_dbg(dev, "Client can write to node\n");
+                mask |= EPOLLOUT | EPOLLWRNORM;
+        }
+
+        dev_dbg(dev, "Client attempted to poll, returning mask 0x%x\n", mask);
+
+        return mask;
+}
+
+static ssize_t mhi_uci_write(struct file *file,
+                             const char __user *buf,
+                             size_t count,
+                             loff_t *offp)
+{
+        struct uci_dev *udev = file->private_data;
+        struct mhi_device *mhi_dev = udev->mhi_dev;
+        struct device *dev = &mhi_dev->dev;
+        struct uci_chan *uchan = &udev->ul_chan;
+        size_t bytes_xfered = 0;
+        int ret, nr_avail = 0;
+
+        if (!buf || !count)
+                return -EINVAL;
+
+        /* confirm channel is active */
+        mutex_lock(&udev->lock);
+        if (!udev->enabled) {
+                ret = -ENODEV;
+                goto err_mtx_unlock;
+        }
+
+        dev_dbg(dev, "%s: to xfer: %lu bytes\n", __func__, count);
+
+        while (count) {
+                size_t xfer_size;
+                void *kbuf;
+                enum mhi_flags flags;
+
+                mutex_unlock(&udev->lock);
+                /* wait for free descriptors */
+                ret = wait_event_interruptible(uchan->wq,
+                                (!udev->enabled) ||
+                                (nr_avail = mhi_get_no_free_descriptors(mhi_dev,
+                                                        DMA_TO_DEVICE)) > 0);
+
+                mutex_lock(&udev->lock);
+                if (ret == -ERESTARTSYS) {
+                        dev_dbg(dev, "Exit signal caught for node\n");
+                        goto err_mtx_unlock;
+                }
+
+                if (!udev->enabled) {
+                        ret = -ENODEV;
+                        goto err_mtx_unlock;
+                }
+
+                xfer_size = min_t(size_t, count, udev->mtu);
+                kbuf = kmalloc(xfer_size, GFP_KERNEL);
+                if (!kbuf) {
+                        ret = -ENOMEM;
+                        goto err_mtx_unlock;
+                }
+
+                ret = copy_from_user(kbuf, buf, xfer_size);
+                if (ret) {
+                        kfree(kbuf);
+                        ret = -EFAULT;
+                        goto err_mtx_unlock;
+                }
+
+                /* if ring is full after this force EOT */
+                if (nr_avail > 1 && (count - xfer_size))
+                        flags = MHI_CHAIN;
+                else
+                        flags = MHI_EOT;
+
+                if (udev->enabled)
+                        ret = mhi_queue_buf(mhi_dev, DMA_TO_DEVICE, kbuf,
+                                            xfer_size, flags);
+                else
+                        ret = -ENODEV;
+
+                if (ret) {
+                        kfree(kbuf);
+                        goto err_mtx_unlock;
+                }
+
+                bytes_xfered += xfer_size;
+                count -= xfer_size;
+                buf += xfer_size;
+        }
+
+        mutex_unlock(&udev->lock);
+        dev_dbg(dev, "%s: bytes xferred: %lu\n", __func__, bytes_xfered);
+
+        return bytes_xfered;
+
+err_mtx_unlock:
+        mutex_unlock(&udev->lock);
+
+        return ret;
+}
+
+static ssize_t mhi_uci_read(struct file *file,
+                            char __user *buf,
+                            size_t count,
+                            loff_t *ppos)
+{
+        struct uci_dev *udev = file->private_data;
+        struct mhi_device *mhi_dev = udev->mhi_dev;
+        struct uci_chan *uchan = &udev->dl_chan;
+        struct device *dev = &mhi_dev->dev;
+        struct uci_buf *ubuf;
+        char *ptr;
+        size_t to_copy;
+        int ret = 0;
+
+        if (!buf)
+                return -EINVAL;
+
+        mutex_lock(&udev->lock);
+        /* confirm channel is active */
+        if (!udev->enabled) {
+                ret = -ENODEV;
+                goto err_mtx_unlock;
+        }
+
+        spin_lock_bh(&uchan->lock);
+        /* No data available to read, wait */
+        if (!uchan->cur_buf && list_empty(&uchan->pending)) {
+                dev_dbg(dev, "No data available to read waiting\n");
+
+                spin_unlock_bh(&uchan->lock);
+                mutex_unlock(&udev->lock);
+                ret = wait_event_interruptible(uchan->wq,
+                                               (!udev->enabled ||
+                                                !list_empty(&uchan->pending)));
+
+                mutex_lock(&udev->lock);
+                if (ret == -ERESTARTSYS) {
+                        dev_dbg(dev, "Exit signal caught for node\n");
+                        goto err_mtx_unlock;
+                }
+
+                if (!udev->enabled) {
+                        ret = -ENODEV;
+                        goto err_mtx_unlock;
+                }
+                spin_lock_bh(&uchan->lock);
+        }
+
+        /* new read, get the next descriptor from the list */
+        if (!uchan->cur_buf) {
+                ubuf = list_first_entry_or_null(&uchan->pending,
+                                                struct uci_buf, node);
+                if (!ubuf) {
+                        ret = -EIO;
+                        goto err_spin_unlock;
+                }
+
+                list_del(&ubuf->node);
+                uchan->cur_buf = ubuf;
+                uchan->rx_size = ubuf->len;
+                dev_dbg(dev, "Got pkt of size: %zu\n", uchan->rx_size);
+        }
+
+        ubuf = uchan->cur_buf;
+
+        /* Copy the buffer to user space */
+        to_copy = min_t(size_t, count, uchan->rx_size);
+        ptr = ubuf->data + (ubuf->len - uchan->rx_size);
+        spin_unlock_bh(&uchan->lock);
+
+        ret = copy_to_user(buf, ptr, to_copy);
+        if (ret) {
+                ret = -EFAULT;
+                goto err_mtx_unlock;
+        }
+
+        spin_lock_bh(&uchan->lock);
+
+        dev_dbg(dev, "Copied %lu of %lu bytes\n", to_copy, uchan->rx_size);
+        uchan->rx_size -= to_copy;
+
+        /* we finished with this buffer, queue it back to hardware */
+        if (!uchan->rx_size) {
+                uchan->cur_buf = NULL;
+
+                if (udev->enabled)
+                        ret = mhi_queue_buf(mhi_dev, DMA_FROM_DEVICE,
+                                            ubuf->data,
+                                            udev->actual_mtu, MHI_EOT);
+                else
+                        ret = -ENODEV;
+
+                if (ret) {
+                        dev_err(dev, "Failed to recycle element: %d\n", ret);
+                        kfree(ubuf->data);
+                        goto err_spin_unlock;
+                }
+        }
+        spin_unlock_bh(&uchan->lock);
+        mutex_unlock(&udev->lock);
+
+        dev_dbg(dev, "%s: Returning %lu bytes\n", __func__, to_copy);
+
+        return to_copy;
+
+err_spin_unlock:
+        spin_unlock_bh(&uchan->lock);
+err_mtx_unlock:
+        mutex_unlock(&udev->lock);
+        return ret;
+}
+
+static int mhi_uci_open(struct inode *inode, struct file *filp)
+{
+        struct uci_dev *udev = NULL;
+        unsigned int minor = iminor(inode);
+        int ret = -EIO;
+        struct uci_buf *buf_itr, *tmp;
+        struct uci_chan *dl_chan;
+        struct mhi_device *mhi_dev;
+        struct device *dev;
+
+        mutex_lock(&uci_idr_mutex);
+        udev = idr_find(&uci_idr, minor);
+        mutex_unlock(&uci_idr_mutex);
+        if (!udev) {
+                pr_err("uci dev: minor %d not found\n", minor);
+                ret = -ENODEV;
+                goto error_no_dev;
+        }
+
+        kref_get(&udev->ref_count);
+
+        mhi_dev = udev->mhi_dev;
+        dev = &mhi_dev->dev;
+
+        mutex_lock(&udev->lock);
+        if (kref_read(&udev->ref_count) > 2) {
+                dev_dbg(dev, "Node already opened\n");
+                goto exit_uci_open;
+        }
+
+        if (!udev->enabled) {
+                dev_info(dev, "Node exists, but is not in active state!\n");
+                goto error_open_chan;
+        }
+
+        dev_dbg(dev, "Starting channel\n");
+        ret = mhi_prepare_for_transfer(udev->mhi_dev);
+        if (ret) {
+                dev_err(dev, "Error starting transfer channels\n");
+                goto error_open_chan;
+        }
+
+        ret = mhi_queue_inbound(udev);
+        if (ret)
+                goto error_rx_queue;
+
+exit_uci_open:
+        filp->private_data = udev;
+        mutex_unlock(&udev->lock);
+
+        return 0;
+
+error_rx_queue:
+        dl_chan = &udev->dl_chan;
+        mhi_unprepare_from_transfer(udev->mhi_dev);
+        list_for_each_entry_safe(buf_itr, tmp, &dl_chan->pending, node) {
+                list_del(&buf_itr->node);
+                kfree(buf_itr->data);
+        }
+error_open_chan:
+        mutex_unlock(&udev->lock);
+        kref_put(&udev->ref_count, mhi_uci_dev_release);
+error_no_dev:
+        return ret;
+}
+
+static const struct file_operations mhidev_fops = {
+        .owner = THIS_MODULE,
+        .open = mhi_uci_open,
+        .release = mhi_uci_release,
+        .read = mhi_uci_read,
+        .write = mhi_uci_write,
+        .poll = mhi_uci_poll,
+};
+
+static void mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
+                           struct mhi_result *mhi_result)
+{
+        struct uci_dev *udev = dev_get_drvdata(&mhi_dev->dev);
+        struct uci_chan *uchan = &udev->ul_chan;
+        struct device *dev = &mhi_dev->dev;
+
+        dev_dbg(dev, "status: %d xfer_len: %zu\n",
+                mhi_result->transaction_status, mhi_result->bytes_xferd);
+
+        kfree(mhi_result->buf_addr);
+
+        if (!mhi_result->transaction_status)
+                wake_up(&uchan->wq);
+}
+
+static void mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
+                           struct mhi_result *mhi_result)
+{
+        struct uci_dev *udev = dev_get_drvdata(&mhi_dev->dev);
+        struct uci_chan *uchan = &udev->dl_chan;
+        struct device *dev = &mhi_dev->dev;
+        struct uci_buf *buf;
+
+        dev_dbg(dev, "status: %d receive_len: %zu\n",
+                mhi_result->transaction_status, mhi_result->bytes_xferd);
+
+        if (mhi_result->transaction_status == -ENOTCONN) {
+                kfree(mhi_result->buf_addr);
+                return;
+        }
+
+        spin_lock_bh(&uchan->lock);
+        buf = mhi_result->buf_addr + udev->actual_mtu;
+        buf->data = mhi_result->buf_addr;
+        buf->len = mhi_result->bytes_xferd;
+        list_add_tail(&buf->node, &uchan->pending);
+        spin_unlock_bh(&uchan->lock);
+
+        wake_up(&uchan->wq);
+}
+
+static int mhi_uci_probe(struct mhi_device *mhi_dev,
+                         const struct mhi_device_id *id)
+{
+        struct uci_dev *udev;
+        struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+        struct device *dev;
+        int index, dir;
+
+        udev = kzalloc(sizeof(*udev), GFP_KERNEL);
+        if (!udev)
+                return -ENOMEM;
+
+        kref_init(&udev->ref_count);
+        mutex_init(&udev->lock);
+        udev->mhi_dev = mhi_dev;
+
+        mutex_lock(&udev->lock);
+
+        mutex_lock(&uci_idr_mutex);
+        index = idr_alloc(&uci_idr, udev, 0, MAX_UCI_MINORS, GFP_KERNEL);
+        mutex_unlock(&uci_idr_mutex);
+        if (index < 0) {
+                mutex_unlock(&udev->lock);
+                kfree(udev);
+                return index;
+        }
+
+        udev->minor = index;
+
+        /* create device file node /dev/mhi_<controller_name>_<mhi_device_name> */
+        dev = device_create(uci_dev_class, &mhi_dev->dev,
+                            MKDEV(uci_dev_major, index), udev,
+                            DEVICE_NAME "_%s_%s",
+                            dev_name(mhi_cntrl->cntrl_dev), mhi_dev->name);
+        if (IS_ERR(dev)) {
+                mutex_lock(&uci_idr_mutex);
+                idr_remove(&uci_idr, udev->minor);
+                mutex_unlock(&uci_idr_mutex);
+                mutex_unlock(&udev->lock);
+                kfree(udev);
+                return PTR_ERR(dev);
+        }
+
+        for (dir = 0; dir < 2; dir++) {
+                struct uci_chan *uchan = (dir) ?
+                        &udev->ul_chan : &udev->dl_chan;
+
+                spin_lock_init(&uchan->lock);
+                init_waitqueue_head(&uchan->wq);
+                INIT_LIST_HEAD(&uchan->pending);
+        }
+
+        udev->mtu = min_t(size_t, id->driver_data, MHI_MAX_MTU);
+        udev->actual_mtu = udev->mtu - sizeof(struct uci_buf);
+        dev_set_drvdata(&mhi_dev->dev, udev);
+        udev->enabled = true;
+
+        mutex_unlock(&udev->lock);
+
+        dev_info(&mhi_dev->dev, "probed uci dev: minor %d\n", index);
+
+        return 0;
+}
+
+static void mhi_uci_remove(struct mhi_device *mhi_dev)
+{
+        struct uci_dev *udev = dev_get_drvdata(&mhi_dev->dev);
+
+        mutex_lock(&udev->lock);
+
+        /* disable the node */
+        udev->enabled = false;
+
+        wake_up(&udev->dl_chan.wq);
+        wake_up(&udev->ul_chan.wq);
+
+        /* delete the node to prevent new opens */
+        device_destroy(uci_dev_class, MKDEV(uci_dev_major, udev->minor));
+
+        mutex_lock(&uci_idr_mutex);
+        idr_remove(&uci_idr, udev->minor);
+        mutex_unlock(&uci_idr_mutex);
+
+        mutex_unlock(&udev->lock);
+
+        kref_put(&udev->ref_count, mhi_uci_dev_release);
+}
+
+/* .driver_data stores max mtu */
+static const struct mhi_device_id mhi_uci_match_table[] = {
+        { .chan = "LOOPBACK", .driver_data = 0x1000},
+        {},
+};
+MODULE_DEVICE_TABLE(mhi, mhi_uci_match_table);
+
+static struct mhi_driver mhi_uci_driver = {
+        .id_table = mhi_uci_match_table,
+        .remove = mhi_uci_remove,
+        .probe = mhi_uci_probe,
+        .ul_xfer_cb = mhi_ul_xfer_cb,
+        .dl_xfer_cb = mhi_dl_xfer_cb,
+        .driver = {
+                .name = MHI_UCI_DRIVER_NAME,
+        },
+};
+
+static int mhi_uci_init(void)
+{
+        int ret;
+
+        ret = register_chrdev(0, MHI_UCI_DRIVER_NAME, &mhidev_fops);
+        if (ret < 0)
+                return ret;
+
+        uci_dev_major = ret;
+        uci_dev_class = class_create(THIS_MODULE, MHI_UCI_DRIVER_NAME);
+        if (IS_ERR(uci_dev_class)) {
+                unregister_chrdev(uci_dev_major, MHI_UCI_DRIVER_NAME);
+                return -ENODEV;
+        }
+
+        ret = mhi_driver_register(&mhi_uci_driver);
+        if (ret) {
+                class_destroy(uci_dev_class);
+                unregister_chrdev(uci_dev_major, MHI_UCI_DRIVER_NAME);
+        }
+
+        return ret;
+}
+
+static void __exit mhi_uci_exit(void)
+{
+        mhi_driver_unregister(&mhi_uci_driver);
+        class_destroy(uci_dev_class);
+        unregister_chrdev(uci_dev_major, MHI_UCI_DRIVER_NAME);
+}
+
+module_init(mhi_uci_init);
+module_exit(mhi_uci_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("MHI UCI Driver");
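
For illustration only (not part of this patch): the Kconfig help and the
"Other Use Cases" section of the documentation point at exposing additional
channels through mhi_uci_match_table. A hypothetical sketch of what a DIAG
entry could look like follows; the 0x4000 MTU value is illustrative and not
taken from this series.

/*
 * Hypothetical extension of the device id table: each new channel entry
 * causes the UCI driver to create another /dev/mhi_* node at probe time.
 * The DIAG MTU below is an assumed value, not defined by this patch.
 */
#include <linux/mod_devicetable.h>

static const struct mhi_device_id mhi_uci_match_table[] = {
        { .chan = "LOOPBACK", .driver_data = 0x1000 },
        { .chan = "DIAG", .driver_data = 0x4000 },
        {},
};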