From patchwork Thu Jan 23 11:18:21 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Manivannan Sadhasivam
X-Patchwork-Id: 11347369
From: Manivannan Sadhasivam
To: gregkh@linuxfoundation.org, arnd@arndb.de
Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam, Jonathan Corbet, linux-doc@vger.kernel.org
Subject: [PATCH 01/16] docs: Add documentation for MHI bus
Date: Thu, 23 Jan 2020 16:48:21 +0530
Message-Id: <20200123111836.7414-2-manivannan.sadhasivam@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org>
References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org>
Sender: linux-arm-msm-owner@vger.kernel.org
Precedence:
bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org MHI (Modem Host Interface) is a communication protocol used by the host processors to control and communicate with modems over a high speed peripheral bus or shared memory. The MHI protocol has been designed and developed by Qualcomm Innovation Center, Inc., for use in their modems. This commit adds the documentation for the bus and the implementation in Linux kernel. This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/987 Cc: Jonathan Corbet Cc: linux-doc@vger.kernel.org Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: converted to .rst and splitted the patch] Signed-off-by: Manivannan Sadhasivam --- Documentation/index.rst | 1 + Documentation/mhi/index.rst | 18 +++ Documentation/mhi/mhi.rst | 218 +++++++++++++++++++++++++++++++++ Documentation/mhi/topology.rst | 60 +++++++++ 4 files changed, 297 insertions(+) create mode 100644 Documentation/mhi/index.rst create mode 100644 Documentation/mhi/mhi.rst create mode 100644 Documentation/mhi/topology.rst diff --git a/Documentation/index.rst b/Documentation/index.rst index e99d0bd2589d..edc9b211bbff 100644 --- a/Documentation/index.rst +++ b/Documentation/index.rst @@ -133,6 +133,7 @@ needed). misc-devices/index mic/index scheduler/index + mhi/index Architecture-agnostic documentation ----------------------------------- diff --git a/Documentation/mhi/index.rst b/Documentation/mhi/index.rst new file mode 100644 index 000000000000..1d8dec302780 --- /dev/null +++ b/Documentation/mhi/index.rst @@ -0,0 +1,18 @@ +.. SPDX-License-Identifier: GPL-2.0 + +=== +MHI +=== + +.. toctree:: + :maxdepth: 1 + + mhi + topology + +.. only:: subproject and html + + Indices + ======= + + * :ref:`genindex` diff --git a/Documentation/mhi/mhi.rst b/Documentation/mhi/mhi.rst new file mode 100644 index 000000000000..718dbbdc7a04 --- /dev/null +++ b/Documentation/mhi/mhi.rst @@ -0,0 +1,218 @@ +.. SPDX-License-Identifier: GPL-2.0 + +========================== +MHI (Modem Host Interface) +========================== + +This document provides information about the MHI protocol. + +Overview +======== + +MHI is a protocol developed by Qualcomm Innovation Center, Inc., It is used +by the host processors to control and communicate with modem devices over high +speed peripheral buses or shared memory. Even though MHI can be easily adapted +to any peripheral buses, it is primarily used with PCIe based devices. MHI +provides logical channels over the physical buses and allows transporting the +modem protocols, such as IP data packets, modem control messages, and +diagnostics over at least one of those logical channels. Also, the MHI +protocol provides data acknowledgment feature and manages the power state of the +modems via one or more logical channels. + +MHI Internals +============= + +MMIO +---- + +MMIO (Memory mapped IO) consists of a set of registers in the device hardware, +which are mapped to the host memory space by the peripheral buses like PCIe. +Following are the major components of MMIO register space: + +MHI control registers: Access to MHI configurations registers + +MHI BHI registers: BHI (Boot Host Interface) registers are used by the host +for downloading the firmware to the device before MHI initialization. + +Channel Doorbell array: Channel Doorbell (DB) registers used by the host to +notify the device when there is new work to do. 
+ +Event Doorbell array: Associated with event context array, the Event Doorbell +(DB) registers are used by the host to notify the device when new events are +available. + +Debug registers: A set of registers and counters used by the device to expose +debugging information like performance, functional, and stability to the host. + +Data structures +--------------- + +All data structures used by MHI are in the host system memory. Using the +physical interface, the device accesses those data structures. MHI data +structures and data buffers in the host system memory regions are mapped for +the device. + +Channel context array: All channel configurations are organized in channel +context data array. + +Transfer rings: Used by the host to schedule work items for a channel. The +transfer rings are organized as a circular queue of Transfer Descriptors (TD). + +Event context array: All event configurations are organized in the event context +data array. + +Event rings: Used by the device to send completion and state transition messages +to the host + +Command context array: All command configurations are organized in command +context data array. + +Command rings: Used by the host to send MHI commands to the device. The command +rings are organized as a circular queue of Command Descriptors (CD). + +Channels +-------- + +MHI channels are logical, unidirectional data pipes between a host and a device. +The concept of channels in MHI is similar to endpoints in USB. MHI supports up +to 256 channels. However, specific device implementations may support less than +the maximum number of channels allowed. + +Two unidirectional channels with their associated transfer rings form a +bidirectional data pipe, which can be used by the upper-layer protocols to +transport application data packets (such as IP packets, modem control messages, +diagnostics messages, and so on). Each channel is associated with a single +transfer ring. + +Transfer rings +-------------- + +Transfers between the host and device are organized by channels and defined by +Transfer Descriptors (TD). TDs are managed through transfer rings, which are +defined for each channel between the device and host and reside in the host +memory. TDs consist of one or more ring elements (or transfer blocks):: + + [Read Pointer (RP)] ----------->[Ring Element] } TD + [Write Pointer (WP)]- [Ring Element] + - [Ring Element] + --------->[Ring Element] + [Ring Element] + +Below is the basic usage of transfer rings: + +* Host allocates memory for transfer ring. +* Host sets the base pointer, read pointer, and write pointer in corresponding + channel context. +* Ring is considered empty when RP == WP. +* Ring is considered full when WP + 1 == RP. +* RP indicates the next element to be serviced by the device. +* When the host has a new buffer to send, it updates the ring element with + buffer information, increments the WP to the next element and rings the + associated channel DB. + +Event rings +----------- + +Events from the device to host are organized in event rings and defined by Event +Descriptors (ED). Event rings are used by the device to report events such as +data transfer completion status, command completion status, and state changes +to the host. Event rings are the array of EDs that resides in the host +memory. 
EDs consist of one or more ring elements (or transfer blocks):: + + [Read Pointer (RP)] ----------->[Ring Element] } ED + [Write Pointer (WP)]- [Ring Element] + - [Ring Element] + --------->[Ring Element] + [Ring Element] + +Below is the basic usage of event rings: + +* Host allocates memory for event ring. +* Host sets the base pointer, read pointer, and write pointer in corresponding + channel context. +* Both host and device has a local copy of RP, WP. +* Ring is considered empty (no events to service) when WP + 1 == RP. +* Ring is considered full of events when RP == WP. +* When there is a new event the device needs to send, the device updates ED + pointed by RP, increments the RP to the next element and triggers the + interrupt. + +Ring Element +------------ + +A Ring Element is a data structure used to transfer a single block +of data between the host and the device. Transfer ring element types contain a +single buffer pointer, the size of the buffer, and additional control +information. Other ring element types may only contain control and status +information. For single buffer operations, a ring descriptor is composed of a +single element. For large multi-buffer operations (such as scatter and gather), +elements can be chained to form a longer descriptor. + +MHI Operations +============== + +MHI States +---------- + +MHI_STATE_RESET +~~~~~~~~~~~~~~~ +MHI is in reset state after power-up or hardware reset. The host is not allowed +to access device MMIO register space. + +MHI_STATE_READY +~~~~~~~~~~~~~~~ +MHI is ready for initialization. The host can start MHI initialization by +programming MMIO registers. + +MHI_STATE_M0 +~~~~~~~~~~~~ +MHI is running and operational in the device. The host can start channels by +issuing channel start command. + +MHI_STATE_M1 +~~~~~~~~~~~~ +MHI operation is suspended by the device. This state is entered when the +device detects inactivity at the physical interface within a preset time. + +MHI_STATE_M2 +~~~~~~~~~~~~ +MHI is in low power state. MHI operation is suspended and the device may +enter lower power mode. + +MHI_STATE_M3 +~~~~~~~~~~~~ +MHI operation stopped by the host. This state is entered when the host suspends +MHI operation. + +MHI Initialization +------------------ + +After system boots, the device is enumerated over the physical interface. +In the case of PCIe, the device is enumerated and assigned BAR-0 for +the device's MMIO register space. To initialize the MHI in a device, +the host performs the following operations: + +* Allocates the MHI context for event, channel and command arrays. +* Initializes the context array, and prepares interrupts. +* Waits until the device enters READY state. +* Programs MHI MMIO registers and sets device into MHI_M0 state. +* Waits for the device to enter M0 state. + +MHI Data Transfer +----------------- + +MHI data transfer is initiated by the host to transfer data to the device. +Following are the sequence of operations performed by the host to transfer +data to device: + +* Host prepares TD with buffer information. +* Host increments the WP of the corresponding channel transfer ring. +* Host rings the channel DB register. +* Device wakes up to process the TD. +* Device generates a completion event for the processed TD by updating ED. +* Device increments the RP of the corresponding event ring. +* Device triggers IRQ to wake up the host. +* Host wakes up and checks the event ring for completion event. 
+* Host updates the WP of the corresponding event ring to indicate that the + data transfer has been completed successfully. + diff --git a/Documentation/mhi/topology.rst b/Documentation/mhi/topology.rst new file mode 100644 index 000000000000..90d80a7f116d --- /dev/null +++ b/Documentation/mhi/topology.rst @@ -0,0 +1,60 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============ +MHI Topology +============ + +This document provides information about the MHI topology modeling and +representation in the kernel. + +MHI Controller +-------------- + +MHI controller driver manages the interaction with the MHI client devices +such as the external modems and WiFi chipsets. It is also the MHI bus master +which is in charge of managing the physical link between the host and device. +It is however not involved in the actual data transfer as the data transfer +is taken care by the physical bus such as PCIe. Each controller driver exposes +channels and events based on the client device type. + +Below are the roles of the MHI controller driver: + +* Turns on the physical bus and establishes the link to the device +* Configures IRQs, SMMU, and IOMEM +* Allocates struct mhi_controller and registers with the MHI bus framework + with channel and event configurations using mhi_register_controller. +* Initiates power on and shutdown sequence +* Initiates suspend and resume power management operations of the device. + +MHI Device +---------- + +MHI device is the logical device which binds to a maximum of two MHI channels +for bi-directional communication. Once MHI is in powered on state, the MHI +core will create MHI devices based on the channel configuration exposed +by the controller. There can be a single MHI device for each channel or for a +couple of channels. + +Each supported device is enumerated in:: + + /sys/bus/mhi/devices/ + +MHI Driver +---------- + +MHI driver is the client driver which binds to one or more MHI devices. The MHI +driver sends and receives the upper-layer protocol packets like IP packets, +modem control messages, and diagnostics messages over MHI. The MHI core will +bind the MHI devices to the MHI driver. + +Each supported driver is enumerated in:: + + /sys/bus/mhi/drivers/ + +Below are the roles of the MHI driver: + +* Registers the driver with the MHI bus framework using mhi_driver_register. +* Prepares the device for transfer by calling mhi_prepare_for_transfer. +* Initiates data transfer by calling mhi_queue_transfer. +* Once the data transfer is finished, calls mhi_unprepare_from_transfer to + end data transfer. 
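To make the controller and client driver roles described above concrete, below is a minimal, illustrative client driver skeleton. It is only a sketch against the interfaces named in this documentation (struct mhi_driver and mhi_driver_register() are introduced by later patches in this series): the "EXAMPLE" channel name is hypothetical, and the exact mhi_prepare_for_transfer()/mhi_unprepare_from_transfer() signatures are assumed rather than defined by this patch::

    /* Illustrative MHI client driver sketch -- not part of this patch. */
    #include <linux/module.h>
    #include <linux/mhi.h>

    static int example_probe(struct mhi_device *mhi_dev,
                             const struct mhi_device_id *id)
    {
            /* Assumed helper from later patches: start the bound channels */
            return mhi_prepare_for_transfer(mhi_dev);
    }

    static void example_remove(struct mhi_device *mhi_dev)
    {
            /* Assumed helper from later patches: stop the bound channels */
            mhi_unprepare_from_transfer(mhi_dev);
    }

    static void example_ul_xfer_cb(struct mhi_device *mhi_dev,
                                   struct mhi_result *result)
    {
            /* The device has consumed an uplink buffer queued earlier */
    }

    static void example_dl_xfer_cb(struct mhi_device *mhi_dev,
                                   struct mhi_result *result)
    {
            /* result->buf_addr / result->bytes_xferd describe received data */
    }

    /* "EXAMPLE" is a hypothetical channel name used for illustration only */
    static const struct mhi_device_id example_id_table[] = {
            { .chan = "EXAMPLE" },
            {},
    };

    static struct mhi_driver example_driver = {
            .id_table = example_id_table,
            .probe = example_probe,
            .remove = example_remove,
            .ul_xfer_cb = example_ul_xfer_cb,
            .dl_xfer_cb = example_dl_xfer_cb,
            .driver = {
                    .name = "example_mhi_client",
            },
    };

    static int __init example_init(void)
    {
            return mhi_driver_register(&example_driver);
    }
    module_init(example_init);

    static void __exit example_exit(void)
    {
            mhi_driver_unregister(&example_driver);
    }
    module_exit(example_exit);

    MODULE_LICENSE("GPL v2");

Once such a driver is registered, the MHI core matches it against the MHI devices created for the configured channels and invokes the probe/remove and transfer callbacks shown above.
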
From patchwork Thu Jan 23 11:18:22 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Manivannan Sadhasivam
X-Patchwork-Id: 11347371
From: Manivannan Sadhasivam
To: gregkh@linuxfoundation.org, arnd@arndb.de
Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam
Subject: [PATCH 02/16] bus: mhi: core: Add support for registering MHI controllers
Date: Thu, 23 Jan 2020 16:48:22 +0530
Message-Id: <20200123111836.7414-3-manivannan.sadhasivam@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org>
References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org>
Sender: linux-arm-msm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-arm-msm@vger.kernel.org This commit adds support for registering MHI controller drivers with the MHI stack. MHI controller drivers manages the interaction with the MHI client devices such as the external modems and WiFi chipsets. They are also the MHI bus master in charge of managing the physical link between the host and client device. This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/987 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [jhugo: added static config for controllers and fixed several bugs] Signed-off-by: Jeffrey Hugo [mani: removed DT dependency, splitted and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/Kconfig | 1 + drivers/bus/Makefile | 3 + drivers/bus/mhi/Kconfig | 14 + drivers/bus/mhi/Makefile | 2 + drivers/bus/mhi/core/Makefile | 3 + drivers/bus/mhi/core/init.c | 404 +++++++++++++++++++++++++++++ drivers/bus/mhi/core/internal.h | 169 ++++++++++++ include/linux/mhi.h | 438 ++++++++++++++++++++++++++++++++ include/linux/mod_devicetable.h | 12 + 9 files changed, 1046 insertions(+) create mode 100644 drivers/bus/mhi/Kconfig create mode 100644 drivers/bus/mhi/Makefile create mode 100644 drivers/bus/mhi/core/Makefile create mode 100644 drivers/bus/mhi/core/init.c create mode 100644 drivers/bus/mhi/core/internal.h create mode 100644 include/linux/mhi.h diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig index 50200d1c06ea..383934e54786 100644 --- a/drivers/bus/Kconfig +++ b/drivers/bus/Kconfig @@ -202,5 +202,6 @@ config DA8XX_MSTPRI peripherals. source "drivers/bus/fsl-mc/Kconfig" +source "drivers/bus/mhi/Kconfig" endmenu diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile index 1320bcf9fa9d..05f32cd694a4 100644 --- a/drivers/bus/Makefile +++ b/drivers/bus/Makefile @@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o + +# MHI +obj-$(CONFIG_MHI_BUS) += mhi/ diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig new file mode 100644 index 000000000000..a8bd9bd7db7c --- /dev/null +++ b/drivers/bus/mhi/Kconfig @@ -0,0 +1,14 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# MHI bus +# +# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. +# + +config MHI_BUS + tristate "Modem Host Interface (MHI) bus" + help + Bus driver for MHI protocol. Modem Host Interface (MHI) is a + communication protocol used by the host processors to control + and communicate with modem devices over a high speed peripheral + bus or shared memory. diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile new file mode 100644 index 000000000000..19e6443b72df --- /dev/null +++ b/drivers/bus/mhi/Makefile @@ -0,0 +1,2 @@ +# core layer +obj-y += core/ diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile new file mode 100644 index 000000000000..2db32697c67f --- /dev/null +++ b/drivers/bus/mhi/core/Makefile @@ -0,0 +1,3 @@ +obj-$(CONFIG_MHI_BUS) := mhi.o + +mhi-y := init.o diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c new file mode 100644 index 000000000000..5b817ec250e0 --- /dev/null +++ b/drivers/bus/mhi/core/init.c @@ -0,0 +1,404 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. 
+ * + */ + +#define dev_fmt(fmt) "MHI: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "internal.h" + +static int parse_ev_cfg(struct mhi_controller *mhi_cntrl, + struct mhi_controller_config *config) +{ + int i, num; + struct mhi_event *mhi_event; + struct mhi_event_config *event_cfg; + + num = config->num_events; + mhi_cntrl->total_ev_rings = num; + mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event), + GFP_KERNEL); + if (!mhi_cntrl->mhi_event) + return -ENOMEM; + + /* Populate event ring */ + mhi_event = mhi_cntrl->mhi_event; + for (i = 0; i < num; i++) { + event_cfg = &config->event_cfg[i]; + + mhi_event->er_index = i; + mhi_event->ring.elements = event_cfg->num_elements; + mhi_event->intmod = event_cfg->irq_moderation_ms; + mhi_event->irq = event_cfg->irq; + + if (event_cfg->channel != U32_MAX) { + /* This event ring has a dedicated channel */ + mhi_event->chan = event_cfg->channel; + if (mhi_event->chan >= mhi_cntrl->max_chan) { + dev_err(mhi_cntrl->dev, + "Event Ring channel not available\n"); + goto error_ev_cfg; + } + + mhi_event->mhi_chan = + &mhi_cntrl->mhi_chan[mhi_event->chan]; + } + + /* Priority is fixed to 1 for now */ + mhi_event->priority = 1; + + mhi_event->db_cfg.brstmode = event_cfg->mode; + if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode)) + goto error_ev_cfg; + + mhi_event->data_type = event_cfg->data_type; + + mhi_event->hw_ring = event_cfg->hardware_event; + if (mhi_event->hw_ring) + mhi_cntrl->hw_ev_rings++; + else + mhi_cntrl->sw_ev_rings++; + + mhi_event->cl_manage = event_cfg->client_managed; + mhi_event->offload_ev = event_cfg->offload_channel; + mhi_event++; + } + + /* We need IRQ for each event ring + additional one for BHI */ + mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1; + + return 0; + +error_ev_cfg: + + kfree(mhi_cntrl->mhi_event); + return -EINVAL; +} + +static int parse_ch_cfg(struct mhi_controller *mhi_cntrl, + struct mhi_controller_config *config) +{ + int i; + u32 chan; + struct mhi_channel_config *ch_cfg; + + mhi_cntrl->max_chan = config->max_channels; + + /* + * The allocation of MHI channels can exceed 32KB in some scenarios, + * so to avoid any memory possible allocation failures, vzalloc is + * used here + */ + mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan * + sizeof(*mhi_cntrl->mhi_chan)); + if (!mhi_cntrl->mhi_chan) + return -ENOMEM; + + INIT_LIST_HEAD(&mhi_cntrl->lpm_chans); + + /* Populate channel configurations */ + for (i = 0; i < config->num_channels; i++) { + struct mhi_chan *mhi_chan; + + ch_cfg = &config->ch_cfg[i]; + + chan = ch_cfg->num; + if (chan >= mhi_cntrl->max_chan) { + dev_err(mhi_cntrl->dev, + "Channel %d not available\n", chan); + goto error_chan_cfg; + } + + mhi_chan = &mhi_cntrl->mhi_chan[chan]; + mhi_chan->name = ch_cfg->name; + mhi_chan->chan = chan; + + mhi_chan->tre_ring.elements = ch_cfg->num_elements; + if (!mhi_chan->tre_ring.elements) + goto error_chan_cfg; + + /* + * For some channels, local ring length should be bigger than + * the transfer ring length due to internal logical channels + * in device. So host can queue much more buffers than transfer + * ring length. Example, RSC channels should have a larger local + * channel length than transfer ring length. 
+ */ + mhi_chan->buf_ring.elements = ch_cfg->local_elements; + if (!mhi_chan->buf_ring.elements) + mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements; + mhi_chan->er_index = ch_cfg->event_ring; + mhi_chan->dir = ch_cfg->dir; + + /* + * For most channels, chtype is identical to channel directions. + * So, if it is not defined then assign channel direction to + * chtype + */ + mhi_chan->type = ch_cfg->type; + if (!mhi_chan->type) + mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir; + + mhi_chan->ee_mask = ch_cfg->ee_mask; + + mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg; + mhi_chan->xfer_type = ch_cfg->data_type; + + mhi_chan->lpm_notify = ch_cfg->lpm_notify; + mhi_chan->offload_ch = ch_cfg->offload_channel; + mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch; + mhi_chan->pre_alloc = ch_cfg->auto_queue; + mhi_chan->auto_start = ch_cfg->auto_start; + + /* + * If MHI host allocates buffers, then the channel direction + * should be DMA_FROM_DEVICE and the buffer type should be + * MHI_BUF_RAW + */ + if (mhi_chan->pre_alloc && (mhi_chan->dir != DMA_FROM_DEVICE || + mhi_chan->xfer_type != MHI_BUF_RAW)) { + dev_err(mhi_cntrl->dev, + "Invalid channel configuration\n"); + goto error_chan_cfg; + } + + /* + * Bi-directional and direction less channel must be an + * offload channel + */ + if ((mhi_chan->dir == DMA_BIDIRECTIONAL || + mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) { + dev_err(mhi_cntrl->dev, + "Invalid channel configuration\n"); + goto error_chan_cfg; + } + + if (!mhi_chan->offload_ch) { + mhi_chan->db_cfg.brstmode = ch_cfg->doorbell; + if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) { + dev_err(mhi_cntrl->dev, + "Invalid Door bell mode\n"); + goto error_chan_cfg; + } + } + + mhi_chan->configured = true; + + if (mhi_chan->lpm_notify) + list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans); + } + + return 0; + +error_chan_cfg: + vfree(mhi_cntrl->mhi_chan); + + return -EINVAL; +} + +static int parse_config(struct mhi_controller *mhi_cntrl, + struct mhi_controller_config *config) +{ + int ret; + + /* Parse MHI channel configuration */ + ret = parse_ch_cfg(mhi_cntrl, config); + if (ret) + return ret; + + /* Parse MHI event configuration */ + ret = parse_ev_cfg(mhi_cntrl, config); + if (ret) + goto error_ev_cfg; + + mhi_cntrl->timeout_ms = config->timeout_ms; + if (!mhi_cntrl->timeout_ms) + mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS; + + mhi_cntrl->bounce_buf = config->use_bounce_buf; + mhi_cntrl->buffer_len = config->buf_len; + if (!mhi_cntrl->buffer_len) + mhi_cntrl->buffer_len = MHI_MAX_MTU; + + return 0; + +error_ev_cfg: + vfree(mhi_cntrl->mhi_chan); + + return ret; +} + +int mhi_register_controller(struct mhi_controller *mhi_cntrl, + struct mhi_controller_config *config) +{ + int ret; + int i; + struct mhi_event *mhi_event; + struct mhi_chan *mhi_chan; + struct mhi_cmd *mhi_cmd; + struct mhi_device *mhi_dev; + + if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put) + return -EINVAL; + + if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status) + return -EINVAL; + + ret = parse_config(mhi_cntrl, config); + if (ret) + return -EINVAL; + + mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, + sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL); + if (!mhi_cntrl->mhi_cmd) { + ret = -ENOMEM; + goto error_alloc_cmd; + } + + INIT_LIST_HEAD(&mhi_cntrl->transition_list); + spin_lock_init(&mhi_cntrl->transition_lock); + spin_lock_init(&mhi_cntrl->wlock); + init_waitqueue_head(&mhi_cntrl->state_event); + + mhi_cmd = mhi_cntrl->mhi_cmd; + for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++) + 
spin_lock_init(&mhi_cmd->lock); + + mhi_event = mhi_cntrl->mhi_event; + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) { + /* Skip for offload events */ + if (mhi_event->offload_ev) + continue; + + mhi_event->mhi_cntrl = mhi_cntrl; + spin_lock_init(&mhi_event->lock); + } + + mhi_chan = mhi_cntrl->mhi_chan; + for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) { + mutex_init(&mhi_chan->mutex); + init_completion(&mhi_chan->completion); + rwlock_init(&mhi_chan->lock); + } + + /* Register controller with MHI bus */ + mhi_dev = mhi_alloc_device(mhi_cntrl); + if (IS_ERR(mhi_dev)) { + dev_err(mhi_cntrl->dev, "Failed to allocate device\n"); + ret = PTR_ERR(mhi_dev); + goto error_alloc_dev; + } + + mhi_dev->dev_type = MHI_DEVICE_CONTROLLER; + mhi_dev->mhi_cntrl = mhi_cntrl; + dev_set_name(&mhi_dev->dev, "%s", mhi_cntrl->name); + + /* Init wakeup source */ + device_init_wakeup(&mhi_dev->dev, true); + + ret = device_add(&mhi_dev->dev); + if (ret) + goto error_add_dev; + + mhi_cntrl->mhi_dev = mhi_dev; + + return 0; + +error_add_dev: + mhi_dealloc_device(mhi_cntrl, mhi_dev); + +error_alloc_dev: + kfree(mhi_cntrl->mhi_cmd); + +error_alloc_cmd: + vfree(mhi_cntrl->mhi_chan); + kfree(mhi_cntrl->mhi_event); + + return ret; +} +EXPORT_SYMBOL_GPL(mhi_register_controller); + +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl) +{ + struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev; + + kfree(mhi_cntrl->mhi_cmd); + kfree(mhi_cntrl->mhi_event); + vfree(mhi_cntrl->mhi_chan); + + device_del(&mhi_dev->dev); + put_device(&mhi_dev->dev); +} +EXPORT_SYMBOL_GPL(mhi_unregister_controller); + +static void mhi_release_device(struct device *dev) +{ + struct mhi_device *mhi_dev = to_mhi_device(dev); + + if (mhi_dev->ul_chan) + mhi_dev->ul_chan->mhi_dev = NULL; + + if (mhi_dev->dl_chan) + mhi_dev->dl_chan->mhi_dev = NULL; + + kfree(mhi_dev); +} + +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl) +{ + struct mhi_device *mhi_dev; + struct device *dev; + + mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL); + if (!mhi_dev) + return ERR_PTR(-ENOMEM); + + dev = &mhi_dev->dev; + device_initialize(dev); + dev->bus = &mhi_bus_type; + dev->release = mhi_release_device; + dev->parent = mhi_cntrl->dev; + mhi_dev->mhi_cntrl = mhi_cntrl; + atomic_set(&mhi_dev->dev_wake, 0); + + return mhi_dev; +} + +static int mhi_match(struct device *dev, struct device_driver *drv) +{ + return 0; +}; + +struct bus_type mhi_bus_type = { + .name = "mhi", + .dev_name = "mhi", + .match = mhi_match, +}; + +static int __init mhi_init(void) +{ + return bus_register(&mhi_bus_type); +} + +static void __exit mhi_exit(void) +{ + bus_unregister(&mhi_bus_type); +} + +postcore_initcall(mhi_init); +module_exit(mhi_exit); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("MHI Host Interface"); diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h new file mode 100644 index 000000000000..21f686d3a140 --- /dev/null +++ b/drivers/bus/mhi/core/internal.h @@ -0,0 +1,169 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. 
+ * + */ + +#ifndef _MHI_INT_H +#define _MHI_INT_H + +extern struct bus_type mhi_bus_type; + +/* MHI transfer completion events */ +enum mhi_ev_ccs { + MHI_EV_CC_INVALID = 0x0, + MHI_EV_CC_SUCCESS = 0x1, + MHI_EV_CC_EOT = 0x2, + MHI_EV_CC_OVERFLOW = 0x3, + MHI_EV_CC_EOB = 0x4, + MHI_EV_CC_OOB = 0x5, + MHI_EV_CC_DB_MODE = 0x6, + MHI_EV_CC_UNDEFINED_ERR = 0x10, + MHI_EV_CC_BAD_TRE = 0x11, +}; + +enum mhi_ch_state { + MHI_CH_STATE_DISABLED = 0x0, + MHI_CH_STATE_ENABLED = 0x1, + MHI_CH_STATE_RUNNING = 0x2, + MHI_CH_STATE_SUSPENDED = 0x3, + MHI_CH_STATE_STOP = 0x4, + MHI_CH_STATE_ERROR = 0x5, +}; + +#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \ + mode != MHI_DB_BRST_ENABLE) + +#define NR_OF_CMD_RINGS 1 +#define CMD_EL_PER_RING 128 +#define PRIMARY_CMD_RING 0 +#define MHI_MAX_MTU 0xffff + +enum mhi_er_type { + MHI_ER_TYPE_INVALID = 0x0, + MHI_ER_TYPE_VALID = 0x1, +}; + +enum mhi_ch_ee_mask { + MHI_CH_EE_PBL = BIT(MHI_EE_PBL), + MHI_CH_EE_SBL = BIT(MHI_EE_SBL), + MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS), + MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM), + MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU), + MHI_CH_EE_WFW = BIT(MHI_EE_WFW), + MHI_CH_EE_EDL = BIT(MHI_EE_EDL), +}; + +struct db_cfg { + bool reset_req; + bool db_mode; + u32 pollcfg; + enum mhi_db_brst_mode brstmode; + dma_addr_t db_val; + void (*process_db)(struct mhi_controller *mhi_cntrl, + struct db_cfg *db_cfg, void __iomem *io_addr, + dma_addr_t db_val); +}; + +struct mhi_ring { + dma_addr_t dma_handle; + dma_addr_t iommu_base; + u64 *ctxt_wp; /* point to ctxt wp */ + void *pre_aligned; + void *base; + void *rp; + void *wp; + size_t el_size; + size_t len; + size_t elements; + size_t alloc_size; + void __iomem *db_addr; +}; + +struct mhi_cmd { + struct mhi_ring ring; + spinlock_t lock; +}; + +struct mhi_buf_info { + dma_addr_t p_addr; + void *v_addr; + void *bb_addr; + void *wp; + size_t len; + void *cb_buf; + enum dma_data_direction dir; +}; + +struct mhi_event { + u32 er_index; + u32 intmod; + u32 irq; + int chan; /* this event ring is dedicated to a channel (optional) */ + u32 priority; + enum mhi_er_data_type data_type; + struct mhi_ring ring; + struct db_cfg db_cfg; + bool hw_ring; + bool cl_manage; + bool offload_ev; /* managed by a device driver */ + spinlock_t lock; + struct mhi_chan *mhi_chan; /* dedicated to channel */ + struct tasklet_struct task; + int (*process_event)(struct mhi_controller *mhi_cntrl, + struct mhi_event *mhi_event, + u32 event_quota); + struct mhi_controller *mhi_cntrl; +}; + +struct mhi_chan { + u32 chan; + const char *name; + /* + * Important: When consuming, increment tre_ring first and when + * releasing, decrement buf_ring first. If tre_ring has space, buf_ring + * is guranteed to have space so we do not need to check both rings. 
+ */ + struct mhi_ring buf_ring; + struct mhi_ring tre_ring; + u32 er_index; + u32 intmod; + enum mhi_ch_type type; + enum dma_data_direction dir; + struct db_cfg db_cfg; + enum mhi_ch_ee_mask ee_mask; + enum mhi_buf_type xfer_type; + enum mhi_ch_state ch_state; + enum mhi_ev_ccs ccs; + bool lpm_notify; + bool configured; + bool offload_ch; + bool pre_alloc; + bool auto_start; + int (*gen_tre)(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan, void *buf, void *cb, + size_t len, enum mhi_flags flags); + int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags); + struct mhi_device *mhi_dev; + void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result); + struct mutex mutex; + struct completion completion; + rwlock_t lock; + struct list_head node; +}; + +/* Default MHI timeout */ +#define MHI_TIMEOUT_MS (1000) + +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl); +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl, + struct mhi_device *mhi_dev) +{ + kfree(mhi_dev); +} + +int mhi_destroy_device(struct device *dev, void *data); +void mhi_create_devices(struct mhi_controller *mhi_cntrl); + +#endif /* _MHI_INT_H */ diff --git a/include/linux/mhi.h b/include/linux/mhi.h new file mode 100644 index 000000000000..69cf9a4b06c7 --- /dev/null +++ b/include/linux/mhi.h @@ -0,0 +1,438 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. + * + */ +#ifndef _MHI_H_ +#define _MHI_H_ + +#include +#include +#include +#include +#include +#include +#include +#include + +struct mhi_chan; +struct mhi_event; +struct mhi_ctxt; +struct mhi_cmd; +struct mhi_buf_info; + +/** + * enum mhi_callback - MHI callback + * @MHI_CB_IDLE: MHI entered idle state + * @MHI_CB_PENDING_DATA: New data available for client to process + * @MHI_CB_LPM_ENTER: MHI host entered low power mode + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover) + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state + */ +enum mhi_callback { + MHI_CB_IDLE, + MHI_CB_PENDING_DATA, + MHI_CB_LPM_ENTER, + MHI_CB_LPM_EXIT, + MHI_CB_EE_RDDM, + MHI_CB_EE_MISSION_MODE, + MHI_CB_SYS_ERROR, + MHI_CB_FATAL_ERROR, +}; + +/** + * enum mhi_flags - Transfer flags + * @MHI_EOB: End of buffer for bulk transfer + * @MHI_EOT: End of transfer + * @MHI_CHAIN: Linked transfer + */ +enum mhi_flags { + MHI_EOB, + MHI_EOT, + MHI_CHAIN, +}; + +/** + * enum mhi_device_type - Device types + * @MHI_DEVICE_XFER: Handles data transfer + * @MHI_DEVICE_TIMESYNC: Use for timesync feature + * @MHI_DEVICE_CONTROLLER: Control device + */ +enum mhi_device_type { + MHI_DEVICE_XFER, + MHI_DEVICE_TIMESYNC, + MHI_DEVICE_CONTROLLER, +}; + +/** + * enum mhi_ch_type - Channel types + * @MHI_CH_TYPE_INVALID: Invalid channel type + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine + * multiple packets and send them as a single + * large packet to reduce CPU consumption + */ +enum mhi_ch_type { + MHI_CH_TYPE_INVALID = 0, + MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE, + MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE, + MHI_CH_TYPE_INBOUND_COALESCED = 3, +}; + +/** + * enum mhi_ee_type - 
Execution environment types + * @MHI_EE_PBL: Primary Bootloader + * @MHI_EE_SBL: Secondary Bootloader + * @MHI_EE_AMSS: Modem, aka the primary runtime EE + * @MHI_EE_RDDM: Ram dump download mode + * @MHI_EE_WFW: WLAN firmware mode + * @MHI_EE_PTHRU: Passthrough + * @MHI_EE_EDL: Embedded downloader + */ +enum mhi_ee_type { + MHI_EE_PBL, + MHI_EE_SBL, + MHI_EE_AMSS, + MHI_EE_RDDM, + MHI_EE_WFW, + MHI_EE_PTHRU, + MHI_EE_EDL, + MHI_EE_MAX_SUPPORTED = MHI_EE_EDL, + MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */ + MHI_EE_NOT_SUPPORTED, + MHI_EE_MAX, +}; + +/** + * enum mhi_buf_type - Accepted buffer type for the channel + * @MHI_BUF_RAW: Raw buffer + * @MHI_BUF_SKB: SKB struct + * @MHI_BUF_SCLIST: Scatter-gather list + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer + * @MHI_BUF_DMA: Receive DMA address mapped by client + * @MHI_BUF_RSC_DMA: RSC type premapped buffer + */ +enum mhi_buf_type { + MHI_BUF_RAW, + MHI_BUF_SKB, + MHI_BUF_SCLIST, + MHI_BUF_NOP, + MHI_BUF_DMA, + MHI_BUF_RSC_DMA, +}; + +/** + * enum mhi_er_data_type - Event ring data types + * @MHI_ER_DATA: Only client data over this ring + * @MHI_ER_CTRL: MHI control data and client data + * @MHI_ER_TSYNC: Time sync events + */ +enum mhi_er_data_type { + MHI_ER_DATA, + MHI_ER_CTRL, + MHI_ER_TSYNC, +}; + +/** + * enum mhi_db_brst_mode - Doorbell mode + * @MHI_DB_BRST_DISABLE: Burst mode disable + * @MHI_DB_BRST_ENABLE: Burst mode enable + */ +enum mhi_db_brst_mode { + MHI_DB_BRST_DISABLE = 0x2, + MHI_DB_BRST_ENABLE = 0x3, +}; + +/** + * struct mhi_channel_config - Channel configuration structure for controller + * @num: The number assigned to this channel + * @name: The name of this channel + * @num_elements: The number of elements that can be queued to this channel + * @local_elements: The local ring length of the channel + * @event_ring: The event rung index that services this channel + * @dir: Direction that data may flow on this channel + * @type: Channel type + * @ee_mask: Execution Environment mask for this channel + * @pollcfg: Polling configuration for burst mode. 0 is default. milliseconds + for UL channels, multiple of 8 ring elements for DL channels + * @data_type: Data type accepted by this channel + * @doorbell: Doorbell mode + * @lpm_notify: The channel master requires low power mode notifications + * @offload_channel: The client manages the channel completely + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition + * @auto_queue: Framework will automatically queue buffers for DL traffic + * @auto_start: Automatically start (open) this channel + */ +struct mhi_channel_config { + u32 num; + char *name; + u32 num_elements; + u32 local_elements; + u32 event_ring; + enum dma_data_direction dir; + enum mhi_ch_type type; + u32 ee_mask; + u32 pollcfg; + enum mhi_buf_type data_type; + enum mhi_db_brst_mode doorbell; + bool lpm_notify; + bool offload_channel; + bool doorbell_mode_switch; + bool auto_queue; + bool auto_start; +}; + +/** + * struct mhi_event_config - Event ring configuration structure for controller + * @num_elements: The number of elements that can be queued to this ring + * @irq_moderation_ms: Delay irq for additional events to be aggregated + * @irq: IRQ associated with this ring + * @channel: Dedicated channel number. 
U32_MAX indicates a non-dedicated ring + * @mode: Doorbell mode + * @data_type: Type of data this ring will process + * @hardware_event: This ring is associated with hardware channels + * @client_managed: This ring is client managed + * @offload_channel: This ring is associated with an offloaded channel + * @priority: Priority of this ring. Use 1 for now + */ +struct mhi_event_config { + u32 num_elements; + u32 irq_moderation_ms; + u32 irq; + u32 channel; + enum mhi_db_brst_mode mode; + enum mhi_er_data_type data_type; + bool hardware_event; + bool client_managed; + bool offload_channel; + u32 priority; +}; + +/** + * struct mhi_controller_config - Root MHI controller configuration + * @max_channels: Maximum number of channels supported + * @timeout_ms: Timeout value for operations. 0 means use default + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access + * @m2_no_db: Host is not allowed to ring DB in M2 state + * @buf_len: Size of automatically allocated buffers. 0 means use default + * @num_channels: Number of channels defined in @ch_cfg + * @ch_cfg: Array of defined channels + * @num_events: Number of event rings defined in @event_cfg + * @event_cfg: Array of defined event rings + */ +struct mhi_controller_config { + u32 max_channels; + u32 timeout_ms; + bool use_bounce_buf; + bool m2_no_db; + u32 buf_len; + u32 num_channels; + struct mhi_channel_config *ch_cfg; + u32 num_events; + struct mhi_event_config *event_cfg; +}; + +/** + * struct mhi_controller - Master MHI controller structure + * @name: Name of the controller + * @dev: Driver model device node for the controller + * @mhi_dev: MHI device instance for the controller + * @dev_id: Device ID of the controller + * @bus_id: Physical bus instance used by the controller + * @regs: Base address of MHI MMIO register space + * @iova_start: IOMMU starting address for data + * @iova_stop: IOMMU stop address for data + * @fw_image: Firmware image name for normal booting + * @edl_image: Firmware image name for emergency download mode + * @fbc_download: MHI host needs to do complete image transfer + * @sbl_size: SBL image size + * @seg_len: BHIe vector size + * @max_chan: Maximum number of channels the controller supports + * @mhi_chan: Points to the channel configuration table + * @lpm_chans: List of channels that require LPM notifications + * @total_ev_rings: Total # of event rings allocated + * @hw_ev_rings: Number of hardware event rings + * @sw_ev_rings: Number of software event rings + * @nr_irqs_req: Number of IRQs required to operate + * @nr_irqs: Number of IRQ allocated by bus master + * @irq: base irq # to request + * @mhi_event: MHI event ring configurations table + * @mhi_cmd: MHI command ring configurations table + * @mhi_ctxt: MHI device context, shared memory between host and device + * @timeout_ms: Timeout in ms for state transitions + * @pm_mutex: Mutex for suspend/resume operation + * @pre_init: MHI host needs to do pre-initialization before power up + * @pm_lock: Lock for protecting MHI power management state + * @pm_state: MHI power management state + * @db_access: DB access states + * @ee: MHI device execution environment + * @wake_set: Device wakeup set flag + * @dev_wake: Device wakeup count + * @alloc_size: Total memory allocations size of the controller + * @pending_pkts: Pending packets for the controller + * @transition_list: List of MHI state transitions + * @wlock: Lock for protecting device wakeup + * @M0: M0 state counter for debugging + * @M2: M2 state counter for debugging + * @M3: M3 state 
counter for debugging + * @M3_FAST: M3 Fast state counter for debugging + * @st_worker: State transition worker + * @fw_worker: Firmware download worker + * @syserr_worker: System error worker + * @state_event: State change event + * @status_cb: CB function to notify various power states to bus master + * @link_status: CB function to query link status of the device + * @wake_get: CB function to assert device wake + * @wake_put: CB function to de-assert device wake + * @wake_toggle: CB function to assert and deasset (toggle) device wake + * @runtime_get: CB function to controller runtime resume + * @runtimet_put: CB function to decrement pm usage + * @lpm_disable: CB function to request disable link level low power modes + * @lpm_enable: CB function to request enable link level low power modes again + * @bounce_buf: Use of bounce buffer + * @buffer_len: Bounce buffer length + * @priv_data: Points to bus master's private data + */ +struct mhi_controller { + const char *name; + struct device *dev; + struct mhi_device *mhi_dev; + u32 dev_id; + u32 bus_id; + void __iomem *regs; + dma_addr_t iova_start; + dma_addr_t iova_stop; + const char *fw_image; + const char *edl_image; + bool fbc_download; + size_t sbl_size; + size_t seg_len; + u32 max_chan; + struct mhi_chan *mhi_chan; + struct list_head lpm_chans; + u32 total_ev_rings; + u32 hw_ev_rings; + u32 sw_ev_rings; + u32 nr_irqs_req; + u32 nr_irqs; + int *irq; + + struct mhi_event *mhi_event; + struct mhi_cmd *mhi_cmd; + struct mhi_ctxt *mhi_ctxt; + + u32 timeout_ms; + struct mutex pm_mutex; + bool pre_init; + rwlock_t pm_lock; + u32 pm_state; + u32 db_access; + enum mhi_ee_type ee; + bool wake_set; + atomic_t dev_wake; + atomic_t alloc_size; + atomic_t pending_pkts; + struct list_head transition_list; + spinlock_t transition_lock; + spinlock_t wlock; + u32 M0, M2, M3, M3_FAST; + struct work_struct st_worker; + struct work_struct fw_worker; + struct work_struct syserr_worker; + wait_queue_head_t state_event; + + void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv, + enum mhi_callback cb); + int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv); + void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override); + void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override); + void (*wake_toggle)(struct mhi_controller *mhi_cntrl); + int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv); + void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv); + void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv); + void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv); + + bool bounce_buf; + size_t buffer_len; + void *priv_data; +}; + +/** + * struct mhi_device - Structure representing a MHI device which binds + * to channels + * @dev: Driver model device node for the MHI device + * @tiocm: Device current terminal settings + * @id: Pointer to MHI device ID struct + * @chan_name: Name of the channel to which the device binds + * @mhi_cntrl: Controller the device belongs to + * @ul_chan: UL channel for the device + * @dl_chan: DL channel for the device + * @dev_wake: Device wakeup counter + * @dev_type: MHI device type + */ +struct mhi_device { + struct device dev; + u32 tiocm; + const struct mhi_device_id *id; + const char *chan_name; + struct mhi_controller *mhi_cntrl; + struct mhi_chan *ul_chan; + struct mhi_chan *dl_chan; + atomic_t dev_wake; + enum mhi_device_type dev_type; +}; + +/** + * struct mhi_result - Completed buffer information + * @buf_addr: Address of data buffer + * @dir: 
Channel direction + * @bytes_xfer: # of bytes transferred + * @transaction_status: Status of last transaction + */ +struct mhi_result { + void *buf_addr; + enum dma_data_direction dir; + size_t bytes_xferd; + int transaction_status; +}; + +#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev) + +/** + * mhi_controller_set_devdata - Set MHI controller private data + * @mhi_cntrl: MHI controller to set data + */ +static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl, + void *priv) +{ + mhi_cntrl->priv_data = priv; +} + +/** + * mhi_controller_get_devdata - Get MHI controller private data + * @mhi_cntrl: MHI controller to get data + */ +static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl) +{ + return mhi_cntrl->priv_data; +} + +/** + * mhi_register_controller - Register MHI controller + * @mhi_cntrl: MHI controller to register + * @config: Configuration to use for the controller + */ +int mhi_register_controller(struct mhi_controller *mhi_cntrl, + struct mhi_controller_config *config); + +/** + * mhi_unregister_controller - Unregister MHI controller + * @mhi_cntrl: MHI controller to unregister + */ +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl); + +#endif /* _MHI_H_ */ diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h index e3596db077dc..be15e997fe39 100644 --- a/include/linux/mod_devicetable.h +++ b/include/linux/mod_devicetable.h @@ -821,4 +821,16 @@ struct wmi_device_id { const void *context; }; +#define MHI_NAME_SIZE 32 + +/** + * struct mhi_device_id - MHI device identification + * @chan: MHI channel name + * @driver_data: driver data; + */ +struct mhi_device_id { + const char chan[MHI_NAME_SIZE]; + kernel_ulong_t driver_data; +}; + #endif /* LINUX_MOD_DEVICETABLE_H */ From patchwork Thu Jan 23 11:18:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347373 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1FA3717EA for ; Thu, 23 Jan 2020 11:19:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id E915D24655 for ; Thu, 23 Jan 2020 11:18:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="FzQbWT4e" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729049AbgAWLS7 (ORCPT ); Thu, 23 Jan 2020 06:18:59 -0500 Received: from mail-pl1-f194.google.com ([209.85.214.194]:36075 "EHLO mail-pl1-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729047AbgAWLS5 (ORCPT ); Thu, 23 Jan 2020 06:18:57 -0500 Received: by mail-pl1-f194.google.com with SMTP id a6so1229955plm.3 for ; Thu, 23 Jan 2020 03:18:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=/ciesSehnrIZLgszGAVgbuMPEfl8YFzNthyWVaOTgD8=; b=FzQbWT4epM0w/2PUcR3c7S6UERgcxZ8NR14dOUmOeVn9tBjk7wO3oIPI3UpxXC5c5o FnIcs6xhpTyhKmk5z4IgjJlGKG4hem3e6vD80cNyUV5x3xpH7dc89FePmV8v6GtlmFa4 mOrNn0iicX3b8NzCsZLhQW6F7T0fRDJ3Yy+G9ktyxi7h1z0Ha7fzVycvJyaAdHV71YDo rCInuHUxFLiQCn+I1gAA9Qmw3VrI4nysRh5gngP3rzVWodBT84bqBwHZR0ECe1Hhy9Du 4Sfz2z3MLqvcommizRkWdx5lUtQFc9xid1Q79hXS06mM6zMLJJUMus+X6tIy2UwRB77S MmHg== X-Google-DKIM-Signature: 
From: Manivannan Sadhasivam
To: gregkh@linuxfoundation.org, arnd@arndb.de
Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam
Subject: [PATCH 03/16] bus: mhi: core: Add support for registering MHI client drivers
Date: Thu, 23 Jan 2020 16:48:23 +0530
Message-Id: <20200123111836.7414-4-manivannan.sadhasivam@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org>
References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org>
Sender: linux-arm-msm-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-arm-msm@vger.kernel.org

This commit adds support for registering MHI client drivers with the MHI stack. MHI client drivers bind to one or more MHI devices in order to send and receive the upper-layer protocol packets like IP packets, modem control messages, and diagnostics messages over the MHI bus.
This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/987 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: splitted and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/init.c | 149 ++++++++++++++++++++++++++++++++++++ include/linux/mhi.h | 35 +++++++++ 2 files changed, 184 insertions(+) diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index 5b817ec250e0..60dcf2ad3a5f 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -376,8 +376,157 @@ struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl) return mhi_dev; } +static int mhi_driver_probe(struct device *dev) +{ + struct mhi_device *mhi_dev = to_mhi_device(dev); + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + struct device_driver *drv = dev->driver; + struct mhi_driver *mhi_drv = to_mhi_driver(drv); + struct mhi_event *mhi_event; + struct mhi_chan *ul_chan = mhi_dev->ul_chan; + struct mhi_chan *dl_chan = mhi_dev->dl_chan; + + if (ul_chan) { + /* + * If channel supports LPM notifications then status_cb should + * be provided + */ + if (ul_chan->lpm_notify && !mhi_drv->status_cb) + return -EINVAL; + + /* For non-offload channels then xfer_cb should be provided */ + if (!ul_chan->offload_ch && !mhi_drv->ul_xfer_cb) + return -EINVAL; + + ul_chan->xfer_cb = mhi_drv->ul_xfer_cb; + } + + if (dl_chan) { + /* + * If channel supports LPM notifications then status_cb should + * be provided + */ + if (dl_chan->lpm_notify && !mhi_drv->status_cb) + return -EINVAL; + + /* For non-offload channels then xfer_cb should be provided */ + if (!dl_chan->offload_ch && !mhi_drv->dl_xfer_cb) + return -EINVAL; + + mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index]; + + /* + * If the channel event ring is managed by client, then + * status_cb must be provided so that the framework can + * notify pending data + */ + if (mhi_event->cl_manage && !mhi_drv->status_cb) + return -EINVAL; + + dl_chan->xfer_cb = mhi_drv->dl_xfer_cb; + } + + /* Call the user provided probe function */ + return mhi_drv->probe(mhi_dev, mhi_dev->id); +} + +static int mhi_driver_remove(struct device *dev) +{ + struct mhi_device *mhi_dev = to_mhi_device(dev); + struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver); + struct mhi_chan *mhi_chan; + enum mhi_ch_state ch_state[] = { + MHI_CH_STATE_DISABLED, + MHI_CH_STATE_DISABLED + }; + int dir; + + /* Skip if it is a controller device */ + if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER) + return 0; + + /* Reset both channels */ + for (dir = 0; dir < 2; dir++) { + mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan; + + if (!mhi_chan) + continue; + + /* Wake all threads waiting for completion */ + write_lock_irq(&mhi_chan->lock); + mhi_chan->ccs = MHI_EV_CC_INVALID; + complete_all(&mhi_chan->completion); + write_unlock_irq(&mhi_chan->lock); + + /* Set the channel state to disabled */ + mutex_lock(&mhi_chan->mutex); + write_lock_irq(&mhi_chan->lock); + ch_state[dir] = mhi_chan->ch_state; + mhi_chan->ch_state = MHI_CH_STATE_SUSPENDED; + write_unlock_irq(&mhi_chan->lock); + + mutex_unlock(&mhi_chan->mutex); + } + + mhi_drv->remove(mhi_dev); + + /* De-init channel if it was enabled */ + for (dir = 0; dir < 2; dir++) { + mhi_chan = dir ? 
mhi_dev->ul_chan : mhi_dev->dl_chan; + + if (!mhi_chan) + continue; + + mutex_lock(&mhi_chan->mutex); + + mhi_chan->ch_state = MHI_CH_STATE_DISABLED; + + mutex_unlock(&mhi_chan->mutex); + } + + return 0; +} + +int mhi_driver_register(struct mhi_driver *mhi_drv) +{ + struct device_driver *driver = &mhi_drv->driver; + + if (!mhi_drv->probe || !mhi_drv->remove) + return -EINVAL; + + driver->bus = &mhi_bus_type; + driver->probe = mhi_driver_probe; + driver->remove = mhi_driver_remove; + + return driver_register(driver); +} +EXPORT_SYMBOL_GPL(mhi_driver_register); + +void mhi_driver_unregister(struct mhi_driver *mhi_drv) +{ + driver_unregister(&mhi_drv->driver); +} +EXPORT_SYMBOL_GPL(mhi_driver_unregister); + static int mhi_match(struct device *dev, struct device_driver *drv) { + struct mhi_device *mhi_dev = to_mhi_device(dev); + struct mhi_driver *mhi_drv = to_mhi_driver(drv); + const struct mhi_device_id *id; + + /* + * If the device is a controller type then there is no client driver + * associated with it + */ + if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER) + return 0; + + for (id = mhi_drv->id_table; id->chan[0]; id++) + if (!strcmp(mhi_dev->chan_name, id->chan)) { + mhi_dev->id = id; + return 1; + } + return 0; }; diff --git a/include/linux/mhi.h b/include/linux/mhi.h index 69cf9a4b06c7..0fdad987dd70 100644 --- a/include/linux/mhi.h +++ b/include/linux/mhi.h @@ -400,6 +400,29 @@ struct mhi_result { int transaction_status; }; +/** + * struct mhi_driver - Structure representing a MHI client driver + * @probe: CB function for client driver probe function + * @remove: CB function for client driver remove function + * @ul_xfer_cb: CB function for UL data transfer + * @dl_xfer_cb: CB function for DL data transfer + * @status_cb: CB functions for asynchronous status + * @driver: Device driver model driver + */ +struct mhi_driver { + const struct mhi_device_id *id_table; + int (*probe)(struct mhi_device *mhi_dev, + const struct mhi_device_id *id); + void (*remove)(struct mhi_device *mhi_dev); + void (*ul_xfer_cb)(struct mhi_device *mhi_dev, + struct mhi_result *result); + void (*dl_xfer_cb)(struct mhi_device *mhi_dev, + struct mhi_result *result); + void (*status_cb)(struct mhi_device *mhi_dev, enum mhi_callback mhi_cb); + struct device_driver driver; +}; + +#define to_mhi_driver(drv) container_of(drv, struct mhi_driver, driver) #define to_mhi_device(dev) container_of(dev, struct mhi_device, dev) /** @@ -435,4 +458,16 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl, */ void mhi_unregister_controller(struct mhi_controller *mhi_cntrl); +/** + * mhi_driver_register - Register driver with MHI framework + * @mhi_drv: Driver associated with the device + */ +int mhi_driver_register(struct mhi_driver *mhi_drv); + +/** + * mhi_driver_unregister - Unregister a driver for mhi_devices + * @mhi_drv: Driver associated with the device + */ +void mhi_driver_unregister(struct mhi_driver *mhi_drv); + #endif /* _MHI_H_ */ From patchwork Thu Jan 23 11:18:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347375 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 39254159A for ; Thu, 23 Jan 2020 11:19:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 0CDB124655 for ; Thu, 23 Jan 2020 11:19:06 +0000 (UTC) 
From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 04/16] bus: mhi: core: Add support for creating and destroying MHI devices Date: Thu, 23 Jan 2020 16:48:24 +0530 Message-Id: <20200123111836.7414-5-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org

This commit adds support for creating and destroying MHI devices. The MHI devices bind to MHI channels and are used to transfer data between the MHI host and the client device.
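For context, mhi_create_devices() in this patch pairs adjacent UL/DL channels that share a name into one mhi_device and names the device from the channel number and channel name ("%04x_%s"). The new ul_chan_id/dl_chan_id fields let a client see exactly which channels it was bound to; a hypothetical probe callback could log the binding like this (the function name and message are illustrative only):

#include <linux/mhi.h>

static int sample_probe(struct mhi_device *mhi_dev,
			const struct mhi_device_id *id)
{
	/* chan_name is shared by the UL/DL pair created in mhi_create_devices() */
	dev_info(&mhi_dev->dev, "bound to %s (UL chan: %d, DL chan: %d)\n",
		 mhi_dev->chan_name, mhi_dev->ul_chan_id,
		 mhi_dev->dl_chan_id);

	return 0;
}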
This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/989 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: splitted from pm patch and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/Makefile | 2 +- drivers/bus/mhi/core/internal.h | 1 + drivers/bus/mhi/core/main.c | 123 ++++++++++++++++++++++++++++++++ include/linux/mhi.h | 6 ++ 4 files changed, 131 insertions(+), 1 deletion(-) create mode 100644 drivers/bus/mhi/core/main.c diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile index 2db32697c67f..77f7730da4bf 100644 --- a/drivers/bus/mhi/core/Makefile +++ b/drivers/bus/mhi/core/Makefile @@ -1,3 +1,3 @@ obj-$(CONFIG_MHI_BUS) := mhi.o -mhi-y := init.o +mhi-y := init.o main.o diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h index 21f686d3a140..ea7f1d7b0129 100644 --- a/drivers/bus/mhi/core/internal.h +++ b/drivers/bus/mhi/core/internal.h @@ -140,6 +140,7 @@ struct mhi_chan { bool offload_ch; bool pre_alloc; bool auto_start; + bool wake_capable; int (*gen_tre)(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan, void *buf, void *cb, size_t len, enum mhi_flags flags); diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c new file mode 100644 index 000000000000..216fd8691140 --- /dev/null +++ b/drivers/bus/mhi/core/main.c @@ -0,0 +1,123 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. + * + */ + +#define dev_fmt(fmt) "MHI: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "internal.h" + +int mhi_destroy_device(struct device *dev, void *data) +{ + struct mhi_device *mhi_dev; + struct mhi_controller *mhi_cntrl; + + if (dev->bus != &mhi_bus_type) + return 0; + + mhi_dev = to_mhi_device(dev); + mhi_cntrl = mhi_dev->mhi_cntrl; + + /* Only destroy virtual devices thats attached to bus */ + if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER) + return 0; + + dev_dbg(mhi_cntrl->dev, "destroy device for chan:%s\n", + mhi_dev->chan_name); + + /* Notify the client and remove the device from MHI bus */ + device_del(dev); + put_device(dev); + + return 0; +} + +static void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason) +{ + struct mhi_driver *mhi_drv; + + if (!mhi_dev->dev.driver) + return; + + mhi_drv = to_mhi_driver(mhi_dev->dev.driver); + + if (mhi_drv->status_cb) + mhi_drv->status_cb(mhi_dev, cb_reason); +} + +/* Bind MHI channels to MHI devices */ +void mhi_create_devices(struct mhi_controller *mhi_cntrl) +{ + int i; + struct mhi_chan *mhi_chan; + struct mhi_device *mhi_dev; + int ret; + + mhi_chan = mhi_cntrl->mhi_chan; + for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) { + if (!mhi_chan->configured || mhi_chan->mhi_dev || + !(mhi_chan->ee_mask & BIT(mhi_cntrl->ee))) + continue; + mhi_dev = mhi_alloc_device(mhi_cntrl); + if (!mhi_dev) + return; + + mhi_dev->dev_type = MHI_DEVICE_XFER; + switch (mhi_chan->dir) { + case DMA_TO_DEVICE: + mhi_dev->ul_chan = mhi_chan; + mhi_dev->ul_chan_id = mhi_chan->chan; + break; + case DMA_FROM_DEVICE: + /* We use dl_chan as offload channels */ + mhi_dev->dl_chan = mhi_chan; + mhi_dev->dl_chan_id = mhi_chan->chan; + break; + default: + dev_err(mhi_cntrl->dev, "Direction not supported\n"); + mhi_dealloc_device(mhi_cntrl, mhi_dev); + return; + } + + mhi_chan->mhi_dev = mhi_dev; + + /* Check next channel if it matches */ + if ((i + 1) < mhi_cntrl->max_chan && 
mhi_chan[1].configured) { + if (!strcmp(mhi_chan[1].name, mhi_chan->name)) { + i++; + mhi_chan++; + if (mhi_chan->dir == DMA_TO_DEVICE) { + mhi_dev->ul_chan = mhi_chan; + mhi_dev->ul_chan_id = mhi_chan->chan; + } else { + mhi_dev->dl_chan = mhi_chan; + mhi_dev->dl_chan_id = mhi_chan->chan; + } + mhi_chan->mhi_dev = mhi_dev; + } + } + + /* Channel name is same for both UL and DL */ + mhi_dev->chan_name = mhi_chan->name; + dev_set_name(&mhi_dev->dev, "%04x_%s", mhi_chan->chan, + mhi_dev->chan_name); + + /* Init wakeup source if available */ + if (mhi_dev->dl_chan && mhi_dev->dl_chan->wake_capable) + device_init_wakeup(&mhi_dev->dev, true); + + ret = device_add(&mhi_dev->dev); + if (ret) + mhi_dealloc_device(mhi_cntrl, mhi_dev); + } +} diff --git a/include/linux/mhi.h b/include/linux/mhi.h index 0fdad987dd70..cb6ddd23463c 100644 --- a/include/linux/mhi.h +++ b/include/linux/mhi.h @@ -166,6 +166,7 @@ enum mhi_db_brst_mode { * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition * @auto_queue: Framework will automatically queue buffers for DL traffic * @auto_start: Automatically start (open) this channel + * @wake-capable: Channel capable of waking up the system */ struct mhi_channel_config { u32 num; @@ -184,6 +185,7 @@ struct mhi_channel_config { bool doorbell_mode_switch; bool auto_queue; bool auto_start; + bool wake_capable; }; /** @@ -365,6 +367,8 @@ struct mhi_controller { * struct mhi_device - Structure representing a MHI device which binds * to channels * @dev: Driver model device node for the MHI device + * @ul_chan_id: MHI channel id for UL transfer + * @dl_chan_id: MHI channel id for DL transfer * @tiocm: Device current terminal settings * @id: Pointer to MHI device ID struct * @chan_name: Name of the channel to which the device binds @@ -376,6 +380,8 @@ struct mhi_controller { */ struct mhi_device { struct device dev; + int ul_chan_id; + int dl_chan_id; u32 tiocm; const struct mhi_device_id *id; const char *chan_name; From patchwork Thu Jan 23 11:18:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347377 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B727E17EA for ; Thu, 23 Jan 2020 11:19:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 8040824655 for ; Thu, 23 Jan 2020 11:19:06 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="lIRn1UKk" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729122AbgAWLTF (ORCPT ); Thu, 23 Jan 2020 06:19:05 -0500 Received: from mail-pj1-f68.google.com ([209.85.216.68]:56151 "EHLO mail-pj1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726871AbgAWLTE (ORCPT ); Thu, 23 Jan 2020 06:19:04 -0500 Received: by mail-pj1-f68.google.com with SMTP id d5so1060572pjz.5 for ; Thu, 23 Jan 2020 03:19:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=SLQs4k3WojC3qzz8aoMnojas6ffyTEliChwZo/IscSU=; b=lIRn1UKk9sdtJ8riwi8HbeMhBfvX31a15JTAzufZIAd7WMQn2Umg1mC06Hdce1OF3Z HnGUcpYt1VGrM0wK0cICIsB69zR2uZPXbuNmoaXxK3zTwnoLbe4xkkcen0Z1NX3Mg/yR xTRbmTne6qL+K2CRmXfGm7BmameUSSYs60w1789ZOe9+p4SP5bug3Vg5PhvuyXauMl7n 
From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 05/16] bus: mhi: core: Add support for ringing channel/event ring doorbells Date: Thu, 23 Jan 2020 16:48:25 +0530 Message-Id: <20200123111836.7414-6-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org

This commit adds support for ringing channel and event ring doorbells by the MHI host. The host rings these doorbells through MMIO registers to notify the client device that it has queued transfer and event ring elements that need processing.
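Conceptually, ringing a doorbell amounts to translating the ring's host-side write pointer into the device-visible (IOMMU) address and writing that 64-bit value to the doorbell register, upper word first, which is what mhi_ring_chan_db() and mhi_write_db() in the diff below do. A stripped-down sketch of just that idea (function and parameter names are illustrative, not part of the patch):

#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/types.h>

static void sketch_ring_doorbell(void __iomem *db_addr, dma_addr_t iommu_base,
				 void *ring_base, void *ring_wp)
{
	/* Convert the host-side write pointer into the device's view of it */
	dma_addr_t db_val = iommu_base + (ring_wp - ring_base);

	/* 64-bit doorbell: upper 32 bits at offset 4, lower 32 bits at offset 0 */
	writel_relaxed(upper_32_bits(db_val), db_addr + 4);
	writel_relaxed(lower_32_bits(db_val), db_addr);
}

In burst (doorbell coalescing) mode the write is further gated on db_mode, which is what mhi_db_brstmode() vs. mhi_db_brstmode_disable() implement.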
This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/989 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: splitted from pm patch and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/init.c | 140 ++++++++++++++++ drivers/bus/mhi/core/internal.h | 275 ++++++++++++++++++++++++++++++++ drivers/bus/mhi/core/main.c | 118 ++++++++++++++ include/linux/mhi.h | 5 + 4 files changed, 538 insertions(+) diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index 60dcf2ad3a5f..588166b588b4 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -19,6 +19,136 @@ #include #include "internal.h" +int mhi_init_mmio(struct mhi_controller *mhi_cntrl) +{ + u32 val; + int i, ret; + struct mhi_chan *mhi_chan; + struct mhi_event *mhi_event; + void __iomem *base = mhi_cntrl->regs; + struct { + u32 offset; + u32 mask; + u32 shift; + u32 val; + } reg_info[] = { + { + CCABAP_HIGHER, U32_MAX, 0, + upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr), + }, + { + CCABAP_LOWER, U32_MAX, 0, + lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr), + }, + { + ECABAP_HIGHER, U32_MAX, 0, + upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr), + }, + { + ECABAP_LOWER, U32_MAX, 0, + lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr), + }, + { + CRCBAP_HIGHER, U32_MAX, 0, + upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr), + }, + { + CRCBAP_LOWER, U32_MAX, 0, + lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr), + }, + { + MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT, + mhi_cntrl->total_ev_rings, + }, + { + MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT, + mhi_cntrl->hw_ev_rings, + }, + { + MHICTRLBASE_HIGHER, U32_MAX, 0, + upper_32_bits(mhi_cntrl->iova_start), + }, + { + MHICTRLBASE_LOWER, U32_MAX, 0, + lower_32_bits(mhi_cntrl->iova_start), + }, + { + MHIDATABASE_HIGHER, U32_MAX, 0, + upper_32_bits(mhi_cntrl->iova_start), + }, + { + MHIDATABASE_LOWER, U32_MAX, 0, + lower_32_bits(mhi_cntrl->iova_start), + }, + { + MHICTRLLIMIT_HIGHER, U32_MAX, 0, + upper_32_bits(mhi_cntrl->iova_stop), + }, + { + MHICTRLLIMIT_LOWER, U32_MAX, 0, + lower_32_bits(mhi_cntrl->iova_stop), + }, + { + MHIDATALIMIT_HIGHER, U32_MAX, 0, + upper_32_bits(mhi_cntrl->iova_stop), + }, + { + MHIDATALIMIT_LOWER, U32_MAX, 0, + lower_32_bits(mhi_cntrl->iova_stop), + }, + { 0, 0, 0 } + }; + + dev_dbg(mhi_cntrl->dev, "Initializing MHI registers\n"); + + /* Read channel db offset */ + ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK, + CHDBOFF_CHDBOFF_SHIFT, &val); + if (ret) { + dev_err(mhi_cntrl->dev, "Unable to read CHDBOFF register\n"); + return -EIO; + } + + /* Setup wake db */ + mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB); + mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0); + mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0); + mhi_cntrl->wake_set = false; + + /* Setup channel db address for each channel in tre_ring */ + mhi_chan = mhi_cntrl->mhi_chan; + for (i = 0; i < mhi_cntrl->max_chan; i++, val += 8, mhi_chan++) + mhi_chan->tre_ring.db_addr = base + val; + + /* Read event ring db offset */ + ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK, + ERDBOFF_ERDBOFF_SHIFT, &val); + if (ret) { + dev_err(mhi_cntrl->dev, "Unable to read ERDBOFF register\n"); + return -EIO; + } + + /* Setup event db address for each ev_ring */ + mhi_event = mhi_cntrl->mhi_event; + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) { + if (mhi_event->offload_ev) + continue; + + 
mhi_event->ring.db_addr = base + val; + } + + /* Setup DB register for primary CMD rings */ + mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING].ring.db_addr = base + CRDB_LOWER; + + /* Write to MMIO registers */ + for (i = 0; reg_info[i].offset; i++) + mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset, + reg_info[i].mask, reg_info[i].shift, + reg_info[i].val); + + return 0; +} + static int parse_ev_cfg(struct mhi_controller *mhi_cntrl, struct mhi_controller_config *config) { @@ -63,6 +193,11 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl, if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode)) goto error_ev_cfg; + if (mhi_event->db_cfg.brstmode == MHI_DB_BRST_ENABLE) + mhi_event->db_cfg.process_db = mhi_db_brstmode; + else + mhi_event->db_cfg.process_db = mhi_db_brstmode_disable; + mhi_event->data_type = event_cfg->data_type; mhi_event->hw_ring = event_cfg->hardware_event; @@ -194,6 +329,11 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl, } } + if (mhi_chan->db_cfg.brstmode == MHI_DB_BRST_ENABLE) + mhi_chan->db_cfg.process_db = mhi_db_brstmode; + else + mhi_chan->db_cfg.process_db = mhi_db_brstmode_disable; + mhi_chan->configured = true; if (mhi_chan->lpm_notify) diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h index ea7f1d7b0129..a4d10916984a 100644 --- a/drivers/bus/mhi/core/internal.h +++ b/drivers/bus/mhi/core/internal.h @@ -9,6 +9,255 @@ extern struct bus_type mhi_bus_type; +/* MHI MMIO register mapping */ +#define PCI_INVALID_READ(val) (val == U32_MAX) + +#define MHIREGLEN (0x0) +#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF) +#define MHIREGLEN_MHIREGLEN_SHIFT (0) + +#define MHIVER (0x8) +#define MHIVER_MHIVER_MASK (0xFFFFFFFF) +#define MHIVER_MHIVER_SHIFT (0) + +#define MHICFG (0x10) +#define MHICFG_NHWER_MASK (0xFF000000) +#define MHICFG_NHWER_SHIFT (24) +#define MHICFG_NER_MASK (0xFF0000) +#define MHICFG_NER_SHIFT (16) +#define MHICFG_NHWCH_MASK (0xFF00) +#define MHICFG_NHWCH_SHIFT (8) +#define MHICFG_NCH_MASK (0xFF) +#define MHICFG_NCH_SHIFT (0) + +#define CHDBOFF (0x18) +#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF) +#define CHDBOFF_CHDBOFF_SHIFT (0) + +#define ERDBOFF (0x20) +#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF) +#define ERDBOFF_ERDBOFF_SHIFT (0) + +#define BHIOFF (0x28) +#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF) +#define BHIOFF_BHIOFF_SHIFT (0) + +#define BHIEOFF (0x2C) +#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF) +#define BHIEOFF_BHIEOFF_SHIFT (0) + +#define DEBUGOFF (0x30) +#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF) +#define DEBUGOFF_DEBUGOFF_SHIFT (0) + +#define MHICTRL (0x38) +#define MHICTRL_MHISTATE_MASK (0x0000FF00) +#define MHICTRL_MHISTATE_SHIFT (8) +#define MHICTRL_RESET_MASK (0x2) +#define MHICTRL_RESET_SHIFT (1) + +#define MHISTATUS (0x48) +#define MHISTATUS_MHISTATE_MASK (0x0000FF00) +#define MHISTATUS_MHISTATE_SHIFT (8) +#define MHISTATUS_SYSERR_MASK (0x4) +#define MHISTATUS_SYSERR_SHIFT (2) +#define MHISTATUS_READY_MASK (0x1) +#define MHISTATUS_READY_SHIFT (0) + +#define CCABAP_LOWER (0x58) +#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF) +#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0) + +#define CCABAP_HIGHER (0x5C) +#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF) +#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0) + +#define ECABAP_LOWER (0x60) +#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF) +#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0) + +#define ECABAP_HIGHER (0x64) +#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF) +#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0) + +#define CRCBAP_LOWER (0x68) +#define 
CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF) +#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0) + +#define CRCBAP_HIGHER (0x6C) +#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF) +#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0) + +#define CRDB_LOWER (0x70) +#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF) +#define CRDB_LOWER_CRDB_LOWER_SHIFT (0) + +#define CRDB_HIGHER (0x74) +#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF) +#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0) + +#define MHICTRLBASE_LOWER (0x80) +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF) +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0) + +#define MHICTRLBASE_HIGHER (0x84) +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF) +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0) + +#define MHICTRLLIMIT_LOWER (0x88) +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF) +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0) + +#define MHICTRLLIMIT_HIGHER (0x8C) +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF) +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0) + +#define MHIDATABASE_LOWER (0x98) +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF) +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0) + +#define MHIDATABASE_HIGHER (0x9C) +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF) +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0) + +#define MHIDATALIMIT_LOWER (0xA0) +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF) +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0) + +#define MHIDATALIMIT_HIGHER (0xA4) +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF) +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0) + +/* Host request register */ +#define MHI_SOC_RESET_REQ_OFFSET (0xB0) +#define MHI_SOC_RESET_REQ BIT(0) + +/* MHI BHI offfsets */ +#define BHI_BHIVERSION_MINOR (0x00) +#define BHI_BHIVERSION_MAJOR (0x04) +#define BHI_IMGADDR_LOW (0x08) +#define BHI_IMGADDR_HIGH (0x0C) +#define BHI_IMGSIZE (0x10) +#define BHI_RSVD1 (0x14) +#define BHI_IMGTXDB (0x18) +#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF) +#define BHI_TXDB_SEQNUM_SHFT (0) +#define BHI_RSVD2 (0x1C) +#define BHI_INTVEC (0x20) +#define BHI_RSVD3 (0x24) +#define BHI_EXECENV (0x28) +#define BHI_STATUS (0x2C) +#define BHI_ERRCODE (0x30) +#define BHI_ERRDBG1 (0x34) +#define BHI_ERRDBG2 (0x38) +#define BHI_ERRDBG3 (0x3C) +#define BHI_SERIALNU (0x40) +#define BHI_SBLANTIROLLVER (0x44) +#define BHI_NUMSEG (0x48) +#define BHI_MSMHWID(n) (0x4C + (0x4 * n)) +#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n)) +#define BHI_RSVD5 (0xC4) +#define BHI_STATUS_MASK (0xC0000000) +#define BHI_STATUS_SHIFT (30) +#define BHI_STATUS_ERROR (3) +#define BHI_STATUS_SUCCESS (2) +#define BHI_STATUS_RESET (0) + +/* MHI BHIE offsets */ +#define BHIE_MSMSOCID_OFFS (0x0000) +#define BHIE_TXVECADDR_LOW_OFFS (0x002C) +#define BHIE_TXVECADDR_HIGH_OFFS (0x0030) +#define BHIE_TXVECSIZE_OFFS (0x0034) +#define BHIE_TXVECDB_OFFS (0x003C) +#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF) +#define BHIE_TXVECDB_SEQNUM_SHFT (0) +#define BHIE_TXVECSTATUS_OFFS (0x0044) +#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF) +#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0) +#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000) +#define BHIE_TXVECSTATUS_STATUS_SHFT (30) +#define BHIE_TXVECSTATUS_STATUS_RESET (0x00) +#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02) +#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03) +#define BHIE_RXVECADDR_LOW_OFFS (0x0060) +#define BHIE_RXVECADDR_HIGH_OFFS (0x0064) +#define 
BHIE_RXVECSIZE_OFFS (0x0068) +#define BHIE_RXVECDB_OFFS (0x0070) +#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF) +#define BHIE_RXVECDB_SEQNUM_SHFT (0) +#define BHIE_RXVECSTATUS_OFFS (0x0078) +#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF) +#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0) +#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000) +#define BHIE_RXVECSTATUS_STATUS_SHFT (30) +#define BHIE_RXVECSTATUS_STATUS_RESET (0x00) +#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02) +#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03) + +struct mhi_event_ctxt { + u32 reserved : 8; + u32 intmodc : 8; + u32 intmodt : 16; + u32 ertype; + u32 msivec; + + u64 rbase __packed __aligned(4); + u64 rlen __packed __aligned(4); + u64 rp __packed __aligned(4); + u64 wp __packed __aligned(4); +}; + +struct mhi_chan_ctxt { + u32 chstate : 8; + u32 brstmode : 2; + u32 pollcfg : 6; + u32 reserved : 16; + u32 chtype; + u32 erindex; + + u64 rbase __packed __aligned(4); + u64 rlen __packed __aligned(4); + u64 rp __packed __aligned(4); + u64 wp __packed __aligned(4); +}; + +struct mhi_cmd_ctxt { + u32 reserved0; + u32 reserved1; + u32 reserved2; + + u64 rbase __packed __aligned(4); + u64 rlen __packed __aligned(4); + u64 rp __packed __aligned(4); + u64 wp __packed __aligned(4); +}; + +struct mhi_ctxt { + struct mhi_event_ctxt *er_ctxt; + struct mhi_chan_ctxt *chan_ctxt; + struct mhi_cmd_ctxt *cmd_ctxt; + dma_addr_t er_ctxt_addr; + dma_addr_t chan_ctxt_addr; + dma_addr_t cmd_ctxt_addr; +}; + +struct mhi_tre { + u64 ptr; + u32 dword[2]; +}; + +struct bhi_vec_entry { + u64 dma_addr; + u64 size; +}; + +enum mhi_cmd_type { + MHI_CMD_NOP = 1, + MHI_CMD_RESET_CHAN = 16, + MHI_CMD_STOP_CHAN = 17, + MHI_CMD_START_CHAN = 18, +}; + /* MHI transfer completion events */ enum mhi_ev_ccs { MHI_EV_CC_INVALID = 0x0, @@ -37,6 +286,7 @@ enum mhi_ch_state { #define NR_OF_CMD_RINGS 1 #define CMD_EL_PER_RING 128 #define PRIMARY_CMD_RING 0 +#define MHI_DEV_WAKE_DB 127 #define MHI_MAX_MTU 0xffff enum mhi_er_type { @@ -167,4 +417,29 @@ static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl, int mhi_destroy_device(struct device *dev, void *data); void mhi_create_devices(struct mhi_controller *mhi_cntrl); +/* Register access methods */ +void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg, + void __iomem *db_addr, dma_addr_t db_val); +void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl, + struct db_cfg *db_mode, void __iomem *db_addr, + dma_addr_t db_val); +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl, + void __iomem *base, u32 offset, u32 *out); +int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl, + void __iomem *base, u32 offset, u32 mask, + u32 shift, u32 *out); +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base, + u32 offset, u32 val); +void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base, + u32 offset, u32 mask, u32 shift, u32 val); +void mhi_ring_er_db(struct mhi_event *mhi_event); +void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr, + dma_addr_t db_val); +void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd); +void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan); + +/* Initialization methods */ +int mhi_init_mmio(struct mhi_controller *mhi_cntrl); + #endif /* _MHI_INT_H */ diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c index 216fd8691140..134ef9b2cc78 100644 --- a/drivers/bus/mhi/core/main.c +++ 
b/drivers/bus/mhi/core/main.c @@ -17,6 +17,124 @@ #include #include "internal.h" +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl, + void __iomem *base, u32 offset, u32 *out) +{ + u32 tmp = readl_relaxed(base + offset); + + /* If there is any unexpected value, query the link status */ + if (PCI_INVALID_READ(tmp) && + mhi_cntrl->link_status(mhi_cntrl, mhi_cntrl->priv_data)) + return -EIO; + + *out = tmp; + + return 0; +} + +int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl, + void __iomem *base, u32 offset, + u32 mask, u32 shift, u32 *out) +{ + u32 tmp; + int ret; + + ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp); + if (ret) + return ret; + + *out = (tmp & mask) >> shift; + + return 0; +} + +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base, + u32 offset, u32 val) +{ + writel_relaxed(val, base + offset); +} + +void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base, + u32 offset, u32 mask, u32 shift, u32 val) +{ + int ret; + u32 tmp; + + ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp); + if (ret) + return; + + tmp &= ~mask; + tmp |= (val << shift); + mhi_write_reg(mhi_cntrl, base, offset, tmp); +} + +void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr, + dma_addr_t db_val) +{ + mhi_write_reg(mhi_cntrl, db_addr, 4, upper_32_bits(db_val)); + mhi_write_reg(mhi_cntrl, db_addr, 0, lower_32_bits(db_val)); +} + +void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, + struct db_cfg *db_cfg, + void __iomem *db_addr, + dma_addr_t db_val) +{ + if (db_cfg->db_mode) { + db_cfg->db_val = db_val; + mhi_write_db(mhi_cntrl, db_addr, db_val); + db_cfg->db_mode = 0; + } +} + +void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl, + struct db_cfg *db_cfg, + void __iomem *db_addr, + dma_addr_t db_val) +{ + db_cfg->db_val = db_val; + mhi_write_db(mhi_cntrl, db_addr, db_val); +} + +void mhi_ring_er_db(struct mhi_event *mhi_event) +{ + struct mhi_ring *ring = &mhi_event->ring; + + mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg, + ring->db_addr, *ring->ctxt_wp); +} + +void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd) +{ + dma_addr_t db; + struct mhi_ring *ring = &mhi_cmd->ring; + + db = ring->iommu_base + (ring->wp - ring->base); + *ring->ctxt_wp = db; + mhi_write_db(mhi_cntrl, ring->db_addr, db); +} + +void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan) +{ + struct mhi_ring *ring = &mhi_chan->tre_ring; + dma_addr_t db; + + db = ring->iommu_base + (ring->wp - ring->base); + *ring->ctxt_wp = db; + mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg, + ring->db_addr, db); +} + +enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl) +{ + u32 exec; + int ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_EXECENV, &exec); + + return (ret) ? 
MHI_EE_MAX : exec; +} + int mhi_destroy_device(struct device *dev, void *data) { struct mhi_device *mhi_dev; diff --git a/include/linux/mhi.h b/include/linux/mhi.h index cb6ddd23463c..d08f212cdfd0 100644 --- a/include/linux/mhi.h +++ b/include/linux/mhi.h @@ -246,6 +246,8 @@ struct mhi_controller_config { * @dev_id: Device ID of the controller * @bus_id: Physical bus instance used by the controller * @regs: Base address of MHI MMIO register space + * @bhi: Points to base of MHI BHI register space + * @wake_db: MHI WAKE doorbell register address * @iova_start: IOMMU starting address for data * @iova_stop: IOMMU stop address for data * @fw_image: Firmware image name for normal booting @@ -306,6 +308,9 @@ struct mhi_controller { u32 dev_id; u32 bus_id; void __iomem *regs; + void __iomem *bhi; + void __iomem *wake_db; + dma_addr_t iova_start; dma_addr_t iova_stop; const char *fw_image; From patchwork Thu Jan 23 11:18:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347379 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 00D611580 for ; Thu, 23 Jan 2020 11:19:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id AB67A2467F for ; Thu, 23 Jan 2020 11:19:11 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="CgcYWLHa" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729140AbgAWLTK (ORCPT ); Thu, 23 Jan 2020 06:19:10 -0500 Received: from mail-pg1-f195.google.com ([209.85.215.195]:34726 "EHLO mail-pg1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729138AbgAWLTI (ORCPT ); Thu, 23 Jan 2020 06:19:08 -0500 Received: by mail-pg1-f195.google.com with SMTP id r11so1239423pgf.1 for ; Thu, 23 Jan 2020 03:19:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=bck4FEIZwLJxEgJfodppN6GjeHSlm20yQGpAdfjcv24=; b=CgcYWLHaWfUJBdPNciAIoWQMZbTR7GeUl2i74VKnTP8ilxuwUV7qgjHZQHKiFNjpf1 HN+50lN/cGnQ6TxDgigJkb4ZfSUo8id0rsMk+l79J64pKrdwAR0si2WPx+sb9tnJJUjr Bw/lHdWp4Z7UlIu6KheJ1SWQzL27LFRGjzgWQbszBavMqN/BVrDBiLhb87IicXzfOAYH 1d1IFwMEaMb2Cyfe2HACUFzDv6d9ec6o1Jh0HBt/nBuOdr8ZbMIYRLj4YxjpvGjZAily EzPDuYFbwMrtv/KwJ1sNVs+6CfNwY1ZQ8DSPjZtVxqT20EFi/Nje61/JjsP4zwG2DCk2 ksZw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=bck4FEIZwLJxEgJfodppN6GjeHSlm20yQGpAdfjcv24=; b=t3axTrf5ue4cFm9fU8sGi0aurSn8r2dONQB218PLHiWtbnD21yztxNdAn8BY0SrQII Ics2s7JuN1PDIZEmaLc/KDVFSd1xf7TTZcJEnzgq2m8OCimXREyp0UzjKV+cxMlQGMUz qg2EuIjmRr7Wqb4X2VozD2vK+m8RBqUmIjY706OItvLmnBv45/uBQjbjOdCn08tu6ZCi 8cIqZuyeUTRfwmQ4ffKJcQrTOxn+SAWxvr6rg3bOP8NDgUwpTuoeNuCIqcwHB2SJgtgi 4jTphvWOra8p43mdkDdi1B34vKYS8bAprTCtgTp9qqVUvthcEZC4YOjRXf3BlbMI7p2Z BenQ== X-Gm-Message-State: APjAAAVsmJUTlLPCxguHMlqzeP649Q5cSigfS4GVkmmp0vJFbKy9Myo1 MFQoIDW0rEikdTkZ8hFz6FkC X-Google-Smtp-Source: APXvYqxT5O9JwdZi+JeXKrtNDG1huoK20WtYmF/fV9NOAVkEM8H9jed1HLomSjJcpd5+BqavmBrpEA== X-Received: by 2002:a63:1853:: with SMTP id 19mr3322193pgy.170.1579778347013; Thu, 23 Jan 2020 03:19:07 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) 
by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:06 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 06/16] bus: mhi: core: Add support for PM state transitions Date: Thu, 23 Jan 2020 16:48:26 +0530 Message-Id: <20200123111836.7414-7-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This commit adds support for transitioning the MHI states as a part of the power management operations. Helpers functions are provided for the state transitions, which will be consumed by the actual power management routines. This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/989 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [jhugo: removed dma_zalloc_coherent() and fixed several bugs] Signed-off-by: Jeffrey Hugo [mani: splitted the pm patch and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/Makefile | 2 +- drivers/bus/mhi/core/init.c | 65 +++ drivers/bus/mhi/core/internal.h | 175 ++++++++ drivers/bus/mhi/core/main.c | 9 + drivers/bus/mhi/core/pm.c | 685 ++++++++++++++++++++++++++++++++ include/linux/mhi.h | 51 +++ 6 files changed, 986 insertions(+), 1 deletion(-) create mode 100644 drivers/bus/mhi/core/pm.c diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile index 77f7730da4bf..a0070f9cdfcd 100644 --- a/drivers/bus/mhi/core/Makefile +++ b/drivers/bus/mhi/core/Makefile @@ -1,3 +1,3 @@ obj-$(CONFIG_MHI_BUS) := mhi.o -mhi-y := init.o main.o +mhi-y := init.o main.o pm.o diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index 588166b588b4..83a03493c757 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -19,6 +19,62 @@ #include #include "internal.h" +const char * const mhi_ee_str[MHI_EE_MAX] = { + [MHI_EE_PBL] = "PBL", + [MHI_EE_SBL] = "SBL", + [MHI_EE_AMSS] = "AMSS", + [MHI_EE_RDDM] = "RDDM", + [MHI_EE_WFW] = "WFW", + [MHI_EE_PTHRU] = "PASS THRU", + [MHI_EE_EDL] = "EDL", + [MHI_EE_DISABLE_TRANSITION] = "DISABLE", + [MHI_EE_NOT_SUPPORTED] = "NOT SUPPORTED", +}; + +const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = { + [DEV_ST_TRANSITION_PBL] = "PBL", + [DEV_ST_TRANSITION_READY] = "READY", + [DEV_ST_TRANSITION_SBL] = "SBL", + [DEV_ST_TRANSITION_MISSION_MODE] = "MISSION_MODE", +}; + +const char * const mhi_state_str[MHI_STATE_MAX] = { + [MHI_STATE_RESET] = "RESET", + [MHI_STATE_READY] = "READY", + [MHI_STATE_M0] = "M0", + [MHI_STATE_M1] = "M1", + [MHI_STATE_M2] = "M2", + [MHI_STATE_M3] = "M3", + [MHI_STATE_M3_FAST] = "M3_FAST", + [MHI_STATE_BHI] = "BHI", + [MHI_STATE_SYS_ERR] = "SYS_ERR", +}; + +static const char * const mhi_pm_state_str[] = { + [MHI_PM_STATE_DISABLE] = "DISABLE", + [MHI_PM_STATE_POR] = "POR", + [MHI_PM_STATE_M0] = "M0", + [MHI_PM_STATE_M2] = "M2", + [MHI_PM_STATE_M3_ENTER] = "M?->M3", + [MHI_PM_STATE_M3] = "M3", + [MHI_PM_STATE_M3_EXIT] = "M3->M0", + [MHI_PM_STATE_FW_DL_ERR] = "FW DL Error", + 
[MHI_PM_STATE_SYS_ERR_DETECT] = "SYS_ERR Detect", + [MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS_ERR Process", + [MHI_PM_STATE_SHUTDOWN_PROCESS] = "SHUTDOWN Process", + [MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "LD or Error Fatal Detect", +}; + +const char *to_mhi_pm_state_str(enum mhi_pm_state state) +{ + int index = find_last_bit((unsigned long *)&state, 32); + + if (index >= ARRAY_SIZE(mhi_pm_state_str)) + return "Invalid State"; + + return mhi_pm_state_str[index]; +} + int mhi_init_mmio(struct mhi_controller *mhi_cntrl) { u32 val; @@ -372,6 +428,11 @@ static int parse_config(struct mhi_controller *mhi_cntrl, if (!mhi_cntrl->buffer_len) mhi_cntrl->buffer_len = MHI_MAX_MTU; + /* By default, host is allowed to ring DB in both M0 and M2 states */ + mhi_cntrl->db_access = MHI_PM_M0 | MHI_PM_M2; + if (config->m2_no_db) + mhi_cntrl->db_access &= ~MHI_PM_M2; + return 0; error_ev_cfg: @@ -408,8 +469,12 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl, } INIT_LIST_HEAD(&mhi_cntrl->transition_list); + mutex_init(&mhi_cntrl->pm_mutex); + rwlock_init(&mhi_cntrl->pm_lock); spin_lock_init(&mhi_cntrl->transition_lock); spin_lock_init(&mhi_cntrl->wlock); + INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker); + INIT_WORK(&mhi_cntrl->syserr_worker, mhi_pm_sys_err_worker); init_waitqueue_head(&mhi_cntrl->state_event); mhi_cmd = mhi_cntrl->mhi_cmd; diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h index a4d10916984a..2dba4923482a 100644 --- a/drivers/bus/mhi/core/internal.h +++ b/drivers/bus/mhi/core/internal.h @@ -258,6 +258,79 @@ enum mhi_cmd_type { MHI_CMD_START_CHAN = 18, }; +/* No operation command */ +#define MHI_TRE_CMD_NOOP_PTR (0) +#define MHI_TRE_CMD_NOOP_DWORD0 (0) +#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16) + +/* Channel reset command */ +#define MHI_TRE_CMD_RESET_PTR (0) +#define MHI_TRE_CMD_RESET_DWORD0 (0) +#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \ + (MHI_CMD_RESET_CHAN << 16)) + +/* Channel stop command */ +#define MHI_TRE_CMD_STOP_PTR (0) +#define MHI_TRE_CMD_STOP_DWORD0 (0) +#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \ + (MHI_CMD_STOP_CHAN << 16)) + +/* Channel start command */ +#define MHI_TRE_CMD_START_PTR (0) +#define MHI_TRE_CMD_START_DWORD0 (0) +#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \ + (MHI_CMD_START_CHAN << 16)) + +#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF) +#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF) + +/* Event descriptor macros */ +#define MHI_TRE_EV_PTR(ptr) (ptr) +#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len) +#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16)) +#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr) +#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF) +#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF) +#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF) +#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF) +#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF) +#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF) +#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0]) +#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr) +#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr) +#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF) +#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF) +#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF) + +/* Transfer descriptor macros */ +#define MHI_TRE_DATA_PTR(ptr) 
(ptr) +#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU) +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \ + | (ieot << 9) | (ieob << 8) | chain) + +/* RSC transfer descriptor macros */ +#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr) +#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie) +#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16) + +enum mhi_pkt_type { + MHI_PKT_TYPE_INVALID = 0x0, + MHI_PKT_TYPE_NOOP_CMD = 0x1, + MHI_PKT_TYPE_TRANSFER = 0x2, + MHI_PKT_TYPE_COALESCING = 0x8, + MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10, + MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11, + MHI_PKT_TYPE_START_CHAN_CMD = 0x12, + MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20, + MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21, + MHI_PKT_TYPE_TX_EVENT = 0x22, + MHI_PKT_TYPE_RSC_TX_EVENT = 0x28, + MHI_PKT_TYPE_EE_EVENT = 0x40, + MHI_PKT_TYPE_TSYNC_EVENT = 0x48, + MHI_PKT_TYPE_BW_REQ_EVENT = 0x50, + MHI_PKT_TYPE_STALE_EVENT, /* internal event */ +}; + /* MHI transfer completion events */ enum mhi_ev_ccs { MHI_EV_CC_INVALID = 0x0, @@ -283,6 +356,81 @@ enum mhi_ch_state { #define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \ mode != MHI_DB_BRST_ENABLE) +extern const char * const mhi_ee_str[MHI_EE_MAX]; +#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \ + "INVALID_EE" : mhi_ee_str[ee]) + +#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \ + ee == MHI_EE_EDL) + +#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW) + +enum dev_st_transition { + DEV_ST_TRANSITION_PBL, + DEV_ST_TRANSITION_READY, + DEV_ST_TRANSITION_SBL, + DEV_ST_TRANSITION_MISSION_MODE, + DEV_ST_TRANSITION_MAX, +}; + +extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX]; +#define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \ + "INVALID_STATE" : dev_state_tran_str[state]) + +extern const char * const mhi_state_str[MHI_STATE_MAX]; +#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \ + !mhi_state_str[state]) ? 
\ + "INVALID_STATE" : mhi_state_str[state]) + +/* internal power states */ +enum mhi_pm_state { + MHI_PM_STATE_DISABLE, + MHI_PM_STATE_POR, + MHI_PM_STATE_M0, + MHI_PM_STATE_M2, + MHI_PM_STATE_M3_ENTER, + MHI_PM_STATE_M3, + MHI_PM_STATE_M3_EXIT, + MHI_PM_STATE_FW_DL_ERR, + MHI_PM_STATE_SYS_ERR_DETECT, + MHI_PM_STATE_SYS_ERR_PROCESS, + MHI_PM_STATE_SHUTDOWN_PROCESS, + MHI_PM_STATE_LD_ERR_FATAL_DETECT, + MHI_PM_STATE_MAX +}; + +#define MHI_PM_DISABLE BIT(0) +#define MHI_PM_POR BIT(1) +#define MHI_PM_M0 BIT(2) +#define MHI_PM_M2 BIT(3) +#define MHI_PM_M3_ENTER BIT(4) +#define MHI_PM_M3 BIT(5) +#define MHI_PM_M3_EXIT BIT(6) +/* firmware download failure state */ +#define MHI_PM_FW_DL_ERR BIT(7) +#define MHI_PM_SYS_ERR_DETECT BIT(8) +#define MHI_PM_SYS_ERR_PROCESS BIT(9) +#define MHI_PM_SHUTDOWN_PROCESS BIT(10) +/* link not accessible */ +#define MHI_PM_LD_ERR_FATAL_DETECT BIT(11) + +#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \ + MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \ + MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \ + MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR))) +#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR) +#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT) +#define MHI_DB_ACCESS_VALID(mhi_cntrl) (mhi_cntrl->pm_state & \ + mhi_cntrl->db_access) +#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \ + MHI_PM_M2 | MHI_PM_M3_EXIT)) +#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2) +#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state) +#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \ + MHI_PM_IN_ERROR_STATE(pm_state)) +#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \ + (MHI_PM_M3_ENTER | MHI_PM_M3)) + #define NR_OF_CMD_RINGS 1 #define CMD_EL_PER_RING 128 #define PRIMARY_CMD_RING 0 @@ -315,6 +463,16 @@ struct db_cfg { dma_addr_t db_val); }; +struct mhi_pm_transitions { + enum mhi_pm_state from_state; + u32 to_states; +}; + +struct state_transition { + struct list_head node; + enum dev_st_transition state; +}; + struct mhi_ring { dma_addr_t dma_handle; dma_addr_t iommu_base; @@ -417,6 +575,23 @@ static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl, int mhi_destroy_device(struct device *dev, void *data); void mhi_create_devices(struct mhi_controller *mhi_cntrl); +/* Power management APIs */ +enum mhi_pm_state __must_check mhi_tryset_pm_state( + struct mhi_controller *mhi_cntrl, + enum mhi_pm_state state); +const char *to_mhi_pm_state_str(enum mhi_pm_state state); +enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl); +int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl, + enum dev_st_transition state); +void mhi_pm_st_worker(struct work_struct *work); +void mhi_pm_sys_err_worker(struct work_struct *work); +int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl); +void mhi_ctrl_ev_task(unsigned long data); +int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl); +void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl); +int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl); +int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl); + /* Register access methods */ void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg, void __iomem *db_addr, dma_addr_t db_val); diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c index 134ef9b2cc78..df980fb3b6db 100644 --- a/drivers/bus/mhi/core/main.c +++ 
b/drivers/bus/mhi/core/main.c @@ -135,6 +135,15 @@ enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl) return (ret) ? MHI_EE_MAX : exec; } +enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl) +{ + u32 state; + int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS, + MHISTATUS_MHISTATE_MASK, + MHISTATUS_MHISTATE_SHIFT, &state); + return ret ? MHI_STATE_MAX : state; +} + int mhi_destroy_device(struct device *dev, void *data) { struct mhi_device *mhi_dev; diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c new file mode 100644 index 000000000000..75d43b1039ea --- /dev/null +++ b/drivers/bus/mhi/core/pm.c @@ -0,0 +1,685 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. + * + */ + +#define dev_fmt(fmt) "MHI: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "internal.h" + +/* + * Not all MHI state transitions are synchronous. Transitions like Linkdown, + * SYS_ERR, and shutdown can happen anytime asynchronously. This function will + * transition to a new state only if we're allowed to. + * + * Priority increases as we go down. For instance, from any state in L0, the + * transition can be made to states in L1, L2 and L3. A notable exception to + * this rule is state DISABLE. From DISABLE state we can only transition to + * POR state. Also, while in L2 state, user cannot jump back to previous + * L1 or L0 states. + * + * Valid transitions: + * L0: DISABLE <--> POR + * POR <--> POR + * POR -> M0 -> M2 --> M0 + * POR -> FW_DL_ERR + * FW_DL_ERR <--> FW_DL_ERR + * M0 <--> M0 + * M0 -> FW_DL_ERR + * M0 -> M3_ENTER -> M3 -> M3_EXIT --> M0 + * L1: SYS_ERR_DETECT -> SYS_ERR_PROCESS --> POR + * L2: SHUTDOWN_PROCESS -> DISABLE + * L3: LD_ERR_FATAL_DETECT <--> LD_ERR_FATAL_DETECT + * LD_ERR_FATAL_DETECT -> SHUTDOWN_PROCESS + */ +static struct mhi_pm_transitions const dev_state_transitions[] = { + /* L0 States */ + { + MHI_PM_DISABLE, + MHI_PM_POR + }, + { + MHI_PM_POR, + MHI_PM_POR | MHI_PM_DISABLE | MHI_PM_M0 | + MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS | + MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR + }, + { + MHI_PM_M0, + MHI_PM_M0 | MHI_PM_M2 | MHI_PM_M3_ENTER | + MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS | + MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR + }, + { + MHI_PM_M2, + MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS | + MHI_PM_LD_ERR_FATAL_DETECT + }, + { + MHI_PM_M3_ENTER, + MHI_PM_M3 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS | + MHI_PM_LD_ERR_FATAL_DETECT + }, + { + MHI_PM_M3, + MHI_PM_M3_EXIT | MHI_PM_SYS_ERR_DETECT | + MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT + }, + { + MHI_PM_M3_EXIT, + MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS | + MHI_PM_LD_ERR_FATAL_DETECT + }, + { + MHI_PM_FW_DL_ERR, + MHI_PM_FW_DL_ERR | MHI_PM_SYS_ERR_DETECT | + MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT + }, + /* L1 States */ + { + MHI_PM_SYS_ERR_DETECT, + MHI_PM_SYS_ERR_PROCESS | MHI_PM_SHUTDOWN_PROCESS | + MHI_PM_LD_ERR_FATAL_DETECT + }, + { + MHI_PM_SYS_ERR_PROCESS, + MHI_PM_POR | MHI_PM_SHUTDOWN_PROCESS | + MHI_PM_LD_ERR_FATAL_DETECT + }, + /* L2 States */ + { + MHI_PM_SHUTDOWN_PROCESS, + MHI_PM_DISABLE | MHI_PM_LD_ERR_FATAL_DETECT + }, + /* L3 States */ + { + MHI_PM_LD_ERR_FATAL_DETECT, + MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_SHUTDOWN_PROCESS + }, +}; + +enum mhi_pm_state __must_check mhi_tryset_pm_state(struct mhi_controller *mhi_cntrl, + enum 
mhi_pm_state state) +{ + unsigned long cur_state = mhi_cntrl->pm_state; + int index = find_last_bit(&cur_state, 32); + + if (unlikely(index >= ARRAY_SIZE(dev_state_transitions))) + return cur_state; + + if (unlikely(dev_state_transitions[index].from_state != cur_state)) + return cur_state; + + if (unlikely(!(dev_state_transitions[index].to_states & state))) + return cur_state; + + mhi_cntrl->pm_state = state; + return mhi_cntrl->pm_state; +} + +void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state) +{ + if (state == MHI_STATE_RESET) { + mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL, + MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1); + } else { + mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL, + MHICTRL_MHISTATE_MASK, + MHICTRL_MHISTATE_SHIFT, state); + } +} + +/* Handle device ready state transition */ +int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl) +{ + void __iomem *base = mhi_cntrl->regs; + u32 reset = 1, ready = 0; + struct mhi_event *mhi_event; + enum mhi_pm_state cur_state; + int ret, i; + + /* Wait for RESET to be cleared and READY bit to be set by the device */ + wait_event_timeout(mhi_cntrl->state_event, + MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) || + mhi_read_reg_field(mhi_cntrl, base, MHICTRL, + MHICTRL_RESET_MASK, + MHICTRL_RESET_SHIFT, &reset) || + mhi_read_reg_field(mhi_cntrl, base, MHISTATUS, + MHISTATUS_READY_MASK, + MHISTATUS_READY_SHIFT, &ready) || + (!reset && ready), + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + + /* Check if device entered error state */ + if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) { + dev_err(mhi_cntrl->dev, "Device link is not accessible\n"); + return -EIO; + } + + /* Timeout if device did not transition to ready state */ + if (reset || !ready) { + dev_err(mhi_cntrl->dev, "Device Ready timeout\n"); + return -ETIMEDOUT; + } + + dev_dbg(mhi_cntrl->dev, "Device in READY State\n"); + write_lock_irq(&mhi_cntrl->pm_lock); + cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR); + mhi_cntrl->dev_state = MHI_STATE_READY; + write_unlock_irq(&mhi_cntrl->pm_lock); + + if (cur_state != MHI_PM_POR) { + dev_err(mhi_cntrl->dev, "Error moving to state %s from %s\n", + to_mhi_pm_state_str(MHI_PM_POR), + to_mhi_pm_state_str(cur_state)); + return -EIO; + } + + read_lock_bh(&mhi_cntrl->pm_lock); + if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) { + dev_err(mhi_cntrl->dev, "Device registers not accessible\n"); + goto error_mmio; + } + + /* Configure MMIO registers */ + ret = mhi_init_mmio(mhi_cntrl); + if (ret) { + dev_err(mhi_cntrl->dev, "Error configuring MMIO registers\n"); + goto error_mmio; + } + + /* Add elements to all SW event rings */ + mhi_event = mhi_cntrl->mhi_event; + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) { + struct mhi_ring *ring = &mhi_event->ring; + + /* Skip if this is an offload or HW event */ + if (mhi_event->offload_ev || mhi_event->hw_ring) + continue; + + ring->wp = ring->base + ring->len - ring->el_size; + *ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size; + /* Update all cores */ + smp_wmb(); + + /* Ring the event ring db */ + spin_lock_irq(&mhi_event->lock); + mhi_ring_er_db(mhi_event); + spin_unlock_irq(&mhi_event->lock); + } + + /* Set MHI to M0 state */ + mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0); + read_unlock_bh(&mhi_cntrl->pm_lock); + + return 0; + +error_mmio: + read_unlock_bh(&mhi_cntrl->pm_lock); + + return -EIO; +} + +int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl) +{ + enum mhi_pm_state cur_state; + struct mhi_chan *mhi_chan; 
+ int i; + + write_lock_irq(&mhi_cntrl->pm_lock); + mhi_cntrl->dev_state = MHI_STATE_M0; + cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M0); + write_unlock_irq(&mhi_cntrl->pm_lock); + if (unlikely(cur_state != MHI_PM_M0)) { + dev_err(mhi_cntrl->dev, "Unable to transition to M0 state\n"); + return -EIO; + } + + mhi_cntrl->M0++; + + /* Wake up the device */ + read_lock_bh(&mhi_cntrl->pm_lock); + mhi_cntrl->wake_get(mhi_cntrl, true); + + /* Ring all event rings and CMD ring only if we're in mission mode */ + if (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) { + struct mhi_event *mhi_event = mhi_cntrl->mhi_event; + struct mhi_cmd *mhi_cmd = + &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING]; + + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) { + if (mhi_event->offload_ev) + continue; + + spin_lock_irq(&mhi_event->lock); + mhi_ring_er_db(mhi_event); + spin_unlock_irq(&mhi_event->lock); + } + + /* Only ring primary cmd ring if ring is not empty */ + spin_lock_irq(&mhi_cmd->lock); + if (mhi_cmd->ring.rp != mhi_cmd->ring.wp) + mhi_ring_cmd_db(mhi_cntrl, mhi_cmd); + spin_unlock_irq(&mhi_cmd->lock); + } + + /* Ring channel DB registers */ + mhi_chan = mhi_cntrl->mhi_chan; + for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) { + struct mhi_ring *tre_ring = &mhi_chan->tre_ring; + + write_lock_irq(&mhi_chan->lock); + if (mhi_chan->db_cfg.reset_req) + mhi_chan->db_cfg.db_mode = true; + + /* Only ring DB if ring is not empty */ + if (tre_ring->base && tre_ring->wp != tre_ring->rp) + mhi_ring_chan_db(mhi_cntrl, mhi_chan); + write_unlock_irq(&mhi_chan->lock); + } + + mhi_cntrl->wake_put(mhi_cntrl, false); + read_unlock_bh(&mhi_cntrl->pm_lock); + wake_up_all(&mhi_cntrl->state_event); + + return 0; +} + +/* + * After receiving the MHI state change event from the device indicating the + * transition to M1 state, the host can transition the device to M2 state + * for keeping it in low power state. 
+ */ +void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl) +{ + enum mhi_pm_state state; + + write_lock_irq(&mhi_cntrl->pm_lock); + state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M2); + if (state == MHI_PM_M2) { + mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M2); + mhi_cntrl->dev_state = MHI_STATE_M2; + mhi_cntrl->M2++; + + write_unlock_irq(&mhi_cntrl->pm_lock); + wake_up_all(&mhi_cntrl->state_event); + + /* If there are any pending resources, exit M2 immediately */ + if (unlikely(atomic_read(&mhi_cntrl->pending_pkts) || + atomic_read(&mhi_cntrl->dev_wake))) { + dev_dbg(mhi_cntrl->dev, + "Exiting M2, pending_pkts: %d dev_wake: %d\n", + atomic_read(&mhi_cntrl->pending_pkts), + atomic_read(&mhi_cntrl->dev_wake)); + read_lock_bh(&mhi_cntrl->pm_lock); + mhi_cntrl->wake_get(mhi_cntrl, true); + mhi_cntrl->wake_put(mhi_cntrl, true); + read_unlock_bh(&mhi_cntrl->pm_lock); + } else { + mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data, + MHI_CB_IDLE); + } + } else { + write_unlock_irq(&mhi_cntrl->pm_lock); + } +} + +/* MHI M3 completion handler */ +int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl) +{ + enum mhi_pm_state state; + + write_lock_irq(&mhi_cntrl->pm_lock); + mhi_cntrl->dev_state = MHI_STATE_M3; + state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3); + write_unlock_irq(&mhi_cntrl->pm_lock); + if (state != MHI_PM_M3) { + dev_err(mhi_cntrl->dev, "Unable to transition to M3 state\n"); + return -EIO; + } + + wake_up_all(&mhi_cntrl->state_event); + mhi_cntrl->M3++; + + return 0; +} + +/* Handle device Mission Mode transition */ +static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl) +{ + int i, ret; + struct mhi_event *mhi_event; + + dev_dbg(mhi_cntrl->dev, "Processing Mission Mode transition\n"); + + write_lock_irq(&mhi_cntrl->pm_lock); + if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) + mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl); + write_unlock_irq(&mhi_cntrl->pm_lock); + + if (!MHI_IN_MISSION_MODE(mhi_cntrl->ee)) + return -EIO; + + wake_up_all(&mhi_cntrl->state_event); + + mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data, + MHI_CB_EE_MISSION_MODE); + + /* Force MHI to be in M0 state before continuing */ + ret = __mhi_device_get_sync(mhi_cntrl); + if (ret) + return ret; + + read_lock_bh(&mhi_cntrl->pm_lock); + + if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) { + ret = -EIO; + goto error_mission_mode; + } + + /* Add elements to all HW event rings */ + mhi_event = mhi_cntrl->mhi_event; + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) { + struct mhi_ring *ring = &mhi_event->ring; + + if (mhi_event->offload_ev || !mhi_event->hw_ring) + continue; + + ring->wp = ring->base + ring->len - ring->el_size; + *ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size; + /* Update to all cores */ + smp_wmb(); + + spin_lock_irq(&mhi_event->lock); + if (MHI_DB_ACCESS_VALID(mhi_cntrl)) + mhi_ring_er_db(mhi_event); + spin_unlock_irq(&mhi_event->lock); + } + + read_unlock_bh(&mhi_cntrl->pm_lock); + + /* + * The MHI devices are only created when the client device switches its + * Execution Environment (EE) to either SBL or AMSS states + */ + mhi_create_devices(mhi_cntrl); + + read_lock_bh(&mhi_cntrl->pm_lock); + +error_mission_mode: + mhi_cntrl->wake_put(mhi_cntrl, false); + read_unlock_bh(&mhi_cntrl->pm_lock); + + return ret; +} + +/* Handle SYS_ERR and Shutdown transitions */ +static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl, + enum mhi_pm_state transition_state) +{ + enum mhi_pm_state cur_state, prev_state; + struct mhi_event 
*mhi_event; + struct mhi_cmd_ctxt *cmd_ctxt; + struct mhi_cmd *mhi_cmd; + struct mhi_event_ctxt *er_ctxt; + int ret, i; + + dev_dbg(mhi_cntrl->dev, + "Transitioning from PM state: %s to: %s\n", + to_mhi_pm_state_str(mhi_cntrl->pm_state), + to_mhi_pm_state_str(transition_state)); + + /* We must notify MHI control driver so it can clean up first */ + if (transition_state == MHI_PM_SYS_ERR_PROCESS) { + mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data, + MHI_CB_SYS_ERROR); + } + + mutex_lock(&mhi_cntrl->pm_mutex); + write_lock_irq(&mhi_cntrl->pm_lock); + prev_state = mhi_cntrl->pm_state; + cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state); + if (cur_state == transition_state) { + mhi_cntrl->ee = MHI_EE_DISABLE_TRANSITION; + mhi_cntrl->dev_state = MHI_STATE_RESET; + } + write_unlock_irq(&mhi_cntrl->pm_lock); + + /* Wake up threads waiting for state transition */ + wake_up_all(&mhi_cntrl->state_event); + + if (cur_state != transition_state) { + dev_err(mhi_cntrl->dev, + "Failed to transition to state: %s from: %s\n", + to_mhi_pm_state_str(transition_state), + to_mhi_pm_state_str(cur_state)); + mutex_unlock(&mhi_cntrl->pm_mutex); + return; + } + + /* Trigger MHI RESET so that the device will not access host memory */ + if (MHI_REG_ACCESS_VALID(prev_state)) { + u32 in_reset = -1; + unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms); + + dev_dbg(mhi_cntrl->dev, "Triggering MHI Reset in device\n"); + mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET); + + /* Wait for the reset bit to be cleared by the device */ + ret = wait_event_timeout(mhi_cntrl->state_event, + mhi_read_reg_field(mhi_cntrl, + mhi_cntrl->regs, + MHICTRL, + MHICTRL_RESET_MASK, + MHICTRL_RESET_SHIFT, + &in_reset) || + !in_reset, timeout); + if ((!ret || in_reset) && cur_state == MHI_PM_SYS_ERR_PROCESS) { + dev_err(mhi_cntrl->dev, + "Device failed to exit MHI Reset state\n"); + mutex_unlock(&mhi_cntrl->pm_mutex); + return; + } + + /* + * Device will clear BHI_INTVEC as a part of RESET processing, + * hence re-program it + */ + mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0); + } + + dev_dbg(mhi_cntrl->dev, + "Waiting for all pending event ring processing to complete\n"); + mhi_event = mhi_cntrl->mhi_event; + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) { + if (mhi_event->offload_ev) + continue; + tasklet_kill(&mhi_event->task); + } + + /* Release lock and wait for all pending threads to complete */ + mutex_unlock(&mhi_cntrl->pm_mutex); + dev_dbg(mhi_cntrl->dev, + "Waiting for all pending threads to complete\n"); + wake_up_all(&mhi_cntrl->state_event); + flush_work(&mhi_cntrl->st_worker); + flush_work(&mhi_cntrl->fw_worker); + + dev_dbg(mhi_cntrl->dev, + "Reset all active channels and remove MHI devices\n"); + device_for_each_child(mhi_cntrl->dev, NULL, mhi_destroy_device); + + mutex_lock(&mhi_cntrl->pm_mutex); + + WARN_ON(atomic_read(&mhi_cntrl->dev_wake)); + WARN_ON(atomic_read(&mhi_cntrl->pending_pkts)); + + /* Reset the ev rings and cmd rings */ + dev_dbg(mhi_cntrl->dev, "Resetting EV CTXT and CMD CTXT\n"); + mhi_cmd = mhi_cntrl->mhi_cmd; + cmd_ctxt = mhi_cntrl->mhi_ctxt->cmd_ctxt; + for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) { + struct mhi_ring *ring = &mhi_cmd->ring; + + ring->rp = ring->base; + ring->wp = ring->base; + cmd_ctxt->rp = cmd_ctxt->rbase; + cmd_ctxt->wp = cmd_ctxt->rbase; + } + + mhi_event = mhi_cntrl->mhi_event; + er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt; + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++, + mhi_event++) { + struct mhi_ring 
*ring = &mhi_event->ring; + + /* Skip offload events */ + if (mhi_event->offload_ev) + continue; + + ring->rp = ring->base; + ring->wp = ring->base; + er_ctxt->rp = er_ctxt->rbase; + er_ctxt->wp = er_ctxt->rbase; + } + + if (cur_state == MHI_PM_SYS_ERR_PROCESS) { + mhi_ready_state_transition(mhi_cntrl); + } else { + /* Move to disable state */ + write_lock_irq(&mhi_cntrl->pm_lock); + cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_DISABLE); + write_unlock_irq(&mhi_cntrl->pm_lock); + if (unlikely(cur_state != MHI_PM_DISABLE)) + dev_err(mhi_cntrl->dev, + "Error moving from PM state: %s to: %s\n", + to_mhi_pm_state_str(cur_state), + to_mhi_pm_state_str(MHI_PM_DISABLE)); + } + + dev_dbg(mhi_cntrl->dev, "Exiting with PM state: %s, MHI state: %s\n", + to_mhi_pm_state_str(mhi_cntrl->pm_state), + TO_MHI_STATE_STR(mhi_cntrl->dev_state)); + + mutex_unlock(&mhi_cntrl->pm_mutex); +} + +/* Queue a new work item and schedule work */ +int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl, + enum dev_st_transition state) +{ + struct state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC); + unsigned long flags; + + if (!item) + return -ENOMEM; + + item->state = state; + spin_lock_irqsave(&mhi_cntrl->transition_lock, flags); + list_add_tail(&item->node, &mhi_cntrl->transition_list); + spin_unlock_irqrestore(&mhi_cntrl->transition_lock, flags); + + schedule_work(&mhi_cntrl->st_worker); + + return 0; +} + +/* SYS_ERR worker */ +void mhi_pm_sys_err_worker(struct work_struct *work) +{ + struct mhi_controller *mhi_cntrl = container_of(work, + struct mhi_controller, + syserr_worker); + + mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SYS_ERR_PROCESS); +} + +/* Device State Transition worker */ +void mhi_pm_st_worker(struct work_struct *work) +{ + struct state_transition *itr, *tmp; + LIST_HEAD(head); + struct mhi_controller *mhi_cntrl = container_of(work, + struct mhi_controller, + st_worker); + spin_lock_irq(&mhi_cntrl->transition_lock); + list_splice_tail_init(&mhi_cntrl->transition_list, &head); + spin_unlock_irq(&mhi_cntrl->transition_lock); + + list_for_each_entry_safe(itr, tmp, &head, node) { + list_del(&itr->node); + dev_dbg(mhi_cntrl->dev, "Handling state transition: %s\n", + TO_DEV_STATE_TRANS_STR(itr->state)); + + switch (itr->state) { + case DEV_ST_TRANSITION_PBL: + write_lock_irq(&mhi_cntrl->pm_lock); + if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) + mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl); + write_unlock_irq(&mhi_cntrl->pm_lock); + if (MHI_IN_PBL(mhi_cntrl->ee)) + wake_up_all(&mhi_cntrl->state_event); + break; + case DEV_ST_TRANSITION_SBL: + write_lock_irq(&mhi_cntrl->pm_lock); + mhi_cntrl->ee = MHI_EE_SBL; + write_unlock_irq(&mhi_cntrl->pm_lock); + /* + * The MHI devices are only created when the client + * device switches its Execution Environment (EE) to + * either SBL or AMSS states + */ + mhi_create_devices(mhi_cntrl); + break; + case DEV_ST_TRANSITION_MISSION_MODE: + mhi_pm_mission_mode_transition(mhi_cntrl); + break; + case DEV_ST_TRANSITION_READY: + mhi_ready_state_transition(mhi_cntrl); + break; + default: + break; + } + kfree(itr); + } +} + +int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl) +{ + int ret; + + /* Wake up the device */ + read_lock_bh(&mhi_cntrl->pm_lock); + mhi_cntrl->wake_get(mhi_cntrl, true); + if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) { + pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0); + mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data); + mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data); + } + 
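+ /*
+ * Drop pm_lock before sleeping; the wait below completes when the M0
+ * transition handler or an error state change wakes up state_event.
+ */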
read_unlock_bh(&mhi_cntrl->pm_lock); + + ret = wait_event_timeout(mhi_cntrl->state_event, + mhi_cntrl->pm_state == MHI_PM_M0 || + MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state), + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + + if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) { + read_lock_bh(&mhi_cntrl->pm_lock); + mhi_cntrl->wake_put(mhi_cntrl, false); + read_unlock_bh(&mhi_cntrl->pm_lock); + return -EIO; + } + + return 0; +} diff --git a/include/linux/mhi.h b/include/linux/mhi.h index d08f212cdfd0..80633099b90d 100644 --- a/include/linux/mhi.h +++ b/include/linux/mhi.h @@ -107,6 +107,31 @@ enum mhi_ee_type { MHI_EE_MAX, }; +/** + * enum mhi_state - MHI states + * @MHI_STATE_RESET: Reset state + * @MHI_STATE_READY: Ready state + * @MHI_STATE_M0: M0 state + * @MHI_STATE_M1: M1 state + * @MHI_STATE_M2: M2 state + * @MHI_STATE_M3: M3 state + * @MHI_STATE_M3_FAST: M3 Fast state + * @MHI_STATE_BHI: BHI state + * @MHI_STATE_SYS_ERR: System Error state + */ +enum mhi_state { + MHI_STATE_RESET = 0x0, + MHI_STATE_READY = 0x1, + MHI_STATE_M0 = 0x2, + MHI_STATE_M1 = 0x3, + MHI_STATE_M2 = 0x4, + MHI_STATE_M3 = 0x5, + MHI_STATE_M3_FAST = 0x6, + MHI_STATE_BHI = 0x7, + MHI_STATE_SYS_ERR = 0xFF, + MHI_STATE_MAX, +}; + /** * enum mhi_buf_type - Accepted buffer type for the channel * @MHI_BUF_RAW: Raw buffer @@ -274,6 +299,7 @@ struct mhi_controller_config { * @pm_state: MHI power management state * @db_access: DB access states * @ee: MHI device execution environment + * @dev_state: MHI device state * @wake_set: Device wakeup set flag * @dev_wake: Device wakeup count * @alloc_size: Total memory allocations size of the controller @@ -339,6 +365,7 @@ struct mhi_controller { u32 pm_state; u32 db_access; enum mhi_ee_type ee; + enum mhi_state dev_state; bool wake_set; atomic_t dev_wake; atomic_t alloc_size; @@ -411,6 +438,22 @@ struct mhi_result { int transaction_status; }; +/** + * struct mhi_buf - MHI Buffer description + * @buf: Virtual address of the buffer + * @dma_addr: IOMMU address of the buffer + * @len: # of bytes + * @name: Buffer label. 
For offload channel, configurations name must be: + * ECA - Event context array data + * CCA - Channel context array data + */ +struct mhi_buf { + void *buf; + dma_addr_t dma_addr; + size_t len; + const char *name; +}; + /** * struct mhi_driver - Structure representing a MHI client driver * @probe: CB function for client driver probe function @@ -481,4 +524,12 @@ int mhi_driver_register(struct mhi_driver *mhi_drv); */ void mhi_driver_unregister(struct mhi_driver *mhi_drv); +/** + * mhi_set_mhi_state - Set MHI device state + * @mhi_cntrl: MHI controller + * @state: State to set + */ +void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, + enum mhi_state state); + #endif /* _MHI_H_ */ From patchwork Thu Jan 23 11:18:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347381 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EB2F8159A for ; Thu, 23 Jan 2020 11:19:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id A25FD24685 for ; Thu, 23 Jan 2020 11:19:13 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="ELxpGqLV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728809AbgAWLTM (ORCPT ); Thu, 23 Jan 2020 06:19:12 -0500 Received: from mail-pg1-f194.google.com ([209.85.215.194]:39217 "EHLO mail-pg1-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729146AbgAWLTL (ORCPT ); Thu, 23 Jan 2020 06:19:11 -0500 Received: by mail-pg1-f194.google.com with SMTP id 4so1224591pgd.6 for ; Thu, 23 Jan 2020 03:19:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=YBuKLpg1prNcrhsTMooV7hTGso7ASfJyr8ui0RkAyVo=; b=ELxpGqLViTeVwBpt7CVhi0xhYg7GOtTXKJlCoQX/SP8tpwgSecHo/hWmUf/WLL7Bnj C8uKagliGfAVE5fPJeiJkzKpLtlA+TghCipGXr7CJ4LYwSpYVJHNDI3kFa+UgjWeWE6u sRVJMVmZto+WrcesEEUH5g+pMIj6EG+QXwOjoX91eUyvhsk8KoOaJRBLJwumIgBIRa77 5pOnMHbmIjb+AcQh7VBEVNJsiD4xIdS2c9iq36QFEeLDCm8TAC8uaPRn6/H9oRKj2Pu7 KNt85rPhOTwGS5u8+yDajXgyo9W5nquhXAvbH8otHge/iXGSP3L/jUZhg9GzT4KOHe8Q eMHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=YBuKLpg1prNcrhsTMooV7hTGso7ASfJyr8ui0RkAyVo=; b=t1J7P86oOwmPbdmBHAQegV3XJXOi3ksmEG8/wsOnfTicdlD6WUwkSZpb/ikxAt4cwq 8r5qF1+1XWKaNgTcJJFW4vq0mqyZpVx0gecYLAic/Roa0IsrOpXJda95JYc+gpuzLT5B 99zG26gN4mZcUlctyQimggx87b2tR8uaISW8oXSv76SY2RuHVfi47OZW2aEw418PbGpo 1/3VnKoO08E9wTv2JBSXCuOooJE5cYgv4UTxrsuxsWVCoIdmM9lC6F4AxLAL8jiwDWYG lj9pPOSyj6iynrdZfJMc7HOcXw7z/w0sEcBfurwZQ/hgGBitZcQIXyPEtgifbQcViLlo wHsg== X-Gm-Message-State: APjAAAV/SbwZn/9S91Ty5dR1FgsMNvwdKC/IpUMlsaWQwDpYUi6dTI7g uHEibVg34p/1gq6ggHuT73JQ X-Google-Smtp-Source: APXvYqxWhhOTPoLDL7TMs/w8FJEcNZaREFvAo/niX9V3Vw+uYUvkLfWkY93paZwYbA0nkNXF5YP0Og== X-Received: by 2002:a63:7b44:: with SMTP id k4mr3407802pgn.140.1579778350482; Thu, 23 Jan 2020 03:19:10 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:09 -0800 (PST) From: Manivannan Sadhasivam 
To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 07/16] bus: mhi: core: Add support for basic PM operations Date: Thu, 23 Jan 2020 16:48:27 +0530 Message-Id: <20200123111836.7414-8-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This commit adds support for basic MHI PM operations such as mhi_async_power_up, mhi_sync_power_up, and mhi_power_down. These routines place the MHI bus into the respective power domain states and call the state_transition APIs when necessary. The MHI controller driver is expected to call these PM routines for MHI powerup and powerdown. This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/989 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: split the pm patch and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/Makefile | 2 +- drivers/bus/mhi/core/boot.c | 89 +++++++++ drivers/bus/mhi/core/init.c | 313 ++++++++++++++++++++++++++++++++ drivers/bus/mhi/core/internal.h | 37 ++++ drivers/bus/mhi/core/main.c | 89 +++++++++ drivers/bus/mhi/core/pm.c | 218 ++++++++++++++++++++++ include/linux/mhi.h | 51 ++++++ 7 files changed, 798 insertions(+), 1 deletion(-) create mode 100644 drivers/bus/mhi/core/boot.c diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile index a0070f9cdfcd..66e2700c9032 100644 --- a/drivers/bus/mhi/core/Makefile +++ b/drivers/bus/mhi/core/Makefile @@ -1,3 +1,3 @@ obj-$(CONFIG_MHI_BUS) := mhi.o -mhi-y := init.o main.o pm.o +mhi-y := init.o main.o pm.o boot.o diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c new file mode 100644 index 000000000000..0996f18c4281 --- /dev/null +++ b/drivers/bus/mhi/core/boot.c @@ -0,0 +1,89 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
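+ * MHI boot support: allocation of the BHIe vector tables used for
+ * firmware download.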
+ * + */ + +#define dev_fmt(fmt) "MHI: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "internal.h" + +void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl, + struct image_info *image_info) +{ + int i; + struct mhi_buf *mhi_buf = image_info->mhi_buf; + + for (i = 0; i < image_info->entries; i++, mhi_buf++) + mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf, + mhi_buf->dma_addr); + + kfree(image_info->mhi_buf); + kfree(image_info); +} + +int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl, + struct image_info **image_info, + size_t alloc_size) +{ + size_t seg_size = mhi_cntrl->seg_len; + int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1; + int i; + struct image_info *img_info; + struct mhi_buf *mhi_buf; + + img_info = kzalloc(sizeof(*img_info), GFP_KERNEL); + if (!img_info) + return -ENOMEM; + + /* Allocate memory for entries */ + img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf), + GFP_KERNEL); + if (!img_info->mhi_buf) + goto error_alloc_mhi_buf; + + /* Allocate and populate vector table */ + mhi_buf = img_info->mhi_buf; + for (i = 0; i < segments; i++, mhi_buf++) { + size_t vec_size = seg_size; + + /* Vector table is the last entry */ + if (i == segments - 1) + vec_size = sizeof(struct bhi_vec_entry) * i; + + mhi_buf->len = vec_size; + mhi_buf->buf = mhi_alloc_coherent(mhi_cntrl, vec_size, + &mhi_buf->dma_addr, + GFP_KERNEL); + if (!mhi_buf->buf) + goto error_alloc_segment; + } + + img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf; + img_info->entries = segments; + *image_info = img_info; + + return 0; + +error_alloc_segment: + for (--i, --mhi_buf; i >= 0; i--, mhi_buf--) + mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf, + mhi_buf->dma_addr); + +error_alloc_mhi_buf: + kfree(img_info); + + return -ENOMEM; +} diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index 83a03493c757..7fff92e9661b 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -75,6 +75,284 @@ const char *to_mhi_pm_state_str(enum mhi_pm_state state) return mhi_pm_state_str[index]; } +/* MHI protocol requires the transfer ring to be aligned with ring length */ +static int mhi_alloc_aligned_ring(struct mhi_controller *mhi_cntrl, + struct mhi_ring *ring, + u64 len) +{ + ring->alloc_size = len + (len - 1); + ring->pre_aligned = mhi_alloc_coherent(mhi_cntrl, ring->alloc_size, + &ring->dma_handle, GFP_KERNEL); + if (!ring->pre_aligned) + return -ENOMEM; + + ring->iommu_base = (ring->dma_handle + (len - 1)) & ~(len - 1); + ring->base = ring->pre_aligned + (ring->iommu_base - ring->dma_handle); + + return 0; +} + +void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl) +{ + int i; + struct mhi_event *mhi_event = mhi_cntrl->mhi_event; + + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) { + if (mhi_event->offload_ev) + continue; + + free_irq(mhi_cntrl->irq[mhi_event->irq], mhi_event); + } + + free_irq(mhi_cntrl->irq[0], mhi_cntrl); +} + +int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl) +{ + int i; + int ret; + struct mhi_event *mhi_event = mhi_cntrl->mhi_event; + + /* Setup BHI_INTVEC IRQ */ + ret = request_threaded_irq(mhi_cntrl->irq[0], mhi_intvec_handler, + mhi_intvec_threaded_handler, + IRQF_SHARED | IRQF_NO_SUSPEND, + "bhi", mhi_cntrl); + if (ret) + return ret; + + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) { + if (mhi_event->offload_ev) + continue; + + ret = request_irq(mhi_cntrl->irq[mhi_event->irq], 
+ mhi_irq_handler, + IRQF_SHARED | IRQF_NO_SUSPEND, + "mhi", mhi_event); + if (ret) { + dev_err(mhi_cntrl->dev, + "Error requesting irq:%d for ev:%d\n", + mhi_cntrl->irq[mhi_event->irq], i); + goto error_request; + } + } + + return 0; + +error_request: + for (--i, --mhi_event; i >= 0; i--, mhi_event--) { + if (mhi_event->offload_ev) + continue; + + free_irq(mhi_cntrl->irq[mhi_event->irq], mhi_event); + } + free_irq(mhi_cntrl->irq[0], mhi_cntrl); + + return ret; +} + +void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl) +{ + int i; + struct mhi_ctxt *mhi_ctxt = mhi_cntrl->mhi_ctxt; + struct mhi_cmd *mhi_cmd; + struct mhi_event *mhi_event; + struct mhi_ring *ring; + + mhi_cmd = mhi_cntrl->mhi_cmd; + for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++) { + ring = &mhi_cmd->ring; + mhi_free_coherent(mhi_cntrl, ring->alloc_size, + ring->pre_aligned, ring->dma_handle); + ring->base = NULL; + ring->iommu_base = 0; + } + + mhi_free_coherent(mhi_cntrl, + sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS, + mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr); + + mhi_event = mhi_cntrl->mhi_event; + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) { + if (mhi_event->offload_ev) + continue; + + ring = &mhi_event->ring; + mhi_free_coherent(mhi_cntrl, ring->alloc_size, + ring->pre_aligned, ring->dma_handle); + ring->base = NULL; + ring->iommu_base = 0; + } + + mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) * + mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt, + mhi_ctxt->er_ctxt_addr); + + mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) * + mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt, + mhi_ctxt->chan_ctxt_addr); + + kfree(mhi_ctxt); + mhi_cntrl->mhi_ctxt = NULL; +} + +int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl) +{ + struct mhi_ctxt *mhi_ctxt; + struct mhi_chan_ctxt *chan_ctxt; + struct mhi_event_ctxt *er_ctxt; + struct mhi_cmd_ctxt *cmd_ctxt; + struct mhi_chan *mhi_chan; + struct mhi_event *mhi_event; + struct mhi_cmd *mhi_cmd; + int ret = -ENOMEM, i; + + atomic_set(&mhi_cntrl->dev_wake, 0); + atomic_set(&mhi_cntrl->alloc_size, 0); + atomic_set(&mhi_cntrl->pending_pkts, 0); + + mhi_ctxt = kzalloc(sizeof(*mhi_ctxt), GFP_KERNEL); + if (!mhi_ctxt) + return -ENOMEM; + + /* Setup channel ctxt */ + mhi_ctxt->chan_ctxt = mhi_alloc_coherent(mhi_cntrl, + sizeof(*mhi_ctxt->chan_ctxt) * + mhi_cntrl->max_chan, + &mhi_ctxt->chan_ctxt_addr, + GFP_KERNEL); + if (!mhi_ctxt->chan_ctxt) + goto error_alloc_chan_ctxt; + + mhi_chan = mhi_cntrl->mhi_chan; + chan_ctxt = mhi_ctxt->chan_ctxt; + for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) { + /* Skip if it is an offload channel */ + if (mhi_chan->offload_ch) + continue; + + chan_ctxt->chstate = MHI_CH_STATE_DISABLED; + chan_ctxt->brstmode = mhi_chan->db_cfg.brstmode; + chan_ctxt->pollcfg = mhi_chan->db_cfg.pollcfg; + chan_ctxt->chtype = mhi_chan->type; + chan_ctxt->erindex = mhi_chan->er_index; + + mhi_chan->ch_state = MHI_CH_STATE_DISABLED; + mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp; + } + + /* Setup event context */ + mhi_ctxt->er_ctxt = mhi_alloc_coherent(mhi_cntrl, + sizeof(*mhi_ctxt->er_ctxt) * + mhi_cntrl->total_ev_rings, + &mhi_ctxt->er_ctxt_addr, + GFP_KERNEL); + if (!mhi_ctxt->er_ctxt) + goto error_alloc_er_ctxt; + + er_ctxt = mhi_ctxt->er_ctxt; + mhi_event = mhi_cntrl->mhi_event; + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++, + mhi_event++) { + struct mhi_ring *ring = &mhi_event->ring; + + /* Skip if it is an offload event */ + if (mhi_event->offload_ev) + continue; + + er_ctxt->intmodc 
= 0; + er_ctxt->intmodt = mhi_event->intmod; + er_ctxt->ertype = MHI_ER_TYPE_VALID; + er_ctxt->msivec = mhi_event->irq; + mhi_event->db_cfg.db_mode = true; + + ring->el_size = sizeof(struct mhi_tre); + ring->len = ring->el_size * ring->elements; + ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len); + if (ret) + goto error_alloc_er; + + /* + * If the read pointer equals to the write pointer, then the + * ring is empty + */ + ring->rp = ring->wp = ring->base; + er_ctxt->rbase = ring->iommu_base; + er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase; + er_ctxt->rlen = ring->len; + ring->ctxt_wp = &er_ctxt->wp; + } + + /* Setup cmd context */ + mhi_ctxt->cmd_ctxt = mhi_alloc_coherent(mhi_cntrl, + sizeof(*mhi_ctxt->cmd_ctxt) * + NR_OF_CMD_RINGS, + &mhi_ctxt->cmd_ctxt_addr, + GFP_KERNEL); + if (!mhi_ctxt->cmd_ctxt) + goto error_alloc_er; + + mhi_cmd = mhi_cntrl->mhi_cmd; + cmd_ctxt = mhi_ctxt->cmd_ctxt; + for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) { + struct mhi_ring *ring = &mhi_cmd->ring; + + ring->el_size = sizeof(struct mhi_tre); + ring->elements = CMD_EL_PER_RING; + ring->len = ring->el_size * ring->elements; + ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len); + if (ret) + goto error_alloc_cmd; + + ring->rp = ring->wp = ring->base; + cmd_ctxt->rbase = ring->iommu_base; + cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase; + cmd_ctxt->rlen = ring->len; + ring->ctxt_wp = &cmd_ctxt->wp; + } + + mhi_cntrl->mhi_ctxt = mhi_ctxt; + + return 0; + +error_alloc_cmd: + for (--i, --mhi_cmd; i >= 0; i--, mhi_cmd--) { + struct mhi_ring *ring = &mhi_cmd->ring; + + mhi_free_coherent(mhi_cntrl, ring->alloc_size, + ring->pre_aligned, ring->dma_handle); + } + mhi_free_coherent(mhi_cntrl, + sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS, + mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr); + i = mhi_cntrl->total_ev_rings; + mhi_event = mhi_cntrl->mhi_event + i; + +error_alloc_er: + for (--i, --mhi_event; i >= 0; i--, mhi_event--) { + struct mhi_ring *ring = &mhi_event->ring; + + if (mhi_event->offload_ev) + continue; + + mhi_free_coherent(mhi_cntrl, ring->alloc_size, + ring->pre_aligned, ring->dma_handle); + } + mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) * + mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt, + mhi_ctxt->er_ctxt_addr); + +error_alloc_er_ctxt: + mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) * + mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt, + mhi_ctxt->chan_ctxt_addr); + +error_alloc_chan_ctxt: + kfree(mhi_ctxt); + + return ret; +} + int mhi_init_mmio(struct mhi_controller *mhi_cntrl) { u32 val; @@ -548,6 +826,41 @@ void mhi_unregister_controller(struct mhi_controller *mhi_cntrl) } EXPORT_SYMBOL_GPL(mhi_unregister_controller); +int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl) +{ + int ret; + + mutex_lock(&mhi_cntrl->pm_mutex); + + ret = mhi_init_dev_ctxt(mhi_cntrl); + if (ret) + goto error_dev_ctxt; + + mhi_cntrl->pre_init = true; + + mutex_unlock(&mhi_cntrl->pm_mutex); + + return 0; + +error_dev_ctxt: + mutex_unlock(&mhi_cntrl->pm_mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(mhi_prepare_for_power_up); + +void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl) +{ + if (mhi_cntrl->fbc_image) { + mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image); + mhi_cntrl->fbc_image = NULL; + } + + mhi_deinit_dev_ctxt(mhi_cntrl); + mhi_cntrl->pre_init = false; +} +EXPORT_SYMBOL_GPL(mhi_unprepare_after_power_down); + static void mhi_release_device(struct device *dev) { struct mhi_device *mhi_dev = to_mhi_device(dev); diff --git 
a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h index 2dba4923482a..d920264ded21 100644 --- a/drivers/bus/mhi/core/internal.h +++ b/drivers/bus/mhi/core/internal.h @@ -575,6 +575,11 @@ static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl, int mhi_destroy_device(struct device *dev, void *data); void mhi_create_devices(struct mhi_controller *mhi_cntrl); +int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl, + struct image_info **image_info, size_t alloc_size); +void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl, + struct image_info *image_info); + /* Power management APIs */ enum mhi_pm_state __must_check mhi_tryset_pm_state( struct mhi_controller *mhi_cntrl, @@ -616,5 +621,37 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl, /* Initialization methods */ int mhi_init_mmio(struct mhi_controller *mhi_cntrl); +int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl); +void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl); +int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl); +void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl); + +/* Memory allocation methods */ +static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl, + size_t size, + dma_addr_t *dma_handle, + gfp_t gfp) +{ + void *buf = dma_alloc_coherent(mhi_cntrl->dev, size, dma_handle, gfp); + + if (buf) + atomic_add(size, &mhi_cntrl->alloc_size); + + return buf; +} + +static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl, + size_t size, + void *vaddr, + dma_addr_t dma_handle) +{ + atomic_sub(size, &mhi_cntrl->alloc_size); + dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle); +} + +/* ISR handlers */ +irqreturn_t mhi_irq_handler(int irq_number, void *dev); +irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev); +irqreturn_t mhi_intvec_handler(int irq_number, void *dev); #endif /* _MHI_INT_H */ diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c index df980fb3b6db..453feba8f02a 100644 --- a/drivers/bus/mhi/core/main.c +++ b/drivers/bus/mhi/core/main.c @@ -144,6 +144,11 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl) return ret ? 
MHI_STATE_MAX : state; } +static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr) +{ + return (addr - ring->iommu_base) + ring->base; +} + int mhi_destroy_device(struct device *dev, void *data) { struct mhi_device *mhi_dev; @@ -248,3 +253,87 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl) mhi_dealloc_device(mhi_cntrl, mhi_dev); } } + +irqreturn_t mhi_irq_handler(int irq_number, void *dev) +{ + struct mhi_event *mhi_event = dev; + struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl; + struct mhi_event_ctxt *er_ctxt = + &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index]; + struct mhi_ring *ev_ring = &mhi_event->ring; + void *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp); + + /* Only proceed if event ring has pending events */ + if (ev_ring->rp == dev_rp) + return IRQ_HANDLED; + + /* For client managed event ring, notify pending data */ + if (mhi_event->cl_manage) { + struct mhi_chan *mhi_chan = mhi_event->mhi_chan; + struct mhi_device *mhi_dev = mhi_chan->mhi_dev; + + if (mhi_dev) + mhi_notify(mhi_dev, MHI_CB_PENDING_DATA); + } else { + tasklet_schedule(&mhi_event->task); + } + + return IRQ_HANDLED; +} + +irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev) +{ + struct mhi_controller *mhi_cntrl = dev; + enum mhi_state state = MHI_STATE_MAX; + enum mhi_pm_state pm_state = 0; + enum mhi_ee_type ee = 0; + + write_lock_irq(&mhi_cntrl->pm_lock); + if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) { + state = mhi_get_mhi_state(mhi_cntrl); + ee = mhi_cntrl->ee; + mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl); + } + + if (state == MHI_STATE_SYS_ERR) { + dev_dbg(mhi_cntrl->dev, "System error detected\n"); + pm_state = mhi_tryset_pm_state(mhi_cntrl, + MHI_PM_SYS_ERR_DETECT); + } + write_unlock_irq(&mhi_cntrl->pm_lock); + + /* If device in RDDM don't bother processing SYS error */ + if (mhi_cntrl->ee == MHI_EE_RDDM) { + if (mhi_cntrl->ee != ee) { + mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data, + MHI_CB_EE_RDDM); + wake_up_all(&mhi_cntrl->state_event); + } + goto exit_intvec; + } + + if (pm_state == MHI_PM_SYS_ERR_DETECT) { + wake_up_all(&mhi_cntrl->state_event); + + /* For fatal errors, we let controller decide next step */ + if (MHI_IN_PBL(ee)) + mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data, + MHI_CB_FATAL_ERROR); + else + schedule_work(&mhi_cntrl->syserr_worker); + } + +exit_intvec: + + return IRQ_HANDLED; +} + +irqreturn_t mhi_intvec_handler(int irq_number, void *dev) +{ + struct mhi_controller *mhi_cntrl = dev; + + /* Wake up events waiting for state change */ + wake_up_all(&mhi_cntrl->state_event); + + return IRQ_WAKE_THREAD; +} diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c index 75d43b1039ea..b67ae2455fc5 100644 --- a/drivers/bus/mhi/core/pm.c +++ b/drivers/bus/mhi/core/pm.c @@ -140,6 +140,17 @@ void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state) } } +/* NOP for backward compatibility, host allowed to ring DB in M2 state */ +static void mhi_toggle_dev_wake_nop(struct mhi_controller *mhi_cntrl) +{ +} + +static void mhi_toggle_dev_wake(struct mhi_controller *mhi_cntrl) +{ + mhi_cntrl->wake_get(mhi_cntrl, false); + mhi_cntrl->wake_put(mhi_cntrl, true); +} + /* Handle device ready state transition */ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl) { @@ -683,3 +694,210 @@ int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl) return 0; } + +/* Assert device wake db */ +static void mhi_assert_dev_wake(struct mhi_controller *mhi_cntrl, bool force) +{ + unsigned long flags; + + /* + 
* If force flag is set, then increment the wake count value and + * ring wake db + */ + if (unlikely(force)) { + spin_lock_irqsave(&mhi_cntrl->wlock, flags); + atomic_inc(&mhi_cntrl->dev_wake); + if (MHI_WAKE_DB_FORCE_SET_VALID(mhi_cntrl->pm_state) && + !mhi_cntrl->wake_set) { + mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1); + mhi_cntrl->wake_set = true; + } + spin_unlock_irqrestore(&mhi_cntrl->wlock, flags); + } else { + /* + * If resources are already requested, then just increment + * the wake count value and return + */ + if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, 1, 0))) + return; + + spin_lock_irqsave(&mhi_cntrl->wlock, flags); + if ((atomic_inc_return(&mhi_cntrl->dev_wake) == 1) && + MHI_WAKE_DB_SET_VALID(mhi_cntrl->pm_state) && + !mhi_cntrl->wake_set) { + mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1); + mhi_cntrl->wake_set = true; + } + spin_unlock_irqrestore(&mhi_cntrl->wlock, flags); + } +} + +/* De-assert device wake db */ +static void mhi_deassert_dev_wake(struct mhi_controller *mhi_cntrl, + bool override) +{ + unsigned long flags; + + /* + * Only continue if there is a single resource, else just decrement + * and return + */ + if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, -1, 1))) + return; + + spin_lock_irqsave(&mhi_cntrl->wlock, flags); + if ((atomic_dec_return(&mhi_cntrl->dev_wake) == 0) && + MHI_WAKE_DB_CLEAR_VALID(mhi_cntrl->pm_state) && !override && + mhi_cntrl->wake_set) { + mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 0); + mhi_cntrl->wake_set = false; + } + spin_unlock_irqrestore(&mhi_cntrl->wlock, flags); +} + +int mhi_async_power_up(struct mhi_controller *mhi_cntrl) +{ + int ret; + u32 val; + enum mhi_ee_type current_ee; + enum dev_st_transition next_state; + + dev_info(mhi_cntrl->dev, "Requested to power ON\n"); + + if (mhi_cntrl->nr_irqs < mhi_cntrl->total_ev_rings) + return -EINVAL; + + /* Supply default wake routines if not provided by controller driver */ + if (!mhi_cntrl->wake_get || !mhi_cntrl->wake_put || + !mhi_cntrl->wake_toggle) { + mhi_cntrl->wake_get = mhi_assert_dev_wake; + mhi_cntrl->wake_put = mhi_deassert_dev_wake; + mhi_cntrl->wake_toggle = (mhi_cntrl->db_access & MHI_PM_M2) ? 
+ mhi_toggle_dev_wake_nop : mhi_toggle_dev_wake; + } + + mutex_lock(&mhi_cntrl->pm_mutex); + mhi_cntrl->pm_state = MHI_PM_DISABLE; + + if (!mhi_cntrl->pre_init) { + /* Setup device context */ + ret = mhi_init_dev_ctxt(mhi_cntrl); + if (ret) + goto error_dev_ctxt; + } + + ret = mhi_init_irq_setup(mhi_cntrl); + if (ret) + goto error_setup_irq; + + /* Setup BHI offset & INTVEC */ + write_lock_irq(&mhi_cntrl->pm_lock); + ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIOFF, &val); + if (ret) { + write_unlock_irq(&mhi_cntrl->pm_lock); + goto error_bhi_offset; + } + + mhi_cntrl->bhi = mhi_cntrl->regs + val; + + /* Setup BHIE offset */ + if (mhi_cntrl->fbc_download) { + ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF, &val); + if (ret) { + write_unlock_irq(&mhi_cntrl->pm_lock); + dev_err(mhi_cntrl->dev, "Error reading BHIE offset\n"); + goto error_bhi_offset; + } + + mhi_cntrl->bhie = mhi_cntrl->regs + val; + } + + mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0); + mhi_cntrl->pm_state = MHI_PM_POR; + mhi_cntrl->ee = MHI_EE_MAX; + current_ee = mhi_get_exec_env(mhi_cntrl); + write_unlock_irq(&mhi_cntrl->pm_lock); + + /* Confirm that the device is in valid exec env */ + if (!MHI_IN_PBL(current_ee) && current_ee != MHI_EE_AMSS) { + dev_err(mhi_cntrl->dev, "Not a valid EE for power on\n"); + ret = -EIO; + goto error_bhi_offset; + } + + /* Transition to next state */ + next_state = MHI_IN_PBL(current_ee) ? + DEV_ST_TRANSITION_PBL : DEV_ST_TRANSITION_READY; + + if (next_state == DEV_ST_TRANSITION_PBL) + schedule_work(&mhi_cntrl->fw_worker); + + mhi_queue_state_transition(mhi_cntrl, next_state); + + mutex_unlock(&mhi_cntrl->pm_mutex); + + dev_info(mhi_cntrl->dev, "Power on setup success\n"); + + return 0; + +error_bhi_offset: + mhi_deinit_free_irq(mhi_cntrl); + +error_setup_irq: + if (!mhi_cntrl->pre_init) + mhi_deinit_dev_ctxt(mhi_cntrl); + +error_dev_ctxt: + mutex_unlock(&mhi_cntrl->pm_mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(mhi_async_power_up); + +void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful) +{ + enum mhi_pm_state cur_state; + + /* If it's not a graceful shutdown, force MHI to linkdown state */ + if (!graceful) { + mutex_lock(&mhi_cntrl->pm_mutex); + write_lock_irq(&mhi_cntrl->pm_lock); + cur_state = mhi_tryset_pm_state(mhi_cntrl, + MHI_PM_LD_ERR_FATAL_DETECT); + write_unlock_irq(&mhi_cntrl->pm_lock); + mutex_unlock(&mhi_cntrl->pm_mutex); + if (cur_state != MHI_PM_LD_ERR_FATAL_DETECT) + dev_dbg(mhi_cntrl->dev, + "Failed to move to state: %s from: %s\n", + to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT), + to_mhi_pm_state_str(mhi_cntrl->pm_state)); + } + mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SHUTDOWN_PROCESS); + mhi_deinit_free_irq(mhi_cntrl); + + if (!mhi_cntrl->pre_init) { + /* Free all allocated resources */ + if (mhi_cntrl->fbc_image) { + mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image); + mhi_cntrl->fbc_image = NULL; + } + mhi_deinit_dev_ctxt(mhi_cntrl); + } +} +EXPORT_SYMBOL_GPL(mhi_power_down); + +int mhi_sync_power_up(struct mhi_controller *mhi_cntrl) +{ + int ret = mhi_async_power_up(mhi_cntrl); + + if (ret) + return ret; + + wait_event_timeout(mhi_cntrl->state_event, + MHI_IN_MISSION_MODE(mhi_cntrl->ee) || + MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state), + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + + return (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 
0 : -EIO; +} +EXPORT_SYMBOL(mhi_sync_power_up); diff --git a/include/linux/mhi.h b/include/linux/mhi.h index 80633099b90d..04c500323214 100644 --- a/include/linux/mhi.h +++ b/include/linux/mhi.h @@ -83,6 +83,17 @@ enum mhi_ch_type { MHI_CH_TYPE_INBOUND_COALESCED = 3, }; +/** + * struct image_info - Firmware and RDDM table table + * @mhi_buf - Buffer for firmware and RDDM table + * @entries - # of entries in table + */ +struct image_info { + struct mhi_buf *mhi_buf; + struct bhi_vec_entry *bhi_vec; + u32 entries; +}; + /** * enum mhi_ee_type - Execution environment types * @MHI_EE_PBL: Primary Bootloader @@ -272,6 +283,7 @@ struct mhi_controller_config { * @bus_id: Physical bus instance used by the controller * @regs: Base address of MHI MMIO register space * @bhi: Points to base of MHI BHI register space + * @bhie: Points to base of MHI BHIe register space * @wake_db: MHI WAKE doorbell register address * @iova_start: IOMMU starting address for data * @iova_stop: IOMMU stop address for data @@ -280,6 +292,7 @@ struct mhi_controller_config { * @fbc_download: MHI host needs to do complete image transfer * @sbl_size: SBL image size * @seg_len: BHIe vector size + * @fbc_image: Points to firmware image buffer * @max_chan: Maximum number of channels the controller supports * @mhi_chan: Points to the channel configuration table * @lpm_chans: List of channels that require LPM notifications @@ -335,6 +348,7 @@ struct mhi_controller { u32 bus_id; void __iomem *regs; void __iomem *bhi; + void __iomem *bhie; void __iomem *wake_db; dma_addr_t iova_start; @@ -344,6 +358,7 @@ struct mhi_controller { bool fbc_download; size_t sbl_size; size_t seg_len; + struct image_info *fbc_image; u32 max_chan; struct mhi_chan *mhi_chan; struct list_head lpm_chans; @@ -532,4 +547,40 @@ void mhi_driver_unregister(struct mhi_driver *mhi_drv); void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state); +/** + * mhi_prepare_for_power_up - Do pre-initialization before power up. + * This is optional, call this before power up if + * the controller does not want bus framework to + * automatically free any allocated memory during + * shutdown process. 
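+ * A typical controller driver sequence is mhi_prepare_for_power_up(),
+ * then mhi_sync_power_up() or mhi_async_power_up(); teardown is
+ * mhi_power_down() followed by mhi_unprepare_after_power_down().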
+ * @mhi_cntrl: MHI controller + */ +int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl); + +/** + * mhi_async_power_up - Start MHI power up sequence + * @mhi_cntrl: MHI controller + */ +int mhi_async_power_up(struct mhi_controller *mhi_cntrl); + +/** + * mhi_sync_power_up - Start MHI power up sequence and wait till the device + * device enters valid EE state + * @mhi_cntrl: MHI controller + */ +int mhi_sync_power_up(struct mhi_controller *mhi_cntrl); + +/** + * mhi_power_down - Start MHI power down sequence + * @mhi_cntrl: MHI controller + * @graceful: Link is still accessible, so do a graceful shutdown process + */ +void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful); + +/** + * mhi_unprepare_after_power_down - Free any allocated memory after power down + * @mhi_cntrl: MHI controller + */ +void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl); + #endif /* _MHI_H_ */ From patchwork Thu Jan 23 11:18:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347383 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A788F159A for ; Thu, 23 Jan 2020 11:19:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7B6A32467F for ; Thu, 23 Jan 2020 11:19:15 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="ao6lIidT" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729152AbgAWLTP (ORCPT ); Thu, 23 Jan 2020 06:19:15 -0500 Received: from mail-pl1-f193.google.com ([209.85.214.193]:42755 "EHLO mail-pl1-f193.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729146AbgAWLTO (ORCPT ); Thu, 23 Jan 2020 06:19:14 -0500 Received: by mail-pl1-f193.google.com with SMTP id p9so1217420plk.9 for ; Thu, 23 Jan 2020 03:19:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=uT4jStC2COjiLy5iu1J5/xW3m4eW/2z2+pN3SjDLaws=; b=ao6lIidT/kfOyv95yQA8Lu9FCrKSRWJimu2Kx1IRPv29RCD/NPi/FYYUmD6MFmGVwK UKHs5ohaOOEyTGSjWmAJp8yRxkE1UZVozkeDbH4EXbwtIBqgk7/h3exTCOsPjt8TdHZ3 Gn8tseWyt6RpnU3DF4T2HhDkiy1x6Bb7TOc9FQ6eoLMQrgO5FrFrn6XzFHe5/4xN3v0q bYOq7VcSco/MOnbIlpCb4YWu5UQxp7r5f9hTv6YsXMnERmf7jUF7UYsZsHbmroIoN68E L8KiVCeagoeWVY581IktPXa8IquC5I/M+KKyBUWUFihw5r2UnDmUvEWHzQ0GGizZLM2C JLjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=uT4jStC2COjiLy5iu1J5/xW3m4eW/2z2+pN3SjDLaws=; b=JautQGGXVYt5OcBMYGypP4oledku9/ZbZgezrvJw0Im6DLCn0DpxV1dRgKYKbbxnqe CcmTY8Vz8mikl513i330QZKJBBBIB5WRRK0ZwGwkliw1UyUcyIWJuBUHXgDOrjRG8w5z tWtocLWK32H76dcfmHpipTQ6VZEAhLnRRG5Rmq5ki5dMhT01i3H+r1QzEmM7jv7Fab7r v6uouPomDjYoxG47HVJ87ktop0BC2CPg9Y0EDwzypSloIFQxxIE70SCnT//i+byvEMSu RsUtDLgbwkw446qjwW4YekYFyemmsaJs//07GYdFm8vlQC18W8vagvXaE7Aq7C8O5xLX pJRA== X-Gm-Message-State: APjAAAUbnptORm68e+AxWhpXfx5CSYhE1gXbwTozsqThAsBhXKpmJPtc 9YpeQTG1D7tTR4hJxzIzQwSs X-Google-Smtp-Source: APXvYqzeDK3voTfF3J7OPkgCU/t8XojBSc7HjGmUL5qKGGDBq8HpiMj5//lngt96xarRCqULWgXebA== X-Received: by 2002:a17:902:8d8a:: with SMTP id v10mr15412330plo.90.1579778353855; Thu, 23 Jan 2020 03:19:13 -0800 (PST) Received: from 
localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:13 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 08/16] bus: mhi: core: Add support for downloading firmware over BHIe Date: Thu, 23 Jan 2020 16:48:28 +0530 Message-Id: <20200123111836.7414-9-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org MHI supports downloading the device firmware over BHI/BHIe (Boot Host Interface) protocol. Hence, this commit adds necessary helpers, which will be called during device power up stage. This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/989 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: splitted the data transfer patch and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/boot.c | 268 ++++++++++++++++++++++++++++++++ drivers/bus/mhi/core/init.c | 1 + drivers/bus/mhi/core/internal.h | 1 + 3 files changed, 270 insertions(+) diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c index 0996f18c4281..36956fb6eff2 100644 --- a/drivers/bus/mhi/core/boot.c +++ b/drivers/bus/mhi/core/boot.c @@ -20,6 +20,121 @@ #include #include "internal.h" +/* Download AMSS image to device */ +static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl, + const struct mhi_buf *mhi_buf) +{ + void __iomem *base = mhi_cntrl->bhie; + rwlock_t *pm_lock = &mhi_cntrl->pm_lock; + u32 tx_status, sequence_id; + + read_lock_bh(pm_lock); + if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) { + read_unlock_bh(pm_lock); + return -EIO; + } + + mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS, + upper_32_bits(mhi_buf->dma_addr)); + + mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS, + lower_32_bits(mhi_buf->dma_addr)); + + mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len); + + sequence_id = prandom_u32() & BHIE_TXVECSTATUS_SEQNUM_BMSK; + mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS, + BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT, + sequence_id); + read_unlock_bh(pm_lock); + + /* Wait for the image download to complete */ + wait_event_timeout(mhi_cntrl->state_event, + MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) || + mhi_read_reg_field(mhi_cntrl, base, + BHIE_TXVECSTATUS_OFFS, + BHIE_TXVECSTATUS_STATUS_BMSK, + BHIE_TXVECSTATUS_STATUS_SHFT, + &tx_status) || tx_status, + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + + if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) + return -EIO; + + return (tx_status == BHIE_TXVECSTATUS_STATUS_XFER_COMPL) ? 
0 : -EIO; +} + +/* Download SBL image to device */ +static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl, + dma_addr_t dma_addr, + size_t size) +{ + u32 tx_status, val, session_id; + int i, ret; + void __iomem *base = mhi_cntrl->bhi; + rwlock_t *pm_lock = &mhi_cntrl->pm_lock; + struct { + char *name; + u32 offset; + } error_reg[] = { + { "ERROR_CODE", BHI_ERRCODE }, + { "ERROR_DBG1", BHI_ERRDBG1 }, + { "ERROR_DBG2", BHI_ERRDBG2 }, + { "ERROR_DBG3", BHI_ERRDBG3 }, + { NULL }, + }; + + read_lock_bh(pm_lock); + if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) { + read_unlock_bh(pm_lock); + goto invalid_pm_state; + } + + /* Start SBL download via BHI protocol */ + mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0); + mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH, + upper_32_bits(dma_addr)); + mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW, + lower_32_bits(dma_addr)); + mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size); + session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK; + mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id); + read_unlock_bh(pm_lock); + + /* Wait for the image download to complete */ + ret = wait_event_timeout(mhi_cntrl->state_event, + MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) || + mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS, + BHI_STATUS_MASK, BHI_STATUS_SHIFT, + &tx_status) || tx_status, + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) + goto invalid_pm_state; + + if (tx_status == BHI_STATUS_ERROR) { + dev_err(mhi_cntrl->dev, "Image transfer failed\n"); + read_lock_bh(pm_lock); + if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) { + for (i = 0; error_reg[i].name; i++) { + ret = mhi_read_reg(mhi_cntrl, base, + error_reg[i].offset, &val); + if (ret) + break; + dev_err(mhi_cntrl->dev, "Reg: %s value: 0x%x\n", + error_reg[i].name, val); + } + } + read_unlock_bh(pm_lock); + goto invalid_pm_state; + } + + return (!ret) ? 
-ETIMEDOUT : 0; + +invalid_pm_state: + + return -EIO; +} + void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl, struct image_info *image_info) { @@ -87,3 +202,156 @@ int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl, return -ENOMEM; } + +static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl, + const struct firmware *firmware, + struct image_info *img_info) +{ + size_t remainder = firmware->size; + size_t to_cpy; + const u8 *buf = firmware->data; + int i = 0; + struct mhi_buf *mhi_buf = img_info->mhi_buf; + struct bhi_vec_entry *bhi_vec = img_info->bhi_vec; + + while (remainder) { + to_cpy = min(remainder, mhi_buf->len); + memcpy(mhi_buf->buf, buf, to_cpy); + bhi_vec->dma_addr = mhi_buf->dma_addr; + bhi_vec->size = to_cpy; + + buf += to_cpy; + remainder -= to_cpy; + i++; + bhi_vec++; + mhi_buf++; + } +} + +void mhi_fw_load_worker(struct work_struct *work) +{ + int ret; + struct mhi_controller *mhi_cntrl; + const char *fw_name; + const struct firmware *firmware = NULL; + struct image_info *image_info; + void *buf; + dma_addr_t dma_addr; + size_t size; + + mhi_cntrl = container_of(work, struct mhi_controller, fw_worker); + + dev_dbg(mhi_cntrl->dev, "Waiting for device to enter PBL from: %s\n", + TO_MHI_EXEC_STR(mhi_cntrl->ee)); + + ret = wait_event_timeout(mhi_cntrl->state_event, + MHI_IN_PBL(mhi_cntrl->ee) || + MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state), + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + + if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) { + dev_err(mhi_cntrl->dev, "Device MHI is not in valid state\n"); + return; + } + + /* If device is in pass through, do reset to ready state transition */ + if (mhi_cntrl->ee == MHI_EE_PTHRU) + goto fw_load_ee_pthru; + + fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ? + mhi_cntrl->edl_image : mhi_cntrl->fw_image; + + if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size || + !mhi_cntrl->seg_len))) { + dev_err(mhi_cntrl->dev, + "No firmware image defined or !sbl_size || !seg_len\n"); + return; + } + + ret = request_firmware(&firmware, fw_name, mhi_cntrl->dev); + if (ret) { + dev_err(mhi_cntrl->dev, "Error loading firmware: %d\n", + ret); + return; + } + + size = (mhi_cntrl->fbc_download) ? 
mhi_cntrl->sbl_size : firmware->size; + + /* SBL size provided is maximum size, not necessarily the image size */ + if (size > firmware->size) + size = firmware->size; + + buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_addr, GFP_KERNEL); + if (!buf) { + release_firmware(firmware); + return; + } + + /* Download SBL image */ + memcpy(buf, firmware->data, size); + ret = mhi_fw_load_sbl(mhi_cntrl, dma_addr, size); + mhi_free_coherent(mhi_cntrl, size, buf, dma_addr); + + if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL) + release_firmware(firmware); + + /* Error or in EDL mode, we're done */ + if (ret || mhi_cntrl->ee == MHI_EE_EDL) + return; + + write_lock_irq(&mhi_cntrl->pm_lock); + mhi_cntrl->dev_state = MHI_STATE_RESET; + write_unlock_irq(&mhi_cntrl->pm_lock); + + /* + * If we're doing fbc, populate vector tables while + * device transitioning into MHI READY state + */ + if (mhi_cntrl->fbc_download) { + ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image, + firmware->size); + if (ret) + goto error_alloc_fw_table; + + /* Load the firmware into BHIE vec table */ + mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image); + } + +fw_load_ee_pthru: + /* Transitioning into MHI RESET->READY state */ + ret = mhi_ready_state_transition(mhi_cntrl); + + if (!mhi_cntrl->fbc_download) + return; + + if (ret) + goto error_read; + + /* Wait for the SBL event */ + ret = wait_event_timeout(mhi_cntrl->state_event, + mhi_cntrl->ee == MHI_EE_SBL || + MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state), + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + + if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) { + dev_err(mhi_cntrl->dev, "MHI did not enter SBL\n"); + goto error_read; + } + + /* Start full firmware image download */ + image_info = mhi_cntrl->fbc_image; + ret = mhi_fw_load_amss(mhi_cntrl, + /* Vector table is the last entry */ + &image_info->mhi_buf[image_info->entries - 1]); + + release_firmware(firmware); + + return; + +error_read: + mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image); + mhi_cntrl->fbc_image = NULL; + +error_alloc_fw_table: + release_firmware(firmware); +} diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index 7fff92e9661b..2f06bf958f58 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -753,6 +753,7 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl, spin_lock_init(&mhi_cntrl->wlock); INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker); INIT_WORK(&mhi_cntrl->syserr_worker, mhi_pm_sys_err_worker); + INIT_WORK(&mhi_cntrl->fw_worker, mhi_fw_load_worker); init_waitqueue_head(&mhi_cntrl->state_event); mhi_cmd = mhi_cntrl->mhi_cmd; diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h index d920264ded21..eab9c051ca5e 100644 --- a/drivers/bus/mhi/core/internal.h +++ b/drivers/bus/mhi/core/internal.h @@ -590,6 +590,7 @@ int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl, enum dev_st_transition state); void mhi_pm_st_worker(struct work_struct *work); void mhi_pm_sys_err_worker(struct work_struct *work); +void mhi_fw_load_worker(struct work_struct *work); int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl); void mhi_ctrl_ev_task(unsigned long data); int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl); From patchwork Thu Jan 23 11:18:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347399 Return-Path: Received: from mail.kernel.org 
(pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DB76A1580 for ; Thu, 23 Jan 2020 11:19:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id A5B9D2467F for ; Thu, 23 Jan 2020 11:19:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="MRgTl5Az" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729199AbgAWLTU (ORCPT ); Thu, 23 Jan 2020 06:19:20 -0500 Received: from mail-pj1-f66.google.com ([209.85.216.66]:37278 "EHLO mail-pj1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726194AbgAWLTS (ORCPT ); Thu, 23 Jan 2020 06:19:18 -0500 Received: by mail-pj1-f66.google.com with SMTP id m13so1147205pjb.2 for ; Thu, 23 Jan 2020 03:19:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=7x+RRKDQ+82parLCWdi8TmyvsjI5/vfpTHYZFDjvwR0=; b=MRgTl5Az91oaXaAfTU5deBxF8g2k5XHvtJaXQ+EN7dKBk1Uc0L5q82PJ1x30tnNWLE Zqr5fsErGMdPAPvzB4uKOd5B4sr+7iTtaYEFkpsRPk1zfHiuz/Sxp0eG1DiiBElWTh4u qfS+mKVIKDnnHmyBju6wRBo7vb5YAORb5AmAJryn2JeQ1WPpyzKHGLKfIcROg0frNRuF t41IhPywJo42L91rHKmBOq4zXYnINIVqGnno61NDZWOvn3SMJXNi7PIPw8D+au+H3/fj q2lFVgqH+bX8ymuD9fsSGb8ts7BHMt8OHiJvY2f6+g49oTRKIRZ49ESm1ioOfyzSIfZM cmDA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=7x+RRKDQ+82parLCWdi8TmyvsjI5/vfpTHYZFDjvwR0=; b=PFr7NtCI7/Yq/+l1jm/DcnCV5k0qrc+Vwu0e49FDQf9QxT0IEdxKnAXbgL2b5+ACR1 /sDDpihNNDOWZWsljZAdvywxh5G/fFJSXoMa3f4wZ89b8H7o4fBEoH2QGJRDNeB/7YgG cuie2mVEfeSZe6yabWw8CbfYSTRvObV76SH38nUg/flRi+txgRTjH2rCQudexAOvah8o j/yyZDaTbBDiKdMLTI/P4WT15g5z0qSG07uWgtki9qYnqPB3FKF03H5iKw9k/JAWp2KE 3L9DjDUYUek6Gg28bLzHCnV77Pznh1rzD+7G1GDRUl6TYaKRxTrryUFHEHCM6H+soZiO m98w== X-Gm-Message-State: APjAAAVZPvdCjWvKnqzvZcjnjRX5L6iMiWp2GeJIfqHMQ8Q6h/gSnmg8 ZUrE+yGW3FY+zlCe/AZ0h3fJ X-Google-Smtp-Source: APXvYqwXpk3yf/PGAa/Oqsk5+ZlEbUjkByryMmw7NQA+qPRKaJEQIhmL9OAtaJDbSjnd7nAsh/iTQw== X-Received: by 2002:a17:902:fe0d:: with SMTP id g13mr2067516plj.124.1579778357255; Thu, 23 Jan 2020 03:19:17 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:16 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 09/16] bus: mhi: core: Add support for downloading RDDM image during panic Date: Thu, 23 Jan 2020 16:48:29 +0530 Message-Id: <20200123111836.7414-10-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org MHI protocol supports downloading RDDM (RAM Dump) image from the device through BHIE. This is useful to debugging as the RDDM image can capture the firmware state. 
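For illustration only, a minimal controller-side sketch of how these interfaces could be used; the panic-notifier wiring, my_cntrl, my_mhi_panic_handler and the 4 MB dump size are assumptions for the example, not part of this patch. The idea is to set rddm_size before power up so the core allocates the BHIE vector table, then pull the dump from a panic notifier with in_panic set so the core polls instead of sleeping.

#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/sizes.h>
#include <linux/mhi.h>

static struct mhi_controller *my_cntrl;	/* assumed: saved at controller probe */

static int my_mhi_panic_handler(struct notifier_block *nb,
				unsigned long action, void *data)
{
	/* in_panic = true: the core busy-waits on BHIE RXVECSTATUS, no sleeping */
	if (!mhi_download_rddm_img(my_cntrl, true))
		pr_info("RDDM image captured in my_cntrl->rddm_image\n");

	return NOTIFY_DONE;
}

static struct notifier_block my_mhi_panic_nb = {
	.notifier_call = my_mhi_panic_handler,
};

static int my_controller_setup(struct mhi_controller *mhi_cntrl)
{
	/*
	 * A non-zero rddm_size tells mhi_prepare_for_power_up() to allocate
	 * the BHIE vector table that will hold the dump.
	 */
	mhi_cntrl->rddm_size = SZ_4M;	/* assumed size for this sketch */
	my_cntrl = mhi_cntrl;

	return atomic_notifier_chain_register(&panic_notifier_list,
					      &my_mhi_panic_nb);
}
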
This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/989 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: splitted the data transfer patch and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/boot.c | 153 ++++++++++++++++++++++++++++++++ drivers/bus/mhi/core/init.c | 38 ++++++++ drivers/bus/mhi/core/internal.h | 2 + drivers/bus/mhi/core/pm.c | 31 +++++++ include/linux/mhi.h | 24 +++++ 5 files changed, 248 insertions(+) diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c index 36956fb6eff2..facfec26eca1 100644 --- a/drivers/bus/mhi/core/boot.c +++ b/drivers/bus/mhi/core/boot.c @@ -20,6 +20,159 @@ #include #include "internal.h" +/* Setup RDDM vector table for RDDM transfer and program RXVEC */ +void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl, + struct image_info *img_info) +{ + struct mhi_buf *mhi_buf = img_info->mhi_buf; + struct bhi_vec_entry *bhi_vec = img_info->bhi_vec; + void __iomem *base = mhi_cntrl->bhie; + u32 sequence_id; + unsigned int i; + + for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) { + bhi_vec->dma_addr = mhi_buf->dma_addr; + bhi_vec->size = mhi_buf->len; + } + + dev_dbg(mhi_cntrl->dev, "BHIe programming for RDDM\n"); + + mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS, + upper_32_bits(mhi_buf->dma_addr)); + + mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS, + lower_32_bits(mhi_buf->dma_addr)); + + mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len); + sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK; + + if (unlikely(!sequence_id)) + sequence_id = 1; + + mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS, + BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT, + sequence_id); + + dev_dbg(mhi_cntrl->dev, "Address: %p and len: 0x%lx sequence: %u\n", + &mhi_buf->dma_addr, mhi_buf->len, sequence_id); +} + +/* Collect RDDM buffer during kernel panic */ +static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl) +{ + int ret; + u32 rx_status; + enum mhi_ee_type ee; + const u32 delayus = 2000; + u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus; + const u32 rddm_timeout_us = 200000; + int rddm_retry = rddm_timeout_us / delayus; + void __iomem *base = mhi_cntrl->bhie; + + dev_dbg(mhi_cntrl->dev, + "Entered with pm_state:%s dev_state:%s ee:%s\n", + to_mhi_pm_state_str(mhi_cntrl->pm_state), + TO_MHI_STATE_STR(mhi_cntrl->dev_state), + TO_MHI_EXEC_STR(mhi_cntrl->ee)); + + /* + * This should only be executing during a kernel panic, we expect all + * other cores to shutdown while we're collecting RDDM buffer. After + * returning from this function, we expect the device to reset. + * + * Normaly, we read/write pm_state only after grabbing the + * pm_lock, since we're in a panic, skipping it. Also there is no + * gurantee that this state change would take effect since + * we're setting it w/o grabbing pm_lock + */ + mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT; + /* update should take the effect immediately */ + smp_wmb(); + + /* + * Make sure device is not already in RDDM. In case the device asserts + * and a kernel panic follows, device will already be in RDDM. + * Do not trigger SYS ERR again and proceed with waiting for + * image download completion. 
+ */ + ee = mhi_get_exec_env(mhi_cntrl); + if (ee != MHI_EE_RDDM) { + dev_dbg(mhi_cntrl->dev, + "Trigger device into RDDM mode using SYS ERR\n"); + mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR); + + dev_dbg(mhi_cntrl->dev, "Waiting for device to enter RDDM\n"); + while (rddm_retry--) { + ee = mhi_get_exec_env(mhi_cntrl); + if (ee == MHI_EE_RDDM) + break; + + udelay(delayus); + } + + if (rddm_retry <= 0) { + /* Hardware reset so force device to enter RDDM */ + dev_dbg(mhi_cntrl->dev, + "Did not enter RDDM, do a host req reset\n"); + mhi_write_reg(mhi_cntrl, mhi_cntrl->regs, + MHI_SOC_RESET_REQ_OFFSET, + MHI_SOC_RESET_REQ); + udelay(delayus); + } + + ee = mhi_get_exec_env(mhi_cntrl); + } + + dev_dbg(mhi_cntrl->dev, + "Waiting for image download completion, current EE: %s\n", + TO_MHI_EXEC_STR(ee)); + + while (retry--) { + ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, + BHIE_RXVECSTATUS_STATUS_BMSK, + BHIE_RXVECSTATUS_STATUS_SHFT, + &rx_status); + if (ret) + return -EIO; + + if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) + return 0; + + udelay(delayus); + } + + ee = mhi_get_exec_env(mhi_cntrl); + ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status); + + dev_err(mhi_cntrl->dev, "Did not complete RDDM transfer\n"); + dev_err(mhi_cntrl->dev, "Current EE: %s\n", TO_MHI_EXEC_STR(ee)); + dev_err(mhi_cntrl->dev, "RXVEC_STATUS: 0x%x\n", rx_status); + + return -EIO; +} + +/* Download RDDM image from device */ +int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic) +{ + void __iomem *base = mhi_cntrl->bhie; + u32 rx_status; + + if (in_panic) + return __mhi_download_rddm_in_panic(mhi_cntrl); + + /* Wait for the image download to complete */ + wait_event_timeout(mhi_cntrl->state_event, + mhi_read_reg_field(mhi_cntrl, base, + BHIE_RXVECSTATUS_OFFS, + BHIE_RXVECSTATUS_STATUS_BMSK, + BHIE_RXVECSTATUS_STATUS_SHFT, + &rx_status) || rx_status, + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + + return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO; +} +EXPORT_SYMBOL_GPL(mhi_download_rddm_img); + /* Download AMSS image to device */ static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl, const struct mhi_buf *mhi_buf) diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index 2f06bf958f58..f54429c9b7fc 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -830,6 +830,7 @@ EXPORT_SYMBOL_GPL(mhi_unregister_controller); int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl) { int ret; + u32 bhie_off; mutex_lock(&mhi_cntrl->pm_mutex); @@ -837,12 +838,44 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl) if (ret) goto error_dev_ctxt; + /* + * Allocate RDDM table if specified, this table is for debugging purpose + */ + if (mhi_cntrl->rddm_size) { + mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->rddm_image, + mhi_cntrl->rddm_size); + + /* + * This controller supports RDDM, so we need to manually clear + * BHIE RX registers since POR values are undefined. 
+ */ + ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF, + &bhie_off); + if (ret) { + dev_err(mhi_cntrl->dev, "Error getting BHIE offset\n"); + goto bhie_error; + } + + memset_io(mhi_cntrl->regs + bhie_off + BHIE_RXVECADDR_LOW_OFFS, + 0, BHIE_RXVECSTATUS_OFFS - BHIE_RXVECADDR_LOW_OFFS + + 4); + + if (mhi_cntrl->rddm_image) + mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image); + } + mhi_cntrl->pre_init = true; mutex_unlock(&mhi_cntrl->pm_mutex); return 0; +bhie_error: + if (mhi_cntrl->rddm_image) { + mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image); + mhi_cntrl->rddm_image = NULL; + } + error_dev_ctxt: mutex_unlock(&mhi_cntrl->pm_mutex); @@ -857,6 +890,11 @@ void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl) mhi_cntrl->fbc_image = NULL; } + if (mhi_cntrl->rddm_image) { + mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image); + mhi_cntrl->rddm_image = NULL; + } + mhi_deinit_dev_ctxt(mhi_cntrl); mhi_cntrl->pre_init = false; } diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h index eab9c051ca5e..889e91bcb2f8 100644 --- a/drivers/bus/mhi/core/internal.h +++ b/drivers/bus/mhi/core/internal.h @@ -626,6 +626,8 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl); void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl); int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl); void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl); +void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl, + struct image_info *img_info); /* Memory allocation methods */ static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl, diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c index b67ae2455fc5..0bdc667830f0 100644 --- a/drivers/bus/mhi/core/pm.c +++ b/drivers/bus/mhi/core/pm.c @@ -453,6 +453,16 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl, /* We must notify MHI control driver so it can clean up first */ if (transition_state == MHI_PM_SYS_ERR_PROCESS) { + /* + * If controller supports RDDM, we do not process + * SYS error state, instead we will jump directly + * to RDDM state + */ + if (mhi_cntrl->rddm_image) { + dev_dbg(mhi_cntrl->dev, + "Controller supports RDDM, so skip SYS_ERR\n"); + return; + } mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data, MHI_CB_SYS_ERROR); } @@ -901,3 +911,24 @@ int mhi_sync_power_up(struct mhi_controller *mhi_cntrl) return (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -EIO; } EXPORT_SYMBOL(mhi_sync_power_up); + +int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl) +{ + int ret; + + /* Check if device is already in RDDM */ + if (mhi_cntrl->ee == MHI_EE_RDDM) + return 0; + + dev_dbg(mhi_cntrl->dev, "Triggering SYS_ERR to force RDDM state\n"); + mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR); + + /* Wait for RDDM event */ + ret = wait_event_timeout(mhi_cntrl->state_event, + mhi_cntrl->ee == MHI_EE_RDDM, + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + ret = ret ? 
0 : -EIO; + + return ret; +} +EXPORT_SYMBOL_GPL(mhi_force_rddm_mode); diff --git a/include/linux/mhi.h b/include/linux/mhi.h index 04c500323214..1b018e0d04f4 100644 --- a/include/linux/mhi.h +++ b/include/linux/mhi.h @@ -290,9 +290,11 @@ struct mhi_controller_config { * @fw_image: Firmware image name for normal booting * @edl_image: Firmware image name for emergency download mode * @fbc_download: MHI host needs to do complete image transfer + * @rddm_size: RAM dump size that host should allocate for debugging purpose * @sbl_size: SBL image size * @seg_len: BHIe vector size * @fbc_image: Points to firmware image buffer + * @rddm_image: Points to RAM dump buffer * @max_chan: Maximum number of channels the controller supports * @mhi_chan: Points to the channel configuration table * @lpm_chans: List of channels that require LPM notifications @@ -356,9 +358,11 @@ struct mhi_controller { const char *fw_image; const char *edl_image; bool fbc_download; + size_t rddm_size; size_t sbl_size; size_t seg_len; struct image_info *fbc_image; + struct image_info *rddm_image; u32 max_chan; struct mhi_chan *mhi_chan; struct list_head lpm_chans; @@ -583,4 +587,24 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful); */ void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl); +/** + * mhi_download_rddm_img - Download ramdump image from device for + * debugging purpose. + * @mhi_cntrl: MHI controller + * @in_panic: Download rddm image during kernel panic + */ +int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic); + +/** + * mhi_force_rddm_mode - Force device into rddm mode + * @mhi_cntrl: MHI controller + */ +int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl); + +/** + * mhi_get_mhi_state - Get MHI state of the device + * @mhi_cntrl: MHI controller + */ +enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl); + #endif /* _MHI_H_ */ From patchwork Thu Jan 23 11:18:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347385 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 17AB11580 for ; Thu, 23 Jan 2020 11:19:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id CB42224655 for ; Thu, 23 Jan 2020 11:19:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="Ap2kkLw1" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729221AbgAWLT0 (ORCPT ); Thu, 23 Jan 2020 06:19:26 -0500 Received: from mail-pj1-f67.google.com ([209.85.216.67]:56181 "EHLO mail-pj1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729222AbgAWLTW (ORCPT ); Thu, 23 Jan 2020 06:19:22 -0500 Received: by mail-pj1-f67.google.com with SMTP id d5so1060936pjz.5 for ; Thu, 23 Jan 2020 03:19:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=TL5jPJyaHOJr3n0Nwsyj6BmbowZte59fMzKq9fzNWO8=; b=Ap2kkLw1nD5gWSCMNLiT5U/DYidI0VYk2gTPlZLRYT7AKABhBjrzESQnYPIn+KPV66 2RAXmiKOP6xByvwtR6jArZoVFQoxdBx8pasdoVmgg4xvpbUKACXUQcsSDO6K9nYo7B+r OR5PglW4ei8pEPdYMjRak1DRNFxHLZjYkq5kGTOvteDlVLePTy/YohMAjOrFW05SHRAz rh2WAtWxG01VWX3HsiOFftSyZF38utBEjvM9I7o5KSP8VLL4X8l6+/J9Tau1w3eEtHTC 
1uXw4IrNtTwna2eSMItkAdTjzJR/+wXzs2XRSUvqlmQhNcefLwobT7atoYvndJj6fvI/ M5Pw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=TL5jPJyaHOJr3n0Nwsyj6BmbowZte59fMzKq9fzNWO8=; b=uF8BnrkV1BE2smfg/xiOVG+vnzm9Ucfnnp7bVh745afpmD8hyb3+pXa3Wo9KPrBUm2 vaTlxSpKaed7aOPN/9drVNjUTyDUPA8ehIHv5QgXgmyK/A/b3vSWpojzGLrtXN1L6pgz nMhyo4uB4Pqi1GHC/JLCZ86hI82VQKrd9WhfLG4YZjCCywgkVbCasEX9tHoH+Pr3daf+ CVwrwNjETzRe2cwnDOjIFwc+Im7dTIbw15NYVdpHhpf8Ql0ZIE+MZ5Y01TFePD7hg1Cy MV/YWVW9oNfmrBDLD978bM8bVgy/gQUCgzUs2XaEeLPiL+/V823y9KDQ+ZMriAfeabYk 6Yjg== X-Gm-Message-State: APjAAAXYTHPEjUVKpqfElMs8BA/bHsuNfYY2WS1zdTrRE8bq02MtV7y0 9TyP97DAMCXGKjq7LM0jttCo X-Google-Smtp-Source: APXvYqzvpOc4E9Xr9lz74dvzCDzRuNcXZjG6RNcSlyc42+bxZlmgMdCZtrl0dgS1kD5izq1033Cnqw== X-Received: by 2002:a17:90a:f998:: with SMTP id cq24mr3850341pjb.6.1579778360622; Thu, 23 Jan 2020 03:19:20 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:20 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 10/16] bus: mhi: core: Add support for processing events from client device Date: Thu, 23 Jan 2020 16:48:30 +0530 Message-Id: <20200123111836.7414-11-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This commit adds support for processing the MHI data and control events from the client device. The client device can report various events such as EE events, state change events by interrupting the host through IRQ and adding events to the event rings allocated by the host during initialization. 
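For readers new to the ring layout, here is a small stand-alone model of the walk the new event handlers perform: translate the device-visible read pointer from the event ring context back to a host virtual address, then consume elements until the local pointer catches up. It is plain user-space C with invented names such as demo_ring, only meant to mirror the shape of mhi_process_ctrl_ev_ring()/mhi_process_data_event_ring(), not to reproduce them.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct demo_ring {
	uint64_t iommu_base;	/* bus address of the ring, as the device sees it */
	char *base;		/* host virtual address of the same memory */
	size_t el_size;
	size_t len;
	char *rp;		/* host-side read pointer */
};

/* Same translation as mhi_to_virtual(): bus address -> host virtual address */
static void *demo_to_virtual(struct demo_ring *ring, uint64_t addr)
{
	return (addr - ring->iommu_base) + ring->base;
}

/* Same shape as mhi_del_ring_element(): advance and wrap the circular ring */
static void demo_advance(struct demo_ring *ring)
{
	ring->rp += ring->el_size;
	if (ring->rp >= ring->base + ring->len)
		ring->rp = ring->base;
}

int main(void)
{
	char ring_mem[4 * 16];	/* four 16-byte elements */
	struct demo_ring ring = {
		.iommu_base = 0x1000, .base = ring_mem,
		.el_size = 16, .len = sizeof(ring_mem), .rp = ring_mem,
	};
	/* Pretend the device published a read pointer two elements ahead */
	uint64_t ctxt_rp = 0x1000 + 2 * 16;
	char *dev_rp = demo_to_virtual(&ring, ctxt_rp);
	int count = 0;

	while (ring.rp != dev_rp) {	/* same loop shape as the event tasklets */
		count++;
		demo_advance(&ring);
	}
	printf("processed %d event elements\n", count);
	return 0;
}
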
This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/988 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: splitted the data transfer patch and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/init.c | 19 ++ drivers/bus/mhi/core/internal.h | 10 + drivers/bus/mhi/core/main.c | 471 ++++++++++++++++++++++++++++++++ include/linux/mhi.h | 15 + 4 files changed, 515 insertions(+) diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index f54429c9b7fc..c946693bdae4 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -534,6 +534,19 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl, mhi_event->data_type = event_cfg->data_type; + switch (mhi_event->data_type) { + case MHI_ER_DATA: + mhi_event->process_event = mhi_process_data_event_ring; + break; + case MHI_ER_CTRL: + mhi_event->process_event = mhi_process_ctrl_ev_ring; + break; + default: + dev_err(mhi_cntrl->dev, + "Event Ring type not supported\n"); + goto error_ev_cfg; + } + mhi_event->hw_ring = event_cfg->hardware_event; if (mhi_event->hw_ring) mhi_cntrl->hw_ev_rings++; @@ -768,6 +781,12 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl, mhi_event->mhi_cntrl = mhi_cntrl; spin_lock_init(&mhi_event->lock); + if (mhi_event->data_type == MHI_ER_CTRL) + tasklet_init(&mhi_event->task, mhi_ctrl_ev_task, + (ulong)mhi_event); + else + tasklet_init(&mhi_event->task, mhi_ev_task, + (ulong)mhi_event); } mhi_chan = mhi_cntrl->mhi_chan; diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h index 889e91bcb2f8..314d0909c372 100644 --- a/drivers/bus/mhi/core/internal.h +++ b/drivers/bus/mhi/core/internal.h @@ -500,6 +500,8 @@ struct mhi_buf_info { void *wp; size_t len; void *cb_buf; + bool used; /* Indicates whether the buffer is used or not */ + bool pre_mapped; /* Already pre-mapped by client */ enum dma_data_direction dir; }; @@ -652,6 +654,14 @@ static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl, dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle); } +/* Event processing methods */ +void mhi_ctrl_ev_task(unsigned long data); +void mhi_ev_task(unsigned long data); +int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl, + struct mhi_event *mhi_event, u32 event_quota); +int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl, + struct mhi_event *mhi_event, u32 event_quota); + /* ISR handlers */ irqreturn_t mhi_irq_handler(int irq_number, void *dev); irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev); diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c index 453feba8f02a..8450c74b4525 100644 --- a/drivers/bus/mhi/core/main.c +++ b/drivers/bus/mhi/core/main.c @@ -149,6 +149,16 @@ static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr) return (addr - ring->iommu_base) + ring->base; } +static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl, + struct mhi_ring *ring) +{ + ring->rp += ring->el_size; + if (ring->rp >= (ring->base + ring->len)) + ring->rp = ring->base; + /* smp update */ + smp_wmb(); +} + int mhi_destroy_device(struct device *dev, void *data) { struct mhi_device *mhi_dev; @@ -337,3 +347,464 @@ irqreturn_t mhi_intvec_handler(int irq_number, void *dev) return IRQ_WAKE_THREAD; } + +static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl, + struct mhi_ring *ring) +{ + dma_addr_t ctxt_wp; + + /* Update the WP */ + ring->wp += ring->el_size; + ctxt_wp = 
*ring->ctxt_wp + ring->el_size; + + if (ring->wp >= (ring->base + ring->len)) { + ring->wp = ring->base; + ctxt_wp = ring->iommu_base; + } + + *ring->ctxt_wp = ctxt_wp; + + /* Update the RP */ + ring->rp += ring->el_size; + if (ring->rp >= (ring->base + ring->len)) + ring->rp = ring->base; + + /* Update to all cores */ + smp_wmb(); +} + +static int parse_xfer_event(struct mhi_controller *mhi_cntrl, + struct mhi_tre *event, + struct mhi_chan *mhi_chan) +{ + struct mhi_ring *buf_ring, *tre_ring; + u32 ev_code; + struct mhi_result result; + unsigned long flags = 0; + + ev_code = MHI_TRE_GET_EV_CODE(event); + buf_ring = &mhi_chan->buf_ring; + tre_ring = &mhi_chan->tre_ring; + + result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ? + -EOVERFLOW : 0; + + /* + * If it's a DB Event then we need to grab the lock + * with preemption disabled and as a write because we + * have to update db register and there are chances that + * another thread could be doing the same. + */ + if (ev_code >= MHI_EV_CC_OOB) + write_lock_irqsave(&mhi_chan->lock, flags); + else + read_lock_bh(&mhi_chan->lock); + + if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) + goto end_process_tx_event; + + switch (ev_code) { + case MHI_EV_CC_OVERFLOW: + case MHI_EV_CC_EOB: + case MHI_EV_CC_EOT: + { + dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event); + struct mhi_tre *local_rp, *ev_tre; + void *dev_rp; + struct mhi_buf_info *buf_info; + u16 xfer_len; + + /* Get the TRB this event points to */ + ev_tre = mhi_to_virtual(tre_ring, ptr); + + /* device rp after servicing the TREs */ + dev_rp = ev_tre + 1; + if (dev_rp >= (tre_ring->base + tre_ring->len)) + dev_rp = tre_ring->base; + + result.dir = mhi_chan->dir; + + /* local rp */ + local_rp = tre_ring->rp; + while (local_rp != dev_rp) { + buf_info = buf_ring->rp; + /* if it's last tre get len from the event */ + if (local_rp == ev_tre) + xfer_len = MHI_TRE_GET_EV_LEN(event); + else + xfer_len = buf_info->len; + + result.buf_addr = buf_info->cb_buf; + result.bytes_xferd = xfer_len; + mhi_del_ring_element(mhi_cntrl, buf_ring); + mhi_del_ring_element(mhi_cntrl, tre_ring); + local_rp = tre_ring->rp; + + /* notify client */ + mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result); + + if (mhi_chan->dir == DMA_TO_DEVICE) + atomic_dec(&mhi_cntrl->pending_pkts); + } + break; + } /* CC_EOT */ + case MHI_EV_CC_OOB: + case MHI_EV_CC_DB_MODE: + { + unsigned long flags; + + mhi_chan->db_cfg.db_mode = 1; + read_lock_irqsave(&mhi_cntrl->pm_lock, flags); + if (tre_ring->wp != tre_ring->rp && + MHI_DB_ACCESS_VALID(mhi_cntrl)) { + mhi_ring_chan_db(mhi_cntrl, mhi_chan); + } + read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags); + break; + } + case MHI_EV_CC_BAD_TRE: + default: + dev_err(mhi_cntrl->dev, "Unknown event 0x%x\n", ev_code); + break; + } /* switch(MHI_EV_READ_CODE(EV_TRB_CODE,event)) */ + +end_process_tx_event: + if (ev_code >= MHI_EV_CC_OOB) + write_unlock_irqrestore(&mhi_chan->lock, flags); + else + read_unlock_bh(&mhi_chan->lock); + + return 0; +} + +static int parse_rsc_event(struct mhi_controller *mhi_cntrl, + struct mhi_tre *event, + struct mhi_chan *mhi_chan) +{ + struct mhi_ring *buf_ring, *tre_ring; + struct mhi_buf_info *buf_info; + struct mhi_result result; + int ev_code; + u32 cookie; /* offset to local descriptor */ + u16 xfer_len; + + buf_ring = &mhi_chan->buf_ring; + tre_ring = &mhi_chan->tre_ring; + + ev_code = MHI_TRE_GET_EV_CODE(event); + cookie = MHI_TRE_GET_EV_COOKIE(event); + xfer_len = MHI_TRE_GET_EV_LEN(event); + + /* Received out of bound cookie */ + WARN_ON(cookie >= 
buf_ring->len); + + buf_info = buf_ring->base + cookie; + + result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ? + -EOVERFLOW : 0; + result.bytes_xferd = xfer_len; + result.buf_addr = buf_info->cb_buf; + result.dir = mhi_chan->dir; + + read_lock_bh(&mhi_chan->lock); + + if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) + goto end_process_rsc_event; + + WARN_ON(!buf_info->used); + + /* notify the client */ + mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result); + + /* + * Note: We're arbitrarily incrementing RP even though, completion + * packet we processed might not be the same one, reason we can do this + * is because device guaranteed to cache descriptors in order it + * receive, so even though completion event is different we can re-use + * all descriptors in between. + * Example: + * Transfer Ring has descriptors: A, B, C, D + * Last descriptor host queue is D (WP) and first descriptor + * host queue is A (RP). + * The completion event we just serviced is descriptor C. + * Then we can safely queue descriptors to replace A, B, and C + * even though host did not receive any completions. + */ + mhi_del_ring_element(mhi_cntrl, tre_ring); + buf_info->used = false; + +end_process_rsc_event: + read_unlock_bh(&mhi_chan->lock); + + return 0; +} + +static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl, + struct mhi_tre *tre) +{ + dma_addr_t ptr = MHI_TRE_GET_EV_PTR(tre); + struct mhi_cmd *cmd_ring = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING]; + struct mhi_ring *mhi_ring = &cmd_ring->ring; + struct mhi_tre *cmd_pkt; + struct mhi_chan *mhi_chan; + u32 chan; + + cmd_pkt = mhi_to_virtual(mhi_ring, ptr); + + chan = MHI_TRE_GET_CMD_CHID(cmd_pkt); + mhi_chan = &mhi_cntrl->mhi_chan[chan]; + write_lock_bh(&mhi_chan->lock); + mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre); + complete(&mhi_chan->completion); + write_unlock_bh(&mhi_chan->lock); + + mhi_del_ring_element(mhi_cntrl, mhi_ring); +} + +int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl, + struct mhi_event *mhi_event, + u32 event_quota) +{ + struct mhi_tre *dev_rp, *local_rp; + struct mhi_ring *ev_ring = &mhi_event->ring; + struct mhi_event_ctxt *er_ctxt = + &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index]; + int count = 0; + u32 chan; + struct mhi_chan *mhi_chan; + + /* + * This is a quick check to avoid unnecessary event processing + * in case MHI is already in error state, but it's still possible + * to transition to error state while processing events + */ + if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state))) + return -EIO; + + dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp); + local_rp = ev_ring->rp; + + while (dev_rp != local_rp) { + enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp); + + switch (type) { + case MHI_PKT_TYPE_BW_REQ_EVENT: + { + struct mhi_link_info *link_info; + + link_info = &mhi_cntrl->mhi_link_info; + write_lock_irq(&mhi_cntrl->pm_lock); + link_info->target_link_speed = + MHI_TRE_GET_EV_LINKSPEED(local_rp); + link_info->target_link_width = + MHI_TRE_GET_EV_LINKWIDTH(local_rp); + write_unlock_irq(&mhi_cntrl->pm_lock); + dev_dbg(mhi_cntrl->dev, "Received BW_REQ event\n"); + mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data, + MHI_CB_BW_REQ); + break; + } + case MHI_PKT_TYPE_STATE_CHANGE_EVENT: + { + enum mhi_state new_state; + + new_state = MHI_TRE_GET_EV_STATE(local_rp); + + dev_dbg(mhi_cntrl->dev, + "State change event to state: %s\n", + TO_MHI_STATE_STR(new_state)); + + switch (new_state) { + case MHI_STATE_M0: + mhi_pm_m0_transition(mhi_cntrl); + break; + case MHI_STATE_M1: + 
mhi_pm_m1_transition(mhi_cntrl); + break; + case MHI_STATE_M3: + mhi_pm_m3_transition(mhi_cntrl); + break; + case MHI_STATE_SYS_ERR: + { + enum mhi_pm_state new_state; + + dev_dbg(mhi_cntrl->dev, + "System error detected\n"); + write_lock_irq(&mhi_cntrl->pm_lock); + new_state = mhi_tryset_pm_state(mhi_cntrl, + MHI_PM_SYS_ERR_DETECT); + write_unlock_irq(&mhi_cntrl->pm_lock); + if (new_state == MHI_PM_SYS_ERR_DETECT) + schedule_work(&mhi_cntrl->syserr_worker); + break; + } + default: + dev_err(mhi_cntrl->dev, "Invalid state: %s\n", + TO_MHI_STATE_STR(new_state)); + } + + break; + } + case MHI_PKT_TYPE_CMD_COMPLETION_EVENT: + mhi_process_cmd_completion(mhi_cntrl, local_rp); + break; + case MHI_PKT_TYPE_EE_EVENT: + { + enum dev_st_transition st = DEV_ST_TRANSITION_MAX; + enum mhi_ee_type event = MHI_TRE_GET_EV_EXECENV(local_rp); + + dev_dbg(mhi_cntrl->dev, "Received EE event: %s\n", + TO_MHI_EXEC_STR(event)); + switch (event) { + case MHI_EE_SBL: + st = DEV_ST_TRANSITION_SBL; + break; + case MHI_EE_WFW: + case MHI_EE_AMSS: + st = DEV_ST_TRANSITION_MISSION_MODE; + break; + case MHI_EE_RDDM: + mhi_cntrl->status_cb(mhi_cntrl, + mhi_cntrl->priv_data, + MHI_CB_EE_RDDM); + write_lock_irq(&mhi_cntrl->pm_lock); + mhi_cntrl->ee = event; + write_unlock_irq(&mhi_cntrl->pm_lock); + wake_up_all(&mhi_cntrl->state_event); + break; + default: + dev_err(mhi_cntrl->dev, + "Unhandled EE event: 0x%x\n", type); + } + if (st != DEV_ST_TRANSITION_MAX) + mhi_queue_state_transition(mhi_cntrl, st); + + break; + } + case MHI_PKT_TYPE_TX_EVENT: + chan = MHI_TRE_GET_EV_CHID(local_rp); + mhi_chan = &mhi_cntrl->mhi_chan[chan]; + parse_xfer_event(mhi_cntrl, local_rp, mhi_chan); + event_quota--; + break; + default: + dev_err(mhi_cntrl->dev, "Unhandled event type: %d\n", + type); + break; + } + + mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring); + local_rp = ev_ring->rp; + dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp); + count++; + } + + read_lock_bh(&mhi_cntrl->pm_lock); + if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) + mhi_ring_er_db(mhi_event); + read_unlock_bh(&mhi_cntrl->pm_lock); + + return count; +} + +int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl, + struct mhi_event *mhi_event, + u32 event_quota) +{ + struct mhi_tre *dev_rp, *local_rp; + struct mhi_ring *ev_ring = &mhi_event->ring; + struct mhi_event_ctxt *er_ctxt = + &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index]; + int count = 0; + u32 chan; + struct mhi_chan *mhi_chan; + + if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state))) + return -EIO; + + dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp); + local_rp = ev_ring->rp; + + while (dev_rp != local_rp && event_quota > 0) { + enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp); + + chan = MHI_TRE_GET_EV_CHID(local_rp); + mhi_chan = &mhi_cntrl->mhi_chan[chan]; + + if (likely(type == MHI_PKT_TYPE_TX_EVENT)) { + parse_xfer_event(mhi_cntrl, local_rp, mhi_chan); + event_quota--; + } else if (type == MHI_PKT_TYPE_RSC_TX_EVENT) { + parse_rsc_event(mhi_cntrl, local_rp, mhi_chan); + event_quota--; + } + + mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring); + local_rp = ev_ring->rp; + dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp); + count++; + } + read_lock_bh(&mhi_cntrl->pm_lock); + if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) + mhi_ring_er_db(mhi_event); + read_unlock_bh(&mhi_cntrl->pm_lock); + + return count; +} + +void mhi_ev_task(unsigned long data) +{ + struct mhi_event *mhi_event = (struct mhi_event *)data; + struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl; + + /* process all pending events 
*/ + spin_lock_bh(&mhi_event->lock); + mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX); + spin_unlock_bh(&mhi_event->lock); +} + +void mhi_ctrl_ev_task(unsigned long data) +{ + struct mhi_event *mhi_event = (struct mhi_event *)data; + struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl; + enum mhi_state state; + enum mhi_pm_state pm_state = 0; + int ret; + + /* + * We can check PM state w/o a lock here because there is no way + * PM state can change from reg access valid to no access while this + * thread being executed. + */ + if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) { + /* + * We may have a pending event but not allowed to + * process it since we are probably in a suspended state, + * so trigger a resume. + */ + mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data); + mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data); + + return; + } + + /* Process ctrl events events */ + ret = mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX); + + /* + * We received an IRQ but no events to process, maybe device went to + * SYS_ERR state? Check the state to confirm. + */ + if (!ret) { + write_lock_irq(&mhi_cntrl->pm_lock); + state = mhi_get_mhi_state(mhi_cntrl); + if (state == MHI_STATE_SYS_ERR) { + dev_dbg(mhi_cntrl->dev, "System error detected\n"); + pm_state = mhi_tryset_pm_state(mhi_cntrl, + MHI_PM_SYS_ERR_DETECT); + } + write_unlock_irq(&mhi_cntrl->pm_lock); + if (pm_state == MHI_PM_SYS_ERR_DETECT) + schedule_work(&mhi_cntrl->syserr_worker); + } +} diff --git a/include/linux/mhi.h b/include/linux/mhi.h index 1b018e0d04f4..3e8f797c4c51 100644 --- a/include/linux/mhi.h +++ b/include/linux/mhi.h @@ -31,6 +31,7 @@ struct mhi_buf_info; * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover) * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state + * @MHI_CB_BW_REQ: Received a bandwidth switch request from device */ enum mhi_callback { MHI_CB_IDLE, @@ -41,6 +42,7 @@ enum mhi_callback { MHI_CB_EE_MISSION_MODE, MHI_CB_SYS_ERROR, MHI_CB_FATAL_ERROR, + MHI_CB_BW_REQ, }; /** @@ -94,6 +96,16 @@ struct image_info { u32 entries; }; +/** + * struct mhi_link_info - BW requirement + * target_link_speed - Link speed as defined by TLS bits in LinkControl reg + * target_link_width - Link width as defined by NLW bits in LinkStatus reg + */ +struct mhi_link_info { + unsigned int target_link_speed; + unsigned int target_link_width; +}; + /** * enum mhi_ee_type - Execution environment types * @MHI_EE_PBL: Primary Bootloader @@ -321,6 +333,7 @@ struct mhi_controller_config { * @pending_pkts: Pending packets for the controller * @transition_list: List of MHI state transitions * @wlock: Lock for protecting device wakeup + * @mhi_link_info: Device bandwidth info * @M0: M0 state counter for debugging * @M2: M2 state counter for debugging * @M3: M3 state counter for debugging @@ -392,6 +405,8 @@ struct mhi_controller { struct list_head transition_list; spinlock_t transition_lock; spinlock_t wlock; + struct mhi_link_info mhi_link_info; + u32 M0, M2, M3, M3_FAST; struct work_struct st_worker; struct work_struct fw_worker; From patchwork Thu Jan 23 11:18:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347387 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 54FA917EA for ; Thu, 23 Jan 2020 
11:19:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1FE6524655 for ; Thu, 23 Jan 2020 11:19:27 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="MStoc1/C" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729242AbgAWLTZ (ORCPT ); Thu, 23 Jan 2020 06:19:25 -0500 Received: from mail-pg1-f193.google.com ([209.85.215.193]:40398 "EHLO mail-pg1-f193.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729221AbgAWLTZ (ORCPT ); Thu, 23 Jan 2020 06:19:25 -0500 Received: by mail-pg1-f193.google.com with SMTP id k25so1219921pgt.7 for ; Thu, 23 Jan 2020 03:19:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=MofCp19EL7Yl04KGdwi8OKkvHPdsHEWl2HDv1LowyGw=; b=MStoc1/CfRpJ5ndAfNSC/LrzAdKn+0jg1uoZfdkHrUr6ybpVoPMQjqcJNTMTL7K1ta Yekq0EAXJX3Vo2HA0bEAwU1slQQHhZBAM96hYqhnY+tAYE5nXUmKi+dPyHZQbnHbQP5t S6Xn5Y5d/u5s22UoUcOpqfxljYcutrxKrTx1AJfpFfBLrjY2eBAvuu+7guF50vn7loEB SV4dorp9NDuqxubQdVyHpFDgpe/tonSYpB1IZT8QAbX6i1dTHdj+aZx5MZ/i3aRmHxF2 NFC8ojDXdBm2amjMloJoVyJfBZpgz4dEwAnzs8b3HCUI8zBKpWBjbagxJWTXuwqjhDqq mcsw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=MofCp19EL7Yl04KGdwi8OKkvHPdsHEWl2HDv1LowyGw=; b=PnNnAJLFkOf1Mnu6gxv19TMPkoW6PBZQAEzb/RNnCavgstLPf4YZhfRpeK4z6wl7a9 nanBc23UOHQ6YiP1U3l43U6zaj8AOLdKu3x6sWuxbzvsMXNpgnnE3xUiUdGx0KGeVrY2 bUF4Dima1C98r2aNNH+g3SMicJg6dOsv9qItTr1miky2ltyW1FR6FXeUod89fnWlfHHu 7JLdVGBxL/zQgqPyeqqyGmzsk51XgR+d7AYLSQM73mRdYj7bSkwlf2BlsNb4eaxlSSiS ZGYH7guRtU57yLhDpvcMuiIPdEjhifRLrp3ptVQk+xZaMRsoMQHDN218M2huCtw+clFH avqw== X-Gm-Message-State: APjAAAXrV35+fc/OJLxiO6TectdMCz/x6xuXt9WGQrpYgPCr9Vk7MRsU EVY6QpbqyDSR96jD6RwUee1k X-Google-Smtp-Source: APXvYqwRqk1mrKicTd1ocLC2QNtYCRMNoer0Xqz2QSebAtcke4nOi6n/JC+sMjvKyCL2vfF48lX4yg== X-Received: by 2002:a62:18f:: with SMTP id 137mr6953186pfb.84.1579778364015; Thu, 23 Jan 2020 03:19:24 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:23 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 11/16] bus: mhi: core: Add support for data transfer Date: Thu, 23 Jan 2020 16:48:31 +0530 Message-Id: <20200123111836.7414-12-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add support for transferring data between external modem and host processor using MHI protocol. 
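A rough client-side sketch of the new interface follows; my_dl_cb, my_queue_one_rx, MY_RX_LEN and the partial mhi_driver definition are illustrative assumptions, not code from this patch. It queues one downlink buffer with mhi_queue_transfer() and re-arms it from the dl_xfer_cb completion callback once the channel is running.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/dma-direction.h>
#include <linux/mhi.h>

#define MY_RX_LEN 4096	/* assumed receive buffer size for this sketch */

static void my_dl_cb(struct mhi_device *mhi_dev, struct mhi_result *result)
{
	if (result->transaction_status)
		return;		/* drop the buffer in this simplified sketch */

	/* buf_addr is the cb_buf handed over when the buffer was queued */
	pr_info("received %zu bytes\n", result->bytes_xferd);

	/* Re-arm the downlink ring with the same buffer */
	mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, result->buf_addr,
			   MY_RX_LEN, MHI_EOT);
}

/* Queue one receive buffer; the completion is delivered to my_dl_cb()
 * through the driver's dl_xfer_cb hook. */
static int my_queue_one_rx(struct mhi_device *mhi_dev)
{
	void *buf = kmalloc(MY_RX_LEN, GFP_KERNEL);
	int ret;

	if (!buf)
		return -ENOMEM;

	ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, buf, MY_RX_LEN,
				 MHI_EOT);
	if (ret)
		kfree(buf);

	return ret;
}

static struct mhi_driver my_mhi_driver = {
	/* probe/remove/id_table omitted from the sketch */
	.dl_xfer_cb = my_dl_cb,
};
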
This is based on the patch submitted by Sujeev Dias: https://lkml.org/lkml/2018/7/9/988 Signed-off-by: Sujeev Dias Signed-off-by: Siddartha Mohanadoss [mani: splitted the data transfer patch and cleaned up for upstream] Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/init.c | 157 ++++++- drivers/bus/mhi/core/internal.h | 33 ++ drivers/bus/mhi/core/main.c | 777 +++++++++++++++++++++++++++++++- drivers/bus/mhi/core/pm.c | 40 ++ include/linux/mhi.h | 55 +++ 5 files changed, 1053 insertions(+), 9 deletions(-) diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index c946693bdae4..40dcf8353f6f 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -483,6 +483,68 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl) return 0; } +void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan) +{ + struct mhi_ring *buf_ring; + struct mhi_ring *tre_ring; + struct mhi_chan_ctxt *chan_ctxt; + + buf_ring = &mhi_chan->buf_ring; + tre_ring = &mhi_chan->tre_ring; + chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan]; + + mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size, + tre_ring->pre_aligned, tre_ring->dma_handle); + vfree(buf_ring->base); + + buf_ring->base = tre_ring->base = NULL; + chan_ctxt->rbase = 0; +} + +int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan) +{ + struct mhi_ring *buf_ring; + struct mhi_ring *tre_ring; + struct mhi_chan_ctxt *chan_ctxt; + int ret; + + buf_ring = &mhi_chan->buf_ring; + tre_ring = &mhi_chan->tre_ring; + tre_ring->el_size = sizeof(struct mhi_tre); + tre_ring->len = tre_ring->el_size * tre_ring->elements; + chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan]; + ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len); + if (ret) + return -ENOMEM; + + buf_ring->el_size = sizeof(struct mhi_buf_info); + buf_ring->len = buf_ring->el_size * buf_ring->elements; + buf_ring->base = vzalloc(buf_ring->len); + + if (!buf_ring->base) { + mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size, + tre_ring->pre_aligned, tre_ring->dma_handle); + return -ENOMEM; + } + + chan_ctxt->chstate = MHI_CH_STATE_ENABLED; + chan_ctxt->rbase = tre_ring->iommu_base; + chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase; + chan_ctxt->rlen = tre_ring->len; + tre_ring->ctxt_wp = &chan_ctxt->wp; + + tre_ring->rp = tre_ring->wp = tre_ring->base; + buf_ring->rp = buf_ring->wp = buf_ring->base; + mhi_chan->db_cfg.db_mode = 1; + + /* Update to all cores */ + smp_wmb(); + + return 0; +} + static int parse_ev_cfg(struct mhi_controller *mhi_cntrl, struct mhi_controller_config *config) { @@ -638,6 +700,31 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl, mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg; mhi_chan->xfer_type = ch_cfg->data_type; + switch (mhi_chan->xfer_type) { + case MHI_BUF_RAW: + mhi_chan->gen_tre = mhi_gen_tre; + mhi_chan->queue_xfer = mhi_queue_buf; + break; + case MHI_BUF_SKB: + mhi_chan->queue_xfer = mhi_queue_skb; + break; + case MHI_BUF_SCLIST: + mhi_chan->gen_tre = mhi_gen_tre; + mhi_chan->queue_xfer = mhi_queue_sclist; + break; + case MHI_BUF_NOP: + mhi_chan->queue_xfer = mhi_queue_nop; + break; + case MHI_BUF_DMA: + case MHI_BUF_RSC_DMA: + mhi_chan->queue_xfer = mhi_queue_dma; + break; + default: + dev_err(mhi_cntrl->dev, + "Channel datatype not supported\n"); + goto error_chan_cfg; + } + mhi_chan->lpm_notify = ch_cfg->lpm_notify; mhi_chan->offload_ch = ch_cfg->offload_channel; mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch; @@ -667,6 
+754,13 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl, goto error_chan_cfg; } + /* + * If MHI host pre-allocates buffers then client drivers + * cannot queue + */ + if (mhi_chan->pre_alloc) + mhi_chan->queue_xfer = mhi_queue_nop; + if (!mhi_chan->offload_ch) { mhi_chan->db_cfg.brstmode = ch_cfg->doorbell; if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) { @@ -796,6 +890,14 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl, rwlock_init(&mhi_chan->lock); } + if (mhi_cntrl->bounce_buf) { + mhi_cntrl->map_single = mhi_map_single_use_bb; + mhi_cntrl->unmap_single = mhi_unmap_single_use_bb; + } else { + mhi_cntrl->map_single = mhi_map_single_no_bb; + mhi_cntrl->unmap_single = mhi_unmap_single_no_bb; + } + /* Register controller with MHI bus */ mhi_dev = mhi_alloc_device(mhi_cntrl); if (IS_ERR(mhi_dev)) { @@ -961,6 +1063,14 @@ static int mhi_driver_probe(struct device *dev) struct mhi_event *mhi_event; struct mhi_chan *ul_chan = mhi_dev->ul_chan; struct mhi_chan *dl_chan = mhi_dev->dl_chan; + int ret; + + /* Bring device out of LPM */ + ret = mhi_device_get_sync(mhi_dev); + if (ret) + return ret; + + ret = -EINVAL; if (ul_chan) { /* @@ -968,13 +1078,18 @@ static int mhi_driver_probe(struct device *dev) * be provided */ if (ul_chan->lpm_notify && !mhi_drv->status_cb) - return -EINVAL; + goto exit_probe; /* For non-offload channels then xfer_cb should be provided */ if (!ul_chan->offload_ch && !mhi_drv->ul_xfer_cb) - return -EINVAL; + goto exit_probe; ul_chan->xfer_cb = mhi_drv->ul_xfer_cb; + if (ul_chan->auto_start) { + ret = mhi_prepare_channel(mhi_cntrl, ul_chan); + if (ret) + goto exit_probe; + } } if (dl_chan) { @@ -983,11 +1098,11 @@ static int mhi_driver_probe(struct device *dev) * be provided */ if (dl_chan->lpm_notify && !mhi_drv->status_cb) - return -EINVAL; + goto exit_probe; /* For non-offload channels then xfer_cb should be provided */ if (!dl_chan->offload_ch && !mhi_drv->dl_xfer_cb) - return -EINVAL; + goto exit_probe; mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index]; @@ -997,19 +1112,36 @@ static int mhi_driver_probe(struct device *dev) * notify pending data */ if (mhi_event->cl_manage && !mhi_drv->status_cb) - return -EINVAL; + goto exit_probe; dl_chan->xfer_cb = mhi_drv->dl_xfer_cb; } /* Call the user provided probe function */ - return mhi_drv->probe(mhi_dev, mhi_dev->id); + ret = mhi_drv->probe(mhi_dev, mhi_dev->id); + if (ret) + goto exit_probe; + + if (dl_chan && dl_chan->auto_start) + mhi_prepare_channel(mhi_cntrl, dl_chan); + + mhi_device_put(mhi_dev); + + return ret; + +exit_probe: + mhi_unprepare_from_transfer(mhi_dev); + + mhi_device_put(mhi_dev); + + return ret; } static int mhi_driver_remove(struct device *dev) { struct mhi_device *mhi_dev = to_mhi_device(dev); struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver); + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; struct mhi_chan *mhi_chan; enum mhi_ch_state ch_state[] = { MHI_CH_STATE_DISABLED, @@ -1041,6 +1173,10 @@ static int mhi_driver_remove(struct device *dev) mhi_chan->ch_state = MHI_CH_STATE_SUSPENDED; write_unlock_irq(&mhi_chan->lock); + /* Reset the non-offload channel */ + if (!mhi_chan->offload_ch) + mhi_reset_chan(mhi_cntrl, mhi_chan); + mutex_unlock(&mhi_chan->mutex); } @@ -1055,11 +1191,20 @@ static int mhi_driver_remove(struct device *dev) mutex_lock(&mhi_chan->mutex); + if (ch_state[dir] == MHI_CH_STATE_ENABLED && + !mhi_chan->offload_ch) + mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan); + mhi_chan->ch_state = MHI_CH_STATE_DISABLED; mutex_unlock(&mhi_chan->mutex); 
} + read_lock_bh(&mhi_cntrl->pm_lock); + while (atomic_read(&mhi_dev->dev_wake)) + mhi_device_put(mhi_dev); + read_unlock_bh(&mhi_cntrl->pm_lock); + return 0; } diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h index 314d0909c372..93003295405b 100644 --- a/drivers/bus/mhi/core/internal.h +++ b/drivers/bus/mhi/core/internal.h @@ -599,6 +599,8 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl); void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl); int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl); int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl); +int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan, + enum mhi_cmd_type cmd); /* Register access methods */ void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg, @@ -630,6 +632,14 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl); void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl); void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl, struct image_info *img_info); +int mhi_prepare_channel(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan); +int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan); +void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan); +void mhi_reset_chan(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan); /* Memory allocation methods */ static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl, @@ -667,4 +677,27 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev); irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev); irqreturn_t mhi_intvec_handler(int irq_number, void *dev); +/* Queue transfer methods */ +int mhi_queue_dma(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags); +int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan, + void *buf, void *cb, size_t buf_len, enum mhi_flags flags); +int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags); +int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags); +int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags); +int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags); + +int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf_info); +int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf_info); +void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf_info); +void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf_info); + #endif /* _MHI_INT_H */ diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c index 8450c74b4525..89632e4920c1 100644 --- a/drivers/bus/mhi/core/main.c +++ b/drivers/bus/mhi/core/main.c @@ -144,11 +144,82 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl) return ret ? 
MHI_STATE_MAX : state; } +int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf_info) +{ + buf_info->p_addr = dma_map_single(mhi_cntrl->dev, buf_info->v_addr, + buf_info->len, buf_info->dir); + if (dma_mapping_error(mhi_cntrl->dev, buf_info->p_addr)) + return -ENOMEM; + + return 0; +} + +int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf_info) +{ + void *buf = mhi_alloc_coherent(mhi_cntrl, buf_info->len, + &buf_info->p_addr, GFP_ATOMIC); + + if (!buf) + return -ENOMEM; + + if (buf_info->dir == DMA_TO_DEVICE) + memcpy(buf, buf_info->v_addr, buf_info->len); + + buf_info->bb_addr = buf; + + return 0; +} + +void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf_info) +{ + dma_unmap_single(mhi_cntrl->dev, buf_info->p_addr, buf_info->len, + buf_info->dir); +} + +void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf_info) +{ + if (buf_info->dir == DMA_FROM_DEVICE) + memcpy(buf_info->v_addr, buf_info->bb_addr, buf_info->len); + + mhi_free_coherent(mhi_cntrl, buf_info->len, buf_info->bb_addr, + buf_info->p_addr); +} + +static int get_nr_avail_ring_elements(struct mhi_controller *mhi_cntrl, + struct mhi_ring *ring) +{ + int nr_el; + + if (ring->wp < ring->rp) { + nr_el = ((ring->rp - ring->wp) / ring->el_size) - 1; + } else { + nr_el = (ring->rp - ring->base) / ring->el_size; + nr_el += ((ring->base + ring->len - ring->wp) / + ring->el_size) - 1; + } + + return nr_el; +} + static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr) { return (addr - ring->iommu_base) + ring->base; } +static void mhi_add_ring_element(struct mhi_controller *mhi_cntrl, + struct mhi_ring *ring) +{ + ring->wp += ring->el_size; + if (ring->wp >= (ring->base + ring->len)) + ring->wp = ring->base; + /* smp update */ + smp_wmb(); +} + static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl, struct mhi_ring *ring) { @@ -417,23 +488,25 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl, /* Get the TRB this event points to */ ev_tre = mhi_to_virtual(tre_ring, ptr); - /* device rp after servicing the TREs */ dev_rp = ev_tre + 1; if (dev_rp >= (tre_ring->base + tre_ring->len)) dev_rp = tre_ring->base; result.dir = mhi_chan->dir; - /* local rp */ local_rp = tre_ring->rp; while (local_rp != dev_rp) { buf_info = buf_ring->rp; - /* if it's last tre get len from the event */ + /* If it's the last TRE, get length from the event */ if (local_rp == ev_tre) xfer_len = MHI_TRE_GET_EV_LEN(event); else xfer_len = buf_info->len; + /* Unmap if it's not pre-mapped by client */ + if (likely(!buf_info->pre_mapped)) + mhi_cntrl->unmap_single(mhi_cntrl, buf_info); + result.buf_addr = buf_info->cb_buf; result.bytes_xferd = xfer_len; mhi_del_ring_element(mhi_cntrl, buf_ring); @@ -445,6 +518,22 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl, if (mhi_chan->dir == DMA_TO_DEVICE) atomic_dec(&mhi_cntrl->pending_pkts); + + /* + * Recycle the buffer if buffer is pre-allocated, + * if there is an error, not much we can do apart + * from dropping the packet + */ + if (mhi_chan->pre_alloc) { + if (mhi_queue_buf(mhi_chan->mhi_dev, mhi_chan, + buf_info->cb_buf, + buf_info->len, MHI_EOT)) { + dev_err(mhi_cntrl->dev, + "Error recycling buffer for chan:%d\n", + mhi_chan->chan); + kfree(buf_info->cb_buf); + } + } } break; } /* CC_EOT */ @@ -808,3 +897,685 @@ void mhi_ctrl_ev_task(unsigned long data) schedule_work(&mhi_cntrl->syserr_worker); } } + +static bool mhi_is_ring_full(struct 
mhi_controller *mhi_cntrl, + struct mhi_ring *ring) +{ + void *tmp = ring->wp + ring->el_size; + + if (tmp >= (ring->base + ring->len)) + tmp = ring->base; + + return (tmp == ring->rp); +} + +/* TODO: Scatter-Gather transfer not implemented */ +int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags) +{ + return -EINVAL; +} + +/* + * MHI client drivers are not allowed to queue buffer for pre allocated + * channels. Hence, this function just returns -EINVAL. + */ +int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags) +{ + return -EINVAL; +} + +int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags) +{ + struct sk_buff *skb = buf; + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + struct mhi_ring *tre_ring = &mhi_chan->tre_ring; + struct mhi_ring *buf_ring = &mhi_chan->buf_ring; + struct mhi_buf_info *buf_info; + struct mhi_tre *mhi_tre; + int ret; + + if (mhi_is_ring_full(mhi_cntrl, tre_ring)) + return -ENOMEM; + + read_lock_bh(&mhi_cntrl->pm_lock); + if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) { + read_unlock_bh(&mhi_cntrl->pm_lock); + return -EIO; + } + + /* we're in M3 or transitioning to M3 */ + if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) { + mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data); + mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data); + } + + /* Toggle wake to exit out of M2 */ + mhi_cntrl->wake_toggle(mhi_cntrl); + + /* Generate the TRE */ + buf_info = buf_ring->wp; + + buf_info->v_addr = skb->data; + buf_info->cb_buf = skb; + buf_info->wp = tre_ring->wp; + buf_info->dir = mhi_chan->dir; + buf_info->len = len; + ret = mhi_cntrl->map_single(mhi_cntrl, buf_info); + if (ret) + goto map_error; + + mhi_tre = tre_ring->wp; + + mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr); + mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_info->len); + mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(1, 1, 0, 0); + + /* increment WP */ + mhi_add_ring_element(mhi_cntrl, tre_ring); + mhi_add_ring_element(mhi_cntrl, buf_ring); + + if (mhi_chan->dir == DMA_TO_DEVICE) + atomic_inc(&mhi_cntrl->pending_pkts); + + if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) { + read_lock_bh(&mhi_chan->lock); + mhi_ring_chan_db(mhi_cntrl, mhi_chan); + read_unlock_bh(&mhi_chan->lock); + } + + read_unlock_bh(&mhi_cntrl->pm_lock); + + return 0; + +map_error: + read_unlock_bh(&mhi_cntrl->pm_lock); + + return ret; +} + +int mhi_queue_dma(struct mhi_device *mhi_dev, + struct mhi_chan *mhi_chan, + void *buf, + size_t len, + enum mhi_flags mflags) +{ + struct mhi_buf *mhi_buf = buf; + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + struct mhi_ring *tre_ring = &mhi_chan->tre_ring; + struct mhi_ring *buf_ring = &mhi_chan->buf_ring; + struct mhi_buf_info *buf_info; + struct mhi_tre *mhi_tre; + + if (mhi_is_ring_full(mhi_cntrl, tre_ring)) + return -ENOMEM; + + read_lock_bh(&mhi_cntrl->pm_lock); + if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) { + dev_err(mhi_cntrl->dev, + "MHI is not in activate state, PM state: %s\n", + to_mhi_pm_state_str(mhi_cntrl->pm_state)); + read_unlock_bh(&mhi_cntrl->pm_lock); + + return -EIO; + } + + /* we're in M3 or transitioning to M3 */ + if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) { + mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data); + mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data); + } + + /* Toggle wake to exit out of M2 */ + 
mhi_cntrl->wake_toggle(mhi_cntrl); + + /* Generate the TRE */ + buf_info = buf_ring->wp; + WARN_ON(buf_info->used); + buf_info->p_addr = mhi_buf->dma_addr; + buf_info->pre_mapped = true; + buf_info->cb_buf = mhi_buf; + buf_info->wp = tre_ring->wp; + buf_info->dir = mhi_chan->dir; + buf_info->len = len; + + mhi_tre = tre_ring->wp; + + if (mhi_chan->xfer_type == MHI_BUF_RSC_DMA) { + buf_info->used = true; + mhi_tre->ptr = + MHI_RSCTRE_DATA_PTR(buf_info->p_addr, buf_info->len); + mhi_tre->dword[0] = + MHI_RSCTRE_DATA_DWORD0(buf_ring->wp - buf_ring->base); + mhi_tre->dword[1] = MHI_RSCTRE_DATA_DWORD1; + } else { + mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr); + mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_info->len); + mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(1, 1, 0, 0); + } + + /* increment WP */ + mhi_add_ring_element(mhi_cntrl, tre_ring); + mhi_add_ring_element(mhi_cntrl, buf_ring); + + if (mhi_chan->dir == DMA_TO_DEVICE) + atomic_inc(&mhi_cntrl->pending_pkts); + + if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) { + read_lock_bh(&mhi_chan->lock); + mhi_ring_chan_db(mhi_cntrl, mhi_chan); + read_unlock_bh(&mhi_chan->lock); + } + + read_unlock_bh(&mhi_cntrl->pm_lock); + + return 0; +} + +int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan, + void *buf, void *cb, size_t buf_len, enum mhi_flags flags) +{ + struct mhi_ring *buf_ring, *tre_ring; + struct mhi_tre *mhi_tre; + struct mhi_buf_info *buf_info; + int eot, eob, chain, bei; + int ret; + + buf_ring = &mhi_chan->buf_ring; + tre_ring = &mhi_chan->tre_ring; + + buf_info = buf_ring->wp; + buf_info->v_addr = buf; + buf_info->cb_buf = cb; + buf_info->wp = tre_ring->wp; + buf_info->dir = mhi_chan->dir; + buf_info->len = buf_len; + + ret = mhi_cntrl->map_single(mhi_cntrl, buf_info); + if (ret) + return ret; + + eob = !!(flags & MHI_EOB); + eot = !!(flags & MHI_EOT); + chain = !!(flags & MHI_CHAIN); + bei = !!(mhi_chan->intmod); + + mhi_tre = tre_ring->wp; + mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr); + mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_len); + mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain); + + /* increment WP */ + mhi_add_ring_element(mhi_cntrl, tre_ring); + mhi_add_ring_element(mhi_cntrl, buf_ring); + + return 0; +} + +int mhi_queue_transfer(struct mhi_device *mhi_dev, + enum dma_data_direction dir, void *buf, size_t len, + enum mhi_flags mflags) +{ + if (dir == DMA_TO_DEVICE) + return mhi_dev->ul_chan->queue_xfer(mhi_dev, mhi_dev->ul_chan, + buf, len, mflags); + else + return mhi_dev->dl_chan->queue_xfer(mhi_dev, mhi_dev->dl_chan, + buf, len, mflags); +} +EXPORT_SYMBOL_GPL(mhi_queue_transfer); + +int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan, + void *buf, size_t len, enum mhi_flags mflags) +{ + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + struct mhi_ring *tre_ring; + unsigned long flags; + int ret; + + /* + * this check here only as a guard, it's always + * possible mhi can enter error while executing rest of function, + * which is not fatal so we do not need to hold pm_lock + */ + if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) + return -EIO; + + tre_ring = &mhi_chan->tre_ring; + if (mhi_is_ring_full(mhi_cntrl, tre_ring)) + return -ENOMEM; + + ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf, len, mflags); + if (unlikely(ret)) + return ret; + + read_lock_irqsave(&mhi_cntrl->pm_lock, flags); + + /* we're in M3 or transitioning to M3 */ + if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) { + mhi_cntrl->runtime_get(mhi_cntrl, 
mhi_cntrl->priv_data); + mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data); + } + + /* Toggle wake to exit out of M2 */ + mhi_cntrl->wake_toggle(mhi_cntrl); + + if (mhi_chan->dir == DMA_TO_DEVICE) + atomic_inc(&mhi_cntrl->pending_pkts); + + if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) { + unsigned long flags; + + read_lock_irqsave(&mhi_chan->lock, flags); + mhi_ring_chan_db(mhi_cntrl, mhi_chan); + read_unlock_irqrestore(&mhi_chan->lock, flags); + } + + read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags); + + return 0; +} + +int mhi_send_cmd(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan, + enum mhi_cmd_type cmd) +{ + struct mhi_tre *cmd_tre = NULL; + struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING]; + struct mhi_ring *ring = &mhi_cmd->ring; + int chan = 0; + + if (mhi_chan) + chan = mhi_chan->chan; + + spin_lock_bh(&mhi_cmd->lock); + if (!get_nr_avail_ring_elements(mhi_cntrl, ring)) { + spin_unlock_bh(&mhi_cmd->lock); + return -ENOMEM; + } + + /* prepare the cmd tre */ + cmd_tre = ring->wp; + switch (cmd) { + case MHI_CMD_RESET_CHAN: + cmd_tre->ptr = MHI_TRE_CMD_RESET_PTR; + cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0; + cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan); + break; + case MHI_CMD_START_CHAN: + cmd_tre->ptr = MHI_TRE_CMD_START_PTR; + cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0; + cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan); + break; + default: + dev_err(mhi_cntrl->dev, "Command not supported\n"); + break; + } + + /* queue to hardware */ + mhi_add_ring_element(mhi_cntrl, ring); + read_lock_bh(&mhi_cntrl->pm_lock); + if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) + mhi_ring_cmd_db(mhi_cntrl, mhi_cmd); + read_unlock_bh(&mhi_cntrl->pm_lock); + spin_unlock_bh(&mhi_cmd->lock); + + return 0; +} + +static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan) +{ + int ret; + + dev_dbg(mhi_cntrl->dev, "Entered: unprepare channel:%d\n", + mhi_chan->chan); + + /* no more processing events for this channel */ + mutex_lock(&mhi_chan->mutex); + write_lock_irq(&mhi_chan->lock); + if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) { + write_unlock_irq(&mhi_chan->lock); + mutex_unlock(&mhi_chan->mutex); + return; + } + + mhi_chan->ch_state = MHI_CH_STATE_DISABLED; + write_unlock_irq(&mhi_chan->lock); + + reinit_completion(&mhi_chan->completion); + read_lock_bh(&mhi_cntrl->pm_lock); + if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) { + read_unlock_bh(&mhi_cntrl->pm_lock); + goto error_invalid_state; + } + + mhi_cntrl->wake_toggle(mhi_cntrl); + read_unlock_bh(&mhi_cntrl->pm_lock); + + mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data); + mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data); + ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN); + if (ret) + goto error_invalid_state; + + /* even if it fails we will still reset */ + ret = wait_for_completion_timeout(&mhi_chan->completion, + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) + dev_err(mhi_cntrl->dev, + "Failed to receive cmd completion, still resetting\n"); + +error_invalid_state: + if (!mhi_chan->offload_ch) { + mhi_reset_chan(mhi_cntrl, mhi_chan); + mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan); + } + dev_dbg(mhi_cntrl->dev, "chan:%d successfully resetted\n", + mhi_chan->chan); + mutex_unlock(&mhi_chan->mutex); +} + +int mhi_prepare_channel(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan) +{ + int ret = 0; + + dev_dbg(mhi_cntrl->dev, "Preparing channel: %d\n", + mhi_chan->chan); + + if 
(!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) { + dev_err(mhi_cntrl->dev, + "Current EE: %s Required EE Mask: 0x%x for chan: %s\n", + TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask, + mhi_chan->name); + return -ENOTCONN; + } + + mutex_lock(&mhi_chan->mutex); + + /* If channel is not in disable state, do not allow it to start */ + if (mhi_chan->ch_state != MHI_CH_STATE_DISABLED) { + ret = -EIO; + dev_dbg(mhi_cntrl->dev, + "channel: %d is not in disabled state\n", + mhi_chan->chan); + goto error_init_chan; + } + + /* Check of client manages channel context for offload channels */ + if (!mhi_chan->offload_ch) { + ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan); + if (ret) + goto error_init_chan; + } + + reinit_completion(&mhi_chan->completion); + read_lock_bh(&mhi_cntrl->pm_lock); + if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) { + read_unlock_bh(&mhi_cntrl->pm_lock); + ret = -EIO; + goto error_pm_state; + } + + mhi_cntrl->wake_toggle(mhi_cntrl); + read_unlock_bh(&mhi_cntrl->pm_lock); + mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data); + mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data); + + ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN); + if (ret) + goto error_pm_state; + + ret = wait_for_completion_timeout(&mhi_chan->completion, + msecs_to_jiffies(mhi_cntrl->timeout_ms)); + if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) { + ret = -EIO; + goto error_pm_state; + } + + write_lock_irq(&mhi_chan->lock); + mhi_chan->ch_state = MHI_CH_STATE_ENABLED; + write_unlock_irq(&mhi_chan->lock); + + /* Pre-allocate buffer for xfer ring */ + if (mhi_chan->pre_alloc) { + int nr_el = get_nr_avail_ring_elements(mhi_cntrl, + &mhi_chan->tre_ring); + size_t len = mhi_cntrl->buffer_len; + + while (nr_el--) { + void *buf; + + buf = kmalloc(len, GFP_KERNEL); + if (!buf) { + ret = -ENOMEM; + goto error_pre_alloc; + } + + /* Prepare transfer descriptors */ + ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf, + len, MHI_EOT); + if (ret) { + kfree(buf); + goto error_pre_alloc; + } + } + + read_lock_bh(&mhi_cntrl->pm_lock); + if (MHI_DB_ACCESS_VALID(mhi_cntrl)) { + read_lock_irq(&mhi_chan->lock); + mhi_ring_chan_db(mhi_cntrl, mhi_chan); + read_unlock_irq(&mhi_chan->lock); + } + read_unlock_bh(&mhi_cntrl->pm_lock); + } + + mutex_unlock(&mhi_chan->mutex); + + dev_dbg(mhi_cntrl->dev, "Chan: %d successfully moved to start state\n", + mhi_chan->chan); + + return 0; + +error_pm_state: + if (!mhi_chan->offload_ch) + mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan); + +error_init_chan: + mutex_unlock(&mhi_chan->mutex); + + return ret; + +error_pre_alloc: + mutex_unlock(&mhi_chan->mutex); + __mhi_unprepare_channel(mhi_cntrl, mhi_chan); + + return ret; +} + +static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl, + struct mhi_event *mhi_event, + struct mhi_event_ctxt *er_ctxt, + int chan) + +{ + struct mhi_tre *dev_rp, *local_rp; + struct mhi_ring *ev_ring; + unsigned long flags; + + dev_dbg(mhi_cntrl->dev, + "Marking all events for chan: %d as stale\n", chan); + + ev_ring = &mhi_event->ring; + + /* mark all stale events related to channel as STALE event */ + spin_lock_irqsave(&mhi_event->lock, flags); + dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp); + + local_rp = ev_ring->rp; + while (dev_rp != local_rp) { + if (MHI_TRE_GET_EV_TYPE(local_rp) == + MHI_PKT_TYPE_TX_EVENT && + chan == MHI_TRE_GET_EV_CHID(local_rp)) + local_rp->dword[1] = MHI_TRE_EV_DWORD1(chan, + MHI_PKT_TYPE_STALE_EVENT); + local_rp++; + if (local_rp == (ev_ring->base + ev_ring->len)) + local_rp = ev_ring->base; + } + + + 
dev_dbg(mhi_cntrl->dev, + "Finished marking events as stale events\n"); + spin_unlock_irqrestore(&mhi_event->lock, flags); +} + +static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan) +{ + struct mhi_ring *buf_ring, *tre_ring; + struct mhi_result result; + + /* Reset any pending buffers */ + buf_ring = &mhi_chan->buf_ring; + tre_ring = &mhi_chan->tre_ring; + result.transaction_status = -ENOTCONN; + result.bytes_xferd = 0; + while (tre_ring->rp != tre_ring->wp) { + struct mhi_buf_info *buf_info = buf_ring->rp; + + if (mhi_chan->dir == DMA_TO_DEVICE) + atomic_dec(&mhi_cntrl->pending_pkts); + + if (!buf_info->pre_mapped) + mhi_cntrl->unmap_single(mhi_cntrl, buf_info); + + mhi_del_ring_element(mhi_cntrl, buf_ring); + mhi_del_ring_element(mhi_cntrl, tre_ring); + + if (mhi_chan->pre_alloc) { + kfree(buf_info->cb_buf); + } else { + result.buf_addr = buf_info->cb_buf; + mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result); + } + } +} + +static void mhi_reset_rsc_chan(struct mhi_controller *mhi_cntrl, + struct mhi_chan *mhi_chan) +{ + struct mhi_ring *buf_ring, *tre_ring; + struct mhi_result result; + struct mhi_buf_info *buf_info; + + /* Reset any pending buffers */ + buf_ring = &mhi_chan->buf_ring; + tre_ring = &mhi_chan->tre_ring; + result.transaction_status = -ENOTCONN; + result.bytes_xferd = 0; + + buf_info = buf_ring->base; + for (; (void *)buf_info < buf_ring->base + buf_ring->len; buf_info++) { + if (!buf_info->used) + continue; + + result.buf_addr = buf_info->cb_buf; + mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result); + buf_info->used = false; + } +} + +void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan) +{ + + struct mhi_event *mhi_event; + struct mhi_event_ctxt *er_ctxt; + int chan = mhi_chan->chan; + + /* Nothing to reset, client doesn't queue buffers */ + if (mhi_chan->offload_ch) + return; + + read_lock_bh(&mhi_cntrl->pm_lock); + mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index]; + er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_chan->er_index]; + + mhi_mark_stale_events(mhi_cntrl, mhi_event, er_ctxt, chan); + + if (mhi_chan->xfer_type == MHI_BUF_RSC_DMA) + mhi_reset_rsc_chan(mhi_cntrl, mhi_chan); + else + mhi_reset_data_chan(mhi_cntrl, mhi_chan); + + read_unlock_bh(&mhi_cntrl->pm_lock); +} + +/* Move channel to start state */ +int mhi_prepare_for_transfer(struct mhi_device *mhi_dev) +{ + int ret, dir; + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + struct mhi_chan *mhi_chan; + + for (dir = 0; dir < 2; dir++) { + mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan; + + if (!mhi_chan) + continue; + + ret = mhi_prepare_channel(mhi_cntrl, mhi_chan); + if (ret) + goto error_open_chan; + } + + return 0; + +error_open_chan: + for (--dir; dir >= 0; dir--) { + mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan; + + if (!mhi_chan) + continue; + + __mhi_unprepare_channel(mhi_cntrl, mhi_chan); + } + + return ret; +} +EXPORT_SYMBOL_GPL(mhi_prepare_for_transfer); + +void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev) +{ + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + struct mhi_chan *mhi_chan; + int dir; + + for (dir = 0; dir < 2; dir++) { + mhi_chan = dir ? 
mhi_dev->ul_chan : mhi_dev->dl_chan; + + if (!mhi_chan) + continue; + + __mhi_unprepare_channel(mhi_cntrl, mhi_chan); + } +} +EXPORT_SYMBOL_GPL(mhi_unprepare_from_transfer); + +int mhi_poll(struct mhi_device *mhi_dev, u32 budget) +{ + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + struct mhi_chan *mhi_chan = mhi_dev->dl_chan; + struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index]; + int ret; + + spin_lock_bh(&mhi_event->lock); + ret = mhi_event->process_event(mhi_cntrl, mhi_event, budget); + spin_unlock_bh(&mhi_event->lock); + + return ret; +} +EXPORT_SYMBOL_GPL(mhi_poll); diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c index 0bdc667830f0..a0ec76c56c6b 100644 --- a/drivers/bus/mhi/core/pm.c +++ b/drivers/bus/mhi/core/pm.c @@ -932,3 +932,43 @@ int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl) return ret; } EXPORT_SYMBOL_GPL(mhi_force_rddm_mode); + +void mhi_device_get(struct mhi_device *mhi_dev) +{ + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + + atomic_inc(&mhi_dev->dev_wake); + read_lock_bh(&mhi_cntrl->pm_lock); + mhi_cntrl->wake_get(mhi_cntrl, true); + read_unlock_bh(&mhi_cntrl->pm_lock); +} +EXPORT_SYMBOL_GPL(mhi_device_get); + +int mhi_device_get_sync(struct mhi_device *mhi_dev) +{ + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + int ret; + + ret = __mhi_device_get_sync(mhi_cntrl); + if (!ret) + atomic_inc(&mhi_dev->dev_wake); + + return ret; +} +EXPORT_SYMBOL_GPL(mhi_device_get_sync); + +void mhi_device_put(struct mhi_device *mhi_dev) +{ + struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; + + atomic_dec(&mhi_dev->dev_wake); + read_lock_bh(&mhi_cntrl->pm_lock); + if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) { + mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data); + mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data); + } + + mhi_cntrl->wake_put(mhi_cntrl, false); + read_unlock_bh(&mhi_cntrl->pm_lock); +} +EXPORT_SYMBOL_GPL(mhi_device_put); diff --git a/include/linux/mhi.h b/include/linux/mhi.h index 3e8f797c4c51..33ac04ff2ab4 100644 --- a/include/linux/mhi.h +++ b/include/linux/mhi.h @@ -351,6 +351,8 @@ struct mhi_controller_config { * @runtimet_put: CB function to decrement pm usage * @lpm_disable: CB function to request disable link level low power modes * @lpm_enable: CB function to request enable link level low power modes again + * @map_single: CB function to create TRE buffer + * @unmap_single: CB function to destroy TRE buffer * @bounce_buf: Use of bounce buffer * @buffer_len: Bounce buffer length * @priv_data: Points to bus master's private data @@ -423,6 +425,10 @@ struct mhi_controller { void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv); void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv); void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv); + int (*map_single)(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf); + void (*unmap_single)(struct mhi_controller *mhi_cntrl, + struct mhi_buf_info *buf); bool bounce_buf; size_t buffer_len; @@ -622,4 +628,53 @@ int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl); */ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl); +/** + * mhi_device_get - Disable device low power mode + * @mhi_dev: Device associated with the channel + */ +void mhi_device_get(struct mhi_device *mhi_dev); + +/** + * mhi_device_get_sync - Disable device low power mode. 
Synchronously + * take the controller out of suspended state + * @mhi_dev: Device associated with the channel + */ +int mhi_device_get_sync(struct mhi_device *mhi_dev); + +/** + * mhi_device_put - Re-enable device low power mode + * @mhi_dev: Device associated with the channel + */ +void mhi_device_put(struct mhi_device *mhi_dev); + +/** + * mhi_prepare_for_transfer - Setup channel for data transfer + * @mhi_dev: Device associated with the channels + */ +int mhi_prepare_for_transfer(struct mhi_device *mhi_dev); + +/** + * mhi_unprepare_from_transfer - Unprepare the channels + * @mhi_dev: Device associated with the channels + */ +void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev); + +/** + * mhi_poll - Poll for any available data in DL direction + * @mhi_dev: Device associated with the channels + * @budget: # of events to process + */ +int mhi_poll(struct mhi_device *mhi_dev, u32 budget); + +/** + * mhi_queue_transfer - Send or receive data from client device over MHI channel + * @mhi_dev: Device associated with the channels + * @dir: DMA direction for the channel + * @buf: Buffer for holding the data + * @len: Buffer length + * @mflags: MHI transfer flags used for the transfer + */ +int mhi_queue_transfer(struct mhi_device *mhi_dev, enum dma_data_direction dir, + void *buf, size_t len, enum mhi_flags mflags); + #endif /* _MHI_H_ */ From patchwork Thu Jan 23 11:18:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347397 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 214DD1580 for ; Thu, 23 Jan 2020 11:19:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 002FB24685 for ; Thu, 23 Jan 2020 11:19:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="eqJjWbud" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729268AbgAWLT2 (ORCPT ); Thu, 23 Jan 2020 06:19:28 -0500 Received: from mail-pg1-f195.google.com ([209.85.215.195]:34760 "EHLO mail-pg1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729259AbgAWLT2 (ORCPT ); Thu, 23 Jan 2020 06:19:28 -0500 Received: by mail-pg1-f195.google.com with SMTP id r11so1239821pgf.1 for ; Thu, 23 Jan 2020 03:19:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=82scBKRueLOczEUnCgg4pJWq6Va+L/4UJRbtpRC9qQE=; b=eqJjWbud8dKdn5D9SgbGh+3pdo31T7t8bpEY96WOoVx7ndPlMxdChiivMaP+ONmU1v P1HXlZVp254ANRCTIC3rMWw+n5+ADe3dW5j4KcuSZt5+3sWTli/F5HIqajelYq77qEjc DB9OY/JCq8jPELWqw6CJDbmv/Z8/DOmru0zOvptNWSAywQ4cEiBexjXwH8IITWiqGoZl lXrCehaWToId9gzgmiigcVNXxGq+8UQ5V1fQCVM/zHPDbPzHr44TH5gABdH0nLGWO5Ya 9HKCHUPWsJFPihPYkb+BY+c+4a7OzJE5k5Fs5RAo9ER6RmzJ6GfxoblR+CMErO+cEEs0 O0lg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=82scBKRueLOczEUnCgg4pJWq6Va+L/4UJRbtpRC9qQE=; b=LYiw0twlkZsTwgO2E6aUH2h5nui1iOUNah4l8TI9Q4t4IVXEzyHt7pPpxhWcg3IFZN UIduOGZs2BkrTQGvrqH63ody9jUiWFDb8JkUsOnXtXCINUkmnp0z034xgV5IYsnejvfu 0cZ036plwWpEIo4PboWkxg3sN6i9123XQU5biuORKqhnU+/V6t/tgjivlDJStM8oVmzv 
QkthzsAsJoBLv7jedvqnpzWzlPjdF5r8WUgmY/54IK1ls8sG08tQh/rgIz0gYsv6PMf+ S6SojlyTcJuR4y5l36oKWYg5b7d+VY4bDuk6ucIlXWanLnkFtwezwVlgny2VfiRKvJ2F kEXA== X-Gm-Message-State: APjAAAUjP55f0406q32rbAeHMxLzYmGzZZlAeNkx1w+nseQLq1YqvMlf h020oEgQ6jKW+y2t1ux0Qu92ELLJ+g== X-Google-Smtp-Source: APXvYqzF8M30uMymNg+oRQF6xIHsdRqrwCQAV476rsMT6qiwvwQc+W6YAxwNkYRMZS4DKXLF/hf+Ow== X-Received: by 2002:a63:b642:: with SMTP id v2mr3339550pgt.126.1579778367395; Thu, 23 Jan 2020 03:19:27 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:26 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 12/16] bus: mhi: core: Add uevent support for module autoloading Date: Thu, 23 Jan 2020 16:48:32 +0530 Message-Id: <20200123111836.7414-13-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add uevent support to MHI bus so that the client drivers can be autoloaded by udev when the MHI devices gets created. The client drivers are expected to provide MODULE_DEVICE_TABLE with the MHI id_table struct so that the alias can be exported. Signed-off-by: Manivannan Sadhasivam --- drivers/bus/mhi/core/init.c | 9 +++++++++ include/linux/mod_devicetable.h | 1 + scripts/mod/devicetable-offsets.c | 3 +++ scripts/mod/file2alias.c | 10 ++++++++++ 4 files changed, 23 insertions(+) diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c index 40dcf8353f6f..152d12066bec 100644 --- a/drivers/bus/mhi/core/init.c +++ b/drivers/bus/mhi/core/init.c @@ -1229,6 +1229,14 @@ void mhi_driver_unregister(struct mhi_driver *mhi_drv) } EXPORT_SYMBOL_GPL(mhi_driver_unregister); +static int mhi_uevent(struct device *dev, struct kobj_uevent_env *env) +{ + struct mhi_device *mhi_dev = to_mhi_device(dev); + + return add_uevent_var(env, "MODALIAS=" MHI_DEVICE_MODALIAS_FMT, + mhi_dev->chan_name); +} + static int mhi_match(struct device *dev, struct device_driver *drv) { struct mhi_device *mhi_dev = to_mhi_device(dev); @@ -1255,6 +1263,7 @@ struct bus_type mhi_bus_type = { .name = "mhi", .dev_name = "mhi", .match = mhi_match, + .uevent = mhi_uevent, }; static int __init mhi_init(void) diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h index be15e997fe39..f10e779a3fd0 100644 --- a/include/linux/mod_devicetable.h +++ b/include/linux/mod_devicetable.h @@ -821,6 +821,7 @@ struct wmi_device_id { const void *context; }; +#define MHI_DEVICE_MODALIAS_FMT "mhi:%s" #define MHI_NAME_SIZE 32 /** diff --git a/scripts/mod/devicetable-offsets.c b/scripts/mod/devicetable-offsets.c index 054405b90ba4..fe3f4a95cb21 100644 --- a/scripts/mod/devicetable-offsets.c +++ b/scripts/mod/devicetable-offsets.c @@ -231,5 +231,8 @@ int main(void) DEVID(wmi_device_id); DEVID_FIELD(wmi_device_id, guid_string); + DEVID(mhi_device_id); + DEVID_FIELD(mhi_device_id, chan); + return 0; } diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c index 
c91eba751804..cae6a4e471b5 100644 --- a/scripts/mod/file2alias.c +++ b/scripts/mod/file2alias.c @@ -1335,6 +1335,15 @@ static int do_wmi_entry(const char *filename, void *symval, char *alias) return 1; } +/* Looks like: mhi:S */ +static int do_mhi_entry(const char *filename, void *symval, char *alias) +{ + DEF_FIELD_ADDR(symval, mhi_device_id, chan); + sprintf(alias, MHI_DEVICE_MODALIAS_FMT, *chan); + + return 1; +} + /* Does namelen bytes of name exactly match the symbol? */ static bool sym_is(const char *name, unsigned namelen, const char *symbol) { @@ -1407,6 +1416,7 @@ static const struct devtable devtable[] = { {"typec", SIZE_typec_device_id, do_typec_entry}, {"tee", SIZE_tee_client_device_id, do_tee_entry}, {"wmi", SIZE_wmi_device_id, do_wmi_entry}, + {"mhi", SIZE_mhi_device_id, do_mhi_entry}, }; /* Create MODULE_ALIAS() statements. From patchwork Thu Jan 23 11:18:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347389 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AD5741580 for ; Thu, 23 Jan 2020 11:19:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 8BFF72467E for ; Thu, 23 Jan 2020 11:19:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="oNMapbpk" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729288AbgAWLTb (ORCPT ); Thu, 23 Jan 2020 06:19:31 -0500 Received: from mail-pl1-f196.google.com ([209.85.214.196]:36966 "EHLO mail-pl1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729240AbgAWLTb (ORCPT ); Thu, 23 Jan 2020 06:19:31 -0500 Received: by mail-pl1-f196.google.com with SMTP id c23so1226068plz.4 for ; Thu, 23 Jan 2020 03:19:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=OOn03P2j4+kr/DXG8NPT0x1Kk1bZl2DSDVVAcxfmW/Y=; b=oNMapbpk822BP7CdcxtHz54BSReHACxR+26qB0HWnkZ4gytrswDazDLuWvp8kWTHyY X0EM7F/EpABqwyGi14/41RPld6Tb6ZigyF+qHq4HZIw95eLkKTsU0Aznzq9Xa/n5lfXZ 3SwmcfunHdLLigUEHiATW5hTYmuxhxau5Iwo4I+pOo2p0IbEkPipHROpk8tMxqph6A41 VGtD5AHIb3ZXNfM6ZTBVwoKZoh6Upe+hBcr93VLXovkzRuS6xVRc9IuQPKm0zG+M+zfx XVC2NomZ1VqNLZn5dOMb8A40GP2OrtIFiH+K/i3qD9orFKnIMYsp0xmOqOaotAQu7cKf 3guQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=OOn03P2j4+kr/DXG8NPT0x1Kk1bZl2DSDVVAcxfmW/Y=; b=QHYhqBZDFvnnuGKz80bPuo/Pwd4QO0h8CIDU3TF2OZNP7nL2HIgn/bnj8pCDi5dpJq EObqsfUhGVZkv1Kpwvrbr0Lph94oPrO5airRr4S2IXgmNrHtoZZLaN7dz/6ua+K5CIED 1CqZSVI5UFjkbozfur7TcDMYlNImwB7sMcXD2GsfQx5ZDdOSPdzfSOtVOakOhGPI+AF5 bKbDHViDVQL7nSfEu2Zsldb0MulqKEUiUfB/XWAQXYVu/sxG0tuIxFzKvA9AYoPJgSS5 U4dU32Zl4l2d4sfRsgUTAobvB1U6AJ2HgkiTOgDn/Egus1/UCxtIvv5cw9kfFgwa5vZC tJHg== X-Gm-Message-State: APjAAAXGgBB+M0hDBFAa8iiPEy5Ex9xAzP7E8q8sTgfoPe04UdEpDN+w 2Gf6JyRDN/cwI19IrM4/Jl3/ X-Google-Smtp-Source: APXvYqzf7o8s8NL4+/h7QX2QyDG6Q2e+Dp159PX+H007Fn8qM0bcUDWCeLXu7wTK/APwY8aw2bW4zw== X-Received: by 2002:a17:902:6ac2:: with SMTP id i2mr15626148plt.221.1579778370707; Thu, 23 Jan 2020 03:19:30 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id 
y6sm2627559pgc.10.2020.01.23.03.19.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:30 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam Subject: [PATCH 13/16] MAINTAINERS: Add entry for MHI bus Date: Thu, 23 Jan 2020 16:48:33 +0530 Message-Id: <20200123111836.7414-14-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add MAINTAINERS entry for MHI bus. Signed-off-by: Manivannan Sadhasivam --- MAINTAINERS | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index cf6ccca6e61c..927cdd907f1f 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10777,6 +10777,15 @@ M: Vladimir Vid S: Maintained F: arch/arm64/boot/dts/marvell/armada-3720-uDPU.dts +MHI BUS +M: Manivannan Sadhasivam +L: linux-arm-msm@vger.kernel.org +T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git +S: Maintained +F: drivers/bus/mhi/ +F: include/linux/mhi.h +F: Documentation/mhi/ + MICROBLAZE ARCHITECTURE M: Michal Simek W: http://www.monstr.eu/fdt/ From patchwork Thu Jan 23 11:18:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347391 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C0D061580 for ; Thu, 23 Jan 2020 11:19:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9599D2468A for ; Thu, 23 Jan 2020 11:19:37 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="XzteJ0Mc" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729247AbgAWLTg (ORCPT ); Thu, 23 Jan 2020 06:19:36 -0500 Received: from mail-pf1-f196.google.com ([209.85.210.196]:38466 "EHLO mail-pf1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729332AbgAWLTg (ORCPT ); Thu, 23 Jan 2020 06:19:36 -0500 Received: by mail-pf1-f196.google.com with SMTP id x185so1398495pfc.5 for ; Thu, 23 Jan 2020 03:19:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=Ukx4iY1w5PIZZr/tp1rLHRN10AfBkKKAjf4yqGYrtY8=; b=XzteJ0McGxr8yGC69FvIg6tDYBJaUR1ljvQl4iGMJhAcD1Ik4DCamYe8lNeBO7IUdE hZ/sSgaD3meyWQuaGjQeAaLSa4tlTnTNjDOUKQwlJIoiGePebKH/1PKyLyivohuQtMWR CNrtmp1Of0QAGg5qhxN17uErIzJf8svvojHxNMezTHzLet/N3iq3k70MYpMRBa+OWd5b Vimeeg7QmjF2gXHVlz8iSbtqScbkY37e9YRsny6eoRAqHeN67OMHjSS55DDByLnxsWZg ROQq+IOBcFPXm66/jVrY9hoJtzyCTgCROODA0bwcVSWJwOv3NoUd2r6DZzkM/0VTo5BA P0aw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Ukx4iY1w5PIZZr/tp1rLHRN10AfBkKKAjf4yqGYrtY8=; b=OTya1r1DLh2iyFhd93ZdVKdUQyES52eBvovF7+YaQqj/TxPUNy0ki/N6cyIwN2h2zm 
m478+nrTIKRC8CPsgzE7xE6cHouLm3hH+/vgjEEc8wk1HC2QuC+6+LDMyj0X1CQX/QRS A3bm2L2A706jmomRq7NUwrMYkaaaYzjzXh67ip8mRuQ7xrR8LxIvMaMKR7qtRyUOqh96 FkikRCCedTaU8JszXR+Y/uSQsCe+qtlqX/FMW3gA2E1EqiqC4zngvVqIjnTP/wSOp78G M3bMVfoje1HfInxk4UdzIekmL+5EC48l4lndL9Xc5poPy95x1AeqnRwjGPgvGcj8jgVr EyLw== X-Gm-Message-State: APjAAAUaK2pWkWcI48MaZnd6MPkJ8qIFgLZT+WbHYwuXB1/9nOW8qzFI z/ahmyr9uUad0pkd2ojuXqk0 X-Google-Smtp-Source: APXvYqxafqul6E4O9BXwlF3ZthPhBMm6khKx+XLokgdHv+MLUPODxAaz93qM3QZENaefduFbbQDaWg== X-Received: by 2002:a63:28a:: with SMTP id 132mr3365730pgc.165.1579778375131; Thu, 23 Jan 2020 03:19:35 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:33 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam , "David S. Miller" , netdev@vger.kernel.org Subject: [PATCH 14/16] net: qrtr: Add MHI transport layer Date: Thu, 23 Jan 2020 16:48:34 +0530 Message-Id: <20200123111836.7414-15-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org MHI is the transport layer used for communicating to the external modems. Hence, this commit adds MHI transport layer support to QRTR for transferring the QMI messages over IPC Router. Cc: "David S. Miller" Cc: netdev@vger.kernel.org Signed-off-by: Manivannan Sadhasivam --- net/qrtr/Kconfig | 7 ++ net/qrtr/Makefile | 2 + net/qrtr/mhi.c | 207 ++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 216 insertions(+) create mode 100644 net/qrtr/mhi.c diff --git a/net/qrtr/Kconfig b/net/qrtr/Kconfig index 63f89cc6e82c..8eb876471564 100644 --- a/net/qrtr/Kconfig +++ b/net/qrtr/Kconfig @@ -29,4 +29,11 @@ config QRTR_TUN implement endpoints of QRTR, for purpose of tunneling data to other hosts or testing purposes. +config QRTR_MHI + tristate "MHI IPC Router channels" + depends on MHI_BUS + help + Say Y here to support MHI based ipcrouter channels. MHI is the + transport used for communicating to external modems. + endif # QRTR diff --git a/net/qrtr/Makefile b/net/qrtr/Makefile index 1c6d6c120fb7..3dc0a7c9d455 100644 --- a/net/qrtr/Makefile +++ b/net/qrtr/Makefile @@ -5,3 +5,5 @@ obj-$(CONFIG_QRTR_SMD) += qrtr-smd.o qrtr-smd-y := smd.o obj-$(CONFIG_QRTR_TUN) += qrtr-tun.o qrtr-tun-y := tun.o +obj-$(CONFIG_QRTR_MHI) += qrtr-mhi.o +qrtr-mhi-y := mhi.o diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c new file mode 100644 index 000000000000..c85041a22f85 --- /dev/null +++ b/net/qrtr/mhi.c @@ -0,0 +1,207 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include + +#include "qrtr.h" + +struct qrtr_mhi_dev { + struct qrtr_endpoint ep; + struct mhi_device *mhi_dev; + struct device *dev; + spinlock_t ul_lock; /* lock to protect ul_pkts */ + struct list_head ul_pkts; + atomic_t in_reset; +}; + +struct qrtr_mhi_pkt { + struct list_head node; + struct sk_buff *skb; + struct kref refcount; + struct completion done; +}; + +static void qrtr_mhi_pkt_release(struct kref *ref) +{ + struct qrtr_mhi_pkt *pkt = container_of(ref, struct qrtr_mhi_pkt, + refcount); + struct sock *sk = pkt->skb->sk; + + consume_skb(pkt->skb); + if (sk) + sock_put(sk); + + kfree(pkt); +} + +/* From MHI to QRTR */ +static void qcom_mhi_qrtr_dl_callback(struct mhi_device *mhi_dev, + struct mhi_result *mhi_res) +{ + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev); + int rc; + + if (!qdev || mhi_res->transaction_status) + return; + + rc = qrtr_endpoint_post(&qdev->ep, mhi_res->buf_addr, + mhi_res->bytes_xferd); + if (rc == -EINVAL) + dev_err(qdev->dev, "invalid ipcrouter packet\n"); +} + +/* From QRTR to MHI */ +static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev, + struct mhi_result *mhi_res) +{ + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev); + struct qrtr_mhi_pkt *pkt; + unsigned long flags; + + spin_lock_irqsave(&qdev->ul_lock, flags); + pkt = list_first_entry(&qdev->ul_pkts, struct qrtr_mhi_pkt, node); + list_del(&pkt->node); + complete_all(&pkt->done); + + kref_put(&pkt->refcount, qrtr_mhi_pkt_release); + spin_unlock_irqrestore(&qdev->ul_lock, flags); +} + +static void qcom_mhi_qrtr_status_callback(struct mhi_device *mhi_dev, + enum mhi_callback mhi_cb) +{ + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev); + struct qrtr_mhi_pkt *pkt; + unsigned long flags; + + if (mhi_cb != MHI_CB_FATAL_ERROR) + return; + + atomic_inc(&qdev->in_reset); + spin_lock_irqsave(&qdev->ul_lock, flags); + list_for_each_entry(pkt, &qdev->ul_pkts, node) + complete_all(&pkt->done); + spin_unlock_irqrestore(&qdev->ul_lock, flags); +} + +/* Send data over MHI */ +static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb) +{ + struct qrtr_mhi_dev *qdev = container_of(ep, struct qrtr_mhi_dev, ep); + struct qrtr_mhi_pkt *pkt; + int rc; + + rc = skb_linearize(skb); + if (rc) { + kfree_skb(skb); + return rc; + } + + pkt = kzalloc(sizeof(*pkt), GFP_KERNEL); + if (!pkt) { + kfree_skb(skb); + return -ENOMEM; + } + + init_completion(&pkt->done); + kref_init(&pkt->refcount); + kref_get(&pkt->refcount); + pkt->skb = skb; + + spin_lock_bh(&qdev->ul_lock); + list_add_tail(&pkt->node, &qdev->ul_pkts); + rc = mhi_queue_transfer(qdev->mhi_dev, DMA_TO_DEVICE, skb, skb->len, + MHI_EOT); + if (rc) { + list_del(&pkt->node); + kfree_skb(skb); + kfree(pkt); + spin_unlock_bh(&qdev->ul_lock); + return rc; + } + + spin_unlock_bh(&qdev->ul_lock); + if (skb->sk) + sock_hold(skb->sk); + + rc = wait_for_completion_interruptible_timeout(&pkt->done, HZ * 5); + if (atomic_read(&qdev->in_reset)) + rc = -ECONNRESET; + else if (rc == 0) + rc = -ETIMEDOUT; + else if (rc > 0) + rc = 0; + + kref_put(&pkt->refcount, qrtr_mhi_pkt_release); + + return rc; +} + +static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev, + const struct mhi_device_id *id) +{ + struct qrtr_mhi_dev *qdev; + u32 net_id; + int rc; + + qdev = devm_kzalloc(&mhi_dev->dev, sizeof(*qdev), GFP_KERNEL); + if (!qdev) + return -ENOMEM; + + qdev->mhi_dev = mhi_dev; + qdev->dev = &mhi_dev->dev; + qdev->ep.xmit = qcom_mhi_qrtr_send; + atomic_set(&qdev->in_reset, 0); 
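+	/*
+	 * Remainder of probe: let the QRTR core pick a node ID
+	 * (QRTR_EP_NID_AUTO), initialize the UL packet list and its lock,
+	 * then set drvdata before registering the endpoint, since the
+	 * UL/DL transfer callbacks look qdev back up via dev_get_drvdata().
+	 */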
+ + net_id = QRTR_EP_NID_AUTO; + + INIT_LIST_HEAD(&qdev->ul_pkts); + spin_lock_init(&qdev->ul_lock); + + dev_set_drvdata(&mhi_dev->dev, qdev); + rc = qrtr_endpoint_register(&qdev->ep, net_id); + if (rc) + return rc; + + dev_dbg(qdev->dev, "Qualcomm MHI QRTR driver probed\n"); + + return 0; +} + +static void qcom_mhi_qrtr_remove(struct mhi_device *mhi_dev) +{ + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev); + + qrtr_endpoint_unregister(&qdev->ep); + dev_set_drvdata(&mhi_dev->dev, NULL); +} + +static const struct mhi_device_id qcom_mhi_qrtr_id_table[] = { + { .chan = "IPCR" }, + {} +}; +MODULE_DEVICE_TABLE(mhi, qcom_mhi_qrtr_id_table); + +static struct mhi_driver qcom_mhi_qrtr_driver = { + .probe = qcom_mhi_qrtr_probe, + .remove = qcom_mhi_qrtr_remove, + .dl_xfer_cb = qcom_mhi_qrtr_dl_callback, + .ul_xfer_cb = qcom_mhi_qrtr_ul_callback, + .status_cb = qcom_mhi_qrtr_status_callback, + .id_table = qcom_mhi_qrtr_id_table, + .driver = { + .name = "qcom_mhi_qrtr", + }, +}; + +module_driver(qcom_mhi_qrtr_driver, mhi_driver_register, + mhi_driver_unregister); + +MODULE_DESCRIPTION("Qualcomm IPC-Router MHI interface driver"); +MODULE_LICENSE("GPL v2"); From patchwork Thu Jan 23 11:18:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347395 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E274F159A for ; Thu, 23 Jan 2020 11:19:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id C0B8C2467E for ; Thu, 23 Jan 2020 11:19:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="RlDi9bN/" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729370AbgAWLTl (ORCPT ); Thu, 23 Jan 2020 06:19:41 -0500 Received: from mail-pl1-f195.google.com ([209.85.214.195]:36144 "EHLO mail-pl1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729355AbgAWLTk (ORCPT ); Thu, 23 Jan 2020 06:19:40 -0500 Received: by mail-pl1-f195.google.com with SMTP id a6so1230738plm.3 for ; Thu, 23 Jan 2020 03:19:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=15QkUOyqGz6eomBKIwtq/Fp2YhGFsx/KFWnHxF+6NKE=; b=RlDi9bN/8bPFgqQgfP/5w0XWTn0g7NWphocLjSNdwU3gk2TR5m/9JJjyX20ie+xfuB s/hWZ2kFtCDe1Bf3rMI4ZuOOkQxBn3ATb4RYE+NZlD8HbMJcrDUkYyAzm3XIIKG1FGPw bB42RmozNEd8jRo9w8i/Z8ENNegXMJVmSgfjRuZmnjX7/r3POPo59EMv5p0bP4/gPfV9 pr/Q0QG7CBxOFXOVpKZaKzbNsFWDb9GQ4o0ohmQ1LQakTIbi1c9rA3JLeYUpi9Kyb2Bz G+qLMQQDHZkedyH9nwSkcriQ5oSLqk7mqFX/xiXzuBNO+2ectedr4X4S4IdqdA+ItQbX Wbow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=15QkUOyqGz6eomBKIwtq/Fp2YhGFsx/KFWnHxF+6NKE=; b=KirAxeqyWzdXpqyTondXB2yrT5jXgaH+7koU5V42CjJYiS8ha0LZ61YKO2hjC3lJ3a ZQq+kZNKNcOU52eSXJeMY57CyLOcgihbdj8JGr3MbT1DCIJv+8qGK3KjeRxnUNxqC+BR Sk920ZSZpRZLCR7DCIdDICiqh/HERhRTz4hXApNwgpsEEJEWM8JlmIs/irr+Fv/P6vuT 78JTOgzrWcB6fFsAkjZ3KD66mkUJgQEbREg/vTZcaRVmKUdqLTTMV3q/kbwU+kT5qRAU ACq3tgKfdituy28D76N5KxBMaQ4HTaHwpaVB6Q3QCREzIUHiyoaIFKb0aZbVJNTSKv0d 3SEg== X-Gm-Message-State: APjAAAUezvUpeS+gp7KqEykf4mwZNf7XoVn3o0LOqVXoy3Wz3duihe7P 
ynQzEPL1KkclkazP617Ua7uS X-Google-Smtp-Source: APXvYqzcZRJ2TmhHoxy1Rew245aJeebuTCCNqMgta8RcCA/PQhdOitTg+76KTM3uKTY3O0heA+6vUA== X-Received: by 2002:a17:902:12d:: with SMTP id 42mr15480441plb.246.1579778379054; Thu, 23 Jan 2020 03:19:39 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:38 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam , "David S. Miller" , netdev@vger.kernel.org Subject: [PATCH 15/16] net: qrtr: Do not depend on ARCH_QCOM Date: Thu, 23 Jan 2020 16:48:35 +0530 Message-Id: <20200123111836.7414-16-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org IPC Router protocol is also used by external modems for exchanging the QMI messages. Hence, it doesn't always depend on Qualcomm platforms. As a side effect of removing the ARCH_QCOM dependency, it is going to miss the COMPILE_TEST build coverage. Cc: "David S. Miller" Cc: netdev@vger.kernel.org Signed-off-by: Manivannan Sadhasivam --- net/qrtr/Kconfig | 1 - 1 file changed, 1 deletion(-) diff --git a/net/qrtr/Kconfig b/net/qrtr/Kconfig index 8eb876471564..f362ca316015 100644 --- a/net/qrtr/Kconfig +++ b/net/qrtr/Kconfig @@ -4,7 +4,6 @@ config QRTR tristate "Qualcomm IPC Router support" - depends on ARCH_QCOM || COMPILE_TEST ---help--- Say Y if you intend to use Qualcomm IPC router protocol. 
The protocol is used to communicate with services provided by other From patchwork Thu Jan 23 11:18:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manivannan Sadhasivam X-Patchwork-Id: 11347393 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7C5271580 for ; Thu, 23 Jan 2020 11:19:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 5B68E2467E for ; Thu, 23 Jan 2020 11:19:48 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="gq1pST3X" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729393AbgAWLTo (ORCPT ); Thu, 23 Jan 2020 06:19:44 -0500 Received: from mail-pg1-f196.google.com ([209.85.215.196]:32808 "EHLO mail-pg1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729334AbgAWLTn (ORCPT ); Thu, 23 Jan 2020 06:19:43 -0500 Received: by mail-pg1-f196.google.com with SMTP id 6so1243324pgk.0 for ; Thu, 23 Jan 2020 03:19:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=vVMDDP45gEarE2ftOW97e9LLiKh1M1ITIXoxyhoFdNU=; b=gq1pST3X546LlrU71JkIqEimum7KJkT8zP0g3954nEQsSnZjo+F6aVleNGcmjkrp7j 955/Gs4WvhhvxDh3Ko+lNyqKN3hQO+n2/gZJ07VEGZmS42FF3SbE5MZaOxQPdrEriIwU zviFHO2Ip3dJXCd59ZJtrkQyoDigdtNRww6dDC2EW+yvse9GHpzMcVNJ9j9QbWOyoTlg gZch33KG2Hz8CYA/0TJcFew6gxFw7tBtqDg6+X2bQ1VBMGFErEPk+YL5j4SINZcYKCF9 3eFN5K376Il2WWAOHlEzhT3d5brNA3jeE57qMEqw4QBJqDSby6cTPRyjOAjq4ioDkU9V ty4w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=vVMDDP45gEarE2ftOW97e9LLiKh1M1ITIXoxyhoFdNU=; b=BUKRj3ayeNaQXtcV5LEqmy5u7SkOGIPZz34NRkK8lEZyMxmdEU7d041bP8SAV5oXhT TS/xdpQ3VXv/nhbF5TWfU0gmpg+Twz3YYR4eMAY9cH3VSA7fQJKJP3hHOaW5MkjhKB9O /VZcug7gTkmDpzPNaxqT/gCww+4bCye1Cy+GFrB775jGfDeu4vR0v4hDCBiJ+nbIr+iw bQYDFkGnCWIJ+anBxqGi0Vx/mswDBO3b1PzB8vsgxquApBV7kmCXWPfxEYhG8k6oNVPJ 2K2o6Z3RkGyn55gZaAMxh0EJnR4ZWm5znctdNxYWEvmHAkQ3ZqwGlNTdiFTTD5AJjpr9 nVMw== X-Gm-Message-State: APjAAAVPH2f5kk6rCy8drVbZKWFZo4pX4arMzGW811xgF+4sA2NBwWY1 of4XgXaM4sCbw28KKJasVBPv X-Google-Smtp-Source: APXvYqxCdrTNghHTjvY3gNU6wjMfWTBmwK1ER/gmzX91JopZB4AclM1GiiV7l+ySRy26woQE/2J6ig== X-Received: by 2002:a63:dc0d:: with SMTP id s13mr3293834pgg.129.1579778382628; Thu, 23 Jan 2020 03:19:42 -0800 (PST) Received: from localhost.localdomain ([103.59.133.81]) by smtp.googlemail.com with ESMTPSA id y6sm2627559pgc.10.2020.01.23.03.19.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Jan 2020 03:19:42 -0800 (PST) From: Manivannan Sadhasivam To: gregkh@linuxfoundation.org, arnd@arndb.de Cc: smohanad@codeaurora.org, jhugo@codeaurora.org, kvalo@codeaurora.org, bjorn.andersson@linaro.org, hemantk@codeaurora.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Manivannan Sadhasivam , Andy Gross Subject: [PATCH 16/16] soc: qcom: Do not depend on ARCH_QCOM for QMI helpers Date: Thu, 23 Jan 2020 16:48:36 +0530 Message-Id: <20200123111836.7414-17-manivannan.sadhasivam@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> References: <20200123111836.7414-1-manivannan.sadhasivam@linaro.org> 
Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org QMI helpers are not always used by Qualcomm platforms. One of the exceptions is the external modems available in near future. As a side effect of removing the dependency, it is also going to loose COMPILE_TEST build coverage. Cc: Andy Gross Cc: Bjorn Andersson Cc: linux-arm-msm@vger.kernel.org Signed-off-by: Manivannan Sadhasivam --- drivers/soc/qcom/Kconfig | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig index 79d826553ac8..ca057bc9aae6 100644 --- a/drivers/soc/qcom/Kconfig +++ b/drivers/soc/qcom/Kconfig @@ -88,7 +88,6 @@ config QCOM_PM config QCOM_QMI_HELPERS tristate - depends on ARCH_QCOM || COMPILE_TEST depends on NET config QCOM_RMTFS_MEM
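Taken together, the series exposes a small client-facing API: mhi_driver registration with an mhi_device_id table, mhi_prepare_for_transfer()/mhi_unprepare_from_transfer() to move channels into and out of the start state, mhi_queue_transfer() to queue buffers, and per-direction transfer callbacks that report completions through struct mhi_result. The sketch below is a minimal, hypothetical client, not part of the series: the "LOOPBACK" channel name, the demo_* identifiers and the buffer size are placeholders chosen only to show how the calls exported above fit together.

/*
 * Hypothetical MHI client sketch (illustrative only, not part of this
 * series). Channel name and all demo_* identifiers are placeholders.
 */
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/mhi.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <linux/slab.h>

static void demo_dl_xfer_cb(struct mhi_device *mhi_dev,
			    struct mhi_result *result)
{
	/*
	 * Inbound completion: result->buf_addr is the buffer that was
	 * queued in the DL direction (or recycled by the core for
	 * pre-allocated channels), result->bytes_xferd its length.
	 */
	if (!result->transaction_status)
		pr_info("demo: received %zu bytes\n", result->bytes_xferd);
}

static void demo_ul_xfer_cb(struct mhi_device *mhi_dev,
			    struct mhi_result *result)
{
	/* Outbound buffer has been consumed by the device; free it. */
	kfree(result->buf_addr);
}

static int demo_probe(struct mhi_device *mhi_dev,
		      const struct mhi_device_id *id)
{
	void *buf;
	int ret;

	/* Move the UL/DL channels to the start state before queueing. */
	ret = mhi_prepare_for_transfer(mhi_dev);
	if (ret)
		return ret;

	buf = kmalloc(SZ_4K, GFP_KERNEL);
	if (!buf) {
		mhi_unprepare_from_transfer(mhi_dev);
		return -ENOMEM;
	}

	/* Queue one outbound buffer; demo_ul_xfer_cb() runs on completion. */
	ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, buf, SZ_4K, MHI_EOT);
	if (ret) {
		kfree(buf);
		mhi_unprepare_from_transfer(mhi_dev);
	}

	return ret;
}

static void demo_remove(struct mhi_device *mhi_dev)
{
	mhi_unprepare_from_transfer(mhi_dev);
}

static const struct mhi_device_id demo_id_table[] = {
	{ .chan = "LOOPBACK" },	/* placeholder channel name */
	{}
};
MODULE_DEVICE_TABLE(mhi, demo_id_table);

static struct mhi_driver demo_driver = {
	.probe = demo_probe,
	.remove = demo_remove,
	.dl_xfer_cb = demo_dl_xfer_cb,
	.ul_xfer_cb = demo_ul_xfer_cb,
	.id_table = demo_id_table,
	.driver = {
		.name = "mhi_demo",
	},
};
module_driver(demo_driver, mhi_driver_register, mhi_driver_unregister);

MODULE_DESCRIPTION("Illustrative MHI client sketch");
MODULE_LICENSE("GPL v2");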