From patchwork Fri May 31 03:53:32 2019
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org
Cc: devicetree@vger.kernel.org, syadagir@codeaurora.org, ejcaruso@google.com,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, evgreen@chromium.org,
    linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
    subashab@codeaurora.org, linux-soc@vger.kernel.org, abhishek.esse@gmail.com,
    cpratapa@codeaurora.org, benchan@google.com
Subject: [PATCH v2 01/17] bitfield.h: add FIELD_MAX() and field_max()
Date: Thu, 30 May 2019 22:53:32 -0500
Message-Id: <20190531035348.7194-2-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

Define FIELD_MAX(), which supplies the maximum value that can be
represented by a field value.  Define field_max() as well, to go
along with the lower-case forms of the field mask functions.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 include/linux/bitfield.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
index 3f1ef4450a7c..cf4f06774520 100644
--- a/include/linux/bitfield.h
+++ b/include/linux/bitfield.h
@@ -63,6 +63,19 @@
 			 (1ULL << __bf_shf(_mask))); \
 	})
 
+/**
+ * FIELD_MAX() - produce the maximum value representable by a field
+ * @_mask: shifted mask defining the field's length and position
+ *
+ * FIELD_MAX() returns the maximum value that can be held in the field
+ * specified by @_mask.
+ */
+#define FIELD_MAX(_mask)						\
+	({								\
+		__BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_MAX: ");	\
+		(typeof(_mask))((_mask) >> __bf_shf(_mask));		\
+	})
+
 /**
  * FIELD_FIT() - check if value fits in the field
  * @_mask: shifted mask defining the field's length and position
@@ -118,6 +131,7 @@ static __always_inline u64 field_mask(u64 field)
 {
 	return field / field_multiplier(field);
 }
+#define field_max(field)	((typeof(field))field_mask(field))
 #define ____MAKE_OP(type,base,to,from)					\
 static __always_inline __##type type##_encode_bits(base v, base field)	\
 {									\
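
A short usage sketch (not part of the patch) of how FIELD_MAX() is meant
to be used alongside the existing bitfield helpers; the field name and
function below are hypothetical:

	#include <linux/bitfield.h>
	#include <linux/bits.h>

	/* Hypothetical 5-bit field occupying bits 14..10 of a register */
	#define EXAMPLE_TIMEOUT_FMASK	GENMASK(14, 10)

	static int example_set_timeout(u32 *reg, u32 timeout)
	{
		/* FIELD_MAX(EXAMPLE_TIMEOUT_FMASK) evaluates to 31 here,
		 * making range checks easy before encoding the value.
		 */
		if (timeout > FIELD_MAX(EXAMPLE_TIMEOUT_FMASK))
			return -EINVAL;

		*reg &= ~EXAMPLE_TIMEOUT_FMASK;
		*reg |= FIELD_PREP(EXAMPLE_TIMEOUT_FMASK, timeout);

		return 0;
	}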
From patchwork Fri May 31 03:53:33 2019
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org, robh+dt@kernel.org, mark.rutland@arm.com,
    devicetree@vger.kernel.org
Cc: syadagir@codeaurora.org, ejcaruso@google.com, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, evgreen@chromium.org,
    linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
    subashab@codeaurora.org, linux-soc@vger.kernel.org, abhishek.esse@gmail.com,
    cpratapa@codeaurora.org, benchan@google.com
Subject: [PATCH v2 02/17] dt-bindings: soc: qcom: add IPA bindings
Date: Thu, 30 May 2019 22:53:33 -0500
Message-Id: <20190531035348.7194-3-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

Add the binding definitions for the "qcom,ipa" device tree node.
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Rob Herring <robh@kernel.org>
---
 .../devicetree/bindings/net/qcom,ipa.yaml | 180 ++++++++++++++++++
 1 file changed, 180 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/qcom,ipa.yaml

diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.yaml b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
new file mode 100644
index 000000000000..0037fc278a61
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
@@ -0,0 +1,180 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/qcom,ipa.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm IP Accelerator (IPA)
+
+maintainers:
+  - Alex Elder <elder@linaro.org>
+
+description:
+  This binding describes the Qualcomm IPA.  The IPA is capable of offloading
+  certain network processing tasks (e.g. filtering, routing, and NAT) from
+  the main processor.
+
+  The IPA sits between multiple independent "execution environments,"
+  including the Application Processor (AP) and the modem.  The IPA presents
+  a Generic Software Interface (GSI) to each execution environment.
+  The GSI is an integral part of the IPA, but it is logically isolated
+  and has a distinct interrupt and a separately-defined address space.
+
+  See also soc/qcom/qcom,smp2p.txt and interconnect/interconnect.txt.
+
+  - |
+        --------             ---------
+        |      |             |       |
+        |  AP  +<---.   .----+ Modem |
+        |      +--. |   | .->+       |
+        |      |  | |   | |  |       |
+        --------  | |   | |  ---------
+                  v |   v |
+                --+-+---+-+--
+                |    GSI    |
+                |-----------|
+                |           |
+                |    IPA    |
+                |           |
+                -------------
+
+properties:
+  compatible:
+    const: "qcom,sdm845-ipa"
+
+  reg:
+    items:
+      - description: IPA registers
+      - description: IPA shared memory
+      - description: GSI registers
+
+  reg-names:
+    items:
+      - const: ipa-reg
+      - const: ipa-shared
+      - const: gsi
+
+  clocks:
+    maxItems: 1
+
+  clock-names:
+    const: core
+
+  interrupts:
+    items:
+      - description: IPA interrupt (hardware IRQ)
+      - description: GSI interrupt (hardware IRQ)
+      - description: Modem clock query interrupt (smp2p interrupt)
+      - description: Modem setup ready interrupt (smp2p interrupt)
+
+  interrupt-names:
+    items:
+      - const: ipa
+      - const: gsi
+      - const: ipa-clock-query
+      - const: ipa-setup-ready
+
+  interconnects:
+    items:
+      - description: Interconnect path between IPA and main memory
+      - description: Interconnect path between IPA and internal memory
+      - description: Interconnect path between IPA and the AP subsystem
+
+  interconnect-names:
+    items:
+      - const: memory
+      - const: imem
+      - const: config
+
+  qcom,smem-states:
+    description: State bits used by the AP to signal the modem.
+    items:
+      - description: Whether the "ipa-clock-enabled" state bit is valid
+      - description: Whether the IPA clock is enabled (if valid)
+
+  qcom,smem-state-names:
+    description: The names of the state bits used for SMP2P output
+    items:
+      - const: ipa-clock-enabled-valid
+      - const: ipa-clock-enabled
+
+  modem-init:
+    type: boolean
+    description:
+      If present, it indicates that the modem is responsible for
+      performing early IPA initialization, including loading and
+      validating firmware used by the GSI.
+
+  memory-region:
+    maxItems: 1
+    description:
+      If present, a phandle for a reserved memory area that holds
+      the firmware passed to Trust Zone for authentication.  Required
+      when Trust Zone (not the modem) performs early initialization.
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - interrupts
+  - interconnects
+  - qcom,smem-states
+
+oneOf:
+  - required:
+      - modem-init
+  - required:
+      - memory-region
+
+examples:
+  - |
+    smp2p-mpss {
+        compatible = "qcom,smp2p";
+        ipa_smp2p_out: ipa-ap-to-modem {
+            qcom,entry-name = "ipa";
+            #qcom,smem-state-cells = <1>;
+        };
+
+        ipa_smp2p_in: ipa-modem-to-ap {
+            qcom,entry-name = "ipa";
+            interrupt-controller;
+            #interrupt-cells = <2>;
+        };
+    };
+
+    ipa@1e40000 {
+        compatible = "qcom,sdm845-ipa";
+
+        modem-init;
+
+        reg = <0 0x1e40000 0 0x7000>,
+              <0 0x1e47000 0 0x2000>,
+              <0 0x1e04000 0 0x2c000>;
+        reg-names = "ipa-reg",
+                    "ipa-shared",
+                    "gsi";
+
+        interrupts-extended = <&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+                              <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+                              <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+                              <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+        interrupt-names = "ipa",
+                          "gsi",
+                          "ipa-clock-query",
+                          "ipa-setup-ready";
+
+        clocks = <&rpmhcc RPMH_IPA_CLK>;
+        clock-names = "core";
+
+        interconnects =
+                <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+                <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+                <&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+        interconnect-names = "memory",
+                             "imem",
+                             "config";
+
+        qcom,smem-states = <&ipa_smp2p_out 0>,
+                           <&ipa_smp2p_out 1>;
+        qcom,smem-state-names = "ipa-clock-enabled-valid",
+                                "ipa-clock-enabled";
+    };
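
A hedged sketch (not from this series) of how a driver probe function
might consume two of the properties defined above; example_probe is
hypothetical and error handling is pared down:

	#include <linux/of.h>
	#include <linux/platform_device.h>

	static int example_probe(struct platform_device *pdev)
	{
		struct device_node *np = pdev->dev.of_node;
		bool modem_init;
		int irq;

		/* "modem-init" selects who performs early IPA initialization */
		modem_init = of_property_read_bool(np, "modem-init");

		/* Interrupts are looked up by the names the binding requires */
		irq = platform_get_irq_byname(pdev, "ipa");
		if (irq < 0)
			return irq;

		return 0;
	}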
From patchwork Fri May 31 03:53:34 2019
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org
Cc: devicetree@vger.kernel.org, syadagir@codeaurora.org, ejcaruso@google.com,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, evgreen@chromium.org,
    linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
    subashab@codeaurora.org, linux-soc@vger.kernel.org, abhishek.esse@gmail.com,
    cpratapa@codeaurora.org, benchan@google.com
Subject: [PATCH v2 03/17] soc: qcom: ipa: main code
Date: Thu, 30 May 2019 22:53:34 -0500
Message-Id: <20190531035348.7194-4-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

This patch includes three source files that represent some basic "main
program" code for the IPA driver.  They are:
 - "ipa.h" defines the top-level IPA structure which represents an IPA
   device throughout the code.
 - "ipa_main.c" contains the platform driver probe function, along with
   some general code used during initialization.
 - "ipa_reg.h" defines the offsets of the 32-bit registers used for the
   IPA device, along with masks that define the position and width of
   fields less than 32 bits located within these registers.

Each file includes some documentation that provides a little more
overview of how the code is organized and used.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa.h      | 131 ++++++
 drivers/net/ipa/ipa_main.c | 921 +++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_reg.h  | 279 +++++++++++
 3 files changed, 1331 insertions(+)
 create mode 100644 drivers/net/ipa/ipa.h
 create mode 100644 drivers/net/ipa/ipa_main.c
 create mode 100644 drivers/net/ipa/ipa_reg.h

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
new file mode 100644
index 000000000000..c580254d1e0e
--- /dev/null
+++ b/drivers/net/ipa/ipa.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_H_
+#define _IPA_H_
+
+#include
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "ipa_qmi.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+struct clk;
+struct icc_path;
+struct net_device;
+struct platform_device;
+
+struct ipa_clock;
+struct ipa_smp2p;
+struct ipa_interrupt;
+
+/**
+ * struct ipa - IPA information
+ * @gsi: Embedded GSI structure
+ * @pdev: Platform device
+ * @smp2p: SMP2P information
+ * @clock: IPA clocking information
+ * @suspend_ref: Whether clock reference preventing suspend taken
+ * @route_virt: Virtual address of routing table
+ * @route_addr: DMA address for routing table
+ * @filter_virt: Virtual address of filter table
+ * @filter_addr: DMA address for filter table
+ * @interrupt: IPA Interrupt information
+ * @uc_loaded: Non-zero when microcontroller has reported it's ready
+ * @reg_phys: Physical address of IPA register space
+ * @reg_virt: Virtual address used for IPA register access
+ * @shared_phys: Physical address of memory space shared with modem
+ * @shared_virt: Virtual address of memory space shared with modem
+ * @shared_offset: Additional offset used for shared memory
+ * @wakeup: Wakeup source information
+ * @filter_support: Bit mask indicating endpoints that support filtering
+ * @initialized: Bit mask indicating endpoints initialized
+ * @set_up: Bit mask indicating endpoints set up
+ * @enabled: Bit mask indicating endpoints enabled
+ * @suspended: Bit mask indicating endpoints suspended
+ * @endpoint: Array of endpoint information
+ * @endpoint_map: Mapping of GSI channel to IPA endpoint information
+ * @command_endpoint: Endpoint used for command TX
+ * @default_endpoint: Endpoint used for default route RX
+ * @modem_netdev: Network device structure used for modem
+ * @setup_complete: Flag indicating whether setup stage has completed
+ * @qmi: QMI information
+ */
+struct ipa {
+	struct gsi gsi;
+	struct platform_device *pdev;
+	struct ipa_smp2p *smp2p;
+	struct ipa_clock *clock;
+	atomic_t suspend_ref;
+
+	void *route_virt;
+	dma_addr_t route_addr;
+	void *filter_virt;
+	dma_addr_t filter_addr;
+
+	struct ipa_interrupt *interrupt;
+	u32 uc_loaded;
+
+	phys_addr_t reg_phys;
+	void __iomem *reg_virt;
+	phys_addr_t shared_phys;
+	void *shared_virt;
+	u32 shared_offset;
+
+	struct wakeup_source wakeup;
+
+	/* Bit masks indicating endpoint state */
+	u32 filter_support;
+	u32 initialized;
+	u32 set_up;
+	u32 enabled;
+	u32 suspended;
+
+	struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX];
+	struct ipa_endpoint *endpoint_map[GSI_CHANNEL_MAX];
+	struct ipa_endpoint *command_endpoint;	/* TX */
+	struct ipa_endpoint *default_endpoint;	/* Default route RX */
+
+	struct net_device *modem_netdev;
+	u32 setup_complete;
+
+	struct ipa_qmi qmi;
+};
+
+/**
+ * ipa_setup() - Perform IPA setup
+ * @ipa: IPA pointer
+ *
+ * IPA initialization is broken into stages:  init; config; setup; and
+ * sometimes enable.  (These have inverses exit, deconfig, teardown, and
+ * disable.)  Activities performed at the init stage can be done without
+ * requiring any access to hardware.  For IPA, activities performed at the
+ * config stage require the IPA clock to be running, because they involve
+ * access to IPA registers.  The setup stage is performed only after the
+ * GSI hardware is ready (more on this below).  And finally IPA endpoints
+ * can be enabled once they're successfully set up.
+ *
+ * This function, @ipa_setup(), starts the setup stage.
+ *
+ * In order for the GSI hardware to be functional it needs firmware to be
+ * loaded (in addition to some other low-level initialization).  This early
+ * GSI initialization can be done either by Trust Zone or by the modem.  If
+ * it's done by Trust Zone, the AP loads the GSI firmware and supplies it to
+ * Trust Zone to verify and install.  The AP knows when this completes, and
+ * whether it was successful.  In this case the AP proceeds to setup once it
+ * knows GSI is ready.
+ *
+ * If the modem performs early GSI initialization, the AP needs to know when
+ * this has occurred.  An SMP2P interrupt is used for this purpose, and
+ * receipt of that interrupt triggers the call to ipa_setup().
+ */
+int ipa_setup(struct ipa *ipa);
+
+#endif /* _IPA_H_ */
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
new file mode 100644
index 000000000000..bd3f258b3b02
--- /dev/null
+++ b/drivers/net/ipa/ipa_main.c
@@ -0,0 +1,921 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+#include "ipa_netdev.h"
+#include "ipa_smp2p.h"
+#include "ipa_uc.h"
+#include "ipa_interrupt.h"
+
+/**
+ * DOC: The IP Accelerator
+ *
+ * This driver supports the Qualcomm IP Accelerator (IPA), which is a
+ * networking component found in many Qualcomm SoCs.  The IPA is connected
+ * to the application processor (AP), but is also connected to (and
+ * partially controlled by) other "execution environments" (EEs), such as
+ * a modem.
+ *
+ * The IPA is the conduit between the AP and the modem that carries network
+ * traffic.  This driver presents a network interface representing the
+ * connection of the modem to external (e.g. LTE) networks.  The IPA can
+ * provide protocol checksum calculation, offloading this work from the AP.
+ * The IPA is able to provide additional functionality, including routing,
+ * filtering, and NAT support, but that more advanced functionality is not
+ * currently supported.
+ *
+ * Certain resources--including routing tables and filter tables--are still
+ * defined in this driver, because they must be initialized even when the
+ * advanced hardware features are not used.
+ *
+ * There are two distinct layers that implement the IPA hardware, and this
+ * is reflected in the organization of the driver.  The generic software
+ * interface (GSI) is an integral component of the IPA, providing a
+ * well-defined communication layer between the AP subsystem and the IPA
+ * core.  The GSI implements a set of "channels" used for communication
+ * between the AP and the IPA.
+ *
+ * The IPA layer uses GSI channels to implement its "endpoints".  And while
+ * a GSI channel carries data between the AP and the IPA, a pair of IPA
+ * endpoints is used to carry traffic between two EEs.  Specifically, the
+ * main modem network interface is implemented by two pairs of endpoints:
+ * a TX endpoint on the AP coupled with an RX endpoint on the modem; and
+ * another RX endpoint on the AP receiving data from a TX endpoint on the
+ * modem.
+ */
+
+#define IPA_TABLE_ALIGN			128	/* Minimum table alignment */
+#define IPA_TABLE_ENTRY_SIZE	sizeof(u64)	/* Holds a physical address */
+#define IPA_FILTER_SIZE			8	/* Filter descriptor size */
+#define IPA_ROUTE_SIZE			8	/* Route descriptor size */
+
+/* Backward compatibility register value to use for SDM845 */
+#define IPA_BCR_REG_VAL			0x0000003b
+
+/* The name of the main firmware file relative to /lib/firmware */
+#define IPA_FWS_PATH			"ipa_fws.mdt"
+#define IPA_PAS_ID			15
+
+/**
+ * ipa_filter_tuple_zero() - Zero an endpoint's filter tuple
+ * @endpoint: Endpoint whose filter tuple should be zeroed
+ *
+ * Endpoint must be for AP (not modem) and support filtering.  Updates the
+ * filter mask values without changing routing ones.
+ */
+static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
+{
+	enum ipa_endpoint_id endpoint_id = endpoint->endpoint_id;
+	u32 offset;
+	u32 val;
+
+	offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(endpoint_id);
+
+	val = ioread32(endpoint->ipa->reg_virt + offset);
+
+	/* Zero all filter-related fields, preserving the rest */
+	val = u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_filter_hash_tuple_config(struct ipa *ipa)
+{
+	u32 ep_mask = ipa->filter_support;
+
+	while (ep_mask) {
+		enum ipa_endpoint_id endpoint_id = __ffs(ep_mask);
+		struct ipa_endpoint *endpoint;
+
+		ep_mask ^= BIT(endpoint_id);
+
+		endpoint = &ipa->endpoint[endpoint_id];
+		if (endpoint->ee_id != GSI_EE_MODEM)
+			ipa_filter_tuple_zero(endpoint);
+	}
+}
+
+/**
+ * ipa_route_tuple_zero() - Zero a routing table entry tuple
+ * @ipa: IPA pointer
+ * @route_id: Identifier for routing table entry to be zeroed
+ *
+ * Updates the routing table values without changing filtering ones.
+ */
+static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
+{
+	u32 offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(route_id);
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + offset);
+
+	/* Zero all route-related fields, preserving the rest */
+	val = u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+
+	iowrite32(val, ipa->reg_virt + offset);
+}
+
+static void ipa_route_hash_tuple_config(struct ipa *ipa)
+{
+	u32 route_mask;
+	u32 modem_mask;
+
+	BUILD_BUG_ON(!IPA_SMEM_MODEM_RT_COUNT);
+	BUILD_BUG_ON(IPA_SMEM_RT_COUNT < IPA_SMEM_MODEM_RT_COUNT);
+	BUILD_BUG_ON(IPA_SMEM_RT_COUNT >= BITS_PER_LONG);
+
+	/* Compute a mask representing non-modem routing table entries */
+	route_mask = GENMASK(IPA_SMEM_RT_COUNT - 1, 0);
+	modem_mask = GENMASK(IPA_SMEM_MODEM_RT_INDEX_MAX,
+			     IPA_SMEM_MODEM_RT_INDEX_MIN);
+	route_mask &= ~modem_mask;
+
+	while (route_mask) {
+		u32 route_id = __ffs(route_mask);
+
+		route_mask ^= BIT(route_id);
+
+		ipa_route_tuple_zero(ipa, route_id);
+	}
+}
+
+/**
+ * ipa_route_setup() - Initialize an empty routing table
+ * @ipa: IPA pointer
+ *
+ * Each entry in the routing table contains the DMA address of a route
+ * descriptor.  A special zero descriptor is allocated that represents "no
+ * route" and this function initializes all its entries to point at that
+ * zero route.  The zero route is allocated with the table, immediately past
+ * its end.
+ *
+ * Return:	0 if successful or -ENOMEM
+ */
+static int ipa_route_setup(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u64 zero_route_addr;
+	dma_addr_t addr;
+	u32 route_id;
+	size_t size;
+	u64 *virt;
+
+	BUILD_BUG_ON(!IPA_ROUTE_SIZE);
+	BUILD_BUG_ON(sizeof(*virt) != IPA_TABLE_ENTRY_SIZE);
+
+	/* Allocate the routing table, with enough space at the end of the
+	 * table to hold the zero route descriptor.  Initialize all route
+	 * table entries to point to the zero route.
+	 */
+	size = IPA_SMEM_RT_COUNT * IPA_TABLE_ENTRY_SIZE;
+	virt = dma_alloc_coherent(dev, size + IPA_ROUTE_SIZE, &addr,
+				  GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+	ipa->route_virt = virt;
+	ipa->route_addr = addr;
+
+	/* Zero route is immediately after the route table */
+	zero_route_addr = addr + size;
+
+	for (route_id = 0; route_id < IPA_SMEM_RT_COUNT; route_id++)
+		*virt++ = zero_route_addr;
+
+	ipa_cmd_route_config_ipv4(ipa, size);
+	ipa_cmd_route_config_ipv6(ipa, size);
+
+	ipa_route_hash_tuple_config(ipa);
+
+	/* Configure default route for exception packets */
+	ipa_endpoint_default_route_setup(ipa->default_endpoint);
+
+	return 0;
+}
+
+/**
+ * ipa_route_teardown() - Inverse of ipa_route_setup().
+ * @ipa: IPA pointer
+ */
+static void ipa_route_teardown(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	size_t size;
+
+	ipa_endpoint_default_route_teardown(ipa->default_endpoint);
+
+	size = IPA_SMEM_RT_COUNT * IPA_TABLE_ENTRY_SIZE;
+	size += IPA_ROUTE_SIZE;
+
+	dma_free_coherent(dev, size, ipa->route_virt, ipa->route_addr);
+	ipa->route_virt = NULL;
+	ipa->route_addr = 0;
+}
+
+/**
+ * ipa_filter_setup() - Initialize an empty filter table
+ * @ipa: IPA pointer
+ *
+ * The filter table consists of a bitmask representing which endpoints support
+ * filtering, followed by one table entry for each set bit in the mask.  Each
+ * entry in the filter table contains the DMA address of a filter descriptor.
+ * A special zero descriptor is allocated that represents "no filter" and this
+ * function initializes all its entries to point at that zero filter.  The
+ * zero filter is allocated with the table, immediately past its end.
+ *
+ * Return:	0 if successful or a negative error code
+ */
+static int ipa_filter_setup(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u64 zero_filter_addr;
+	u32 filter_count;
+	dma_addr_t addr;
+	size_t size;
+	u64 *virt;
+	u32 i;
+
+	BUILD_BUG_ON(!IPA_FILTER_SIZE);
+
+	/* Allocate the filter table, with an extra slot for the bitmap.  Also
+	 * allocate enough space at the end of the table to hold the zero
+	 * filter descriptor.  Initialize all filter table entries to point
+	 * to that.
+	 */
+	filter_count = hweight32(ipa->filter_support);
+	size = (filter_count + 1) * IPA_TABLE_ENTRY_SIZE;
+	virt = dma_alloc_coherent(dev, size + IPA_FILTER_SIZE, &addr,
+				  GFP_KERNEL);
+	if (!virt)
+		goto err_clear_filter_support;
+	ipa->filter_virt = virt;
+	ipa->filter_addr = addr;
+
+	/* Zero filter is immediately after the filter table */
+	zero_filter_addr = addr + size;
+
+	/* Save the filter table bitmap.  The "soft" bitmap value must be
+	 * converted to the hardware representation by shifting it left one
+	 * position.  (Bit 0 represents global filtering, which is possible
+	 * but not used.)
+	 */
+	*virt++ = ipa->filter_support << 1;
+
+	/* Now point every entry in the table at the empty filter */
+	for (i = 0; i < filter_count; i++)
+		*virt++ = zero_filter_addr;
+
+	ipa_cmd_filter_config_ipv4(ipa, size);
+	ipa_cmd_filter_config_ipv6(ipa, size);
+
+	ipa_filter_hash_tuple_config(ipa);
+
+	return 0;
+
+err_clear_filter_support:
+	ipa->filter_support = 0;
+
+	return -ENOMEM;
+}
+
+/**
+ * ipa_filter_teardown() - Inverse of ipa_filter_setup().
+ * @ipa: IPA pointer
+ */
+static void ipa_filter_teardown(struct ipa *ipa)
+{
+	u32 filter_count = hweight32(ipa->filter_support);
+	struct device *dev = &ipa->pdev->dev;
+	size_t size;
+
+	size = (filter_count + 1) * IPA_TABLE_ENTRY_SIZE;
+	size += IPA_FILTER_SIZE;
+
+	dma_free_coherent(dev, size, ipa->filter_virt, ipa->filter_addr);
+	ipa->filter_virt = NULL;
+	ipa->filter_addr = 0;
+	ipa->filter_support = 0;
+}
+
+/**
+ * ipa_suspend_handler() - Handle the suspend interrupt
+ * @ipa: IPA pointer
+ * @interrupt_id: Interrupt type
+ *
+ * When in a suspended state, the IPA can trigger a resume by sending a
+ * SUSPEND IPA interrupt.
+ */
+static void ipa_suspend_handler(struct ipa *ipa,
+				enum ipa_interrupt_id interrupt_id)
+{
+	/* Take a single clock reference to prevent suspend.  All
+	 * endpoints will be resumed as a result.  This reference will
+	 * be dropped when we get a power management suspend request.
+	 */
+	if (!atomic_xchg(&ipa->suspend_ref, 1))
+		ipa_clock_get(ipa->clock);
+
+	/* Acknowledge/clear the suspend interrupt on all endpoints */
+	ipa_interrupt_suspend_clear_all(ipa->interrupt);
+}
+
+/* Remoteproc callbacks for SSR events: prepare, start, stop, unprepare */
+int ipa_ssr_prepare(struct rproc_subdev *subdev)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_prepare);
+
+int ipa_ssr_start(struct rproc_subdev *subdev)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_start);
+
+void ipa_ssr_stop(struct rproc_subdev *subdev, bool crashed)
+{
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_stop);
+
+void ipa_ssr_unprepare(struct rproc_subdev *subdev)
+{
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_unprepare);
+
+/**
+ * ipa_setup() - Set up IPA hardware
+ * @ipa: IPA pointer
+ *
+ * Perform initialization that requires issuing immediate commands using the
+ * command TX endpoint.  This cannot be run until early initialization
+ * (including loading GSI firmware) is complete.
+ */
+int ipa_setup(struct ipa *ipa)
+{
+	struct ipa_endpoint *rx_endpoint;
+	struct ipa_endpoint *tx_endpoint;
+	int ret;
+
+	dev_dbg(&ipa->pdev->dev, "%s() started\n", __func__);
+
+	ret = gsi_setup(&ipa->gsi);
+	if (ret)
+		return ret;
+
+	ipa->interrupt = ipa_interrupt_setup(ipa);
+	if (IS_ERR(ipa->interrupt)) {
+		ret = PTR_ERR(ipa->interrupt);
+		goto err_gsi_teardown;
+	}
+	ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND,
+			  ipa_suspend_handler);
+
+	ipa_uc_setup(ipa);
+
+	ipa_endpoint_setup(ipa);
+
+	/* We need to use the AP command TX endpoint to perform other
+	 * initialization, so we set that up first.
+	 */
+	ret = ipa_endpoint_enable_one(ipa->command_endpoint);
+	if (ret)
+		goto err_endpoint_teardown;
+
+	ret = ipa_smem_setup(ipa);
+	if (ret)
+		goto err_command_disable;
+
+	ret = ipa_route_setup(ipa);
+	if (ret)
+		goto err_smem_teardown;
+
+	ret = ipa_filter_setup(ipa);
+	if (ret)
+		goto err_route_teardown;
+
+	ret = ipa_endpoint_enable_one(ipa->default_endpoint);
+	if (ret)
+		goto err_filter_teardown;
+
+	rx_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_MODEM_RX];
+	tx_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_MODEM_TX];
+	ipa->modem_netdev = ipa_netdev_setup(ipa, rx_endpoint, tx_endpoint);
+	if (IS_ERR(ipa->modem_netdev)) {
+		ret = PTR_ERR(ipa->modem_netdev);
+		goto err_default_disable;
+	}
+
+	ipa->setup_complete = 1;
+
+	dev_info(&ipa->pdev->dev, "IPA driver setup completed successfully\n");
+
+	return 0;
+
+err_default_disable:
+	ipa_endpoint_disable_one(ipa->default_endpoint);
+err_filter_teardown:
+	ipa_filter_teardown(ipa);
+err_route_teardown:
+	ipa_route_teardown(ipa);
+err_smem_teardown:
+	ipa_smem_teardown(ipa);
+err_command_disable:
+	ipa_endpoint_disable_one(ipa->command_endpoint);
+err_endpoint_teardown:
+	ipa_endpoint_teardown(ipa);
+	ipa_uc_teardown(ipa);
+	ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND);
+	ipa_interrupt_teardown(ipa->interrupt);
+err_gsi_teardown:
+	gsi_teardown(&ipa->gsi);
+
+	return ret;
+}
+
+/**
+ * ipa_teardown() - Inverse of ipa_setup()
+ * @ipa: IPA pointer
+ */
+static void ipa_teardown(struct ipa *ipa)
+{
+	ipa_netdev_teardown(ipa->modem_netdev);
+	ipa_endpoint_disable_one(ipa->default_endpoint);
+	ipa_filter_teardown(ipa);
+	ipa_route_teardown(ipa);
+	ipa_smem_teardown(ipa);
+	ipa_endpoint_disable_one(ipa->command_endpoint);
+	ipa_endpoint_teardown(ipa);
+	ipa_uc_teardown(ipa);
+	ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND);
+	ipa_interrupt_teardown(ipa->interrupt);
+	gsi_teardown(&ipa->gsi);
+}
+
+/**
+ * ipa_hardware_config() - Primitive hardware initialization
+ * @ipa: IPA pointer
+ */
+static void ipa_hardware_config(struct ipa *ipa)
+{
+	u32 val;
+
+	/* SDM845 has IPA version 3.5.1 */
+	val = IPA_BCR_REG_VAL;
+	iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
+
+	val = u32_encode_bits(8, GEN_QMB_0_MAX_WRITES_FMASK);
+	val |= u32_encode_bits(4, GEN_QMB_1_MAX_WRITES_FMASK);
+	iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET);
+
+	val = u32_encode_bits(8, GEN_QMB_0_MAX_READS_FMASK);
+	val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK);
+	iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_READS_OFFSET);
+}
+
+/**
+ * ipa_hardware_deconfig() - Inverse of ipa_hardware_config()
+ * @ipa: IPA pointer
+ *
+ * This restores the power-on reset values (even if they aren't different)
+ */
+static void ipa_hardware_deconfig(struct ipa *ipa)
+{
+	/* Values we program above are the same as the power-on reset values */
+}
+
+static void ipa_resource_config_src_one(struct ipa *ipa,
+					const struct ipa_resource_src *resource)
+{
+	u32 offset = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET;
+	u32 stride = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_STRIDE;
+	enum ipa_resource_type_src n = resource->type;
+	const struct ipa_resource_limits *xlimits;
+	const struct ipa_resource_limits *ylimits;
+	u32 val;
+
+	xlimits = &resource->limits[IPA_RESOURCE_GROUP_LWA_DL];
+	ylimits = &resource->limits[IPA_RESOURCE_GROUP_UL_DL];
+
+	val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK);
+	val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->max,
+			       Y_MAX_LIM_FMASK);
+
+	iowrite32(val, ipa->reg_virt + offset + n * stride);
+}
+
+static void ipa_resource_config_dst_one(struct ipa *ipa,
+					const struct ipa_resource_dst *resource)
+{
+	u32 offset = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET;
+	u32 stride = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_STRIDE;
+	enum ipa_resource_type_dst n = resource->type;
+	const struct ipa_resource_limits *xlimits;
+	const struct ipa_resource_limits *ylimits;
+	u32 val;
+
+	xlimits = &resource->limits[IPA_RESOURCE_GROUP_LWA_DL];
+	ylimits = &resource->limits[IPA_RESOURCE_GROUP_UL_DL];
+
+	val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK);
+	val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->max, Y_MAX_LIM_FMASK);
+
+	iowrite32(val, ipa->reg_virt + offset + n * stride);
+}
+
+static void
+ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data)
+{
+	const struct ipa_resource_src *resource_src;
+	const struct ipa_resource_dst *resource_dst;
+	u32 i;
+
+	resource_src = data->resource_src;
+	resource_dst = data->resource_dst;
+
+	for (i = 0; i < data->resource_src_count; i++)
+		ipa_resource_config_src_one(ipa, &resource_src[i]);
+
+	for (i = 0; i < data->resource_dst_count; i++)
+		ipa_resource_config_dst_one(ipa, &resource_dst[i]);
+}
+
+static void ipa_resource_deconfig(struct ipa *ipa)
+{
+	/* Nothing to do */
+}
+
+static void ipa_idle_indication_cfg(struct ipa *ipa,
+				    u32 enter_idle_debounce_thresh,
+				    bool const_non_idle_enable)
+{
+	u32 val;
+
+	val = u32_encode_bits(enter_idle_debounce_thresh,
+			      ENTER_IDLE_DEBOUNCE_THRESH_FMASK);
+	if (const_non_idle_enable)
+		val |= CONST_NON_IDLE_ENABLE_FMASK;
+
+	iowrite32(val, ipa->reg_virt + IPA_REG_IDLE_INDICATION_CFG_OFFSET);
+}
+
+/**
+ * ipa_dcd_config() - Enable dynamic clock division on IPA
+ * @ipa: IPA pointer
+ *
+ * Configures when the IPA signals it is idle to the global clock
+ * controller, which can respond by scaling down the clock to
+ * save power.
+ */
+static void ipa_dcd_config(struct ipa *ipa)
+{
+	/* Recommended values for IPA 3.5 according to IPA HPG */
+	ipa_idle_indication_cfg(ipa, 256, false);
+}
+
+static void ipa_dcd_deconfig(struct ipa *ipa)
+{
+	/* Power-on reset values */
+	ipa_idle_indication_cfg(ipa, 0, true);
+}
+
+/**
+ * ipa_config() - Configure IPA hardware
+ * @ipa: IPA pointer
+ * @data: IPA configuration data
+ *
+ * Perform initialization requiring IPA clock to be enabled.
+ */
+static int ipa_config(struct ipa *ipa, const struct ipa_data *data)
+{
+	u32 val;
+	int ret;
+
+	/* Get a clock reference to allow initialization.  This reference
+	 * is held after initialization completes, and won't get dropped
+	 * unless/until a system suspend request arrives.
+	 */
+	atomic_set(&ipa->suspend_ref, 1);
+	ipa_clock_get(ipa->clock);
+
+	ipa_hardware_config(ipa);
+
+	/* Ensure we support the number of endpoints supplied by hardware */
+	val = ioread32(ipa->reg_virt + IPA_REG_ENABLED_PIPES_OFFSET);
+	if (val > IPA_ENDPOINT_MAX) {
+		ret = -EINVAL;
+		goto err_hardware_deconfig;
+	}
+
+	ret = ipa_smem_config(ipa);
+	if (ret)
+		goto err_hardware_deconfig;
+
+	/* Assign resource limitation to each group */
+	ipa_resource_config(ipa, data->resource_data);
+
+	/* Note enabling dynamic clock division must not be
+	 * attempted for IPA hardware versions prior to 3.5.
+	 */
+	ipa_dcd_config(ipa);
+
+	return 0;
+
+err_hardware_deconfig:
+	ipa_hardware_deconfig(ipa);
+	ipa_clock_put(ipa->clock);
+
+	return ret;
+}
+
+/**
+ * ipa_deconfig() - Inverse of ipa_config()
+ * @ipa: IPA pointer
+ */
+static void ipa_deconfig(struct ipa *ipa)
+{
+	ipa_dcd_deconfig(ipa);
+	ipa_resource_deconfig(ipa);
+	ipa_smem_deconfig(ipa);
+	ipa_hardware_deconfig(ipa);
+
+	ipa_clock_put(ipa->clock);
+}
+
+static int ipa_firmware_load(struct device *dev)
+{
+	const struct firmware *fw;
+	struct device_node *node;
+	struct resource res;
+	phys_addr_t phys;
+	ssize_t size;
+	void *virt;
+	int ret;
+
+	node = of_parse_phandle(dev->of_node, "memory-region", 0);
+	if (!node) {
+		dev_err(dev, "memory-region not specified\n");
+		return -EINVAL;
+	}
+
+	ret = of_address_to_resource(node, 0, &res);
+	if (ret)
+		return ret;
+
+	ret = request_firmware(&fw, IPA_FWS_PATH, dev);
+	if (ret)
+		return ret;
+
+	phys = res.start;
+	size = (size_t)resource_size(&res);
+	virt = memremap(phys, size, MEMREMAP_WC);
+	if (!virt) {
+		ret = -ENOMEM;
+		goto out_release_firmware;
+	}
+
+	ret = qcom_mdt_load(dev, fw, IPA_FWS_PATH, IPA_PAS_ID,
+			    virt, phys, size, NULL);
+	if (!ret)
+		ret = qcom_scm_pas_auth_and_reset(IPA_PAS_ID);
+
+	memunmap(virt);
+out_release_firmware:
+	release_firmware(fw);
+
+	return ret;
+}
+
+static const struct of_device_id ipa_match[] = {
+	{
+		.compatible	= "qcom,sdm845-ipa",
+		.data		= &ipa_data_sdm845,
+	},
+	{ },
+};
+
+/**
+ * ipa_probe() - IPA platform driver probe function
+ * @pdev: Platform device pointer
+ *
+ * Return:	0 if successful, or a negative error code (possibly
+ *		EPROBE_DEFER)
+ *
+ * This is the main entry point for the IPA driver.  When successful, it
+ * initializes the IPA hardware for use.
+ *
+ * Initialization proceeds in several stages.  The "init" stage involves
+ * activities that can be initialized without access to the IPA hardware.
+ * The "config" stage requires the IPA clock to be active so IPA registers
+ * can be accessed, but does not require access to the GSI layer.  The
+ * "setup" stage requires access to GSI, and includes initialization that's
+ * performed by issuing IPA immediate commands.
+ */
+static int ipa_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	const struct ipa_data *data;
+	struct ipa *ipa;
+	bool modem_init;
+	int ret;
+
+	/* We assume we're working on 64-bit hardware */
+	BUILD_BUG_ON(!IS_ENABLED(CONFIG_64BIT));
+	BUILD_BUG_ON(ARCH_DMA_MINALIGN % IPA_TABLE_ALIGN);
+
+	data = of_device_get_match_data(dev);
+
+	modem_init = of_property_read_bool(dev->of_node, "modem-init");
+
+	/* If we need Trust Zone, make sure it's ready */
+	if (!modem_init)
+		if (!qcom_scm_is_available())
+			return -EPROBE_DEFER;
+
+	ipa = kzalloc(sizeof(*ipa), GFP_KERNEL);
+	if (!ipa)
+		return -ENOMEM;
+	ipa->pdev = pdev;
+	dev_set_drvdata(dev, ipa);
+
+	/* Initialize the clock and interconnects early.  They might
+	 * not be ready when we're probed, so might return -EPROBE_DEFER.
+	 */
+	atomic_set(&ipa->suspend_ref, 0);
+
+	ipa->clock = ipa_clock_init(ipa);
+	if (IS_ERR(ipa->clock)) {
+		ret = PTR_ERR(ipa->clock);
+		goto err_free_ipa;
+	}
+
+	ret = ipa_mem_init(ipa);
+	if (ret)
+		goto err_clock_exit;
+
+	ret = gsi_init(&ipa->gsi, pdev, data->endpoint_data_count,
+		       data->endpoint_data);
+	if (ret)
+		goto err_mem_exit;
+
+	ipa->smp2p = ipa_smp2p_init(ipa, modem_init);
+	if (IS_ERR(ipa->smp2p)) {
+		ret = PTR_ERR(ipa->smp2p);
+		goto err_gsi_exit;
+	}
+
+	ret = ipa_endpoint_init(ipa, data->endpoint_data_count,
+				data->endpoint_data);
+	if (ret)
+		goto err_smp2p_exit;
+	ipa->command_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_COMMAND_TX];
+	ipa->default_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_LAN_RX];
+
+	/* Create a wakeup source. */
+	wakeup_source_init(&ipa->wakeup, "ipa");
+
+	/* Proceed to real initialization */
+	ret = ipa_config(ipa, data);
+	if (ret)
+		goto err_endpoint_exit;
+
+	dev_info(dev, "IPA driver initialized\n");
+
+	/* If the modem is verifying and loading firmware, we're
+	 * done.  We will receive an SMP2P interrupt when it is OK
+	 * to proceed with the setup phase (involving issuing
+	 * immediate commands after GSI is initialized).
+	 */
+	if (modem_init)
+		return 0;
+
+	/* Otherwise we need to load the firmware and have Trust
+	 * Zone validate and install it.  If that succeeds we can
+	 * proceed with setup.
+	 */
+	ret = ipa_firmware_load(dev);
+	if (ret)
+		goto err_deconfig;
+
+	ret = ipa_setup(ipa);
+	if (ret)
+		goto err_deconfig;
+
+	return 0;
+
+err_deconfig:
+	ipa_deconfig(ipa);
+err_endpoint_exit:
+	wakeup_source_remove(&ipa->wakeup);
+	ipa_endpoint_exit(ipa);
+err_smp2p_exit:
+	ipa_smp2p_exit(ipa->smp2p);
+err_gsi_exit:
+	gsi_exit(&ipa->gsi);
+err_mem_exit:
+	ipa_mem_exit(ipa);
+err_clock_exit:
+	ipa_clock_exit(ipa->clock);
+err_free_ipa:
+	kfree(ipa);
+
+	return ret;
+}
+
+static int ipa_remove(struct platform_device *pdev)
+{
+	struct ipa *ipa = dev_get_drvdata(&pdev->dev);
+
+	ipa_smp2p_disable(ipa->smp2p);
+	if (ipa->setup_complete)
+		ipa_teardown(ipa);
+
+	ipa_deconfig(ipa);
+	wakeup_source_remove(&ipa->wakeup);
+	ipa_endpoint_exit(ipa);
+	ipa_smp2p_exit(ipa->smp2p);
+	ipa_mem_exit(ipa);
+	ipa_clock_exit(ipa->clock);
+	kfree(ipa);
+
+	return 0;
+}
+
+/**
+ * ipa_suspend() - Power management system suspend callback
+ * @dev: IPA device structure
+ *
+ * Return:	Zero
+ *
+ * Called by the PM framework when a system suspend operation is invoked.
+ */
+int ipa_suspend(struct device *dev)
+{
+	struct ipa *ipa = dev_get_drvdata(dev);
+
+	ipa_clock_put(ipa->clock);
+	atomic_set(&ipa->suspend_ref, 0);
+
+	return 0;
+}
+
+/**
+ * ipa_resume() - Power management system resume callback
+ * @dev: IPA device structure
+ *
+ * Return:	Always returns 0
+ *
+ * Called by the PM framework when a system resume operation is invoked.
+ */
+int ipa_resume(struct device *dev)
+{
+	struct ipa *ipa = dev_get_drvdata(dev);
+
+	/* This clock reference will keep the IPA out of suspend
+	 * until we get a power management suspend request.
+	 */
+	atomic_set(&ipa->suspend_ref, 1);
+	ipa_clock_get(ipa->clock);
+
+	return 0;
+}
+
+static const struct dev_pm_ops ipa_pm_ops = {
+	.suspend_noirq	= ipa_suspend,
+	.resume_noirq	= ipa_resume,
+};
+
+static struct platform_driver ipa_driver = {
+	.probe	= ipa_probe,
+	.remove	= ipa_remove,
+	.driver	= {
+		.name		= "ipa",
+		.owner		= THIS_MODULE,
+		.pm		= &ipa_pm_ops,
+		.of_match_table	= ipa_match,
+	},
+};
+
+module_platform_driver(ipa_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Qualcomm IP Accelerator device driver");
diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
new file mode 100644
index 000000000000..8d04db6f7b00
--- /dev/null
+++ b/drivers/net/ipa/ipa_reg.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_REG_H_
+#define _IPA_REG_H_
+
+#include
+
+/**
+ * DOC: IPA Registers
+ *
+ * IPA registers are located within the "ipa" address space defined by
+ * Device Tree.  The offset of each register within that space is specified
+ * by symbols defined below.  The address space is mapped to virtual memory
+ * space in ipa_mem_init().  All IPA registers are 32 bits wide.
+ *
+ * Certain register types are duplicated for a number of instances of
+ * something.  For example, each IPA endpoint has a set of registers
+ * defining its configuration.  The offset to an endpoint's set of registers
+ * is computed based on a "base" offset plus an additional "stride" offset
+ * that's dependent on the endpoint's ID.  For such registers, the offset
+ * is computed by a function-like macro that takes a parameter used in
+ * the computation.
+ *
+ * The offset of a register dependent on execution environment is computed
+ * by a macro that is supplied a parameter "ee".  The "ee" value is a member
+ * of the gsi_ee enumerated type.
+ *
+ * The offset of a register dependent on endpoint id is computed by a macro
+ * that is supplied a parameter "ep".  The "ep" value must be less than
+ * IPA_ENDPOINT_MAX.
+ *
+ * The offset of registers related to hashed filter and router tables is
+ * computed by a macro that is supplied a parameter "er".  The "er" represents
+ * an endpoint ID for filters, or a route ID for routes.  For filters, the
+ * endpoint ID must be less than IPA_ENDPOINT_MAX, but is further restricted
+ * because not all endpoints support filtering.  For routes, the route ID
+ * must be less than IPA_SMEM_RT_COUNT.
+ *
+ * Some registers encode multiple fields within them.  For these, each field
+ * has a symbol below defining a mask that defines both the position and
+ * width of the field within its register.
+ */
+
+#define IPA_REG_ENABLED_PIPES_OFFSET			0x00000038
+
+#define IPA_REG_ROUTE_OFFSET				0x00000048
+#define ROUTE_DIS_FMASK				GENMASK(0, 0)
+#define ROUTE_DEF_PIPE_FMASK			GENMASK(5, 1)
+#define ROUTE_DEF_HDR_TABLE_FMASK		GENMASK(6, 6)
+#define ROUTE_DEF_HDR_OFST_FMASK		GENMASK(16, 7)
+#define ROUTE_FRAG_DEF_PIPE_FMASK		GENMASK(21, 17)
+#define ROUTE_DEF_RETAIN_HDR_FMASK		GENMASK(24, 24)
+
+#define IPA_REG_SHARED_MEM_SIZE_OFFSET			0x00000054
+#define SHARED_MEM_SIZE_FMASK			GENMASK(15, 0)
+#define SHARED_MEM_BADDR_FMASK			GENMASK(31, 16)
+
+#define IPA_REG_QSB_MAX_WRITES_OFFSET			0x00000074
+#define GEN_QMB_0_MAX_WRITES_FMASK		GENMASK(3, 0)
+#define GEN_QMB_1_MAX_WRITES_FMASK		GENMASK(7, 4)
+
+#define IPA_REG_QSB_MAX_READS_OFFSET			0x00000078
+#define GEN_QMB_0_MAX_READS_FMASK		GENMASK(3, 0)
+#define GEN_QMB_1_MAX_READS_FMASK		GENMASK(7, 4)
+
+#define IPA_REG_STATE_AGGR_ACTIVE_OFFSET		0x0000010c
+
+#define IPA_REG_BCR_OFFSET				0x000001d0
+
+#define IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET	0x000001e8
+
+#define IPA_REG_AGGR_FORCE_CLOSE_OFFSET			0x000001ec
+#define PIPE_BITMAP_FMASK			GENMASK(19, 0)
+
+#define IPA_REG_IDLE_INDICATION_CFG_OFFSET		0x00000220
+#define ENTER_IDLE_DEBOUNCE_THRESH_FMASK	GENMASK(15, 0)
+#define CONST_NON_IDLE_ENABLE_FMASK		GENMASK(16, 16)
+
+#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET	0x00000400
+#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_STRIDE	0x0020
+#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET	0x00000500
+#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_STRIDE	0x0020
+#define X_MIN_LIM_FMASK				GENMASK(5, 0)
+#define X_MAX_LIM_FMASK				GENMASK(13, 8)
+#define Y_MIN_LIM_FMASK				GENMASK(21, 16)
+#define Y_MAX_LIM_FMASK				GENMASK(29, 24)
+
+#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \
+					(0x00000800 + 0x0070 * (ep))
+#define ENDP_SUSPEND_FMASK			GENMASK(0, 0)
+#define ENDP_DELAY_FMASK			GENMASK(1, 1)
+
+#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \
+					(0x00000808 + 0x0070 * (ep))
+#define FRAG_OFFLOAD_EN_FMASK			GENMASK(0, 0)
+#define CS_OFFLOAD_EN_FMASK			GENMASK(2, 1)
+#define CS_METADATA_HDR_OFFSET_FMASK		GENMASK(6, 3)
+#define CS_GEN_QMB_MASTER_SEL_FMASK		GENMASK(8, 8)
+
+#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \
+					(0x00000810 + 0x0070 * (ep))
+#define HDR_LEN_FMASK				GENMASK(5, 0)
+#define HDR_OFST_METADATA_VALID_FMASK		GENMASK(6, 6)
+#define HDR_OFST_METADATA_FMASK			GENMASK(12, 7)
+#define HDR_ADDITIONAL_CONST_LEN_FMASK		GENMASK(18, 13)
+#define HDR_OFST_PKT_SIZE_VALID_FMASK		GENMASK(19, 19)
+#define HDR_OFST_PKT_SIZE_FMASK			GENMASK(25, 20)
+#define HDR_A5_MUX_FMASK			GENMASK(26, 26)
+#define HDR_LEN_INC_DEAGG_HDR_FMASK		GENMASK(27, 27)
+#define HDR_METADATA_REG_VALID_FMASK		GENMASK(28, 28)
+
+#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \
+					(0x00000814 + 0x0070 * (ep))
+#define HDR_ENDIANNESS_FMASK			GENMASK(0, 0)
+#define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK	GENMASK(1, 1)
+#define HDR_TOTAL_LEN_OR_PAD_FMASK		GENMASK(2, 2)
+#define HDR_PAYLOAD_LEN_INC_PADDING_FMASK	GENMASK(3, 3)
+#define HDR_TOTAL_LEN_OR_PAD_OFFSET_FMASK	GENMASK(9, 4)
+#define HDR_PAD_TO_ALIGNMENT_FMASK		GENMASK(13, 10)
+
+#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(ep) \
+					(0x00000818 + 0x0070 * (ep))
+
+#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \
+					(0x00000824 + 0x0070 * (ep))
+#define AGGR_EN_FMASK				GENMASK(1, 0)
+#define AGGR_TYPE_FMASK				GENMASK(4, 2)
+#define AGGR_BYTE_LIMIT_FMASK			GENMASK(9, 5)
+#define AGGR_TIME_LIMIT_FMASK			GENMASK(14, 10)
+#define AGGR_PKT_LIMIT_FMASK			GENMASK(20, 15)
+#define AGGR_SW_EOF_ACTIVE_FMASK		GENMASK(21, 21)
+#define AGGR_FORCE_CLOSE_FMASK			GENMASK(22, 22)
+#define AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK	GENMASK(24, 24)
+
+#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(ep) \
+					(0x00000820 + 0x0070 * (ep))
+#define MODE_FMASK				GENMASK(2, 0)
+#define DEST_PIPE_INDEX_FMASK			GENMASK(8, 4)
+#define BYTE_THRESHOLD_FMASK			GENMASK(27, 12)
+#define PIPE_REPLICATION_EN_FMASK		GENMASK(28, 28)
+#define PAD_EN_FMASK				GENMASK(29, 29)
+#define HDR_FTCH_DISABLE_FMASK			GENMASK(30, 30)
+
+#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(ep) \
+					(0x00000834 + 0x0070 * (ep))
+#define DEAGGR_HDR_LEN_FMASK			GENMASK(5, 0)
+#define PACKET_OFFSET_VALID_FMASK		GENMASK(7, 7)
+#define PACKET_OFFSET_LOCATION_FMASK		GENMASK(13, 8)
+#define MAX_PACKET_LEN_FMASK			GENMASK(31, 16)
+
+#define IPA_REG_ENDP_INIT_SEQ_N_OFFSET(ep) \
+					(0x0000083c + 0x0070 * (ep))
+#define HPS_SEQ_TYPE_FMASK			GENMASK(3, 0)
+#define DPS_SEQ_TYPE_FMASK			GENMASK(7, 4)
+#define HPS_REP_SEQ_TYPE_FMASK			GENMASK(11, 8)
+#define DPS_REP_SEQ_TYPE_FMASK			GENMASK(15, 12)
+
+#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \
+					(0x00000840 + 0x0070 * (ep))
+#define STATUS_EN_FMASK				GENMASK(0, 0)
+#define STATUS_ENDP_FMASK			GENMASK(5, 1)
+#define STATUS_LOCATION_FMASK			GENMASK(8, 8)
+#define STATUS_PKT_SUPPRESS_FMASK		GENMASK(9, 9)
+
+/* "er" is either an endpoint id (for filters) or a route id (for routes) */
+#define IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(er) \
+					(0x0000085c + 0x0070 * (er))
+#define FILTER_HASH_MSK_SRC_ID_FMASK		GENMASK(0, 0)
+#define FILTER_HASH_MSK_SRC_IP_FMASK		GENMASK(1, 1)
+#define FILTER_HASH_MSK_DST_IP_FMASK		GENMASK(2, 2)
+#define FILTER_HASH_MSK_SRC_PORT_FMASK		GENMASK(3, 3)
+#define FILTER_HASH_MSK_DST_PORT_FMASK		GENMASK(4, 4)
+#define FILTER_HASH_MSK_PROTOCOL_FMASK		GENMASK(5, 5)
+#define FILTER_HASH_MSK_METADATA_FMASK		GENMASK(6, 6)
+#define FILTER_HASH_UNDEFINED1_FMASK		GENMASK(15, 7)
+#define IPA_REG_ENDP_FILTER_HASH_MSK_ALL	GENMASK(15, 0)
+
+#define ROUTER_HASH_MSK_SRC_ID_FMASK		GENMASK(16, 16)
+#define ROUTER_HASH_MSK_SRC_IP_FMASK		GENMASK(17, 17)
+#define ROUTER_HASH_MSK_DST_IP_FMASK		GENMASK(18, 18)
+#define ROUTER_HASH_MSK_SRC_PORT_FMASK		GENMASK(19, 19)
+#define ROUTER_HASH_MSK_DST_PORT_FMASK		GENMASK(20, 20)
+#define ROUTER_HASH_MSK_PROTOCOL_FMASK		GENMASK(21, 21)
+#define ROUTER_HASH_MSK_METADATA_FMASK		GENMASK(22, 22)
+#define ROUTER_HASH_UNDEFINED2_FMASK		GENMASK(31, 23)
+#define IPA_REG_ENDP_ROUTER_HASH_MSK_ALL	GENMASK(31, 16)
+
+#define IPA_REG_IRQ_STTS_OFFSET \
+				IPA_REG_IRQ_STTS_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_STTS_EE_N_OFFSET(ee) \
+					(0x00003008 + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_EN_OFFSET \
+				IPA_REG_IRQ_EN_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_EN_EE_N_OFFSET(ee) \
+					(0x0000300c + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_CLR_OFFSET \
+				IPA_REG_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_CLR_EE_N_OFFSET(ee) \
+					(0x00003010 + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_UC_OFFSET \
+				IPA_REG_IRQ_UC_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_UC_EE_N_OFFSET(ee) \
+					(0x0000301c + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_SUSPEND_INFO_OFFSET \
+				IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(ee) \
+					(0x00003030 + 0x1000 * (ee))
+
+#define IPA_REG_SUSPEND_IRQ_EN_OFFSET \
+				IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(ee) \
+					(0x00003034 + 0x1000 * (ee))
+
+#define IPA_REG_SUSPEND_IRQ_CLR_OFFSET \
+				IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(ee) \
+					(0x00003038 + 0x1000 * (ee))
+
+/** enum ipa_cs_offload_en - checksum offload field in ENDP_INIT_CFG_N */
+enum ipa_cs_offload_en {
+	IPA_CS_OFFLOAD_NONE	= 0,
IPA_CS_OFFLOAD_UL	= 1,
+	IPA_CS_OFFLOAD_DL	= 2,
+	IPA_CS_RSVD
+};
+
+/** enum ipa_aggr_en - aggregation enable field in ENDP_INIT_AGGR_N */
+enum ipa_aggr_en {
+	IPA_BYPASS_AGGR		= 0,
+	IPA_ENABLE_AGGR		= 1,
+	IPA_ENABLE_DEAGGR	= 2,
+};
+
+/** enum ipa_aggr_type - aggregation type field in ENDP_INIT_AGGR_N */
+enum ipa_aggr_type {
+	IPA_MBIM_16	= 0,
+	IPA_HDLC	= 1,
+	IPA_TLP		= 2,
+	IPA_RNDIS	= 3,
+	IPA_GENERIC	= 4,
+	IPA_QCMAP	= 6,
+};
+
+/** enum ipa_mode - mode field in ENDP_INIT_MODE_N */
+enum ipa_mode {
+	IPA_BASIC			= 0,
+	IPA_ENABLE_FRAMING_HDLC		= 1,
+	IPA_ENABLE_DEFRAMING_HDLC	= 2,
+	IPA_DMA				= 3,
+};
+
+/**
+ * enum ipa_seq_type - HPS and DPS sequencer type fields in ENDP_INIT_SEQ_N
+ * @IPA_SEQ_DMA_ONLY:	only DMA is performed
+ * @IPA_SEQ_PKT_PROCESS_NO_DEC_UCP:
+ *	packet processing + no decipher + microcontroller (Ethernet Bridging)
+ * @IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP:
+ *	second packet processing pass + no decipher + microcontroller
+ * @IPA_SEQ_DMA_DEC:	DMA + cipher/decipher
+ * @IPA_SEQ_DMA_COMP_DECOMP:	DMA + compression/decompression
+ * @IPA_SEQ_INVALID:	invalid sequencer type
+ */
+enum ipa_seq_type {
+	IPA_SEQ_DMA_ONLY			= 0x00,
+	IPA_SEQ_PKT_PROCESS_NO_DEC_UCP		= 0x02,
+	IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP	= 0x04,
+	IPA_SEQ_DMA_DEC				= 0x11,
+	IPA_SEQ_DMA_COMP_DECOMP			= 0x20,
+	IPA_SEQ_INVALID				= 0xff,
+};
+
+#endif /* _IPA_REG_H_ */

From patchwork Fri May 31 03:53:35 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 10969569
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
 ilias.apalodimas@linaro.org
Subject: [PATCH v2 04/17] soc: qcom: ipa: configuration data
Date: Thu, 30 May 2019 22:53:35 -0500
Message-Id: <20190531035348.7194-5-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

This patch defines configuration data that is used to specify some of
the details of IPA hardware supported by the driver.  It is built as
Device Tree match data, discovered at boot time.  Initially the driver
only supports the Qualcomm SDM845 SoC.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_data-sdm845.c | 245 +++++++++++++++++++++++++++
 drivers/net/ipa/ipa_data.h        | 267 ++++++++++++++++++++++++++++++
 2 files changed, 512 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_data-sdm845.c
 create mode 100644 drivers/net/ipa/ipa_data.h

diff --git a/drivers/net/ipa/ipa_data-sdm845.c b/drivers/net/ipa/ipa_data-sdm845.c
new file mode 100644
index 000000000000..62c0f25f5161
--- /dev/null
+++ b/drivers/net/ipa/ipa_data-sdm845.c
@@ -0,0 +1,245 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+
+#include "gsi.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+
+/* Differentiate Boolean from numerical options */
+#define NO	0
+#define YES	1
+
+/* Endpoint configuration for the SDM845 SoC.
*/ +static const struct gsi_ipa_endpoint_data gsi_ipa_endpoint_data[] = { + { + .ee_id = GSI_EE_AP, + .channel_id = 4, + .endpoint_id = IPA_ENDPOINT_AP_COMMAND_TX, + .toward_ipa = YES, + .channel = { + .tlv_count = 20, + .wrr_priority = YES, + .tre_count = 256, + .event_count = 512, + }, + .endpoint = { + .seq_type = IPA_SEQ_DMA_ONLY, + .config = { + .dma_mode = YES, + .dma_endpoint = IPA_ENDPOINT_AP_LAN_RX, + }, + }, + }, + { + .ee_id = GSI_EE_AP, + .channel_id = 5, + .endpoint_id = IPA_ENDPOINT_AP_LAN_RX, + .toward_ipa = NO, + .channel = { + .tlv_count = 8, + .tre_count = 256, + .event_count = 256, + }, + .endpoint = { + .seq_type = IPA_SEQ_INVALID, + .config = { + .checksum = YES, + .aggregation = YES, + .status_enable = YES, + .rx = { + .pad_align = ilog2(sizeof(u32)), + }, + }, + }, + }, + { + .ee_id = GSI_EE_AP, + .channel_id = 3, + .endpoint_id = IPA_ENDPOINT_AP_MODEM_TX, + .toward_ipa = YES, + .channel = { + .tlv_count = 16, + .tre_count = 512, + .event_count = 512, + }, + .endpoint = { + .support_flt = YES, + .seq_type = + IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .config = { + .checksum = YES, + .qmap = YES, + .status_enable = YES, + .tx = { + .delay = YES, + .status_endpoint = + IPA_ENDPOINT_MODEM_AP_RX, + }, + }, + }, + }, + { + .ee_id = GSI_EE_AP, + .channel_id = 6, + .endpoint_id = IPA_ENDPOINT_AP_MODEM_RX, + .toward_ipa = NO, + .channel = { + .tlv_count = 8, + .tre_count = 256, + .event_count = 256, + }, + .endpoint = { + .seq_type = IPA_SEQ_INVALID, + .config = { + .checksum = YES, + .qmap = YES, + .aggregation = YES, + .rx = { + .aggr_close_eof = YES, + }, + }, + }, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 1, + .endpoint_id = IPA_ENDPOINT_MODEM_COMMAND_TX, + .toward_ipa = YES, + .endpoint = { + .seq_type = IPA_SEQ_PKT_PROCESS_NO_DEC_UCP, + }, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 0, + .endpoint_id = IPA_ENDPOINT_MODEM_LAN_TX, + .toward_ipa = YES, + .endpoint = { + .support_flt = YES, + }, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 3, + .endpoint_id = IPA_ENDPOINT_MODEM_LAN_RX, + .toward_ipa = NO, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 4, + .endpoint_id = IPA_ENDPOINT_MODEM_AP_TX, + .toward_ipa = YES, + .endpoint = { + .support_flt = YES, + }, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 2, + .endpoint_id = IPA_ENDPOINT_MODEM_AP_RX, + .toward_ipa = NO, + }, +}; + +static const struct ipa_resource_src ipa_resource_src[] = { + { + .type = IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 1, + .max = 63, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 1, + .max = 63, + }, + }, + { + .type = IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 10, + .max = 10, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 10, + .max = 10, + }, + }, + { + .type = IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 12, + .max = 12, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 14, + .max = 14, + }, + }, + { + .type = IPA_RESOURCE_TYPE_SRC_HPS_DMARS, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 0, + .max = 63, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 0, + .max = 63, + }, + }, + { + .type = IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 14, + .max = 14, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 20, + .max = 20, + }, + }, +}; + +static const struct ipa_resource_dst ipa_resource_dst[] = { + { + .type = IPA_RESOURCE_TYPE_DST_DATA_SECTORS, + 
.limits[IPA_RESOURCE_GROUP_LWA_DL] = {
+			.min = 4,
+			.max = 4,
+		},
+		.limits[IPA_RESOURCE_GROUP_UL_DL] = {
+			.min = 4,
+			.max = 4,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_DST_DPS_DMARS,
+		.limits[IPA_RESOURCE_GROUP_LWA_DL] = {
+			.min = 2,
+			.max = 63,
+		},
+		.limits[IPA_RESOURCE_GROUP_UL_DL] = {
+			.min = 1,
+			.max = 63,
+		},
+	},
+};
+
+/* Resource configuration for the SDM845 SoC. */
+static const struct ipa_resource_data ipa_resource_data = {
+	.resource_src		= ipa_resource_src,
+	.resource_src_count	= ARRAY_SIZE(ipa_resource_src),
+	.resource_dst		= ipa_resource_dst,
+	.resource_dst_count	= ARRAY_SIZE(ipa_resource_dst),
+};
+
+/* Configuration data for the SDM845 SoC. */
+const struct ipa_data ipa_data_sdm845 = {
+	.endpoint_data		= gsi_ipa_endpoint_data,
+	.endpoint_data_count	= ARRAY_SIZE(gsi_ipa_endpoint_data),
+	.resource_data		= &ipa_resource_data,
+};
diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
new file mode 100644
index 000000000000..f7669f73efc3
--- /dev/null
+++ b/drivers/net/ipa/ipa_data.h
@@ -0,0 +1,267 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_DATA_H_
+#define _IPA_DATA_H_
+
+#include
+
+#include "ipa_endpoint.h"
+
+/**
+ * DOC: IPA/GSI Configuration Data
+ *
+ * Boot-time configuration data is used to define the configuration of the
+ * IPA and GSI resources to use for a given platform.  This data is supplied
+ * via the Device Tree match table, associated with a particular compatible
+ * string.  The data defines information about resources, endpoints, and
+ * channels.  For endpoints and channels, the configuration data defines how
+ * these hardware entities are initially configured; in almost all cases
+ * this configuration never changes afterward.
+ *
+ * Resources are data structures used internally by the IPA hardware.  The
+ * configuration data defines the number (or limits of the number) of various
+ * types of these resources.
+ *
+ * Endpoint configuration data defines properties of both IPA endpoints and
+ * GSI channels.  A channel is a GSI construct, and represents a single
+ * communication path between the IPA and a particular execution environment
+ * (EE), such as the AP or Modem.  Each EE has a set of channels associated
+ * with it, and each channel has an ID unique for that EE.  Only GSI channels
+ * associated with the AP are of concern to this driver.
+ *
+ * An endpoint is an IPA construct representing a single channel anywhere
+ * within the system.  As such, an IPA endpoint ID maps directly to an
+ * (EE, channel_id) pair.  Generally, this driver is concerned only with
+ * endpoints associated with the AP; however, this will change when support
+ * for routing (etc.) is added.  IPA endpoint and GSI channel configuration
+ * data are defined together, establishing the endpoint_id->(EE, channel_id)
+ * mapping.
+ *
+ * Endpoint configuration data consists of three parts: properties that
+ * are common to IPA and GSI (EE ID, channel ID, endpoint ID, and direction);
+ * properties associated with the GSI channel; and properties associated with
+ * the IPA endpoint.
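+ *
+ * As a sketch of how this mapping might be consumed (the helper below is
+ * hypothetical, not part of this patch), resolving an endpoint ID to its
+ * configuration is a linear search of the match data:
+ *
+ *	static const struct gsi_ipa_endpoint_data *
+ *	find_endpoint_data(const struct ipa_data *data, u32 endpoint_id)
+ *	{
+ *		u32 i;
+ *
+ *		for (i = 0; i < data->endpoint_data_count; i++)
+ *			if (data->endpoint_data[i].endpoint_id == endpoint_id)
+ *				return &data->endpoint_data[i];
+ *
+ *		return NULL;
+ *	}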
+ */
+
+/**
+ * struct gsi_channel_data - GSI channel configuration data
+ * @tlv_count:	number of entries in channel's TLV FIFO
+ * @wrr_priority: whether channel gets priority (AP command TX only)
+ * @tre_count:	number of TREs in the channel ring
+ * @event_count: number of slots in the associated event ring
+ *
+ * A GSI channel is a unidirectional means of transferring data to or from
+ * (and through) the IPA.  A GSI channel has a fixed number of "transfer
+ * elements" (TREs) that specify individual commands.  A set of commands
+ * is provided to a GSI channel, and when they complete the GSI generates
+ * an event (and an interrupt) to signal their completion.  These event
+ * structures are managed in a fixed-size event ring.
+ *
+ * Each GSI channel is fed by a FIFO of type/length/value (TLV) structures,
+ * and the number of entries in this FIFO limits the number of TREs that can
+ * be included in a single transaction.
+ *
+ * The GSI does weighted round-robin servicing of its channels, and it's
+ * possible to adjust a channel's priority of service.  Only the AP command
+ * TX channel specifies that it should get priority.
+ */
+struct gsi_channel_data {
+	u32 tlv_count;
+
+	u32 wrr_priority;
+	u32 tre_count;
+	u32 event_count;
+};
+
+/**
+ * struct ipa_endpoint_tx_data - configuration data for TX endpoints
+ * @delay:	whether endpoint starts in delay mode
+ * @status_endpoint: endpoint to which status elements are sent
+ *
+ * Delay mode prevents an endpoint from transmitting anything, even if
+ * commands have been presented to the hardware.  Once the endpoint exits
+ * delay mode, queued transfer commands are sent.
+ *
+ * The @status_endpoint is only valid if the endpoint's @status_enable
+ * flag is set.
+ */
+struct ipa_endpoint_tx_data {
+	u32 delay;
+	enum ipa_endpoint_id status_endpoint;
+};
+
+/**
+ * struct ipa_endpoint_rx_data - configuration data for RX endpoints
+ * @pad_align:	power-of-2 boundary to which packet payload is aligned
+ * @aggr_close_eof: whether aggregation closes on end-of-frame
+ *
+ * With each packet it transfers, the IPA hardware can perform certain
+ * transformations of its packet data.  One of these is adding pad bytes
+ * to the end of the packet data so the result ends on a power-of-2 boundary.
+ *
+ * It is also able to aggregate multiple packets into a single receive buffer.
+ * Aggregation is "open" while a buffer is being filled, and "closes" when
+ * certain criteria are met.  One of those criteria is the sender indicating
+ * a "frame" consisting of several transfers has ended.
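+ *
+ * For example, the SDM845 configuration in this series sets @pad_align
+ * to ilog2(sizeof(u32)), i.e. a 4-byte boundary, so a sketch of the
+ * resulting padded length (illustrative only; "len" is assumed) is:
+ *
+ *	u32 align = 1U << pad_align;		(4 for SDM845)
+ *	u32 padded_len = ALIGN(len, align);	(e.g. 5 bytes -> 8)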
+ */
+struct ipa_endpoint_rx_data {
+	u32 pad_align;
+	u32 aggr_close_eof;
+};
+
+/**
+ * struct ipa_endpoint_config_data - IPA endpoint hardware configuration
+ * @checksum:	whether checksum offload is enabled
+ * @qmap:	whether endpoint uses QMAP protocol
+ * @aggregation: whether endpoint supports aggregation
+ * @dma_mode:	whether endpoint operates in DMA mode
+ * @dma_endpoint: peer endpoint, if operating in DMA mode
+ * @status_enable: whether status elements are generated for endpoint
+ * @tx:		TX-specific endpoint information (see above)
+ * @rx:		RX-specific endpoint information (see above)
+ */
+struct ipa_endpoint_config_data {
+	u32 checksum;
+	u32 qmap;
+	u32 aggregation;
+	u32 dma_mode;
+	enum ipa_endpoint_id dma_endpoint;
+	u32 status_enable;
+	union {
+		struct ipa_endpoint_tx_data tx;
+		struct ipa_endpoint_rx_data rx;
+	};
+};
+
+/**
+ * struct ipa_endpoint_data - IPA endpoint configuration data
+ * @support_flt: whether endpoint supports filtering
+ * @seq_type:	hardware sequencer type used for endpoint
+ * @config:	hardware configuration (see above)
+ *
+ * Not all endpoints support the IPA filtering capability.  A filter table
+ * defines the filters to apply for those endpoints that support it.  The
+ * AP is responsible for initializing this table, and it must include entries
+ * for non-AP endpoints.  For this reason we define *all* endpoints used
+ * in the system, and indicate whether they support filtering.
+ *
+ * The remaining endpoint configuration data applies only to AP endpoints.
+ * The IPA hardware is implemented by sequencers, and the AP must program
+ * the type(s) of these sequencers at initialization time.  The remaining
+ * endpoint configuration data is defined above.
+ */
+struct ipa_endpoint_data {
+	u32 support_flt;
+	/* The rest are specified only for AP endpoints */
+	enum ipa_seq_type seq_type;
+	struct ipa_endpoint_config_data config;
+};
+
+/**
+ * struct gsi_ipa_endpoint_data - GSI channel/IPA endpoint data
+ * @ee_id:	GSI execution environment ID
+ * @channel_id:	GSI channel ID
+ * @endpoint_id: IPA endpoint ID
+ * @toward_ipa:	direction of data transfer
+ * @channel:	GSI channel configuration data (see above)
+ * @endpoint:	IPA endpoint configuration data (see above)
+ */
+struct gsi_ipa_endpoint_data {
+	u32 ee_id;
+	u32 channel_id;
+	enum ipa_endpoint_id endpoint_id;
+	u32 toward_ipa;
+
+	struct gsi_channel_data channel;
+	struct ipa_endpoint_data endpoint;
+};
+
+/** enum ipa_resource_group - IPA resource group */
+enum ipa_resource_group {
+	IPA_RESOURCE_GROUP_LWA_DL,	/* currently not used */
+	IPA_RESOURCE_GROUP_UL_DL,
+	IPA_RESOURCE_GROUP_MAX,
+};
+
+/** enum ipa_resource_type_src - source resource types */
+enum ipa_resource_type_src {
+	IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS,
+	IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS,
+	IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF,
+	IPA_RESOURCE_TYPE_SRC_HPS_DMARS,
+	IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES,
+};
+
+/** enum ipa_resource_type_dst - destination resource types */
+enum ipa_resource_type_dst {
+	IPA_RESOURCE_TYPE_DST_DATA_SECTORS,
+	IPA_RESOURCE_TYPE_DST_DPS_DMARS,
+};
+
+/**
+ * struct ipa_resource_limits - minimum and maximum resource counts
+ * @min:	minimum number of resources of a given type
+ * @max:	maximum number of resources of a given type
+ */
+struct ipa_resource_limits {
+	u32 min;
+	u32 max;
+};
+
+/**
+ * struct ipa_resource_src - source endpoint group resource usage
+ * @type:	source group resource type
+ * @limits:	array of limits to use for each resource group
+ */
+struct ipa_resource_src {
+	enum ipa_resource_type_src type;
+	struct ipa_resource_limits limits[IPA_RESOURCE_GROUP_MAX];
+};
+
+/**
+ * struct ipa_resource_dst - destination endpoint group resource usage
+ * @type:	destination group resource type
+ * @limits:	array of limits to use for each resource group
+ */
+struct ipa_resource_dst {
+	enum ipa_resource_type_dst type;
+	struct ipa_resource_limits limits[IPA_RESOURCE_GROUP_MAX];
+};
+
+/**
+ * struct ipa_resource_data - IPA resource configuration data
+ * @resource_src:	source endpoint group resources
+ * @resource_src_count:	number of entries in the resource_src array
+ * @resource_dst:	destination endpoint group resources
+ * @resource_dst_count:	number of entries in the resource_dst array
+ *
+ * In order to manage quality of service between endpoints, certain resources
+ * required for operation are allocated to groups of endpoints.  Generally
+ * this information is invisible to the AP, but the AP is responsible for
+ * programming it at initialization time, so we specify it here.
+ */
+struct ipa_resource_data {
+	const struct ipa_resource_src *resource_src;
+	u32 resource_src_count;
+	const struct ipa_resource_dst *resource_dst;
+	u32 resource_dst_count;
+};
+
+/**
+ * struct ipa_data - combined IPA/GSI configuration data
+ * @resource_data:	IPA resource configuration data
+ * @endpoint_data:	IPA endpoint/GSI channel data
+ * @endpoint_data_count: number of entries in endpoint_data array
+ */
+struct ipa_data {
+	const struct ipa_resource_data *resource_data;
+	const struct gsi_ipa_endpoint_data *endpoint_data;
+	u32 endpoint_data_count;	/* # entries in endpoint_data[] */
+};
+
+extern const struct ipa_data ipa_data_sdm845;
+
+#endif /* _IPA_DATA_H_ */

From patchwork Fri May 31 03:53:36 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 10969575
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
 ilias.apalodimas@linaro.org
Subject: [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory
Date: Thu, 30 May 2019 22:53:36 -0500
Message-Id: <20190531035348.7194-6-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

This patch incorporates three source files (and their headers).  They're
grouped into one patch mainly for the purpose of making the number and
size of patches in this series somewhat reasonable.

- "ipa_clock.c" and "ipa_clock.h" implement clocking for the IPA device.
  The IPA has a single core clock managed by the common clock framework.
  In addition, the IPA has three buses whose bandwidth is managed by the
  Linux interconnect framework.  At this time the core clock and all three
  buses are either on or off; we don't yet do any more fine-grained
  management than that.  The core clock and interconnects are enabled
  and disabled as a unit, using a unified clock-like abstraction,
  ipa_clock_get()/ipa_clock_put().

- "ipa_interrupt.c" and "ipa_interrupt.h" implement IPA interrupts.
  There are two hardware IRQs used by the IPA driver; this is one of
  them (the other is the GSI interrupt, described in a separate patch).
  Several types of interrupt are handled by the IPA IRQ handler; these
  are not part of the data/fast path.

- The IPA has a region of local memory that is accessible by the AP
  (and modem).  Within that region are areas with certain defined
  purposes.  "ipa_mem.c" and "ipa_mem.h" define those regions, and
  implement their initialization.
Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_clock.c     | 297 ++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_clock.h     |  52 ++++++
 drivers/net/ipa/ipa_interrupt.c | 279 ++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_interrupt.h |  53 ++++++
 drivers/net/ipa/ipa_mem.c       | 234 +++++++++++++++++++++++++
 drivers/net/ipa/ipa_mem.h       |  83 +++++++++
 6 files changed, 998 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_clock.c
 create mode 100644 drivers/net/ipa/ipa_clock.h
 create mode 100644 drivers/net/ipa/ipa_interrupt.c
 create mode 100644 drivers/net/ipa/ipa_interrupt.h
 create mode 100644 drivers/net/ipa/ipa_mem.c
 create mode 100644 drivers/net/ipa/ipa_mem.h

diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c
new file mode 100644
index 000000000000..9ed12e8183ad
--- /dev/null
+++ b/drivers/net/ipa/ipa_clock.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_netdev.h"
+
+/**
+ * DOC: IPA Clocking
+ *
+ * The "IPA Clock" manages both the IPA core clock and the interconnects
+ * (buses) the IPA depends on as a single logical entity.  A reference count
+ * is incremented by "get" operations and decremented by "put" operations.
+ * Transitions of that count from 0 to 1 result in the clock and interconnects
+ * being enabled, and transitions of the count from 1 to 0 cause them to be
+ * disabled.  We currently operate the core clock at a fixed clock rate, and
+ * all buses at a fixed average and peak bandwidth.  As more advanced IPA
+ * features are enabled, we can make better use of clock and bus scaling.
+ *
+ * An IPA clock reference must be held for any access to IPA hardware.
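+ *
+ * A typical access pattern (a sketch based on the calls in this patch;
+ * "offset" and "val" are assumed to be in scope) brackets register
+ * access with a get/put pair:
+ *
+ *	ipa_clock_get(ipa->clock);
+ *	val = ioread32(ipa->reg_virt + offset);	(clock is held here)
+ *	ipa_clock_put(ipa->clock);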
+ */
+
+#define IPA_CORE_CLOCK_RATE	(75UL * 1000 * 1000)	/* Hz */
+
+/* Interconnect path bandwidths (each times 1000 bytes per second) */
+#define IPA_MEMORY_AVG		(80 * 1000)	/* 80 MBps */
+#define IPA_MEMORY_PEAK		(600 * 1000)
+
+#define IPA_IMEM_AVG		(80 * 1000)
+#define IPA_IMEM_PEAK		(350 * 1000)
+
+#define IPA_CONFIG_AVG		(40 * 1000)
+#define IPA_CONFIG_PEAK		(40 * 1000)
+
+/**
+ * struct ipa_clock - IPA clocking information
+ * @ipa:	IPA pointer
+ * @count:	Clocking reference count
+ * @mutex:	Protects clock enable/disable
+ * @core:	IPA core clock
+ * @memory_path: Memory interconnect
+ * @imem_path:	Internal memory interconnect
+ * @config_path: Configuration space interconnect
+ */
+struct ipa_clock {
+	struct ipa *ipa;
+	atomic_t count;
+	struct mutex mutex;	/* protects clock enable/disable */
+	struct clk *core;
+	struct icc_path *memory_path;
+	struct icc_path *imem_path;
+	struct icc_path *config_path;
+};
+
+/* Initialize interconnects required for IPA operation */
+static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev)
+{
+	struct icc_path *path;
+
+	path = of_icc_get(dev, "memory");
+	if (IS_ERR(path))
+		goto err_return;
+	clock->memory_path = path;
+
+	path = of_icc_get(dev, "imem");
+	if (IS_ERR(path))
+		goto err_memory_path_put;
+	clock->imem_path = path;
+
+	path = of_icc_get(dev, "config");
+	if (IS_ERR(path))
+		goto err_imem_path_put;
+	clock->config_path = path;
+
+	return 0;
+
+err_imem_path_put:
+	icc_put(clock->imem_path);
+err_memory_path_put:
+	icc_put(clock->memory_path);
+err_return:
+
+	return PTR_ERR(path);
+}
+
+/* Inverse of ipa_interconnect_init() */
+static void ipa_interconnect_exit(struct ipa_clock *clock)
+{
+	icc_put(clock->config_path);
+	icc_put(clock->imem_path);
+	icc_put(clock->memory_path);
+}
+
+/* Currently we only use one bandwidth level, so just "enable" interconnects */
+static int ipa_interconnect_enable(struct ipa_clock *clock)
+{
+	int ret;
+
+	ret = icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK);
+	if (ret)
+		return ret;
+
+	ret = icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK);
+	if (ret)
+		goto err_disable_memory_path;
+
+	ret = icc_set_bw(clock->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK);
+	if (ret)
+		goto err_disable_imem_path;
+
+	return 0;
+
+err_disable_imem_path:
+	(void)icc_set_bw(clock->imem_path, 0, 0);
+err_disable_memory_path:
+	(void)icc_set_bw(clock->memory_path, 0, 0);
+
+	return ret;
+}
+
+/* To disable an interconnect, we just set its bandwidth to 0 */
+static int ipa_interconnect_disable(struct ipa_clock *clock)
+{
+	int ret;
+
+	ret = icc_set_bw(clock->memory_path, 0, 0);
+	if (ret)
+		return ret;
+
+	ret = icc_set_bw(clock->imem_path, 0, 0);
+	if (ret)
+		goto err_reenable_memory_path;
+
+	ret = icc_set_bw(clock->config_path, 0, 0);
+	if (ret)
+		goto err_reenable_imem_path;
+
+	return 0;
+
+err_reenable_imem_path:
+	(void)icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK);
+err_reenable_memory_path:
+	(void)icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK);
+
+	return ret;
+}
+
+/* Turn on IPA clocks, including interconnects */
+static int ipa_clock_enable(struct ipa_clock *clock)
+{
+	int ret;
+
+	ret = ipa_interconnect_enable(clock);
+	if (ret)
+		return ret;
+
+	ret = clk_prepare_enable(clock->core);
+	if (ret)
+		ipa_interconnect_disable(clock);
+
+	return ret;
+}
+
+/* Inverse of ipa_clock_enable() */
+static void ipa_clock_disable(struct ipa_clock *clock)
+{
+	clk_disable_unprepare(clock->core);
+	(void)ipa_interconnect_disable(clock);
+}
+
+/* Get an IPA clock reference, but only if the reference count is
+ * already non-zero.  Returns true if the additional reference was
+ * added successfully, or false otherwise.
+ */
+bool ipa_clock_get_additional(struct ipa_clock *clock)
+{
+	return !!atomic_inc_not_zero(&clock->count);
+}
+
+/* Get an IPA clock reference.  If the reference count is non-zero, it is
+ * incremented and return is immediate.  Otherwise the count is checked
+ * again under protection of the mutex, and if it is still zero the clocks
+ * are enabled and RX endpoints resumed before returning.  For the first
+ * reference, the count is intentionally not incremented until after these
+ * activities are complete.
+ */
+void ipa_clock_get(struct ipa_clock *clock)
+{
+	/* If the clock is running, just bump the reference count */
+	if (ipa_clock_get_additional(clock))
+		return;
+
+	/* Otherwise get the mutex and check again */
+	mutex_lock(&clock->mutex);
+
+	/* A reference might have been added before we got the mutex. */
+	if (!ipa_clock_get_additional(clock)) {
+		int ret;
+
+		ret = ipa_clock_enable(clock);
+		if (!WARN(ret, "error %d enabling IPA clock\n", ret)) {
+			struct ipa *ipa = clock->ipa;
+
+			if (ipa->command_endpoint)
+				ipa_endpoint_resume(ipa->command_endpoint);
+
+			if (ipa->default_endpoint)
+				ipa_endpoint_resume(ipa->default_endpoint);
+
+			if (ipa->modem_netdev)
+				ipa_netdev_resume(ipa->modem_netdev);
+
+			atomic_inc(&clock->count);
+		}
+	}
+
+	mutex_unlock(&clock->mutex);
+}
+
+/* Attempt to remove an IPA clock reference.  If this represents
+ * the last reference, suspend endpoints and disable the clock
+ * (and interconnects) under protection of a mutex.
+ */
+void ipa_clock_put(struct ipa_clock *clock)
+{
+	/* If this is not the last reference there's nothing more to do */
+	if (!atomic_dec_and_mutex_lock(&clock->count, &clock->mutex))
+		return;
+
+	if (clock->ipa->modem_netdev)
+		ipa_netdev_suspend(clock->ipa->modem_netdev);
+
+	if (clock->ipa->default_endpoint)
+		ipa_endpoint_suspend(clock->ipa->default_endpoint);
+
+	if (clock->ipa->command_endpoint)
+		ipa_endpoint_suspend(clock->ipa->command_endpoint);
+
+	ipa_clock_disable(clock);
+
+	mutex_unlock(&clock->mutex);
+}
+
+/* Initialize IPA clocking */
+struct ipa_clock *ipa_clock_init(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	struct ipa_clock *clock;
+	int ret;
+
+	clock = kzalloc(sizeof(*clock), GFP_KERNEL);
+	if (!clock)
+		return ERR_PTR(-ENOMEM);
+
+	clock->ipa = ipa;
+	clock->core = clk_get(dev, "core");
+	if (IS_ERR(clock->core)) {
+		ret = PTR_ERR(clock->core);
+		goto err_free_clock;
+	}
+
+	ret = clk_set_rate(clock->core, IPA_CORE_CLOCK_RATE);
+	if (ret)
+		goto err_clk_put;
+
+	ret = ipa_interconnect_init(clock, dev);
+	if (ret)
+		goto err_clk_put;
+
+	mutex_init(&clock->mutex);
+	atomic_set(&clock->count, 0);
+
+	return clock;
+
+err_clk_put:
+	clk_put(clock->core);
+err_free_clock:
+	kfree(clock);
+
+	return ERR_PTR(ret);
+}
+
+/* Inverse of ipa_clock_init() */
+void ipa_clock_exit(struct ipa_clock *clock)
+{
+	mutex_destroy(&clock->mutex);
+	ipa_interconnect_exit(clock);
+	clk_put(clock->core);
+	kfree(clock);
+}
diff --git a/drivers/net/ipa/ipa_clock.h b/drivers/net/ipa/ipa_clock.h
new file mode 100644
index 000000000000..f38c3face29a
--- /dev/null
+++ b/drivers/net/ipa/ipa_clock.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_CLOCK_H_
+#define _IPA_CLOCK_H_
+
+struct ipa;
+struct ipa_clock;
+
+/**
+ * ipa_clock_init() - Initialize IPA clocking
+ * @ipa:	IPA pointer
+ *
+ * @Return:	A pointer to an ipa_clock structure, or a pointer-coded error
+ */
+struct ipa_clock *ipa_clock_init(struct ipa *ipa);
+
+/**
+ * ipa_clock_exit() - Inverse of ipa_clock_init()
+ * @clock:	IPA clock pointer
+ */
+void ipa_clock_exit(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_get() - Get an IPA clock reference
+ * @clock:	IPA clock pointer
+ *
+ * This call blocks if this is the first reference.
+ */
+void ipa_clock_get(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_get_additional() - Get an IPA clock reference if not first
+ * @clock:	IPA clock pointer
+ *
+ * This returns immediately, and only takes a reference if it would not
+ * be the first one.
+ */
+bool ipa_clock_get_additional(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_put() - Drop an IPA clock reference
+ * @clock:	IPA clock pointer
+ *
+ * This drops a clock reference.  If the last reference is being dropped,
+ * the clock is stopped and RX endpoints are suspended.  This call will
+ * not block unless the last reference is dropped.
+ */
+void ipa_clock_put(struct ipa_clock *clock);
+
+#endif /* _IPA_CLOCK_H_ */
diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
new file mode 100644
index 000000000000..5be6b3c762ed
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -0,0 +1,279 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+/* DOC: IPA Interrupts
+ *
+ * The IPA has an interrupt line distinct from the interrupt used by the GSI
+ * code.  Whereas GSI interrupts are generally related to channel events
+ * (like transfer completions), IPA interrupts signal other events involving
+ * the IPA.  Some of the IPA interrupts come from a microcontroller embedded
+ * in the IPA.  Each IPA interrupt type can be both masked and acknowledged
+ * independently of the others.
+ *
+ * Two of the IPA interrupts are initiated by the microcontroller.  A third
+ * can be generated to signal the need for a wakeup/resume when an IPA
+ * endpoint has been suspended.  There are other IPA events defined, but at
+ * this time only these three are supported.
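+ *
+ * As a hedged sketch of registration (the handler name below is
+ * hypothetical, and the ipa structure is assumed to carry its
+ * ipa_interrupt pointer as "ipa->interrupt"):
+ *
+ *	static void ipa_suspend_handler(struct ipa *ipa,
+ *					enum ipa_interrupt_id interrupt_id)
+ *	{
+ *		ipa_interrupt_suspend_clear_all(ipa->interrupt);
+ *	}
+ *
+ *	ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND,
+ *			  ipa_suspend_handler);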
+ */
+
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_reg.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+/* Maximum number of bits in an IPA interrupt mask */
+#define IPA_INTERRUPT_MAX	(sizeof(u32) * BITS_PER_BYTE)
+
+struct ipa_interrupt_info {
+	ipa_irq_handler_t handler;
+	enum ipa_interrupt_id interrupt_id;
+};
+
+/**
+ * struct ipa_interrupt - IPA interrupt information
+ * @ipa:	IPA pointer
+ * @irq:	Linux IRQ number used for IPA interrupts
+ * @enabled:	Mask of currently enabled IPA interrupts
+ * @info:	Information for each IPA interrupt type
+ */
+struct ipa_interrupt {
+	struct ipa *ipa;
+	u32 irq;
+	u32 enabled;
+	struct ipa_interrupt_info info[IPA_INTERRUPT_MAX];
+};
+
+/* Map a logical interrupt number to a hardware IPA IRQ number */
+static const u32 ipa_interrupt_mapping[] = {
+	[IPA_INTERRUPT_UC_0]		= 2,
+	[IPA_INTERRUPT_UC_1]		= 3,
+	[IPA_INTERRUPT_TX_SUSPEND]	= 14,
+};
+
+static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 ipa_irq)
+{
+	return ipa_irq == ipa_interrupt_mapping[IPA_INTERRUPT_UC_0] ||
+	       ipa_irq == ipa_interrupt_mapping[IPA_INTERRUPT_UC_1];
+}
+
+static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 ipa_irq)
+{
+	struct ipa_interrupt_info *info = &interrupt->info[ipa_irq];
+	bool uc_irq = ipa_interrupt_uc(interrupt, ipa_irq);
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask = BIT(ipa_irq);
+
+	/* For microcontroller interrupts, clear the interrupt right away,
+	 * "to avoid clearing unhandled interrupts."
+	 */
+	if (uc_irq)
+		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	if (info->handler)
+		info->handler(interrupt->ipa, info->interrupt_id);
+
+	/* Clearing the SUSPEND_TX interrupt also clears the register
+	 * that tells us which suspended endpoint(s) caused the interrupt,
+	 * so defer clearing until after the handler's been called.
+	 */
+	if (!uc_irq)
+		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+}
+
+static void ipa_interrupt_process_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 enabled = interrupt->enabled;
+	u32 mask;
+
+	/* The status register indicates which conditions are present,
+	 * including conditions whose interrupt is not enabled.  Handle
+	 * only the enabled ones.
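+	 * Re-read the status register after handling each batch of
+	 * enabled conditions, in case new ones were raised while the
+	 * earlier handlers ran; stop once no enabled conditions remain.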
+	 */
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	while ((mask &= enabled)) {
+		do {
+			u32 ipa_irq = __ffs(mask);
+
+			mask ^= BIT(ipa_irq);
+
+			ipa_interrupt_process(interrupt, ipa_irq);
+		} while (mask);
+		mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	}
+}
+
+/* Threaded part of the IRQ handler */
+static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+
+	ipa_clock_get(interrupt->ipa->clock);
+
+	ipa_interrupt_process_all(interrupt);
+
+	ipa_clock_put(interrupt->ipa->clock);
+
+	return IRQ_HANDLED;
+}
+
+/* Hard part of the IRQ handler */
+static irqreturn_t ipa_isr(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask;
+
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	if (mask & interrupt->enabled)
+		return IRQ_WAKE_THREAD;
+
+	/* Nothing in the mask was supposed to cause an interrupt */
+	iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	dev_err(&ipa->pdev->dev, "%s: unexpected interrupt, mask 0x%08x\n",
+		__func__, mask);
+
+	return IRQ_HANDLED;
+}
+
+static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
+					  enum ipa_endpoint_id endpoint_id,
+					  bool enable)
+{
+	u32 offset = IPA_REG_SUSPEND_IRQ_EN_OFFSET;
+	u32 mask = BIT(endpoint_id);
+	u32 val;
+
+	val = ioread32(interrupt->ipa->reg_virt + offset);
+	if (enable)
+		val |= mask;
+	else
+		val &= ~mask;
+	iowrite32(val, interrupt->ipa->reg_virt + offset);
+}
+
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  enum ipa_endpoint_id endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, true);
+}
+
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   enum ipa_endpoint_id endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, false);
+}
+
+/* Clear the suspend interrupt for all endpoints that signaled it */
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + IPA_REG_IRQ_SUSPEND_INFO_OFFSET);
+	iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_CLR_OFFSET);
+}
+
+/**
+ * ipa_interrupt_simulate_suspend() - Simulate arrival of a TX_SUSPEND interrupt
+ * @interrupt:	IPA interrupt structure
+ *
+ * This is needed to work around a problem that occurs if aggregation
+ * is active on an endpoint when its underlying channel is suspended.
+ */
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[IPA_INTERRUPT_TX_SUSPEND];
+
+	ipa_interrupt_process(interrupt, ipa_irq);
+}
+
+/**
+ * ipa_interrupt_add() - Register a handler for an IPA interrupt type
+ * @interrupt:	IPA interrupt structure
+ * @interrupt_id: IPA interrupt type
+ * @handler:	The handler for that interrupt type
+ *
+ * Registers a handler for an IPA interrupt and enables it.  IPA interrupt
+ * handlers are run in threaded interrupt context, so they are allowed to
+ * block.
+ */
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_interrupt_id interrupt_id,
+		       ipa_irq_handler_t handler)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[interrupt_id];
+	struct ipa *ipa = interrupt->ipa;
+
+	interrupt->info[ipa_irq].handler = handler;
+	interrupt->info[ipa_irq].interrupt_id = interrupt_id;
+
+	/* Update the IPA interrupt mask to enable it */
+	interrupt->enabled |= BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+}
+
+/**
+ * ipa_interrupt_remove() - Remove the handler for an IPA interrupt type
+ * @interrupt:	IPA interrupt structure
+ * @interrupt_id: IPA interrupt type
+ *
+ * Removes an IPA interrupt handler and disables it.
+ */
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_interrupt_id interrupt_id)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[interrupt_id];
+	struct ipa *ipa = interrupt->ipa;
+
+	/* Update the IPA interrupt mask to disable it */
+	interrupt->enabled &= ~BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	interrupt->info[ipa_irq].handler = NULL;
+}
+
+/**
+ * ipa_interrupt_setup() - Initialize the IPA interrupt framework
+ * @ipa:	IPA pointer
+ */
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa)
+{
+	struct ipa_interrupt *interrupt;
+	unsigned int irq;
+	int ret;
+
+	ret = platform_get_irq_byname(ipa->pdev, "ipa");
+	if (ret < 0)
+		return ERR_PTR(ret);
+	irq = ret;
+
+	interrupt = kzalloc(sizeof(*interrupt), GFP_KERNEL);
+	if (!interrupt)
+		return ERR_PTR(-ENOMEM);
+	interrupt->ipa = ipa;
+	interrupt->irq = irq;
+
+	/* Start with all IPA interrupts disabled */
+	iowrite32(0, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	ret = request_threaded_irq(irq, ipa_isr, ipa_isr_thread, IRQF_ONESHOT,
+				   "ipa", interrupt);
+	if (ret)
+		goto err_free_interrupt;
+
+	return interrupt;
+
+err_free_interrupt:
+	kfree(interrupt);
+
+	return ERR_PTR(ret);
+}
+
+/* Inverse of ipa_interrupt_setup() */
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt)
+{
+	free_irq(interrupt->irq, interrupt);
+	kfree(interrupt);
+}
diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h
new file mode 100644
index 000000000000..6e452430c156
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_INTERRUPT_H_
+#define _IPA_INTERRUPT_H_
+
+#include
+#include
+
+struct ipa;
+struct ipa_interrupt;
+
+/**
+ * enum ipa_interrupt_id - IPA Interrupt Type
+ *
+ * Used to register handlers for IPA interrupts.
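+ *
+ * These logical IDs are mapped to hardware IPA IRQ numbers by the
+ * ipa_interrupt_mapping[] array in "ipa_interrupt.c" (UC_0 and UC_1
+ * map to hardware interrupts 2 and 3, and TX_SUSPEND maps to 14).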
+ */
+enum ipa_interrupt_id {
+	IPA_INTERRUPT_UC_0,
+	IPA_INTERRUPT_UC_1,
+	IPA_INTERRUPT_TX_SUSPEND,
+};
+
+/**
+ * typedef ipa_irq_handler_t - IRQ handler/callback type
+ * @ipa:	IPA pointer
+ * @interrupt_id: interrupt type
+ *
+ * Callback function registered by ipa_interrupt_add() to handle a specific
+ * IPA interrupt type
+ */
+typedef void (*ipa_irq_handler_t)(struct ipa *ipa,
+				  enum ipa_interrupt_id interrupt_id);
+
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa);
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt);
+
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_interrupt_id interrupt_id,
+		       ipa_irq_handler_t handler);
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_interrupt_id interrupt_id);
+
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  enum ipa_endpoint_id endpoint_id);
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   enum ipa_endpoint_id endpoint_id);
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt);
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt);
+
+#endif /* _IPA_INTERRUPT_H_ */
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
new file mode 100644
index 000000000000..ad7e55aec31f
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_reg.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+
+/* "Canary" value placed between memory regions to detect overflow */
+#define IPA_SMEM_CANARY_VAL	cpu_to_le32(0xdeadbeef)
+
+/* Only used for IPA_SMEM_UC_EVENT_RING */
+static __always_inline void smem_set_canary(struct ipa *ipa, u32 offset)
+{
+	__le32 *cp = ipa->shared_virt + offset;
+
+	BUILD_BUG_ON(offset < sizeof(*cp));
+
+	*--cp = IPA_SMEM_CANARY_VAL;
+}
+
+static __always_inline void smem_set_canaries(struct ipa *ipa, u32 offset)
+{
+	__le32 *cp = ipa->shared_virt + offset;
+
+	/* IPA accesses memory at 8-byte aligned offsets, 8 bytes at a time */
+	BUILD_BUG_ON(offset % 8);
+	BUILD_BUG_ON(offset < 2 * sizeof(*cp));
+
+	*--cp = IPA_SMEM_CANARY_VAL;
+	*--cp = IPA_SMEM_CANARY_VAL;
+}
+
+/**
+ * ipa_smem_setup() - Set up IPA AP and modem shared memory areas
+ *
+ * Set up the IPA-local shared memory areas located within the IPA's own
+ * memory.  This involves zero-filling each area (using DMA) and then
+ * telling the IPA where it's located.  We set up the regions for the
+ * header and processing context structures used by both the modem and
+ * the AP.
+ *
+ * The modem and AP header areas are contiguous, with the modem area
+ * located at the lower address.  The processing context memory areas
+ * for the modem and AP are also contiguous, with the modem at the base
+ * of the combined space.
+ *
+ * The modem portions are also zeroed in ipa_smem_zero_modem(); if the
+ * modem crashes and restarts via SSR, these areas need to be
+ * re-initialized.
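+ *
+ * As an arithmetic check of that contiguity (constants from "ipa_mem.h"):
+ * the modem header area starts at 0x0688 with size 0x0140, and
+ * 0x0688 + 0x0140 = 0x07c8, which is IPA_SMEM_AP_HDR_OFFSET; likewise
+ * 0x07d0 + 0x0200 = 0x09d0, which is IPA_SMEM_AP_HDR_PROC_CTX_OFFSET.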
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_smem_setup(struct ipa *ipa)
+{
+	u32 offset;
+	u32 size;
+	int ret;
+
+	/* Alignments of some offsets are verified in smem_set_canaries() */
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_OFFSET % 8);
+	BUILD_BUG_ON(IPA_SMEM_MODEM_HDR_SIZE % 8);
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_SIZE % 8);
+
+	/* Initialize IPA-local header memory */
+	offset = IPA_SMEM_MODEM_HDR_OFFSET;
+	size = IPA_SMEM_MODEM_HDR_SIZE + IPA_SMEM_AP_HDR_SIZE;
+	ret = ipa_cmd_hdr_init_local(ipa, offset, size);
+	if (ret)
+		return ret;
+
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_PROC_CTX_OFFSET % 8);
+	BUILD_BUG_ON(IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE % 8);
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_PROC_CTX_SIZE % 8);
+
+	/* Zero the processing context IPA-local memory for the modem and AP */
+	offset = IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET;
+	size = IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE + IPA_SMEM_AP_HDR_PROC_CTX_SIZE;
+	ret = ipa_cmd_smem_dma_zero(ipa, offset, size);
+	if (ret)
+		return ret;
+
+	/* Tell the hardware where the processing context area is located */
+	iowrite32(ipa->shared_offset + offset,
+		  ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET);
+
+	return ret;
+}
+
+void ipa_smem_teardown(struct ipa *ipa)
+{
+	/* Nothing to do */
+}
+
+/**
+ * ipa_smem_config() - Configure IPA shared memory
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_smem_config(struct ipa *ipa)
+{
+	u32 size;
+	u32 val;
+
+	/* Check the advertised location and size of the shared memory area */
+	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
+
+	/* The fields in the register are in 8 byte units */
+	ipa->shared_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
+	dev_dbg(&ipa->pdev->dev, "shared memory offset 0x%x bytes\n",
+		ipa->shared_offset);
+	if (WARN_ON(ipa->shared_offset))
+		return -EINVAL;
+
+	/* The code assumes a certain minimum shared memory area size */
+	size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	dev_dbg(&ipa->pdev->dev, "shared memory size 0x%x bytes\n", size);
+	if (WARN_ON(size < IPA_SMEM_SIZE))
+		return -EINVAL;
+
+	/* Now write "canary" values before each sub-section. */
+	smem_set_canaries(ipa, IPA_SMEM_V4_FLT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_FLT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_FLT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_FLT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_RT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_RT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_RT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_RT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_HDR_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_OFFSET);
+
+	/* Only one canary precedes the microcontroller ring */
+	BUILD_BUG_ON(IPA_SMEM_UC_EVENT_RING_OFFSET % 1024);
+	smem_set_canary(ipa, IPA_SMEM_UC_EVENT_RING_OFFSET);
+
+	return 0;
+}
+
+void ipa_smem_deconfig(struct ipa *ipa)
+{
+	/* Don't bother zeroing any of the shared memory on exit */
+}
+
+/**
+ * ipa_smem_zero_modem() - Zero modem IPA-local memory regions
+ *
+ * Zero regions of IPA-local memory used by the modem.  These are
+ * configured (and initially zeroed) by ipa_smem_setup(), but if
+ * the modem crashes and restarts via SSR we need to re-initialize
+ * them.
+
+void ipa_smem_deconfig(struct ipa *ipa)
+{
+	/* Don't bother zeroing any of the shared memory on exit */
+}
+
+/**
+ * ipa_smem_zero_modem() - Zero modem IPA-local memory regions
+ * @ipa:	IPA pointer
+ *
+ * Zero regions of IPA-local memory used by the modem.  These are
+ * configured (and initially zeroed) by ipa_smem_setup(), but if
+ * the modem crashes and restarts via SSR we need to re-initialize
+ * them.
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_smem_zero_modem(struct ipa *ipa)
+{
+	int ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_OFFSET,
+				    IPA_SMEM_MODEM_SIZE);
+	if (ret)
+		return ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_HDR_OFFSET,
+				    IPA_SMEM_MODEM_HDR_SIZE);
+	if (ret)
+		return ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET,
+				    IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE);
+
+	return ret;
+}
+
+int ipa_mem_init(struct ipa *ipa)
+{
+	struct resource *res;
+	int ret;
+
+	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
+	if (ret)
+		return ret;
+
+	/* Set up IPA shared memory */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-shared");
+	if (!res)
+		return -ENODEV;
+
+	/* The code assumes a certain minimum shared memory area size */
+	if (WARN_ON(resource_size(res) < IPA_SMEM_SIZE))
+		return -EINVAL;
+
+	ipa->shared_virt = memremap(res->start, resource_size(res),
+				    MEMREMAP_WC);
+	if (!ipa->shared_virt)
+		return -ENOMEM;
+	ipa->shared_phys = res->start;
+
+	/* Set up IPA register memory */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-reg");
+	if (!res) {
+		ret = -ENODEV;
+		goto err_unmap_shared;
+	}
+
+	ipa->reg_virt = ioremap(res->start, resource_size(res));
+	if (!ipa->reg_virt) {
+		ret = -ENOMEM;
+		goto err_unmap_shared;
+	}
+	ipa->reg_phys = res->start;
+
+	return 0;
+
+err_unmap_shared:
+	memunmap(ipa->shared_virt);
+
+	return ret;
+}
+
+void ipa_mem_exit(struct ipa *ipa)
+{
+	iounmap(ipa->reg_virt);
+	memunmap(ipa->shared_virt);
+}
diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
new file mode 100644
index 000000000000..179b62c958ed
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_MEM_H_
+#define _IPA_MEM_H_
+
+struct ipa;
+
+/**
+ * DOC: IPA Local Memory
+ *
+ * The IPA has a block of shared memory, divided into regions used for
+ * specific purposes.  The offset within the IPA address space of this
+ * shared memory block is defined by the IPA_SMEM_DIRECT_ACCESS_OFFSET
+ * register.
+ *
+ * The regions within the shared block are bounded by an offset and size
+ * found in the IPA_SHARED_MEM_SIZE register.  The first 128 bytes of the
+ * shared memory block are shared with the microcontroller, and the first
+ * 40 bytes of that contain a structure used to communicate between the
+ * microcontroller and the AP.
+ *
+ * There is a set of filter and routing tables, and each is given a
+ * 128 byte region in shared memory.  Each entry in a filter or route
+ * table is IPA_TABLE_ENTRY_SIZE, or 8 bytes.  The first "slot" of every
+ * table is filled with a "canary" value, and the table offsets defined
+ * below represent the location of the first real entry in each table
+ * after this.
+ *
+ * The number of filter table entries depends on the number of endpoints
+ * that support filtering.  The first non-canary slot of a filter table
+ * contains a bitmap, with each set bit indicating an endpoint containing
+ * an entry in the table.  Bit 0 is used to represent a global filter.
+ *
+ * About half of the routing table entries are reserved for modem use.
+ */
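+
+/* Worked example of the layout described above, using the offsets
+ * defined below: table slots are 8 bytes, and the 8-byte slot just
+ * below each table holds the canary values, so the IPv4 hashed filter
+ * table's canaries occupy offsets 0x0280-0x0287 and the table itself
+ * begins at IPA_SMEM_V4_FLT_HASH_OFFSET (0x0288).
+ */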
+
+/* The maximum number of filter table entries (IPv4, IPv6; hashed and not) */
+#define IPA_SMEM_FLT_COUNT	14
+
+/* The number of routing table entries (IPv4, IPv6; hashed and not) */
+#define IPA_SMEM_RT_COUNT	15
+
+/* Which routing table entries are for the modem */
+#define IPA_SMEM_MODEM_RT_COUNT		8
+#define IPA_SMEM_MODEM_RT_INDEX_MIN	0
+#define IPA_SMEM_MODEM_RT_INDEX_MAX \
+		(IPA_SMEM_MODEM_RT_INDEX_MIN + IPA_SMEM_MODEM_RT_COUNT - 1)
+
+/* Regions within the shared memory block.  Table sizes are 0x80 bytes. */
+#define IPA_SMEM_V4_FLT_HASH_OFFSET		0x0288
+#define IPA_SMEM_V4_FLT_NHASH_OFFSET		0x0308
+#define IPA_SMEM_V6_FLT_HASH_OFFSET		0x0388
+#define IPA_SMEM_V6_FLT_NHASH_OFFSET		0x0408
+#define IPA_SMEM_V4_RT_HASH_OFFSET		0x0488
+#define IPA_SMEM_V4_RT_NHASH_OFFSET		0x0508
+#define IPA_SMEM_V6_RT_HASH_OFFSET		0x0588
+#define IPA_SMEM_V6_RT_NHASH_OFFSET		0x0608
+#define IPA_SMEM_MODEM_HDR_OFFSET		0x0688
+#define IPA_SMEM_MODEM_HDR_SIZE			0x0140
+#define IPA_SMEM_AP_HDR_OFFSET			0x07c8
+#define IPA_SMEM_AP_HDR_SIZE			0x0000
+#define IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET	0x07d0
+#define IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE	0x0200
+#define IPA_SMEM_AP_HDR_PROC_CTX_OFFSET		0x09d0
+#define IPA_SMEM_AP_HDR_PROC_CTX_SIZE		0x0200
+#define IPA_SMEM_MODEM_OFFSET			0x0bd8
+#define IPA_SMEM_MODEM_SIZE			0x1024
+#define IPA_SMEM_UC_EVENT_RING_OFFSET		0x1c00	/* v3.5 and later */
+#define IPA_SMEM_SIZE				0x2000
+
+int ipa_smem_config(struct ipa *ipa);
+void ipa_smem_deconfig(struct ipa *ipa);
+
+int ipa_smem_setup(struct ipa *ipa);
+void ipa_smem_teardown(struct ipa *ipa);
+
+int ipa_smem_zero_modem(struct ipa *ipa);
+
+int ipa_mem_init(struct ipa *ipa);
+void ipa_mem_exit(struct ipa *ipa);
+
+#endif /* _IPA_MEM_H_ */
From patchwork Fri May 31 03:53:37 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 10969587
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Subject: [PATCH v2 06/17] soc: qcom: ipa: GSI headers
Date: Thu, 30 May 2019 22:53:37 -0500
Message-Id: <20190531035348.7194-7-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

The Generic Software Interface is a layer of the IPA driver that
abstracts the underlying hardware.  The next patch includes the main
code for GSI (including some additional documentation).  This patch
just includes three GSI header files.

- "gsi.h" is the top-level GSI header file.  There is one of these
  associated with the IPA structure; in fact, it is embedded within
  the IPA structure.  (Were it not embedded this way, many of the
  definitions and structures defined here could be private to the
  GSI code.)  The main abstraction implemented by the GSI code is
  the channel, and this header exposes several operations that can
  be performed on a GSI channel.

- "gsi_private.h" exposes some definitions that are intended to be
  private, used only by the main GSI code and the GSI transaction
  code (defined in an upcoming patch).

- Like "ipa_reg.h", "gsi_reg.h" defines the offsets of the 32-bit
  registers used by the GSI layer, along with masks that define the
  position and width of fields less than 32 bits located within
  these registers.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/gsi.h         | 246 ++++++++++++++++++++++
 drivers/net/ipa/gsi_private.h | 148 +++++++++++++
 drivers/net/ipa/gsi_reg.h     | 376 ++++++++++++++++++++++++++++++++++
 3 files changed, 770 insertions(+)
 create mode 100644 drivers/net/ipa/gsi.h
 create mode 100644 drivers/net/ipa/gsi_private.h
 create mode 100644 drivers/net/ipa/gsi_reg.h

diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/gsi.h
new file mode 100644
index 000000000000..872ca682853a
--- /dev/null
+++ b/drivers/net/ipa/gsi.h
@@ -0,0 +1,246 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */ +#ifndef _GSI_H_ +#define _GSI_H_ + +#include +#include +#include +#include +#include +#include + +#define GSI_CHANNEL_MAX 14 +#define GSI_EVT_RING_MAX 10 + +struct device; +struct scatterlist; +struct platform_device; + +struct gsi; +struct gsi_trans; +struct gsi_channel_data; +struct gsi_ipa_endpoint_data; + +/* Execution environment IDs */ +enum gsi_ee_id { + GSI_EE_AP = 0, + GSI_EE_MODEM = 1, + GSI_EE_UC = 2, + GSI_EE_TZ = 3, +}; + +/* Channel operation statistics, aggregated across all channels */ +struct gsi_channel_stats { + u64 allocate; + u64 start; + u64 stop; + u64 reset; + u64 free; +}; + +struct gsi_ring { + void *virt; /* ring array base address */ + dma_addr_t addr; /* primarily low 32 bits used */ + u32 count; /* number of elements in ring */ + + /* The ring index value indicates the next "open" entry in the ring. + * + * A channel ring consists of TRE entries filled by the AP and passed + * to the hardware for processing. For a channel ring, the ring index + * identifies the next unused entry to be filled by the AP. + * + * An event ring consists of event structures filled by the hardware + * and passed to the AP. For event rings, the ring index identifies + * the next ring entry that is not known to have been filled by the + * hardware. + */ + u32 index; +}; + +struct gsi_trans_info { + struct gsi_trans **map; /* TRE -> transaction map */ + u32 pool_count; /* # transactions in the pool */ + struct gsi_trans *pool; /* transaction allocation pool */ + u32 pool_free; /* next free trans in pool (modulo) */ + u32 sg_pool_count; /* # SGs in the allocation pool */ + struct scatterlist *sg_pool; /* SG allocation pool */ + u32 sg_pool_free; /* next free SG pool entry */ + + atomic_t tre_avail; /* # unallocated TREs in ring */ + spinlock_t spinlock; /* protects updates to the lists */ + struct list_head alloc; /* allocated, not committed */ + struct list_head pending; /* committed, awaiting completion */ + struct list_head complete; /* completed, awaiting poll */ + struct list_head polled; /* returned by gsi_channel_poll_one() */ +}; + +/* Hardware values signifying the state of a channel */ +enum gsi_channel_state { + GSI_CHANNEL_STATE_NOT_ALLOCATED = 0x0, + GSI_CHANNEL_STATE_ALLOCATED = 0x1, + GSI_CHANNEL_STATE_STARTED = 0x2, + GSI_CHANNEL_STATE_STOPPED = 0x3, + GSI_CHANNEL_STATE_STOP_IN_PROC = 0x4, + GSI_CHANNEL_STATE_ERROR = 0xf, +}; + +/* We only care about channels between IPA and AP */ +struct gsi_channel { + struct gsi *gsi; + u32 toward_ipa; /* 0: IPA->AP; 1: AP->IPA */ + + const struct gsi_channel_data *data; /* initialization data */ + + struct completion completion; /* signals channel state changes */ + enum gsi_channel_state state; + + struct gsi_ring tre_ring; + u32 evt_ring_id; + + u64 byte_count; /* total # bytes transferred */ + u64 trans_count; /* total # transactions */ + /* The following counts are used only for TX endpoints */ + u64 queued_byte_count; /* last reported queued byte count */ + u64 queued_trans_count; /* ...and queued trans count */ + u64 compl_byte_count; /* last reported completed byte count */ + u64 compl_trans_count; /* ...and completed trans count */ + + struct gsi_trans_info trans_info; + + struct napi_struct napi; +}; + +/* Hardware values signifying the state of an event ring */ +enum gsi_evt_ring_state { + GSI_EVT_RING_STATE_NOT_ALLOCATED = 0x0, + GSI_EVT_RING_STATE_ALLOCATED = 0x1, + GSI_EVT_RING_STATE_ERROR = 0xf, +}; + +struct gsi_evt_ring { + struct gsi_channel *channel; + struct completion completion; /* signals event ring state 
changes */ + enum gsi_evt_ring_state state; + struct gsi_ring ring; +}; + +struct gsi { + struct device *dev; /* Same as IPA device */ + struct net_device dummy_dev; /* needed for NAPI */ + void __iomem *virt; + u32 irq; + u32 irq_wake_enabled; /* 1: irq wake was enabled */ + struct gsi_channel channel[GSI_CHANNEL_MAX]; + struct gsi_channel_stats channel_stats; + struct gsi_evt_ring evt_ring[GSI_EVT_RING_MAX]; + u32 event_bitmap; + u32 event_enable_bitmap; + struct mutex mutex; /* protects commands, programming */ +}; + +/** + * gsi_setup() - Set up the GSI subsystem + * @gsi: Address of GSI structure embedded in an IPA structure + * + * @Return: 0 if successful, or a negative error code + * + * Performs initialization that must wait until the GSI hardware is + * ready (including firmware loaded). + */ +int gsi_setup(struct gsi *gsi); + +/** + * gsi_teardown() - Tear down GSI subsystem + * @gsi: GSI address previously passed to a successful gsi_setup() call + */ +void gsi_teardown(struct gsi *gsi); + +/** + * gsi_channel_trans_max() - Channel maximum number of transactions + * @gsi: GSI pointer + * @channel_id: Channel whose limit is to be returned + * + * @Return: The maximum number of pending transactions on the channel + */ +u32 gsi_channel_trans_max(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_trans_tre_max() - Return the maximum TREs per transaction + * @gsi: GSI pointer + * @channel_id: Channel whose limit is to be returned + * + * @Return: The maximum TRE count per transaction on the channel + */ +u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_trans_quiesce() - Wait for channel transactions to complete + * @gsi: GSI pointer + * @channel_id: Channel to quiesce + * + * Wait for all of a channel's currently-allocated transactions to + * be committed, complete, and be freed. + * + * NOTE: Assumes no new transactions will be issued before it returns. + */ +void gsi_channel_trans_quiesce(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_start() - Make a GSI channel operational + * @gsi: GSI pointer + * @channel_id: Channel to start + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_start(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_stop() - Stop an operational GSI channel + * @gsi: GSI pointer returned by gsi_setup() + * @channel_id: Channel to stop + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_stop(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_reset() - Reset a GSI channel + * @gsi: GSI pointer + * @channel_id: Channel to be reset + * @db_enable: Whether doorbell engine should be enabled + * + * @Return: 0 if successful, or a negative error code + * + * Reset a channel and reconfigure it. The @db_enable flag indicates + * whether the doorbell engine will be enabled following reconfiguration. + * + * GSI hardware relinquishes ownership of all pending receive buffer + * transactions as a result of reset. They will be completed with + * result code -ECANCELED. + */ +int gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool db_enable); + +/** + * gsi_init() - Initialize the GSI subsystem + * @gsi: Address of GSI structure embedded in an IPA structure + * @pdev: IPA platform device + * + * @Return: 0 if successful, or a negative error code + * + * Early stage initialization of the GSI subsystem, performing tasks + * that can be done before the GSI hardware is ready to use. 
+ *
+ * @data_count:	The number of entries in the @data array
+ * @data:	Array of GSI/IPA endpoint configuration data
+ */
+int gsi_init(struct gsi *gsi, struct platform_device *pdev, u32 data_count,
+	     const struct gsi_ipa_endpoint_data *data);
+
+/**
+ * gsi_exit() - Exit the GSI subsystem
+ * @gsi:	GSI address previously passed to a successful gsi_init() call
+ */
+void gsi_exit(struct gsi *gsi);
+
+#endif /* _GSI_H_ */
diff --git a/drivers/net/ipa/gsi_private.h b/drivers/net/ipa/gsi_private.h
new file mode 100644
index 000000000000..778e2dcb5b2b
--- /dev/null
+++ b/drivers/net/ipa/gsi_private.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _GSI_PRIVATE_H_
+#define _GSI_PRIVATE_H_
+
+/* === Only "gsi.c" and "gsi_trans.c" should include this file === */
+
+#include
+
+struct gsi_trans;
+struct gsi_ring;
+struct gsi_channel;
+
+/* An entry in an event ring */
+struct gsi_xfer_compl_evt {
+	__le64 xfer_ptr;
+	__le16 len;
+	u8 reserved1;
+	u8 code;
+	__le16 reserved2;
+	u8 type;
+	u8 chid;
+};
+
+/* An entry in a channel ring */
+struct gsi_tre {
+	__le64 addr;		/* DMA address */
+	__le16 len_opcode;	/* length in bytes or enum IPA_CMD_* */
+	__le16 reserved;
+	__le32 flags;		/* GSI_TRE_FLAGS_* */
+};
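+
+/* Illustrative sketch only -- NOT part of this patch.  It shows how a
+ * TRE might be filled in for a single outbound data transfer.  Real
+ * code would also set the chaining/interrupt bits in tre->flags using
+ * the GSI_TRE_FLAGS_* field masks referred to above.
+ */
+static inline void gsi_tre_fill_example(struct gsi_tre *tre,
+					dma_addr_t addr, u16 len)
+{
+	tre->addr = cpu_to_le64(addr);
+	tre->len_opcode = cpu_to_le16(len);
+	tre->reserved = 0;
+	tre->flags = 0;		/* no chaining, no completion interrupt */
+}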
+
+/**
+ * gsi_trans_move_complete() - Mark a GSI transaction completed
+ * @trans:	Transaction to mark completed
+ */
+void gsi_trans_move_complete(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_move_polled() - Mark a transaction polled
+ * @trans:	Transaction to update
+ */
+void gsi_trans_move_polled(struct gsi_trans *trans);
+
+/**
+ * gsi_channel_trans_last() - Get channel's last allocated transaction
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel whose transaction is to be returned
+ *
+ * @Return:	Channel's last GSI transaction pointer allocated (or NULL)
+ *
+ * Returns a pointer to the last transaction allocated on a channel.
+ * That transaction could be in any state: allocated; pending;
+ * complete; or polled.  A null pointer is returned if all allocated
+ * transactions have been freed.
+ *
+ * NOTE: Caller is responsible for supplying the returned pointer
+ *	 to gsi_trans_free() if it is non-null.
+ */
+struct gsi_trans *gsi_channel_trans_last(struct gsi *gsi, u32 channel_id);
+
+/**
+ * gsi_trans_complete() - Complete a GSI transaction
+ * @trans:	Transaction to complete
+ *
+ * Marks a transaction complete (including freeing it).
+ */
+void gsi_trans_complete(struct gsi_trans *trans);
+
+/**
+ * gsi_channel_trans_mapped() - Return a transaction mapped to a TRE index
+ * @channel:	Channel associated with the transaction
+ * @index:	Index of the TRE having a transaction
+ *
+ * @Return:	The GSI transaction pointer associated with the TRE index
+ */
+struct gsi_trans *gsi_channel_trans_mapped(struct gsi_channel *channel,
+					   u32 index);
+
+/**
+ * gsi_channel_trans_complete() - Return a channel's next completed transaction
+ * @channel:	Channel whose next transaction is to be returned
+ *
+ * @Return:	The next completed transaction, or NULL if nothing new
+ */
+struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_trans_cancel_pending() - Cancel pending transactions
+ * @channel:	Channel whose pending transactions should be cancelled
+ *
+ * Cancel all pending transactions on a channel.  These are
+ * transactions that have been committed but not yet completed.  This
+ * is required when the channel gets reset.  At that time all
+ * pending transactions will be completed with a result -ECANCELED.
+ *
+ * NOTE: Transactions already complete at the time of this call are
+ *	 unaffected.
+ */
+void gsi_channel_trans_cancel_pending(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_trans_init() - Initialize a channel's GSI transaction info
+ * @channel:	The channel whose transaction info is to be set up
+ *
+ * @Return:	0 if successful, or -ENOMEM on allocation failure
+ *
+ * Creates and sets up information for managing transactions on a channel
+ */
+int gsi_channel_trans_init(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_trans_exit() - Inverse of gsi_channel_trans_init()
+ * @channel:	Channel whose transaction information is to be cleaned up
+ */
+void gsi_channel_trans_exit(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_doorbell() - Ring a channel's doorbell
+ * @channel:	Channel whose doorbell should be rung
+ *
+ * Rings a channel's doorbell to inform the GSI hardware that new
+ * transactions (TREs, really) are available for it to process.
+ */
+void gsi_channel_doorbell(struct gsi_channel *channel);
+
+/**
+ * gsi_ring_virt() - Return virtual address for a ring entry
+ * @ring:	Ring whose address is to be translated
+ * @index:	Index (slot number) of entry
+ */
+void *gsi_ring_virt(struct gsi_ring *ring, u32 index);
+
+/**
+ * gsi_channel_tx_queued() - Report the number of bytes queued to hardware
+ * @channel:	Channel whose bytes have been queued
+ *
+ * This arranges for the number of transactions and bytes for
+ * transfer that have been queued to hardware to be reported.  It
+ * passes this information up the network stack so it can be used to
+ * throttle transmissions.
+ */
+void gsi_channel_tx_queued(struct gsi_channel *channel);
+
+#endif /* _GSI_PRIVATE_H_ */
diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h
new file mode 100644
index 000000000000..c6e68933b011
--- /dev/null
+++ b/drivers/net/ipa/gsi_reg.h
@@ -0,0 +1,376 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _GSI_REG_H_
+#define _GSI_REG_H_
+
+/* === Only "gsi.c" should include this file === */
+
+#include
+
+/**
+ * DOC: GSI Registers
+ *
+ * GSI registers are located within the "gsi" address space defined by Device
+ * Tree.  The offset of each register within that space is specified by
+ * symbols defined below.  The GSI address space is mapped to virtual memory
+ * space in gsi_init().  All GSI registers are 32 bits wide.
+ *
+ * Each register type is duplicated for a number of instances of something.
+ * For example, each GSI channel has its own set of registers defining its
+ * configuration.  The offset to a channel's set of registers is computed
+ * based on a "base" offset plus an additional "stride" amount computed
+ * from the channel's ID.  For such registers, the offset is computed by a
+ * function-like macro that takes a parameter used in the computation.
+ *
+ * The offset of a register dependent on execution environment is computed
+ * by a macro that is supplied a parameter "ee".  The "ee" value is a member
+ * of the gsi_ee enumerated type.
+ *
+ * The offset of a channel register is computed by a macro that is supplied a
+ * parameter "ch".  The "ch" value is a channel id whose maximum value is 30
+ * (though the actual limit is hardware-dependent).
+ *
+ * The offset of an event register is computed by a macro that is supplied a
+ * parameter "ev".
The "ev" value is an event id whose maximum value is 15 + * (though the actual limit is hardware-dependent). + */ + +#define GSI_INTER_EE_SRC_CH_IRQ_OFFSET \ + GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(ee) \ + (0x0000c018 + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \ + (0x0000c01c + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET \ + GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(ee) \ + (0x0000c028 + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \ + (0x0000c02c + 0x1000 * (ee)) + +#define GSI_CH_C_CNTXT_0_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_0_OFFSET(ch, ee) \ + (0x0001c000 + 0x4000 * (ee) + 0x80 * (ch)) +#define CHTYPE_PROTOCOL_FMASK GENMASK(2, 0) +#define CHTYPE_DIR_FMASK GENMASK(3, 3) +#define EE_FMASK GENMASK(7, 4) +#define CHID_FMASK GENMASK(12, 8) +#define ERINDEX_FMASK GENMASK(18, 14) +#define CHSTATE_FMASK GENMASK(23, 20) +#define ELEMENT_SIZE_FMASK GENMASK(31, 24) + +#define GSI_CH_C_CNTXT_1_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_1_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_1_OFFSET(ch, ee) \ + (0x0001c004 + 0x4000 * (ee) + 0x80 * (ch)) +#define R_LENGTH_FMASK GENMASK(15, 0) + +#define GSI_CH_C_CNTXT_2_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_2_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_2_OFFSET(ch, ee) \ + (0x0001c008 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_CNTXT_3_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_3_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_3_OFFSET(ch, ee) \ + (0x0001c00c + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_QOS_OFFSET(ch) \ + GSI_EE_N_CH_C_QOS_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_QOS_OFFSET(ch, ee) \ + (0x0001c05c + 0x4000 * (ee) + 0x80 * (ch)) +#define WRR_WEIGHT_FMASK GENMASK(3, 0) +#define MAX_PREFETCH_FMASK GENMASK(8, 8) +#define USE_DB_ENG_FMASK GENMASK(9, 9) + +#define GSI_CH_C_SCRATCH_0_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_0_OFFSET(ch, ee) \ + (0x0001c060 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_1_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_1_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_1_OFFSET(ch, ee) \ + (0x0001c064 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_2_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_2_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_2_OFFSET(ch, ee) \ + (0x0001c068 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_3_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_3_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_3_OFFSET(ch, ee) \ + (0x0001c06c + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_EV_CH_E_CNTXT_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET(ev, ee) \ + (0x0001d000 + 0x4000 * (ee) + 0x80 * (ev)) +#define EV_CHTYPE_FMASK GENMASK(3, 0) +#define EV_EE_FMASK GENMASK(7, 4) +#define EV_EVCHID_FMASK GENMASK(15, 8) +#define EV_INTYPE_FMASK GENMASK(16, 16) +#define EV_CHSTATE_FMASK GENMASK(23, 20) +#define EV_ELEMENT_SIZE_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_E_CNTXT_1_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET(ev, ee) \ + (0x0001d004 + 0x4000 * (ee) + 0x80 * (ev)) +#define 
EV_R_LENGTH_FMASK GENMASK(15, 0) + +#define GSI_EV_CH_E_CNTXT_2_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET(ev, ee) \ + (0x0001d008 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_3_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET(ev, ee) \ + (0x0001d00c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_4_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET(ev, ee) \ + (0x0001d010 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_8_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET(ev, ee) \ + (0x0001d020 + 0x4000 * (ee) + 0x80 * (ev)) +#define MODT_FMASK GENMASK(15, 0) +#define MODC_FMASK GENMASK(23, 16) +#define MOD_CNT_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_E_CNTXT_9_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET(ev, ee) \ + (0x0001d024 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_10_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET(ev, ee) \ + (0x0001d028 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_11_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET(ev, ee) \ + (0x0001d02c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_12_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET(ev, ee) \ + (0x0001d030 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_13_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET(ev, ee) \ + (0x0001d034 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_SCRATCH_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET(ev, ee) \ + (0x0001d048 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_SCRATCH_1_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET(ev, ee) \ + (0x0001d04c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_CH_C_DOORBELL_0_OFFSET(ch) \ + GSI_EE_N_CH_C_DOORBELL_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_DOORBELL_0_OFFSET(ch, ee) \ + (0x0001e000 + 0x4000 * (ee) + 0x08 * (ch)) + +#define GSI_EV_CH_E_DOORBELL_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET(ev, ee) \ + (0x0001e100 + 0x4000 * (ee) + 0x08 * (ev)) + +#define GSI_GSI_STATUS_OFFSET \ + GSI_EE_N_GSI_STATUS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GSI_STATUS_OFFSET(ee) \ + (0x0001f000 + 0x4000 * (ee)) +#define ENABLED_FMASK GENMASK(0, 0) + +#define GSI_CH_CMD_OFFSET \ + GSI_EE_N_CH_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CH_CMD_OFFSET(ee) \ + (0x0001f008 + 0x4000 * (ee)) +#define CH_CHID_FMASK GENMASK(7, 0) +#define CH_OPCODE_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_CMD_OFFSET \ + GSI_EE_N_EV_CH_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_EV_CH_CMD_OFFSET(ee) \ + (0x0001f010 + 0x4000 * (ee)) +#define EV_CHID_FMASK GENMASK(7, 0) +#define EV_OPCODE_FMASK GENMASK(31, 24) + +#define GSI_GSI_HW_PARAM_2_OFFSET \ + GSI_EE_N_GSI_HW_PARAM_2_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GSI_HW_PARAM_2_OFFSET(ee) \ + (0x0001f040 + 0x4000 * (ee)) +#define IRAM_SIZE_FMASK GENMASK(2, 0) +#define NUM_CH_PER_EE_FMASK GENMASK(7, 3) +#define NUM_EV_PER_EE_FMASK 
GENMASK(12, 8) +#define GSI_CH_PEND_TRANSLATE_FMASK GENMASK(13, 13) +#define GSI_CH_FULL_LOGIC_FMASK GENMASK(14, 14) +#define IRAM_SIZE_ONE_KB_FVAL 0 +#define IRAM_SIZE_TWO_KB_FVAL 1 + +#define GSI_CNTXT_TYPE_IRQ_OFFSET \ + GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(ee) \ + (0x0001f080 + 0x4000 * (ee)) +#define CH_CTRL_FMASK GENMASK(0, 0) +#define EV_CTRL_FMASK GENMASK(1, 1) +#define GLOB_EE_FMASK GENMASK(2, 2) +#define IEOB_FMASK GENMASK(3, 3) +#define INTER_EE_CH_CTRL_FMASK GENMASK(4, 4) +#define INTER_EE_EV_CTRL_FMASK GENMASK(5, 5) +#define GENERAL_FMASK GENMASK(6, 6) + +#define GSI_CNTXT_TYPE_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(ee) \ + (0x0001f088 + 0x4000 * (ee)) +#define MSK_CH_CTRL_FMASK GENMASK(0, 0) +#define MSK_EV_CTRL_FMASK GENMASK(1, 1) +#define MSK_GLOB_EE_FMASK GENMASK(2, 2) +#define MSK_IEOB_FMASK GENMASK(3, 3) +#define MSK_INTER_EE_CH_CTRL_FMASK GENMASK(4, 4) +#define MSK_INTER_EE_EV_CTRL_FMASK GENMASK(5, 5) +#define MSK_GENERAL_FMASK GENMASK(6, 6) +#define GSI_CNTXT_TYPE_IRQ_MSK_ALL GENMASK(6, 0) + +#define GSI_CNTXT_SRC_CH_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(ee) \ + (0x0001f090 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(ee) \ + (0x0001f094 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(ee) \ + (0x0001f098 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(ee) \ + (0x0001f09c + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(ee) \ + (0x0001f0a0 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \ + (0x0001f0a4 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(ee) \ + (0x0001f0b0 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(ee) \ + (0x0001f0b8 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(ee) \ + (0x0001f0c0 + 0x4000 * (ee)) + +#define GSI_CNTXT_GLOB_IRQ_STTS_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(ee) \ + (0x0001f100 + 0x4000 * (ee)) +#define ERROR_INT_FMASK GENMASK(0, 0) +#define GP_INT1_FMASK GENMASK(1, 1) +#define GP_INT2_FMASK GENMASK(2, 2) +#define GP_INT3_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_GLOB_IRQ_EN_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(ee) \ + (0x0001f108 + 0x4000 * (ee)) +#define EN_ERROR_INT_FMASK GENMASK(0, 0) +#define EN_GP_INT1_FMASK GENMASK(1, 1) +#define EN_GP_INT2_FMASK GENMASK(2, 2) +#define EN_GP_INT3_FMASK GENMASK(3, 3) +#define GSI_CNTXT_GLOB_IRQ_ALL GENMASK(3, 0) + +#define GSI_CNTXT_GLOB_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(GSI_EE_AP) +#define 
GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(ee) \
+			(0x0001f110 + 0x4000 * (ee))
+#define CLR_ERROR_INT_FMASK		GENMASK(0, 0)
+#define CLR_GP_INT1_FMASK		GENMASK(1, 1)
+#define CLR_GP_INT2_FMASK		GENMASK(2, 2)
+#define CLR_GP_INT3_FMASK		GENMASK(3, 3)
+
+#define GSI_CNTXT_GSI_IRQ_STTS_OFFSET \
+		GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(ee) \
+			(0x0001f118 + 0x4000 * (ee))
+#define BREAK_POINT_FMASK		GENMASK(0, 0)
+#define BUS_ERROR_FMASK			GENMASK(1, 1)
+#define CMD_FIFO_OVRFLOW_FMASK		GENMASK(2, 2)
+#define MCS_STACK_OVRFLOW_FMASK		GENMASK(3, 3)
+
+#define GSI_CNTXT_GSI_IRQ_EN_OFFSET \
+		GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(ee) \
+			(0x0001f120 + 0x4000 * (ee))
+#define EN_BREAK_POINT_FMASK		GENMASK(0, 0)
+#define EN_BUS_ERROR_FMASK		GENMASK(1, 1)
+#define EN_CMD_FIFO_OVRFLOW_FMASK	GENMASK(2, 2)
+#define EN_MCS_STACK_OVRFLOW_FMASK	GENMASK(3, 3)
+#define GSI_CNTXT_GSI_IRQ_ALL		GENMASK(3, 0)
+
+#define GSI_CNTXT_GSI_IRQ_CLR_OFFSET \
+		GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(ee) \
+			(0x0001f128 + 0x4000 * (ee))
+#define CLR_BREAK_POINT_FMASK		GENMASK(0, 0)
+#define CLR_BUS_ERROR_FMASK		GENMASK(1, 1)
+#define CLR_CMD_FIFO_OVRFLOW_FMASK	GENMASK(2, 2)
+#define CLR_MCS_STACK_OVRFLOW_FMASK	GENMASK(3, 3)
+
+#define GSI_CNTXT_INTSET_OFFSET \
+		GSI_EE_N_CNTXT_INTSET_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_INTSET_OFFSET(ee) \
+			(0x0001f180 + 0x4000 * (ee))
+#define INTYPE_FMASK			GENMASK(0, 0)
+
+#define GSI_ERROR_LOG_OFFSET \
+		GSI_EE_N_ERROR_LOG_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_ERROR_LOG_OFFSET(ee) \
+			(0x0001f200 + 0x4000 * (ee))
+
+#define GSI_ERROR_LOG_CLR_OFFSET \
+		GSI_EE_N_ERROR_LOG_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_ERROR_LOG_CLR_OFFSET(ee) \
+			(0x0001f210 + 0x4000 * (ee))
+
+#endif /* _GSI_REG_H_ */
From patchwork Fri May 31 03:53:38 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 10969599
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Subject: [PATCH v2 07/17] soc: qcom: ipa: the generic software interface
Date: Thu, 30 May 2019 22:53:38 -0500
Message-Id: <20190531035348.7194-8-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

This patch includes "gsi.c", which implements the generic software
interface (GSI) for IPA.  The generic software interface abstracts
channels, which provide a means of transferring data either from the
AP to the IPA, or from the IPA to the AP.  A ring buffer of "transfer
elements" (TREs) is used to describe data transfers to perform.  The
AP writes a doorbell register associated with a channel to let the IPA
know it has added new entries (for an AP->IPA channel) or has finished
processing entries (for an IPA->AP channel).

Each channel also has an event ring buffer, used by the IPA to
communicate information about events related to a channel (for
example, the completion of TREs).  The IPA writes its own doorbell
register, which triggers an interrupt on the AP, to signal that new
event information has arrived.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/gsi.c | 1635 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1635 insertions(+)
 create mode 100644 drivers/net/ipa/gsi.c
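
As an illustration of the doorbell sequence described above, here is a
minimal sketch (not part of the patch; it approximates what the
driver's gsi_channel_doorbell() helper does, and the function name used
here is made up):

	static void gsi_channel_doorbell_example(struct gsi *gsi,
						 struct gsi_channel *channel,
						 u32 channel_id)
	{
		struct gsi_ring *ring = &channel->tre_ring;

		/* Write the DMA address of the next unused TRE slot to
		 * the channel doorbell; the GSI then knows every TRE
		 * written before that point is ready for processing.
		 */
		iowrite32(gsi_ring_addr(ring, ring->index),
			  gsi->virt + GSI_CH_C_DOORBELL_0_OFFSET(channel_id));
	}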

diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
new file mode 100644
index 000000000000..a749d3b0d792
--- /dev/null
+++ b/drivers/net/ipa/gsi.c
@@ -0,0 +1,1635 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "gsi_reg.h"
+#include "gsi_private.h"
+#include "gsi_trans.h"
+#include "ipa_gsi.h"
+#include "ipa_data.h"
+
+/**
+ * DOC: The IPA Generic Software Interface
+ *
+ * The generic software interface (GSI) is an integral component of the IPA,
+ * providing a well-defined communication layer between the AP subsystem
+ * and the IPA core.  The modem uses the GSI layer as well.
+ *
+ *	--------             ---------
+ *	|      |             |       |
+ *	|  AP  +<---.   .----+ Modem |
+ *	|      +--. |   | .->+       |
+ *	|      |  | |   | |  |       |
+ *	--------  | |   | |  ---------
+ *	          v |   v |
+ *	        --+-+---+-+--
+ *	        |    GSI    |
+ *	        |-----------|
+ *	        |           |
+ *	        |    IPA    |
+ *	        |           |
+ *	        -------------
+ *
+ * In the above diagram, the AP and Modem represent "execution environments"
+ * (EEs), which are independent operating environments that use the IPA for
+ * data transfer.
+ *
+ * Each EE uses a set of unidirectional GSI "channels," which allow transfer
+ * of data to or from the IPA.  A channel is implemented as a ring buffer,
+ * with a DRAM-resident array of "transfer elements" (TREs) available to
+ * describe transfers to or from other EEs through the IPA.  A transfer
+ * element can also contain an immediate command, requesting the IPA perform
+ * actions other than data transfer.
+ *
+ * Each TRE refers to a block of data--also located in DRAM.  After writing
+ * one or more TREs to a channel, the writer (either the IPA or an EE) writes
+ * a doorbell register to inform the receiving side how many elements have
+ * been written.  Writing to a doorbell register triggers processing of the
+ * new elements within the GSI.
+ *
+ * Each channel has a GSI "event ring" associated with it.  An event ring
+ * is implemented very much like a channel ring, but is always directed from
+ * the IPA to an EE.  The IPA notifies an EE (such as the AP) about channel
+ * events by adding an entry to the event ring associated with the channel.
+ * The GSI then writes its doorbell for the event ring, causing the target
+ * EE to be interrupted.  Each entry in an event ring contains a pointer
+ * to the channel TRE whose completion the event represents.
+ *
+ * Each TRE in a channel ring has a set of flags.  One flag indicates whether
+ * the completion of the transfer operation generates an entry (and possibly
+ * an interrupt) in the channel's event ring.  Other flags allow transfer
+ * elements to be chained together, forming a single logical transaction.
+ * TRE flags are used to control whether and when interrupts are generated
+ * to signal completion of channel transfers.
+ *
+ * Elements in channel and event rings are completed (or consumed) strictly
+ * in order.  Completion of one entry implies the completion of all preceding
+ * entries.  A single completion interrupt can therefore communicate the
+ * completion of many transfers.
+ *
+ * Note that all GSI registers are little-endian, which is the assumed
+ * endianness of I/O space accesses.  The accessor functions perform byte
+ * swapping if needed (i.e., for a big-endian CPU).
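+ *
+ * As a concrete illustration of the ordering guarantee above (example
+ * values only): if the AP reads a single completion event pointing at
+ * TRE 7 of a channel ring, it may treat every TRE it wrote after the
+ * previously handled event, up to and including TRE 7, as complete.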
+ */ + +/* Delay period for interrupt moderation (in 32KHz IPA internal timer ticks) */ +#define IPA_GSI_EVT_RING_INT_MODT (32 * 1) /* 1ms under 32KHz clock */ + +#define GSI_CMD_TIMEOUT 5 /* seconds */ + +#define GSI_MHI_ER_START 10 /* First reserved event number */ +#define GSI_MHI_ER_END 16 /* Last reserved event number */ + +#define GSI_ISR_MAX_ITER 50 /* Detect interrupt storms */ + +/* Hardware values from the error log register error code field */ +enum gsi_err_code { + GSI_INVALID_TRE_ERR = 0x1, + GSI_OUT_OF_BUFFERS_ERR = 0x2, + GSI_OUT_OF_RESOURCES_ERR = 0x3, + GSI_UNSUPPORTED_INTER_EE_OP_ERR = 0x4, + GSI_EVT_RING_EMPTY_ERR = 0x5, + GSI_NON_ALLOCATED_EVT_ACCESS_ERR = 0x6, + GSI_HWO_1_ERR = 0x8, +}; + +/* Hardware values from the error log register error type field */ +enum gsi_err_type { + GSI_ERR_TYPE_GLOB = 0x1, + GSI_ERR_TYPE_CHAN = 0x2, + GSI_ERR_TYPE_EVT = 0x3, +}; + +/* Fields in an error log register at GSI_ERROR_LOG_OFFSET */ +#define GSI_LOG_ERR_ARG3_FMASK GENMASK(3, 0) +#define GSI_LOG_ERR_ARG2_FMASK GENMASK(7, 4) +#define GSI_LOG_ERR_ARG1_FMASK GENMASK(11, 8) +#define GSI_LOG_ERR_CODE_FMASK GENMASK(15, 12) +#define GSI_LOG_ERR_VIRT_IDX_FMASK GENMASK(23, 19) +#define GSI_LOG_ERR_TYPE_FMASK GENMASK(27, 24) +#define GSI_LOG_ERR_EE_FMASK GENMASK(31, 28) + +/* Hardware values used when programming an event ring */ +enum gsi_evt_chtype { + GSI_EVT_CHTYPE_MHI_EV = 0x0, + GSI_EVT_CHTYPE_XHCI_EV = 0x1, + GSI_EVT_CHTYPE_GPI_EV = 0x2, + GSI_EVT_CHTYPE_XDCI_EV = 0x3, +}; + +/* Hardware values used when programming a channel */ +enum gsi_channel_protocol { + GSI_CHANNEL_PROTOCOL_MHI = 0x0, + GSI_CHANNEL_PROTOCOL_XHCI = 0x1, + GSI_CHANNEL_PROTOCOL_GPI = 0x2, + GSI_CHANNEL_PROTOCOL_XDCI = 0x3, +}; + +/* Hardware values representing an event ring immediate command opcode */ +enum gsi_evt_ch_cmd_opcode { + GSI_EVT_ALLOCATE = 0x0, + GSI_EVT_RESET = 0x9, + GSI_EVT_DE_ALLOC = 0xa, +}; + +/* Hardware values representing a channel immediate command opcode */ +enum gsi_ch_cmd_opcode { + GSI_CH_ALLOCATE = 0x0, + GSI_CH_START = 0x1, + GSI_CH_STOP = 0x2, + GSI_CH_RESET = 0x9, + GSI_CH_DE_ALLOC = 0xa, + GSI_CH_DB_STOP = 0xb, +}; + +/** gsi_gpi_channel_scratch - GPI protocol scratch register + * + * @max_outstanding_tre: + * Defines the maximum number of TREs allowed in a single transaction + * on a channel (in Bytes). This determines the amount of prefetch + * performed by the hardware. We configure this to equal the size of + * the TLV FIFO for the channel. + * @outstanding_threshold: + * Defines the threshold (in Bytes) determining when the sequencer + * should update the channel doorbell. We configure this to equal + * the size of two TREs. + */ +struct gsi_gpi_channel_scratch { + u64 reserved1; + u16 reserved2; + u16 max_outstanding_tre; + u16 reserved3; + u16 outstanding_threshold; +}; + +/** gsi_channel_scratch - channel scratch configuration area + * + * The exact interpretation of this register is protocol-specific. + * We only use GPI channels; see struct gsi_gpi_channel_scratch, above. 
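+ *
+ * Example values (illustrative only, not from hardware documentation):
+ * for a channel whose TLV FIFO is 2048 bytes, max_outstanding_tre would
+ * be set to 2048, and with 16-byte TREs (sizeof(struct gsi_tre))
+ * outstanding_threshold would be set to 2 * 16 = 32.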
+ */ +union gsi_channel_scratch { + struct gsi_gpi_channel_scratch gpi; + struct { + u32 word1; + u32 word2; + u32 word3; + u32 word4; + } data; +}; + +/* Return the channel id associated with a given channel */ +static u32 gsi_channel_id(struct gsi_channel *channel) +{ + return channel - &channel->gsi->channel[0]; +} + +/* Report the number of bytes queued to hardware since last call */ +void gsi_channel_tx_queued(struct gsi_channel *channel) +{ + u32 trans_count; + u32 byte_count; + + trans_count = channel->trans_count - channel->queued_trans_count; + byte_count = channel->byte_count - channel->queued_byte_count; + channel->queued_trans_count = channel->trans_count; + channel->queued_byte_count = channel->byte_count; + + ipa_gsi_channel_tx_queued(channel->gsi, gsi_channel_id(channel), + trans_count, byte_count); +} + +static void gsi_irq_event_enable(struct gsi *gsi, u32 evt_ring_id) +{ + u32 val; + + gsi->event_enable_bitmap |= BIT(evt_ring_id); + val = gsi->event_enable_bitmap; + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); +} + +static void gsi_irq_event_disable(struct gsi *gsi, u32 evt_ring_id) +{ + u32 val; + + gsi->event_enable_bitmap &= ~BIT(evt_ring_id); + val = gsi->event_enable_bitmap; + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); +} + +/* Enable all GSI_interrupt types */ +static void gsi_irq_enable(struct gsi *gsi) +{ + u32 val; + + /* Inter EE commands / interrupt are not supported. */ + val = GSI_CNTXT_TYPE_IRQ_MSK_ALL; + iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET); + + val = GENMASK(GSI_CHANNEL_MAX - 1, 0); + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET); + + val = GENMASK(GSI_EVT_RING_MAX - 1, 0); + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET); + + /* Each IEOB interrupt is enabled (later) as needed by channels */ + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); + + val = GSI_CNTXT_GLOB_IRQ_ALL; + iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); + + /* Never enable GSI_BREAK_POINT */ + val = GSI_CNTXT_GSI_IRQ_ALL & ~EN_BREAK_POINT_FMASK; + iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET); +} + +/* Disable all GSI_interrupt types */ +static void gsi_irq_disable(struct gsi *gsi) +{ + iowrite32(0, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET); +} + +/* Return the hardware's notion of the current state of a channel */ +static enum gsi_channel_state gsi_channel_state(struct gsi_channel *channel) +{ + u32 channel_id = gsi_channel_id(channel); + struct gsi *gsi = channel->gsi; + u32 val; + + val = ioread32(gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id)); + + return u32_get_bits(val, CHSTATE_FMASK); +} + +/* Return the hardware's notion of the current state of an event ring */ +static enum gsi_evt_ring_state +gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id) +{ + u32 val = ioread32(gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id)); + + return u32_get_bits(val, EV_CHSTATE_FMASK); +} + +/* Channel control interrupt handler */ +static void gsi_isr_chan_ctrl(struct gsi *gsi) +{ + u32 channel_mask; + + channel_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_CH_IRQ_OFFSET); + iowrite32(channel_mask, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET); + + while (channel_mask) { + u32 
channel_id = __ffs(channel_mask); + struct gsi_channel *channel; + + channel_mask ^= BIT(channel_id); + + channel = &gsi->channel[channel_id]; + channel->state = gsi_channel_state(channel); + + complete(&channel->completion); + } +} + +static void gsi_isr_evt_ctrl(struct gsi *gsi) +{ + u32 event_mask; + + event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET); + iowrite32(event_mask, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET); + + while (event_mask) { + u32 evt_ring_id = __ffs(event_mask); + struct gsi_evt_ring *evt_ring; + + event_mask ^= BIT(evt_ring_id); + + evt_ring = &gsi->evt_ring[evt_ring_id]; + evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id); + + complete(&evt_ring->completion); + } +} + +static void +gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code) +{ + if (code == GSI_OUT_OF_RESOURCES_ERR) { + dev_err(gsi->dev, "channel %u out of resources\n", channel_id); + complete(&gsi->channel[channel_id].completion); + return; + } + + /* Report, but otherwise ignore all other error codes */ + dev_err(gsi->dev, "channel %u global error ee 0x%08x code 0x%08x\n", + channel_id, err_ee, code); +} + +static void +gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code) +{ + if (code == GSI_OUT_OF_RESOURCES_ERR) { + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + u32 channel_id = gsi_channel_id(evt_ring->channel); + + complete(&evt_ring->completion); + dev_err(gsi->dev, "evt_ring for channel %u out of resources\n", + channel_id); + return; + } + + /* Report, but otherwise ignore all other error codes */ + dev_err(gsi->dev, "event ring %u global error ee %u code 0x%08x\n", + evt_ring_id, err_ee, code); +} + +static void gsi_isr_glob_err(struct gsi *gsi) +{ + enum gsi_err_type type; + enum gsi_err_code code; + u32 which; + u32 val; + u32 ee; + + /* Get the logged error, then reinitialize the log */ + val = ioread32(gsi->virt + GSI_ERROR_LOG_OFFSET); + iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET); + iowrite32(~0, gsi->virt + GSI_ERROR_LOG_CLR_OFFSET); + + ee = u32_get_bits(val, GSI_LOG_ERR_EE_FMASK); + which = u32_get_bits(val, GSI_LOG_ERR_VIRT_IDX_FMASK); + type = u32_get_bits(val, GSI_LOG_ERR_TYPE_FMASK); + code = u32_get_bits(val, GSI_LOG_ERR_CODE_FMASK); + + if (type == GSI_ERR_TYPE_CHAN) + gsi_isr_glob_chan_err(gsi, ee, which, code); + else if (type == GSI_ERR_TYPE_EVT) + gsi_isr_glob_evt_err(gsi, ee, which, code); + else /* type GSI_ERR_TYPE_GLOB should be fatal */ + dev_err(gsi->dev, "unexpected global error 0x%08x\n", type); +} + +static void gsi_isr_glob_ee(struct gsi *gsi) +{ + u32 val; + + val = ioread32(gsi->virt + GSI_CNTXT_GLOB_IRQ_STTS_OFFSET); + + if (val & ERROR_INT_FMASK) + gsi_isr_glob_err(gsi); + + iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_CLR_OFFSET); + + val &= ~ERROR_INT_FMASK; + + if (val & EN_GP_INT1_FMASK) { + dev_err(gsi->dev, "unexpected global INT1\n"); + val ^= EN_GP_INT1_FMASK; + } + + if (val) + dev_err(gsi->dev, "unexpected global interrupt 0x%08x\n", val); +} + +/* I/O completion interrupt event */ +static void gsi_isr_ieob(struct gsi *gsi) +{ + u32 event_mask; + + event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_OFFSET); + iowrite32(event_mask, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET); + + while (event_mask) { + u32 evt_ring_id = __ffs(event_mask); + + event_mask ^= BIT(evt_ring_id); + + gsi_irq_event_disable(gsi, evt_ring_id); + napi_schedule(&gsi->evt_ring[evt_ring_id].channel->napi); + } +} + +/* We don't currently expect to receive any inter-EE channel 
interrupts */
+static void gsi_isr_inter_ee_chan_ctrl(struct gsi *gsi)
+{
+	u32 channel_mask;
+
+	channel_mask = ioread32(gsi->virt + GSI_INTER_EE_SRC_CH_IRQ_OFFSET);
+	iowrite32(channel_mask, gsi->virt + GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET);
+
+	while (channel_mask) {
+		u32 channel_id = __ffs(channel_mask);
+
+		dev_err(gsi->dev, "ch %u inter-EE interrupt\n", channel_id);
+		channel_mask ^= BIT(channel_id);
+	}
+}
+
+/* We don't currently expect to receive any inter-EE event interrupts */
+static void gsi_isr_inter_ee_evt_ctrl(struct gsi *gsi)
+{
+	u32 event_mask;
+
+	event_mask = ioread32(gsi->virt + GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET);
+	iowrite32(event_mask,
+		  gsi->virt + GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET);
+
+	while (event_mask) {
+		u32 evt_ring_id = __ffs(event_mask);
+
+		event_mask ^= BIT(evt_ring_id);
+
+		/* not currently expected */
+		dev_err(gsi->dev, "evt %u inter-EE interrupt\n", evt_ring_id);
+	}
+}
+
+/* We don't currently expect to receive any general event interrupts */
+static void gsi_isr_general(struct gsi *gsi)
+{
+	u32 val;
+
+	val = ioread32(gsi->virt + GSI_CNTXT_GSI_IRQ_STTS_OFFSET);
+	iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_CLR_OFFSET);
+
+	/* Only clear the breakpoint bit when it was actually set, to
+	 * avoid flagging it below as an unexpected interrupt.
+	 */
+	if (val & CLR_BREAK_POINT_FMASK) {
+		dev_err(gsi->dev, "breakpoint!\n");
+		val ^= CLR_BREAK_POINT_FMASK;
+	}
+
+	if (val)
+		dev_err(gsi->dev, "unexpected general interrupt 0x%08x\n", val);
+}
+
+/**
+ * gsi_isr() - Top level GSI interrupt service routine
+ * @irq:	Interrupt number (ignored)
+ * @dev_id:	GSI pointer supplied to request_irq()
+ *
+ * This is the main handler function registered for the GSI IRQ. Each type
+ * of interrupt has a separate handler function that is called from here.
+ */
+static irqreturn_t gsi_isr(int irq, void *dev_id)
+{
+	struct gsi *gsi = dev_id;
+	u32 intr_mask;
+	u32 cnt = 0;
+
+	while ((intr_mask = ioread32(gsi->virt + GSI_CNTXT_TYPE_IRQ_OFFSET))) {
+		/* intr_mask contains bitmask of pending GSI interrupts */
+		do {
+			u32 gsi_intr = BIT(__ffs(intr_mask));
+
+			intr_mask ^= gsi_intr;
+
+			switch (gsi_intr) {
+			case CH_CTRL_FMASK:
+				gsi_isr_chan_ctrl(gsi);
+				break;
+			case EV_CTRL_FMASK:
+				gsi_isr_evt_ctrl(gsi);
+				break;
+			case GLOB_EE_FMASK:
+				gsi_isr_glob_ee(gsi);
+				break;
+			case IEOB_FMASK:
+				gsi_isr_ieob(gsi);
+				break;
+			case INTER_EE_CH_CTRL_FMASK:
+				gsi_isr_inter_ee_chan_ctrl(gsi);
+				break;
+			case INTER_EE_EV_CTRL_FMASK:
+				gsi_isr_inter_ee_evt_ctrl(gsi);
+				break;
+			case GENERAL_FMASK:
+				gsi_isr_general(gsi);
+				break;
+			default:
+				dev_err(gsi->dev,
+					"%s: unrecognized type 0x%08x\n",
+					__func__, gsi_intr);
+				break;
+			}
+		} while (intr_mask);
+
+		if (++cnt > GSI_ISR_MAX_ITER) {
+			dev_err(gsi->dev, "interrupt flood\n");
+			break;
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+/* Return the virtual address associated with a ring index */
+void *gsi_ring_virt(struct gsi_ring *ring, u32 index)
+{
+	/* Note: index *must* be used modulo the ring count here */
+	return ring->virt + (index % ring->count) * sizeof(struct gsi_tre);
+}
+
+/* Return the 32-bit DMA address associated with a ring index */
+u32 gsi_ring_addr(struct gsi_ring *ring, u32 index)
+{
+	return (ring->addr & GENMASK(31, 0)) + index * sizeof(struct gsi_tre);
+}
+
+/* Return the ring index of a 32-bit ring offset */
+static u32 gsi_ring_index(struct gsi_ring *ring, u32 offset)
+{
+	/* Code assumes channel and event ring elements are the same size */
+	BUILD_BUG_ON(sizeof(struct gsi_tre) !=
+		     sizeof(struct gsi_xfer_compl_evt));
+
+	return (offset - gsi_ring_addr(ring, 0)) / sizeof(struct gsi_tre);
+}
+
+/* Return the transaction associated with a transfer completion event */
+static struct gsi_trans *gsi_event_trans(struct gsi_channel *channel,
+					 struct gsi_xfer_compl_evt *evt)
+{
+	u32 tre_offset;
+	u32 tre_index;
+
+	/* Event xfer_ptr records the TRE it's associated with */
+	tre_offset = le64_to_cpu(evt->xfer_ptr) & GENMASK(31, 0);
+	tre_index = gsi_ring_index(&channel->tre_ring, tre_offset);
+
+	return gsi_channel_trans_mapped(channel, tre_index);
+}
+
+/**
+ * gsi_channel_tx_update() - Report completed TX transfers
+ * @channel:	Channel that has completed transmitting packets
+ * @trans:	Last transaction known to be complete
+ *
+ * Compute the number of transactions and bytes that have been
+ * transferred on a TX channel, and report that to higher layers in
+ * the network stack for throttling.
+ */
+static void
+gsi_channel_tx_update(struct gsi_channel *channel, struct gsi_trans *trans)
+{
+	u64 byte_count = trans->byte_count + trans->len;
+	u64 trans_count = trans->trans_count + 1;
+
+	byte_count -= channel->compl_byte_count;
+	channel->compl_byte_count += byte_count;
+	trans_count -= channel->compl_trans_count;
+	channel->compl_trans_count += trans_count;
+	/* assert(trans_count <= U32_MAX); */
+
+	ipa_gsi_channel_tx_completed(channel->gsi, gsi_channel_id(channel),
+				     trans_count, byte_count);
+}
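+
+/* Worked example (illustrative numbers, not from hardware): if a channel
+ * has transferred 1000 bytes in 10 transactions since it was started, and
+ * compl_byte_count/compl_trans_count were 600 and 6 at the previous
+ * completion, the code above reports 400 bytes and 4 transactions to
+ * ipa_gsi_channel_tx_completed(), then records the new totals for the
+ * next call.
+ */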
+
+/**
+ * gsi_evt_ring_rx_update() - Record lengths of received data
+ * @evt_ring:	Event ring associated with channel that received packets
+ * @index:	Event index in ring reported by hardware
+ *
+ * Events for RX channels contain the actual number of bytes received into
+ * the buffer. Every event has a transaction associated with it, and here
+ * we update transactions to record their actual received lengths.
+ *
+ * This function is called whenever we learn that the GSI hardware has filled
+ * new events since the last time we checked. The ring's index field tells
+ * which entry is the first one in need of processing. The index provided is
+ * the first *unfilled* event in the ring (following the last filled one).
+ *
+ * Events are sequential within the event ring, and transactions are
+ * sequential within the transaction pool.
+ *
+ * Note that @index always refers to an element *within* the event ring.
+ */
+static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+{
+	struct gsi_channel *channel = evt_ring->channel;
+	struct gsi_ring *ring = &evt_ring->ring;
+	struct gsi_xfer_compl_evt *evt_done;
+	struct gsi_trans_info *trans_info;
+	struct gsi_xfer_compl_evt *evt;
+	struct gsi_trans *trans;
+	u32 byte_count = 0;
+	u32 trans_avail;
+	u32 old_index;
+	u32 evt_avail;
+
+	/* We'll start with the oldest un-processed event. RX channels
+	 * replenish receive buffers in single-TRE transactions, so we
+	 * can just map that event to its transaction.
+	 */
+	old_index = ring->index;
+	evt = gsi_ring_virt(ring, old_index);
+	trans = gsi_event_trans(channel, evt);
+
+	/* Compute the number of events to process before we wrap */
+	evt_avail = ring->count - old_index % ring->count;
+
+	/* And compute how many transactions to process before we wrap */
+	trans_info = &channel->trans_info;
+	trans_avail = (u32)(&trans_info->pool[trans_info->pool_count] - trans);
+
+	/* Finally, determine when we'll be done processing events */
+	evt_done = gsi_ring_virt(ring, index);
+	do {
+		/* Record the received length, and accumulate it */
+		trans->len = __le16_to_cpu(evt->len);
+		byte_count += trans->len;
+
+		if (--evt_avail)
+			evt++;
+		else
+			evt = gsi_ring_virt(ring, 0);
+
+		if (--trans_avail)
+			trans++;
+		else
+			trans = &trans_info->pool[0];
+	} while (evt != evt_done);
+
+	/* We record RX bytes when they are received */
+	channel->byte_count += byte_count;
+	channel->trans_count++;
+}
+
+/* Ring an event ring doorbell, reporting the last entry processed by the AP.
+ * The index argument (modulo the ring count) is the first unfilled entry, so
+ * we supply one less than that with the doorbell. Update the event ring
+ * index field with the value provided.
+ */
+static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index)
+{
+	struct gsi_ring *ring = &gsi->evt_ring[evt_ring_id].ring;
+	u32 val;
+
+	ring->index = index;	/* Next unused entry */
+
+	/* Note: index *must* be used modulo the ring count here */
+	val = gsi_ring_addr(ring, (index - 1) % ring->count);
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_DOORBELL_0_OFFSET(evt_ring_id));
+}
+
+/* Return the maximum number of channels the hardware supports */
+static u32 gsi_channel_max(struct gsi *gsi)
+{
+	u32 val = ioread32(gsi->virt + GSI_GSI_HW_PARAM_2_OFFSET);
+
+	return u32_get_bits(val, NUM_CH_PER_EE_FMASK);
+}
+
+/* Return the maximum number of event rings the hardware supports */
+static u32 gsi_evt_ring_max(struct gsi *gsi)
+{
+	u32 val = ioread32(gsi->virt + GSI_GSI_HW_PARAM_2_OFFSET);
+
+	return u32_get_bits(val, NUM_EV_PER_EE_FMASK);
+}
+
+/* Issue a GSI command by writing a value to a register, then wait for
+ * completion to be signaled. Reports an error if the command times out.
+ * (Timeout is not expected, and suggests broken hardware.)
+ */
+static void
+gsi_command(struct gsi *gsi, u32 reg, u32 val, struct completion *completion)
+{
+	reinit_completion(completion);
+
+	iowrite32(val, gsi->virt + reg);
+	if (!wait_for_completion_timeout(completion, GSI_CMD_TIMEOUT * HZ))
+		dev_err(gsi->dev, "%s timeout reg 0x%08x val 0x%08x\n",
+			__func__, reg, val);
+}
+
+/* Issue an event ring command and wait for it to complete */
+static void evt_ring_command(struct gsi *gsi, u32 evt_ring_id,
+			     enum gsi_evt_ch_cmd_opcode op)
+{
+	struct completion *completion = &gsi->evt_ring[evt_ring_id].completion;
+	u32 val = 0;
+
+	val |= u32_encode_bits(evt_ring_id, EV_CHID_FMASK);
+	val |= u32_encode_bits(op, EV_OPCODE_FMASK);
+
+	gsi_command(gsi, GSI_EV_CH_CMD_OFFSET, val, completion);
+}
+
+/* Issue a channel command and wait for it to complete */
+static void
+gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode op)
+{
+	u32 channel_id = gsi_channel_id(channel);
+	u32 val = 0;
+
+	val |= u32_encode_bits(channel_id, CH_CHID_FMASK);
+	val |= u32_encode_bits(op, CH_OPCODE_FMASK);
+
+	gsi_command(channel->gsi, GSI_CH_CMD_OFFSET, val, &channel->completion);
+}
+
+/* Initialize a ring, including allocating DMA memory for its entries */
+static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
+{
+	size_t size = count * sizeof(struct gsi_tre);
+	dma_addr_t addr;
+
+	BUILD_BUG_ON(!is_power_of_2(sizeof(struct gsi_tre)));
+
+	if (!count)
+		return -EINVAL;
+
+	/* Hardware requires a 2^n ring size, with alignment equal to size */
+	ring->virt = dma_alloc_coherent(gsi->dev, size, &addr, GFP_KERNEL);
+	if (ring->virt && addr % size) {
+		/* Free using the address we were just given; ring->addr
+		 * has not been initialized yet.
+		 */
+		dma_free_coherent(gsi->dev, size, ring->virt, addr);
+		dev_err(gsi->dev, "unable to alloc 0x%zx-aligned ring buffer\n",
+			size);
+		return -EINVAL;	/* Not a good error value, but distinct */
+	} else if (!ring->virt) {
+		return -ENOMEM;
+	}
+	ring->addr = addr;
+	ring->count = count;
+
+	return 0;
+}
+
+/* Free a previously-allocated ring */
+static void gsi_ring_free(struct gsi *gsi, struct gsi_ring *ring)
+{
+	size_t size = ring->count * sizeof(struct gsi_tre);
+
+	dma_free_coherent(gsi->dev, size, ring->virt, ring->addr);
+}
+
+/* Program an event ring for use */
+static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id)
+{
+	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+	size_t size = evt_ring->ring.count * sizeof(struct gsi_tre);
+	u32 val = 0;
+
+	BUILD_BUG_ON(sizeof(struct gsi_xfer_compl_evt) >
+		     field_max(EV_ELEMENT_SIZE_FMASK));
+
+	val |= u32_encode_bits(GSI_EVT_CHTYPE_GPI_EV, EV_CHTYPE_FMASK);
+	val |= EV_INTYPE_FMASK;
+	val |= u32_encode_bits(sizeof(struct gsi_xfer_compl_evt),
+			       EV_ELEMENT_SIZE_FMASK);
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id));
+
+	val = u32_encode_bits(size, EV_R_LENGTH_FMASK);
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_1_OFFSET(evt_ring_id));
+
+	/* The context 2 and 3 registers store the low-order and
+	 * high-order 32 bits of the address of the event ring,
+	 * respectively.
+	 */
+	val = evt_ring->ring.addr & GENMASK(31, 0);
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_2_OFFSET(evt_ring_id));
+
+	val = evt_ring->ring.addr >> 32;
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_3_OFFSET(evt_ring_id));
+
+	/* Enable interrupt moderation by setting the moderation delay */
+	val = u32_encode_bits(IPA_GSI_EVT_RING_INT_MODT, MODT_FMASK);
+	val |= u32_encode_bits(1, MODC_FMASK);	/* comes from channel */
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_8_OFFSET(evt_ring_id));
+
+	/* No MSI write data, and the MSI high and low addresses are 0 */
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_9_OFFSET(evt_ring_id));
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_10_OFFSET(evt_ring_id));
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_11_OFFSET(evt_ring_id));
+
+	/* We don't need to get event read pointer updates */
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_12_OFFSET(evt_ring_id));
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_13_OFFSET(evt_ring_id));
+}
+
+/* Issue an allocation request to the hardware for an event ring */
+static int gsi_evt_ring_alloc_hw(struct gsi *gsi, u32 evt_ring_id)
+{
+	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+
+	evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE);
+
+	if (evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) {
+		dev_err(gsi->dev, "evt_ring_id %u allocation bad state %u\n",
+			evt_ring_id, evt_ring->state);
+		return -EIO;
+	}
+
+	gsi_evt_ring_program(gsi, evt_ring_id);
+
+	/* Have the first event in the ring be the first one filled. */
+	gsi_evt_ring_doorbell(gsi, evt_ring_id, 0);
+
+	return 0;
+}
+
+/* Issue a hardware de-allocation request for an (allocated) event ring */
+static void gsi_evt_ring_free_hw(struct gsi *gsi, u32 evt_ring_id)
+{
+	evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET);
+
+	evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC);
+}
+
+/* Allocate an available event ring id */
+static int gsi_evt_ring_id_alloc(struct gsi *gsi)
+{
+	u32 evt_ring_id;
+
+	if (gsi->event_bitmap == ~0U)
+		return -ENOSPC;
+
+	evt_ring_id = ffz(gsi->event_bitmap);
+	gsi->event_bitmap |= BIT(evt_ring_id);
+
+	return (int)evt_ring_id;
+}
+
+/* Free a previously-allocated event ring id */
+static void gsi_evt_ring_id_free(struct gsi *gsi, u32 evt_ring_id)
+{
+	gsi->event_bitmap &= ~BIT(evt_ring_id);
+}
+
+/* Ring a channel doorbell, reporting the first un-filled entry */
+void gsi_channel_doorbell(struct gsi_channel *channel)
+{
+	struct gsi_ring *tre_ring = &channel->tre_ring;
+	u32 channel_id = gsi_channel_id(channel);
+	struct gsi *gsi = channel->gsi;
+	u32 val;
+
+	/* Note: index *must* be used modulo the ring count here */
+	val = gsi_ring_addr(tre_ring, tre_ring->index % tre_ring->count);
+	iowrite32(val, gsi->virt + GSI_CH_C_DOORBELL_0_OFFSET(channel_id));
+}
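+
+/* For example (illustrative numbers): with a 16-entry TRE ring whose
+ * index has advanced to 18 after wrapping once, the doorbell write above
+ * reports the DMA address of ring slot 18 % 16 = 2, the first entry not
+ * yet filled by the AP.
+ */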
+
+/* Consult hardware, move any newly completed transactions to completed list */
+static void gsi_channel_update(struct gsi_channel *channel)
+{
+	u32 evt_ring_id = channel->evt_ring_id;
+	struct gsi *gsi = channel->gsi;
+	struct gsi_evt_ring *evt_ring;
+	struct gsi_trans *trans;
+	struct gsi_ring *ring;
+	u32 offset;
+	u32 index;
+
+	evt_ring = &gsi->evt_ring[evt_ring_id];
+	ring = &evt_ring->ring;
+
+	/* See if there's anything new to process; if not, we're done. Note
+	 * that index always refers to an entry *within* the event ring.
+	 */
+	offset = GSI_EV_CH_E_CNTXT_4_OFFSET(evt_ring_id);
+	index = gsi_ring_index(ring, ioread32(gsi->virt + offset));
+	if (index == ring->index % ring->count)
+		return;
+
+	/* Get the transaction for the latest completed event. Take a
+	 * reference to keep it from completing before we give the events
+	 * for this and previous transactions back to the hardware.
+	 */
+	trans = gsi_event_trans(channel, gsi_ring_virt(ring, index - 1));
+	refcount_inc(&trans->refcount);
+
+	/* For RX channels, update each completed transaction with the number
+	 * of bytes that were actually received. For TX channels, report
+	 * the number of transactions and bytes this completion represents
+	 * up the network stack.
+	 */
+	if (channel->toward_ipa)
+		gsi_channel_tx_update(channel, trans);
+	else
+		gsi_evt_ring_rx_update(evt_ring, index);
+
+	gsi_trans_move_complete(trans);
+
+	/* Tell the hardware we've handled these events */
+	gsi_evt_ring_doorbell(channel->gsi, channel->evt_ring_id, index);
+
+	gsi_trans_free(trans);
+}
+
+/**
+ * gsi_channel_poll_one() - Return a single completed transaction on a channel
+ * @channel:	Channel to be polled
+ *
+ * @Return:	Transaction pointer, or null if none are available
+ *
+ * This function returns the first entry on a channel's completed transaction
+ * list. If that list is empty, the hardware is consulted to determine
+ * whether any new transactions have completed. If so, they're moved to the
+ * completed list and the new first entry is returned. If there are no more
+ * completed transactions, a null pointer is returned.
+ */
+static struct gsi_trans *gsi_channel_poll_one(struct gsi_channel *channel)
+{
+	struct gsi_trans *trans;
+
+	/* Get the first transaction from the completed list */
+	trans = gsi_channel_trans_complete(channel);
+	if (!trans) {
+		/* List is empty; see if there's more to do */
+		gsi_channel_update(channel);
+		trans = gsi_channel_trans_complete(channel);
+	}
+
+	if (trans)
+		gsi_trans_move_polled(trans);
+
+	return trans;
+}
+
+/**
+ * gsi_channel_poll() - NAPI poll function for a channel
+ * @napi:	NAPI structure for the channel
+ * @budget:	Budget supplied by NAPI core
+ *
+ * @Return:	Number of items polled (<= budget)
+ *
+ * Single transactions completed by hardware are polled until either
+ * the budget is exhausted, or there are no more. Each transaction
+ * polled is passed to gsi_trans_complete(), to perform remaining
+ * completion processing and retire/free the transaction.
+ */
+static int gsi_channel_poll(struct napi_struct *napi, int budget)
+{
+	struct gsi_channel *channel;
+	int count = 0;
+
+	channel = container_of(napi, struct gsi_channel, napi);
+	while (count < budget) {
+		struct gsi_trans *trans;
+
+		trans = gsi_channel_poll_one(channel);
+		if (!trans)
+			break;
+		gsi_trans_complete(trans);
+		count++;	/* Consume budget for each completion */
+	}
+
+	if (count < budget) {
+		napi_complete(&channel->napi);
+		gsi_irq_event_enable(channel->gsi, channel->evt_ring_id);
+	}
+
+	return count;
+}
+
+/* The event bitmap represents which event ids are available for allocation.
+ * Set bits are not available, clear bits can be used. This function
+ * initializes the map so all events supported by the hardware are available,
+ * then precludes any reserved events from being allocated.
+ */
+static u32 gsi_event_bitmap_init(u32 evt_ring_max)
+{
+	u32 event_bitmap = GENMASK(BITS_PER_LONG - 1, evt_ring_max);
+
+	return event_bitmap | GENMASK(GSI_MHI_ER_END, GSI_MHI_ER_START);
+}
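+
+/* Illustration (hypothetical values): if the hardware supports 16 event
+ * rings and the MHI range reserves ids 10 and 11, the resulting 32-bit
+ * map has bits 31..16 and 11..10 set, leaving ids 0-9 and 12-15 free
+ * for gsi_evt_ring_id_alloc() to hand out.
+ */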
+
+/* Setup function for event rings */
+static int gsi_evt_ring_setup(struct gsi *gsi)
+{
+	u32 evt_ring_max;
+	u32 evt_ring_id;
+
+	evt_ring_max = gsi_evt_ring_max(gsi);
+	dev_dbg(gsi->dev, "evt_ring_max %u\n", evt_ring_max);
+	if (evt_ring_max != GSI_EVT_RING_MAX)
+		return -EIO;
+
+	for (evt_ring_id = 0; evt_ring_id < GSI_EVT_RING_MAX; evt_ring_id++) {
+		struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+
+		evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id);
+		if (evt_ring->state != GSI_EVT_RING_STATE_NOT_ALLOCATED)
+			return -EIO;
+	}
+
+	return 0;
+}
+
+/* Inverse of gsi_evt_ring_setup() */
+static void gsi_evt_ring_teardown(struct gsi *gsi)
+{
+	/* Nothing to do */
+}
+
+/* Configure a channel's "scratch registers" for a particular protocol */
+static void gsi_channel_scratch_write(struct gsi_channel *channel)
+{
+	u32 channel_id = gsi_channel_id(channel);
+	union gsi_channel_scratch scr = { };
+	struct gsi_gpi_channel_scratch *gpi;
+	struct gsi *gsi = channel->gsi;
+	u32 val;
+
+	/* See comments above definition of gsi_gpi_channel_scratch */
+	gpi = &scr.gpi;
+	gpi->max_outstanding_tre = channel->data->tlv_count *
+				   sizeof(struct gsi_tre);
+	gpi->outstanding_threshold = 2 * sizeof(struct gsi_tre);
+
+	val = scr.data.word1;
+	iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_0_OFFSET(channel_id));
+
+	val = scr.data.word2;
+	iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_1_OFFSET(channel_id));
+
+	val = scr.data.word3;
+	iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_2_OFFSET(channel_id));
+
+	/* We must preserve the lower 16 bits of the last scratch register;
+	 * only its upper 16 bits come from the scratch area. The next
+	 * sequence assumes the preserved bits remain unchanged between
+	 * the read and the write.
+	 */
+	val = ioread32(gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id));
+	val = (scr.data.word4 & GENMASK(31, 16)) | (val & GENMASK(15, 0));
+	iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id));
+}
+
+/* Program a channel for use */
+static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
+{
+	size_t size = channel->tre_ring.count * sizeof(struct gsi_tre);
+	u32 channel_id = gsi_channel_id(channel);
+	struct gsi *gsi = channel->gsi;
+	u32 wrr_weight = 0;
+	u32 val = 0;
+
+	BUILD_BUG_ON(sizeof(struct gsi_tre) > field_max(ELEMENT_SIZE_FMASK));
+
+	val |= u32_encode_bits(GSI_CHANNEL_PROTOCOL_GPI, CHTYPE_PROTOCOL_FMASK);
+	if (channel->toward_ipa)
+		val |= CHTYPE_DIR_FMASK;
+	val |= u32_encode_bits(channel->evt_ring_id, ERINDEX_FMASK);
+	val |= u32_encode_bits(sizeof(struct gsi_tre), ELEMENT_SIZE_FMASK);
+	iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id));
+
+	val = u32_encode_bits(size, R_LENGTH_FMASK);
+	iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_1_OFFSET(channel_id));
+
+	/* The context 2 and 3 registers store the low-order and
+	 * high-order 32 bits of the address of the channel ring,
+	 * respectively.
+	 */
+	val = channel->tre_ring.addr & GENMASK(31, 0);
+	iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_2_OFFSET(channel_id));
+
+	val = channel->tre_ring.addr >> 32;
+	iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_3_OFFSET(channel_id));
+
+	if (channel->data->wrr_priority)
+		wrr_weight = field_max(WRR_WEIGHT_FMASK);
+	val = u32_encode_bits(wrr_weight, WRR_WEIGHT_FMASK);
+
+	/* Max prefetch is 1 segment (do not set MAX_PREFETCH_FMASK) */
+	if (doorbell)
+		val |= USE_DB_ENG_FMASK;
+	iowrite32(val, gsi->virt + GSI_CH_C_QOS_OFFSET(channel_id));
+}
+
+/* Configure a channel; we configure all channels to use GPI protocol */
+static void gsi_channel_config(struct gsi_channel *channel, bool db_enable)
+{
+	/* Start at the first TRE entry each time we configure the channel */
+	channel->tre_ring.index = 0;
+	gsi_channel_program(channel, db_enable);
+	gsi_channel_scratch_write(channel);
+}
+
+/* Setup function for a single channel */
+static int gsi_channel_setup_one(struct gsi_channel *channel)
+{
+	u32 evt_ring_id = channel->evt_ring_id;
+	struct gsi *gsi = channel->gsi;
+	u32 val;
+	int ret;
+
+	if (!gsi)
+		return 0;	/* Ignore uninitialized channels */
+
+	channel->state = gsi_channel_state(channel);
+	if (channel->state != GSI_CHANNEL_STATE_NOT_ALLOCATED)
+		return -EIO;
+
+	mutex_lock(&gsi->mutex);
+
+	ret = gsi_evt_ring_alloc_hw(gsi, evt_ring_id);
+	if (ret)
+		goto err_mutex_unlock;
+
+	gsi_channel_command(channel, GSI_CH_ALLOCATE);
+	gsi->channel_stats.allocate++;
+
+	ret = channel->state == GSI_CHANNEL_STATE_ALLOCATED ? 0 : -EIO;
+	if (ret)
+		goto err_free_evt_ring;
+
+	gsi_channel_config(channel, true);
+
+	mutex_unlock(&gsi->mutex);
+
+	if (channel->toward_ipa)
+		netif_tx_napi_add(&gsi->dummy_dev, &channel->napi,
+				  gsi_channel_poll, NAPI_POLL_WEIGHT);
+	else
+		netif_napi_add(&gsi->dummy_dev, &channel->napi,
+			       gsi_channel_poll, NAPI_POLL_WEIGHT);
+
+	/* Enable the event interrupt (clear it first in case pending) */
+	val = BIT(evt_ring_id);
+	iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET);
+	gsi_irq_event_enable(gsi, evt_ring_id);
+
+	return 0;
+
+err_free_evt_ring:
+	gsi_evt_ring_free_hw(gsi, evt_ring_id);
+err_mutex_unlock:
+	mutex_unlock(&gsi->mutex);
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_setup_one() */
+static void gsi_channel_teardown_one(struct gsi_channel *channel)
+{
+	u32 evt_ring_id = channel->evt_ring_id;
+	struct gsi *gsi = channel->gsi;
+
+	if (!gsi)
+		return;
+
+	gsi_irq_event_disable(gsi, evt_ring_id);
+
+	netif_napi_del(&channel->napi);
+
+	mutex_lock(&gsi->mutex);
+
+	gsi_channel_command(channel, GSI_CH_DE_ALLOC);
+	gsi->channel_stats.free++;
+
+	gsi_evt_ring_free_hw(gsi, evt_ring_id);
+
+	mutex_unlock(&gsi->mutex);
+
+	gsi_channel_trans_exit(channel);
+}
+
+/* Setup function for channels */
+static int gsi_channel_setup(struct gsi *gsi)
+{
+	u32 channel_max;
+	u32 channel_id;
+	int ret;
+
+	channel_max = gsi_channel_max(gsi);
+	dev_dbg(gsi->dev, "channel_max %u\n", channel_max);
+	if (channel_max != GSI_CHANNEL_MAX)
+		return -EIO;
+
+	ret = gsi_evt_ring_setup(gsi);
+	if (ret)
+		return ret;
+
+	gsi_irq_enable(gsi);
+
+	for (channel_id = 0; channel_id < GSI_CHANNEL_MAX; channel_id++) {
+		ret = gsi_channel_setup_one(&gsi->channel[channel_id]);
+		if (ret)
+			goto err_unwind;
+	}
+
+	return 0;
+
+err_unwind:
+	while (channel_id--)
+		gsi_channel_teardown_one(&gsi->channel[channel_id]);
+	gsi_irq_disable(gsi);	/* Undo the gsi_irq_enable() above */
+	gsi_evt_ring_teardown(gsi);
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_setup() */
+static void gsi_channel_teardown(struct gsi *gsi)
+{
+	u32
channel_id; + + for (channel_id = 0; channel_id < GSI_CHANNEL_MAX; channel_id++) { + struct gsi_channel *channel = &gsi->channel[channel_id]; + + gsi_channel_teardown_one(channel); + } + gsi_irq_disable(gsi); + gsi_evt_ring_teardown(gsi); +} + +/* Setup function for GSI. GSI firmware must be loaded and initialized */ +int gsi_setup(struct gsi *gsi) +{ + u32 val; + + /* Here is where we first touch the GSI hardware */ + val = ioread32(gsi->virt + GSI_GSI_STATUS_OFFSET); + if (!(val & ENABLED_FMASK)) { + dev_err(gsi->dev, "GSI has not been enabled\n"); + return -EIO; + } + + /* Initialize the error log */ + iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET); + + /* Writing 1 indicates IRQ interrupts; 0 would be MSI */ + iowrite32(1, gsi->virt + GSI_CNTXT_INTSET_OFFSET); + + return gsi_channel_setup(gsi); +} + +/* Inverse of gsi_setup() */ +void gsi_teardown(struct gsi *gsi) +{ + gsi_channel_teardown(gsi); +} + +/* Initialize a channel's event ring */ +static int gsi_channel_evt_ring_init(struct gsi_channel *channel) +{ + struct gsi *gsi = channel->gsi; + struct gsi_evt_ring *evt_ring; + int ret; + + ret = gsi_evt_ring_id_alloc(gsi); + if (ret < 0) + return ret; + channel->evt_ring_id = ret; + + evt_ring = &gsi->evt_ring[channel->evt_ring_id]; + evt_ring->channel = channel; + + ret = gsi_ring_alloc(gsi, &evt_ring->ring, channel->data->event_count); + if (ret) + goto err_free_evt_ring_id; + + return 0; + +err_free_evt_ring_id: + gsi_evt_ring_id_free(gsi, channel->evt_ring_id); + + return ret; +} + +/* Inverse of gsi_channel_evt_ring_init() */ +static void gsi_channel_evt_ring_exit(struct gsi_channel *channel) +{ + struct gsi *gsi = channel->gsi; + struct gsi_evt_ring *evt_ring; + + evt_ring = &gsi->evt_ring[channel->evt_ring_id]; + gsi_ring_free(gsi, &evt_ring->ring); + gsi_evt_ring_id_free(gsi, channel->evt_ring_id); +} + +/* Init function for event rings */ +static void gsi_evt_ring_init(struct gsi *gsi) +{ + u32 evt_ring_id; + + BUILD_BUG_ON(GSI_EVT_RING_MAX >= BITS_PER_LONG); + + gsi->event_bitmap = gsi_event_bitmap_init(GSI_EVT_RING_MAX); + gsi->event_enable_bitmap = 0; + for (evt_ring_id = 0; evt_ring_id < GSI_EVT_RING_MAX; evt_ring_id++) + init_completion(&gsi->evt_ring[evt_ring_id].completion); +} + +/* Inverse of gsi_evt_ring_init() */ +static void gsi_evt_ring_exit(struct gsi *gsi) +{ + /* Nothing to do */ +} + +/* Init function for a single channel */ +static int +gsi_channel_init_one(struct gsi *gsi, const struct gsi_ipa_endpoint_data *data) +{ + struct gsi_channel *channel; + int ret; + + if (data->ee_id != GSI_EE_AP) + return 0; /* Ignore non-AP channels */ + + if (data->channel_id >= GSI_CHANNEL_MAX) { + dev_err(gsi->dev, "bad channel id %u (must be less than %u)\n", + data->channel_id, GSI_CHANNEL_MAX); + return -EINVAL; + } + + /* The value 256 here is arbitrary, and much higher than expected */ + if (!data->channel.tlv_count || data->channel.tlv_count > 256) { + dev_err(gsi->dev, "bad tlv_count %u (must be 1..256)\n", + data->channel.tlv_count); + return -EINVAL; + } + + if (!is_power_of_2(data->channel.tre_count)) { + dev_err(gsi->dev, "bad tre_count %u (must be power of 2)\n", + data->channel.tre_count); + return -EINVAL; + } + + if (!is_power_of_2(data->channel.event_count)) { + dev_err(gsi->dev, "bad event_count %u (must be power of 2)\n", + data->channel.event_count); + return -EINVAL; + } + + channel = &gsi->channel[data->channel_id]; + memset(channel, 0, sizeof(*channel)); + + channel->gsi = gsi; + channel->toward_ipa = data->toward_ipa; + channel->data = &data->channel; + 
+ init_completion(&channel->completion); + + ret = gsi_channel_evt_ring_init(channel); + if (ret) + return ret; + + ret = gsi_ring_alloc(gsi, &channel->tre_ring, channel->data->tre_count); + if (ret) + goto err_channel_evt_ring_exit; + + ret = gsi_channel_trans_init(channel); + if (ret) + goto err_ring_free; + + return 0; + +err_ring_free: + gsi_ring_free(gsi, &channel->tre_ring); +err_channel_evt_ring_exit: + gsi_channel_evt_ring_exit(channel); + + return ret; +} + +/* Inverse of gsi_channel_init_one() */ +static void gsi_channel_exit_one(struct gsi_channel *channel) +{ + gsi_channel_trans_exit(channel); + gsi_ring_free(channel->gsi, &channel->tre_ring); + gsi_channel_evt_ring_exit(channel); +} + +/* Init function for channels */ +static int gsi_channel_init(struct gsi *gsi, u32 data_count, + const struct gsi_ipa_endpoint_data *data) +{ + int ret = 0; + u32 i; + + gsi_evt_ring_init(gsi); + for (i = 0; i < data_count; i++) { + ret = gsi_channel_init_one(gsi, &data[i]); + if (ret) + break; + } + + return ret; +} + +/* Inverse of gsi_channel_init() */ +static void gsi_channel_exit(struct gsi *gsi) +{ + u32 channel_id; + + for (channel_id = 0; channel_id < GSI_CHANNEL_MAX; channel_id++) + gsi_channel_exit_one(&gsi->channel[channel_id]); + gsi_evt_ring_exit(gsi); +} + +/* Init function for GSI. GSI hardware does not need to be "ready" */ +int gsi_init(struct gsi *gsi, struct platform_device *pdev, u32 data_count, + const struct gsi_ipa_endpoint_data *data) +{ + struct resource *res; + resource_size_t size; + unsigned int irq; + int ret; + + gsi->dev = &pdev->dev; + + /* The GSI layer performs NAPI on all endpoints. NAPI requires a + * network device structure, but the GSI layer does not have one, + * so we must create a dummy network device for this purpose. + */ + init_dummy_netdev(&gsi->dummy_dev); + + /* Get GSI memory range and map it */ + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gsi"); + if (!res) + return -ENXIO; + + size = resource_size(res); + if (res->start > U32_MAX || size > U32_MAX - res->start) + return -EINVAL; + + gsi->virt = ioremap(res->start, size); + if (!gsi->virt) + return -ENOMEM; + + mutex_init(&gsi->mutex); + + ret = platform_get_irq_byname(pdev, "gsi"); + if (ret < 0) + goto err_unmap_virt; + irq = ret; + + ret = request_irq(irq, gsi_isr, 0, "gsi", gsi); + if (ret) + goto err_unmap_virt; + gsi->irq = irq; + + ret = enable_irq_wake(gsi->irq); + if (ret) + dev_err(gsi->dev, "error %d enabling gsi wake irq\n", ret); + gsi->irq_wake_enabled = ret ? 
0 : 1; + + ret = gsi_channel_init(gsi, data_count, data); + if (ret) + goto err_mutex_destroy; + + return 0; + +err_mutex_destroy: + if (gsi->irq_wake_enabled) + (void)disable_irq_wake(gsi->irq); + free_irq(gsi->irq, gsi); + mutex_destroy(&gsi->mutex); +err_unmap_virt: + iounmap(gsi->virt); + + return ret; +} + +/* Inverse of gsi_init() */ +void gsi_exit(struct gsi *gsi) +{ + gsi_channel_exit(gsi); + + if (gsi->irq_wake_enabled) + (void)disable_irq_wake(gsi->irq); + free_irq(gsi->irq, gsi); + mutex_destroy(&gsi->mutex); + iounmap(gsi->virt); +} + +/* Returns the maximum number of pending transactions on a channel */ +u32 gsi_channel_trans_max(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + return channel->data->tre_count; +} + +/* Returns the maximum number of TREs in a single transaction for a channel */ +u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + return channel->data->tlv_count; +} + +/* Wait for all transaction activity on a channel to complete */ +void gsi_channel_trans_quiesce(struct gsi *gsi, u32 channel_id) +{ + struct gsi_trans *trans; + + /* Get the last transaction, and wait for it to complete */ + trans = gsi_channel_trans_last(gsi, channel_id); + if (trans) { + wait_for_completion(&trans->completion); + gsi_trans_free(trans); + } +} + +/* Make a channel operational */ +int gsi_channel_start(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + if (channel->state != GSI_CHANNEL_STATE_ALLOCATED && + channel->state != GSI_CHANNEL_STATE_STOP_IN_PROC && + channel->state != GSI_CHANNEL_STATE_STOPPED) { + dev_err(gsi->dev, "channel %u bad state %u\n", channel_id, + (u32)channel->state); + return -ENOTSUPP; + } + + napi_enable(&channel->napi); + + mutex_lock(&gsi->mutex); + + gsi_channel_command(channel, GSI_CH_START); + gsi->channel_stats.start++; + + mutex_unlock(&gsi->mutex); + + return 0; +} + +/* Stop an operational channel */ +int gsi_channel_stop(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + int ret; + + if (channel->state == GSI_CHANNEL_STATE_STOPPED) + return 0; + + if (channel->state != GSI_CHANNEL_STATE_STARTED && + channel->state != GSI_CHANNEL_STATE_STOP_IN_PROC && + channel->state != GSI_CHANNEL_STATE_ERROR) { + dev_err(gsi->dev, "channel %u bad state %u\n", channel_id, + (u32)channel->state); + return -ENOTSUPP; + } + + gsi_channel_trans_quiesce(gsi, channel_id); + + mutex_lock(&gsi->mutex); + + gsi_channel_command(channel, GSI_CH_STOP); + gsi->channel_stats.stop++; + + mutex_unlock(&gsi->mutex); + + if (channel->state == GSI_CHANNEL_STATE_STOPPED) + ret = 0; + else if (channel->state == GSI_CHANNEL_STATE_STOP_IN_PROC) + ret = -EAGAIN; + else + ret = -EIO; + + if (!ret) + napi_disable(&channel->napi); + + return ret; +} + +/* Reset and reconfigure a GSI channel (possibly leaving doorbell disabled) */ +int gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool db_enable) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + if (channel->state != GSI_CHANNEL_STATE_STOPPED) { + dev_err(gsi->dev, "channel %u bad state %u\n", channel_id, + (u32)channel->state); + return -ENOTSUPP; + } + + /* In case the reset follows stop, need to wait 1 msec */ + usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC); + + mutex_lock(&gsi->mutex); + + gsi_channel_command(channel, GSI_CH_RESET); + gsi->channel_stats.reset++; + + /* workaround: reset RX 
channels again */
+	if (!channel->toward_ipa) {
+		usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
+		gsi_channel_command(channel, GSI_CH_RESET);
+	}
+
+	gsi_channel_config(channel, db_enable);
+
+	/* Cancel pending transactions before the channel is started again */
+	gsi_channel_trans_cancel_pending(channel);
+
+	mutex_unlock(&gsi->mutex);
+
+	return 0;
+}

From patchwork Fri May 31 03:53:39 2019
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: devicetree@vger.kernel.org, syadagir@codeaurora.org, ejcaruso@google.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, evgreen@chromium.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, subashab@codeaurora.org, linux-soc@vger.kernel.org, abhishek.esse@gmail.com, cpratapa@codeaurora.org, benchan@google.com
Subject: [PATCH v2 08/17] soc: qcom: ipa: GSI transactions
Date: Thu, 30 May 2019 22:53:39 -0500
Message-Id: <20190531035348.7194-9-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

This patch implements GSI transactions. A GSI transaction is a
structure that represents a single request (consisting of one or
more TREs) sent to the GSI hardware. The last TRE in a transaction
includes a flag requesting that the GSI interrupt the AP to notify
that it has completed.

TREs are executed and completed strictly in order. For this reason,
the completion of a single TRE implies that all previous TREs (in
particular all of those "earlier" in a transaction) have completed.

Whenever there is a need to send a request (a set of TREs) to the
IPA, a GSI transaction is allocated, specifying the number of TREs
that will be required. Details of the request (e.g. transfer offsets
and length) are represented in a Linux scatterlist array that is
incorporated in the transaction structure. Once "filled," the
transaction is committed. The GSI transaction layer performs all
needed mapping (and unmapping) for DMA, and issues the request to
the hardware.
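As a sketch of the intended calling sequence (the function names
below are the ones added by this patch, but the gsi pointer, the
channel_id, and the buf/len values are illustrative placeholders):

	struct gsi_trans *trans;
	int ret;

	/* One TRE is enough for a single contiguous buffer */
	trans = gsi_channel_trans_alloc(gsi, channel_id, 1);
	if (!trans)
		return -EBUSY;		/* no TREs available */

	sg_init_one(trans->sgl, buf, len);	/* describe the buffer */

	/* Map for DMA, fill the TRE, and ring the channel doorbell */
	ret = gsi_trans_commit(trans, true);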
When the hardware signals that the request has completed, a
callback function allows for cleanup or follow-up activity to be
performed before the transaction is freed.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/gsi_trans.c | 624 ++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/gsi_trans.h | 116 +++++++
 2 files changed, 740 insertions(+)
 create mode 100644 drivers/net/ipa/gsi_trans.c
 create mode 100644 drivers/net/ipa/gsi_trans.h

diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
new file mode 100644
index 000000000000..267e33093554
--- /dev/null
+++ b/drivers/net/ipa/gsi_trans.c
@@ -0,0 +1,624 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/bits.h>
+#include <linux/bitfield.h>
+#include <linux/refcount.h>
+#include <linux/scatterlist.h>
+
+#include "gsi.h"
+#include "gsi_private.h"
+#include "gsi_trans.h"
+#include "ipa_gsi.h"
+#include "ipa_data.h"
+#include "ipa_cmd.h"
+
+/**
+ * DOC: GSI Transactions
+ *
+ * A GSI transaction abstracts the behavior of a GSI channel by representing
+ * everything about a related group of data transfers in a single structure.
+ * Most details of interaction with the GSI hardware are managed by the GSI
+ * transaction core, allowing users to simply describe transfers to be
+ * performed. When a transaction has completed, a callback function
+ * (dependent on the type of endpoint associated with the channel) allows
+ * cleanup of resources associated with the transaction.
+ *
+ * To perform a data transfer (or a related set of them), a user of the GSI
+ * transaction interface allocates a transaction, indicating the number of
+ * TREs required (one per data transfer). If sufficient TREs are available,
+ * they are reserved for use in the transaction and the allocation succeeds.
+ * This way exhaustion of the available TREs in a channel ring is detected
+ * as early as possible. All resources required to complete a transaction
+ * are allocated at transaction allocation time.
+ *
+ * Transfers performed as part of a transaction are represented in an array
+ * of Linux scatterlist structures. This array is allocated with the
+ * transaction, and its entries must be initialized using standard
+ * scatterlist functions (such as sg_init_one() or skb_to_sgvec()).
+ *
+ * Once a transaction's scatterlist structures have been initialized, the
+ * transaction is committed. The GSI transaction layer is responsible for
+ * DMA mapping (and unmapping) memory described in the transaction's
+ * scatterlist array. The only way committing a transaction fails is if
+ * this DMA mapping step returns an error. Otherwise, ownership of the
+ * entire transaction is transferred to the GSI transaction core. The GSI
+ * transaction code formats the content of the scatterlist array into the
+ * channel ring buffer and informs the hardware that new TREs are available
+ * to process.
+ *
+ * The last TRE in each transaction is marked to interrupt the AP when the
+ * GSI hardware has completed it. Because transfers described by TREs are
+ * performed strictly in order, signaling the completion of just the last
+ * TRE in the transaction is sufficient to indicate the full transaction
+ * is complete.
+ *
+ * When a transaction is complete, ipa_gsi_trans_complete() is called by the
+ * GSI code into the IPA layer, allowing it to perform any final cleanup
+ * required before the transaction is freed.
+ */ + +/* gsi_tre->flags mask values (in CPU byte order) */ +#define GSI_TRE_FLAGS_CHAIN_FMASK GENMASK(0, 0) +#define GSI_TRE_FLAGS_IEOB_FMASK GENMASK(8, 8) +#define GSI_TRE_FLAGS_IEOT_FMASK GENMASK(9, 9) +#define GSI_TRE_FLAGS_BEI_FMASK GENMASK(10, 10) +#define GSI_TRE_FLAGS_TYPE_FMASK GENMASK(23, 16) + +/* Hardware values representing a transfer element type */ +enum gsi_tre_type { + GSI_RE_XFER = 0x2, + GSI_RE_IMMD_CMD = 0x3, + GSI_RE_NOP = 0x4, +}; + +/* Map a given ring entry index to the transaction associated with it */ +static void gsi_channel_trans_map(struct gsi_channel *channel, u32 index, + struct gsi_trans *trans) +{ + /* Note: index *must* be used modulo the ring count here */ + channel->trans_info.map[index % channel->tre_ring.count] = trans; +} + +/* Return the transaction mapped to a given ring entry */ +struct gsi_trans * +gsi_channel_trans_mapped(struct gsi_channel *channel, u32 index) +{ + /* Note: index *must* be used modulo the ring count here */ + return channel->trans_info.map[index % channel->tre_ring.count]; +} + +/* Return the oldest completed transaction for a channel (or null) */ +struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel) +{ + return list_first_entry_or_null(&channel->trans_info.complete, + struct gsi_trans, links); +} + +/* Move a transaction from the allocated list to the pending list */ +static void gsi_trans_move_pending(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_move_tail(&trans->links, &trans_info->pending); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Move a transaction and all of its predecessors from the pending list + * to the completed list. + */ +void gsi_trans_move_complete(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + struct list_head list; + + spin_lock_bh(&trans_info->spinlock); + + /* Move this transaction and all predecessors to completed list */ + list_cut_position(&list, &trans_info->pending, &trans->links); + list_splice_tail(&list, &trans_info->complete); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Move a transaction from the completed list to the polled list */ +void gsi_trans_move_polled(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_move_tail(&trans->links, &trans_info->polled); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Return the last (most recent) transaction allocated on a channel */ +struct gsi_trans *gsi_channel_trans_last(struct gsi *gsi, u32 channel_id) +{ + struct gsi_trans_info *trans_info; + struct gsi_trans *trans; + struct list_head *list; + + trans_info = &gsi->channel[channel_id].trans_info; + + spin_lock_bh(&trans_info->spinlock); + + /* Find the last list to which a transaction was added */ + if (!list_empty(&trans_info->alloc)) + list = &trans_info->alloc; + else if (!list_empty(&trans_info->pending)) + list = &trans_info->pending; + else if (!list_empty(&trans_info->complete)) + list = &trans_info->complete; + else if (!list_empty(&trans_info->polled)) + list = &trans_info->polled; + else + list = NULL; + + if (list) { + /* The last entry on this list is the last one allocated. 
+ * Grab a reference so it can be waited for. + */ + trans = list_last_entry(list, struct gsi_trans, links); + refcount_inc(&trans->refcount); + } else { + trans = NULL; + } + + spin_unlock_bh(&trans_info->spinlock); + + return trans; +} + +/* Reserve some number of TREs on a channel. Returns true if successful */ +static bool +gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count) +{ + int avail = atomic_read(&trans_info->tre_avail); + int new; + + do { + new = avail - (int)tre_count; + if (unlikely(new < 0)) + return false; + } while (!atomic_try_cmpxchg(&trans_info->tre_avail, &avail, new)); + + return true; +} + +/* Release previously-reserved TRE entries to a channel */ +static void +gsi_trans_tre_release(struct gsi_trans_info *trans_info, u32 tre_count) +{ + atomic_add(tre_count, &trans_info->tre_avail); +} + +/* Allocate a GSI transaction on a channel */ +struct gsi_trans * +gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id, u32 tre_count) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + struct gsi_trans_info *trans_info; + struct gsi_trans *trans; + u32 which; + + /* Caller should know the limit is gsi_channel_trans_max() */ + if (WARN_ON(tre_count > channel->data->tlv_count)) + return NULL; + + trans_info = &channel->trans_info; + + /* We reserve the TREs now, but consume them at commit time. + * If there aren't enough available, we're done. + */ + if (!gsi_trans_tre_reserve(trans_info, tre_count)) + return NULL; + + /* Allocate the transaction and initialize it */ + which = trans_info->pool_free++ % trans_info->pool_count; + trans = &trans_info->pool[which]; + + trans->gsi = gsi; + trans->channel_id = channel_id; + refcount_set(&trans->refcount, 1); + trans->tre_count = tre_count; + init_completion(&trans->completion); + + /* We're reusing, so make sure all fields are reinitialized */ + trans->dev = gsi->dev; + trans->result = 0; /* Success assumed unless overwritten */ + trans->data = NULL; + + /* Allocate the scatter/gather entries it will use. If what's + * needed would cross the end-of-pool boundary, allocate them + * from the beginning of the pool. + */ + if (tre_count > trans_info->sg_pool_count - trans_info->sg_pool_free) + trans_info->sg_pool_free = 0; + trans->sgl = &trans_info->sg_pool[trans_info->sg_pool_free]; + trans->sgc = tre_count; + trans_info->sg_pool_free += tre_count; + + spin_lock_bh(&trans_info->spinlock); + + list_add_tail(&trans->links, &trans_info->alloc); + + spin_unlock_bh(&trans_info->spinlock); + + return trans; +} + +/* Free a previously-allocated transaction (used only in case of error) */ +void gsi_trans_free(struct gsi_trans *trans) +{ + struct gsi_trans_info *trans_info; + + if (!refcount_dec_and_test(&trans->refcount)) + return; + + trans_info = &trans->gsi->channel[trans->channel_id].trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_del(&trans->links); + + spin_unlock_bh(&trans_info->spinlock); + + gsi_trans_tre_release(trans_info, trans->tre_count); +} + +/* Compute the length/opcode value to use for a TRE */ +static __le16 gsi_tre_len_opcode(enum ipa_cmd_opcode opcode, u32 len) +{ + return opcode == IPA_CMD_NONE ? cpu_to_le16((u16)len) + : cpu_to_le16((u16)opcode); +} + +/* Compute the flags value to use for a given TRE */ +static __le32 gsi_tre_flags(bool last_tre, bool bei, enum ipa_cmd_opcode opcode) +{ + enum gsi_tre_type tre_type; + u32 tre_flags; + + tre_type = opcode == IPA_CMD_NONE ? 
GSI_RE_XFER : GSI_RE_IMMD_CMD;
+	tre_flags = u32_encode_bits(tre_type, GSI_TRE_FLAGS_TYPE_FMASK);
+
+	/* Last TRE contains interrupt flags */
+	if (last_tre) {
+		/* All transactions end in a transfer completion interrupt */
+		tre_flags |= GSI_TRE_FLAGS_IEOT_FMASK;
+		/* Don't interrupt when outbound commands are acknowledged */
+		if (bei)
+			tre_flags |= GSI_TRE_FLAGS_BEI_FMASK;
+	} else {	/* All others indicate there's more to come */
+		tre_flags |= GSI_TRE_FLAGS_CHAIN_FMASK;
+	}
+
+	return cpu_to_le32(tre_flags);
+}
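+
+/* Example encoding (derived from the masks above): the last TRE of a
+ * TX transfer transaction has type GSI_RE_XFER (0x2) in bits 23:16,
+ * plus IEOT (bit 9) and BEI (bit 10) set, so its flags word is
+ * 0x00020600. Every earlier TRE in the same transaction carries only
+ * the type and the CHAIN bit (bit 0): 0x00020001.
+ */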
+
+static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
+			       u32 len, bool last_tre, bool bei,
+			       enum ipa_cmd_opcode opcode)
+{
+	struct gsi_tre tre;
+
+	tre.addr = cpu_to_le64(addr);
+	tre.len_opcode = gsi_tre_len_opcode(opcode, len);
+	tre.reserved = 0;
+	tre.flags = gsi_tre_flags(last_tre, bei, opcode);
+
+	/* ARM64 can write 16 bytes as a unit with a single instruction.
+	 * Doing the assignment this way is an attempt to make that happen.
+	 */
+	*dest_tre = tre;
+}
+
+/* Issue a command to read a single byte from a channel */
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_trans_info *trans_info;
+	struct gsi_ring *tre_ring;
+	struct gsi_tre *dest_tre;
+
+	trans_info = &channel->trans_info;
+
+	/* First reserve the TRE, if possible */
+	if (!gsi_trans_tre_reserve(trans_info, 1))
+		return -EBUSY;
+
+	/* Now fill the next TRE and tell the hardware */
+	tre_ring = &channel->tre_ring;
+
+	dest_tre = gsi_ring_virt(tre_ring, tre_ring->index);
+	gsi_trans_tre_fill(dest_tre, addr, 1, true, false, IPA_CMD_NONE);
+
+	tre_ring->index++;
+	gsi_channel_doorbell(channel);
+
+	return 0;
+}
+
+/* Mark a gsi_trans_read_byte() request done */
+void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	gsi_trans_tre_release(&channel->trans_info, 1);
+}
+
+/**
+ * __gsi_trans_commit() - Common GSI transaction commit code
+ * @trans:	Transaction to commit
+ * @opcode:	Immediate command opcode, or IPA_CMD_NONE
+ * @ring_db:	Whether to tell the hardware about these queued transfers
+ *
+ * @Return:	0 if successful, or a negative error code
+ *
+ * Maps the transaction's scatterlist array for DMA, and returns -ENOMEM
+ * if that fails. Formats channel ring TRE entries based on the content of
+ * the scatterlist. Maps a transaction pointer to the last ring entry used
+ * for the transaction, so it can be recovered when it completes. Moves
+ * the transaction to the pending list. Finally, updates the channel ring
+ * pointer and optionally rings the doorbell.
+ */
+static int __gsi_trans_commit(struct gsi_trans *trans,
+			      enum ipa_cmd_opcode opcode, bool ring_db)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_ring *tre_ring = &channel->tre_ring;
+	enum dma_data_direction direction;
+	bool bei = channel->toward_ipa;
+	struct gsi_tre *dest_tre;
+	struct scatterlist *sg;
+	u32 byte_count = 0;
+	u32 avail;
+	int ret;
+	u32 i;
+
+	direction = channel->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+	ret = dma_map_sg(trans->dev, trans->sgl, trans->sgc, direction);
+	if (!ret)
+		return -ENOMEM;
+
+	/* Consume the entries. If we cross the end of the ring while
+	 * filling them we'll switch to the beginning to finish.
+	 * (The wrap arithmetic must use the TRE ring's own size.)
+	 */
+	avail = tre_ring->count - tre_ring->index % tre_ring->count;
+	dest_tre = gsi_ring_virt(tre_ring, tre_ring->index);
+	for_each_sg(trans->sgl, sg, trans->sgc, i) {
+		bool last_tre = i == trans->tre_count - 1;
+		dma_addr_t addr = sg_dma_address(sg);
+		u32 len = sg_dma_len(sg);
+
+		byte_count += len;
+		if (!avail--)
+			dest_tre = gsi_ring_virt(tre_ring, 0);
+
+		gsi_trans_tre_fill(dest_tre, addr, len, last_tre, bei, opcode);
+		dest_tre++;
+	}
+	tre_ring->index += trans->tre_count;
+
+	if (channel->toward_ipa) {
+		/* We record TX bytes when they are sent */
+		trans->len = byte_count;
+		trans->trans_count = channel->trans_count;
+		trans->byte_count = channel->byte_count;
+		channel->trans_count++;
+		channel->byte_count += byte_count;
+	}
+
+	/* Associate the last TRE with the transaction */
+	gsi_channel_trans_map(channel, tre_ring->index - 1, trans);
+
+	gsi_trans_move_pending(trans);
+
+	/* Ring doorbell if requested, or if all TREs are allocated */
+	if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) {
+		/* Report what we're handing off to hardware for TX channels */
+		if (channel->toward_ipa)
+			gsi_channel_tx_queued(channel);
+		gsi_channel_doorbell(channel);
+	}
+
+	return 0;
+}
+
+/* Commit a GSI transaction */
+int gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
+{
+	return __gsi_trans_commit(trans, IPA_CMD_NONE, ring_db);
+}
+
+/* Commit a GSI command transaction and wait for it to complete */
+int gsi_trans_commit_command(struct gsi_trans *trans,
+			     enum ipa_cmd_opcode opcode)
+{
+	int ret;
+
+	refcount_inc(&trans->refcount);
+
+	ret = __gsi_trans_commit(trans, opcode, true);
+	if (ret)
+		goto out_free_trans;
+
+	wait_for_completion(&trans->completion);
+
+out_free_trans:
+	gsi_trans_free(trans);
+
+	return ret;
+}
+
+/* Commit a GSI command transaction, wait for it to complete, with timeout */
+int gsi_trans_commit_command_timeout(struct gsi_trans *trans,
+				     enum ipa_cmd_opcode opcode,
+				     unsigned long timeout)
+{
+	unsigned long timeout_jiffies = msecs_to_jiffies(timeout);
+	unsigned long remaining;
+	int ret;
+
+	refcount_inc(&trans->refcount);
+
+	ret = __gsi_trans_commit(trans, opcode, true);
+	if (ret)
+		goto out_free_trans;
+
+	remaining = wait_for_completion_timeout(&trans->completion,
+						timeout_jiffies);
+out_free_trans:
+	gsi_trans_free(trans);
+
+	return ret ? ret : remaining ? 0 : -ETIMEDOUT;
+}
+
+/* Perform completion processing for a transaction, then free it */
+void gsi_trans_complete(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	enum dma_data_direction direction;
+
+	direction = channel->toward_ipa ?
 +/* Commit a GSI transaction */ +int gsi_trans_commit(struct gsi_trans *trans, bool ring_db) +{ + return __gsi_trans_commit(trans, IPA_CMD_NONE, ring_db); +} + +/* Commit a GSI command transaction and wait for it to complete */ +int gsi_trans_commit_command(struct gsi_trans *trans, + enum ipa_cmd_opcode opcode) +{ + int ret; + + refcount_inc(&trans->refcount); + + ret = __gsi_trans_commit(trans, opcode, true); + if (ret) + goto out_free_trans; + + wait_for_completion(&trans->completion); + +out_free_trans: + gsi_trans_free(trans); + + return ret; +} + +/* Commit a GSI command transaction, wait for it to complete, with timeout */ +int gsi_trans_commit_command_timeout(struct gsi_trans *trans, + enum ipa_cmd_opcode opcode, + unsigned long timeout) +{ + unsigned long timeout_jiffies = msecs_to_jiffies(timeout); + unsigned long remaining; + int ret; + + refcount_inc(&trans->refcount); + + ret = __gsi_trans_commit(trans, opcode, true); + if (ret) + goto out_free_trans; + + remaining = wait_for_completion_timeout(&trans->completion, + timeout_jiffies); +out_free_trans: + gsi_trans_free(trans); + + return ret ? ret : remaining ? 0 : -ETIMEDOUT; +} + +/* Complete a GSI transaction: unmap its DMA, notify the IPA layer, and free it */ +void gsi_trans_complete(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + enum dma_data_direction direction; + + direction = channel->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE; + + dma_unmap_sg(trans->dev, trans->sgl, trans->sgc, direction); + + ipa_gsi_trans_complete(trans); + + complete(&trans->completion); + + gsi_trans_free(trans); +} + +/* Cancel a channel's pending transactions */ +void gsi_channel_trans_cancel_pending(struct gsi_channel *channel) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + u32 evt_ring_id = channel->evt_ring_id; + struct gsi *gsi = channel->gsi; + struct gsi_trans *trans; + struct gsi_ring *ring; + + ring = &gsi->evt_ring[evt_ring_id].ring; + + spin_lock_bh(&trans_info->spinlock); + + list_for_each_entry(trans, &trans_info->pending, links) + trans->result = -ECANCELED; + + list_splice_tail_init(&trans_info->pending, &trans_info->complete); + + spin_unlock_bh(&trans_info->spinlock); + + /* Schedule NAPI polling to complete the cancelled transactions */ + napi_schedule(&channel->napi); +} + +/* Initialize a channel's GSI transaction info */ +int gsi_channel_trans_init(struct gsi_channel *channel) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + u32 tre_count = channel->data->tre_count; + + trans_info->map = kcalloc(tre_count, sizeof(*trans_info->map), + GFP_KERNEL); + if (!trans_info->map) + return -ENOMEM; + + /* We will never need more transactions than there are TRE + * entries in the transfer ring. For that reason, we can + * preallocate an array of (at least) that many transactions, + * and use a single free index to determine the next one + * available for allocation. + */ + trans_info->pool_count = tre_count; + trans_info->pool = kcalloc(trans_info->pool_count, + sizeof(*trans_info->pool), GFP_KERNEL); + if (!trans_info->pool) + goto err_free_map; + /* If we get extra memory from the allocator, use it */ + trans_info->pool_count = + ksize(trans_info->pool) / sizeof(*trans_info->pool); + trans_info->pool_free = 0; + + /* While transactions are allocated one at a time, a transaction + * can have multiple TREs. The number of TRE entries in a single + * transaction is limited by the number of TLV FIFO entries the + * channel has. We reserve TREs when a transaction is allocated, + * but we don't actually use/fill them until the transaction is + * committed. + * + * A transaction uses a scatterlist array to represent the data + * transfers implemented by the transaction. Each scatterlist + * element is used to fill a single TRE when the transaction is + * committed. As a result, we need the same number of scatterlist + * elements as there are TREs in the transfer ring, and we can + * preallocate them in a pool. + * + * If we allocate a few (tlv_count - 1) extra entries in our pool, + * we can always satisfy requests without ever worrying about + * straddling the end of the array. If there aren't enough + * entries starting at the free index, we just allocate free + * entries from the beginning of the pool. + */ + trans_info->sg_pool_count = tre_count + channel->data->tlv_count - 1; + trans_info->sg_pool = kcalloc(trans_info->sg_pool_count, + sizeof(*trans_info->sg_pool), GFP_KERNEL); + if (!trans_info->sg_pool) + goto err_free_pool; + /* Use any extra memory we get from the allocator */ + trans_info->sg_pool_count = + ksize(trans_info->sg_pool) / sizeof(*trans_info->sg_pool); + trans_info->sg_pool_free = 0; + + /* The tre_avail field limits the number of outstanding transactions. + * In theory we should be able to use all of the TREs in the ring.
But + * in practice, doing that caused the hardware to report running out + * of event ring slots for writing completion information. So give + * the poor hardware a break, and allow one less than the maximum. + */ + atomic_set(&trans_info->tre_avail, tre_count - 1); + + spin_lock_init(&trans_info->spinlock); + INIT_LIST_HEAD(&trans_info->alloc); + INIT_LIST_HEAD(&trans_info->pending); + INIT_LIST_HEAD(&trans_info->complete); + INIT_LIST_HEAD(&trans_info->polled); + + return 0; + +err_free_pool: + kfree(trans_info->pool); +err_free_map: + kfree(trans_info->map); + + return -ENOMEM; +} + +/* Inverse of gsi_channel_trans_init() */ +void gsi_channel_trans_exit(struct gsi_channel *channel) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + + kfree(trans_info->sg_pool); + kfree(trans_info->pool); + kfree(trans_info->map); +} diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/gsi_trans.h new file mode 100644 index 000000000000..2d5a199e4396 --- /dev/null +++ b/drivers/net/ipa/gsi_trans.h @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ +#ifndef _GSI_TRANS_H_ +#define _GSI_TRANS_H_ + +#include +#include +#include + +struct scatterlist; +struct device; + +struct gsi; +struct gsi_trans; +enum ipa_cmd_opcode; + +struct gsi_trans { + struct list_head links; /* gsi_channel lists */ + + struct gsi *gsi; + u32 channel_id; + + u32 tre_count; /* # TREs requested */ + u32 len; /* total # bytes in sgl */ + struct scatterlist *sgl; + u32 sgc; /* # entries in sgl[] */ + + struct completion completion; + refcount_t refcount; + + /* fields above are internal only */ + + struct device *dev; /* Use this for DMA mapping */ + long result; /* RX count, 0, or error code */ + + u64 byte_count; /* channel byte_count when committed */ + u64 trans_count; /* channel trans_count when committed */ + + void *data; +}; + +/** + * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel + * @gsi: GSI pointer + * @channel_id: Channel the transaction is associated with + * @tre_count: Number of elements in the transaction + * + * @Return: A GSI transaction structure, or a null pointer if all + * available transactions are in use + */ +struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id, + u32 tre_count); + +/** + * gsi_trans_free() - Free a previously-allocated GSI transaction + * @trans: Transaction to be freed + * + * Note: this should only be used in error paths, before the transaction is + * committed or in the event committing the transaction produces an error. + * Successfully committing a transaction passes ownership of the structure + * to the core transaction code. + */ +void gsi_trans_free(struct gsi_trans *trans); + +/** + * gsi_trans_commit() - Commit a GSI transaction + * @trans: Transaction to commit + * @ring_db: Whether to tell the hardware about these queued transfers + * @callback: Function called when transaction has completed. 
+ */ +int gsi_trans_commit(struct gsi_trans *trans, bool ring_db); + +/** + * gsi_trans_commit_command() - Commit a GSI command transaction and + * wait for it to complete + * @trans: Transaction to commit + * @opcode: Immediate command opcode + */ +int gsi_trans_commit_command(struct gsi_trans *trans, + enum ipa_cmd_opcode opcode); + +/** + * gsi_trans_commit_command_timeout() - Commit a GSI command transaction, + * wait for it to complete, with timeout + * @trans: Transaction to commit + * @opcode: Immediate command opcode + * @timeout: Timeout period (in milliseconds) + */ +int gsi_trans_commit_command_timeout(struct gsi_trans *trans, + enum ipa_cmd_opcode opcode, + unsigned long timeout); + +/** + * gsi_trans_read_byte() - Issue a single byte read TRE on a channel + * @gsi: GSI pointer + * @channel_id: Channel on which to read a byte + * @addr: DMA address into which to transfer the one byte + * + * This is not a transaction operation at all. It's defined here because + * it needs to be done in coordination with other transaction activity. + */ +int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr); + +/** + * gsi_trans_read_byte_done() - Clean up after a single byte read TRE + * @gsi: GSI pointer + * @channel_id: Channel on which byte was read + * + * This function needs to be called to signal that the work related + * to reading a byte initiated by gsi_trans_read_byte() is complete. + */ +void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id); + +#endif /* _GSI_TRANS_H_ */
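For illustration, here is a minimal sketch of the intended call sequence for this API, modeled on the endpoint TX path introduced later in this series. The function name my_dev_tx() and the error-handling details are assumptions for the example, not part of the patch:

static int my_dev_tx(struct gsi *gsi, u32 channel_id, struct sk_buff *skb)
{
	struct gsi_trans *trans;
	int ret;

	/* One TRE for the linear data plus one per page fragment */
	trans = gsi_channel_trans_alloc(gsi, channel_id,
					1 + skb_shinfo(skb)->nr_frags);
	if (!trans)
		return -EBUSY;

	ret = skb_to_sgvec(skb, trans->sgl, 0, skb->len);
	if (ret < 0) {
		gsi_trans_free(trans);	/* Not yet committed, so we free it */
		return ret;
	}
	trans->sgc = ret;

	/* On successful commit, ownership passes to the core transaction
	 * code; cleanup then happens on the completion path.
	 */
	return gsi_trans_commit(trans, true);
}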
From patchwork Fri May 31 03:53:40 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 10969593
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Subject: [PATCH v2 09/17] soc: qcom: ipa: IPA interface to GSI
Date: Thu, 30 May 2019 22:53:40 -0500
Message-Id: <20190531035348.7194-10-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>

This patch provides interface functions supplied by the IPA layer that are called from the GSI layer. One function is called when a GSI transaction has completed. The others allow the GSI layer to inform the IPA layer when the hardware has been told it has new TREs to execute, and when the hardware has indicated transactions have completed.

Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_gsi.c | 48 ++++++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_gsi.h | 49 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 97 insertions(+) create mode 100644 drivers/net/ipa/ipa_gsi.c create mode 100644 drivers/net/ipa/ipa_gsi.h diff --git a/drivers/net/ipa/ipa_gsi.c b/drivers/net/ipa/ipa_gsi.c new file mode 100644 index 000000000000..7f8d74688c1e --- /dev/null +++ b/drivers/net/ipa/ipa_gsi.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd.
+ */ + +#include + +#include "gsi_trans.h" +#include "ipa.h" +#include "ipa_endpoint.h" + +void ipa_gsi_trans_complete(struct gsi_trans *trans) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + struct ipa_endpoint *endpoint; + + endpoint = ipa->endpoint_map[trans->channel_id]; + if (endpoint == ipa->command_endpoint) + return; /* Nothing to do for commands */ + + if (endpoint->toward_ipa) + ipa_endpoint_skb_tx_complete(trans); + else + ipa_endpoint_rx_complete(trans); +} + +void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count) +{ + struct ipa *ipa = container_of(gsi, struct ipa, gsi); + struct ipa_endpoint *endpoint; + + endpoint = ipa->endpoint_map[channel_id]; + if (endpoint->netdev) + netdev_sent_queue(endpoint->netdev, byte_count); +} + +void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count) +{ + struct ipa *ipa = container_of(gsi, struct ipa, gsi); + struct ipa_endpoint *endpoint; + + endpoint = ipa->endpoint_map[channel_id]; + if (endpoint->netdev) + netdev_completed_queue(endpoint->netdev, count, byte_count); +} diff --git a/drivers/net/ipa/ipa_gsi.h b/drivers/net/ipa/ipa_gsi.h new file mode 100644 index 000000000000..72adb520da40 --- /dev/null +++ b/drivers/net/ipa/ipa_gsi.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ +#ifndef _IPA_GSI_TRANS_H_ +#define _IPA_GSI_TRANS_H_ + +#include + +struct gsi_trans; + +/** + * ipa_gsi_trans_complete() - GSI transaction completion callback + * @trans: Transaction that has completed + * + * This is called from the GSI layer to notify the IPA layer that a + * transaction has completed. + */ +void ipa_gsi_trans_complete(struct gsi_trans *trans); + +/** + * ipa_gsi_channel_tx_queued() - GSI queued to hardware notification + * @gsi: GSI pointer + * @channel_id: Channel number + * @count: Number of transactions queued + * @byte_count: Number of bytes to transfer represented by transactions + * + * This is called from the GSI layer to notify the IPA layer that some + * number of transactions have been queued to hardware for execution. + */ +void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count); +/** + * ipa_gsi_channel_tx_completed() - GSI transaction completion notification + * @gsi: GSI pointer + * @channel_id: Channel number + * @count: Number of transactions completed since last report + * @byte_count: Number of bytes transferred represented by transactions + * + * This is called from the GSI layer to notify the IPA layer that the hardware + * has reported the completion of some number of transactions.
+ */ +void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count); + +#endif /* _IPA_GSI_TRANS_H_ */
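The two TX notification hooks above exist to drive byte queue limits (BQL) on the netdev transmit queue. A sketch of the pairing contract they implement (ndev, pkts and bytes are illustrative placeholders, not names from the patch):

/* At doorbell time, report the bytes handed to hardware... */
netdev_sent_queue(ndev, bytes);

/* ...and at completion time report them back.  Every byte passed to
 * netdev_sent_queue() must eventually be matched by a completion
 * report, or BQL will throttle and finally stall the queue.
 */
netdev_completed_queue(ndev, pkts, bytes);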
From patchwork Fri May 31 03:53:41 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 10969611
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Subject: [PATCH v2 10/17] soc: qcom: ipa: IPA endpoints
Date: Thu, 30 May 2019 22:53:41 -0500
Message-Id: <20190531035348.7194-11-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>

This patch includes the code implementing an IPA endpoint. This is the primary abstraction implemented by the IPA. An endpoint is one end of a network connection between two entities physically connected to the IPA. Specifically, the AP and the modem implement endpoints, and an (AP endpoint, modem endpoint) pair implements the transfer of network data in one direction between the AP and modem. Endpoints are built on top of GSI channels, but IPA endpoints represent the higher-level functionality that the IPA provides. Data can be sent through a GSI channel, but it is the IPA endpoint that represents what is on the "other end" to receive that data. Other functionality, including aggregation, checksum offload and (at some future date) IP routing and filtering, is all associated with the IPA endpoint.
Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_endpoint.c | 1283 ++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_endpoint.h | 97 +++ 2 files changed, 1380 insertions(+) create mode 100644 drivers/net/ipa/ipa_endpoint.c create mode 100644 drivers/net/ipa/ipa_endpoint.h diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c new file mode 100644 index 000000000000..0185db35033d --- /dev/null +++ b/drivers/net/ipa/ipa_endpoint.c @@ -0,0 +1,1283 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "gsi.h" +#include "gsi_trans.h" +#include "ipa.h" +#include "ipa_data.h" +#include "ipa_endpoint.h" +#include "ipa_cmd.h" +#include "ipa_mem.h" +#include "ipa_netdev.h" + +#define atomic_dec_not_zero(v) atomic_add_unless((v), -1, 0) + +#define IPA_REPLENISH_BATCH 16 + +#define IPA_RX_BUFFER_SIZE (PAGE_SIZE << IPA_RX_BUFFER_ORDER) +#define IPA_RX_BUFFER_ORDER 1 /* 8KB endpoint RX buffers (2 pages) */ + +/* The amount of RX buffer space consumed by standard skb overhead */ +#define IPA_RX_BUFFER_OVERHEAD (PAGE_SIZE - SKB_MAX_ORDER(NET_SKB_PAD, 0)) + +#define IPA_ENDPOINT_STOP_RETRY_MAX 10 +#define IPA_ENDPOINT_STOP_RX_SIZE 1 /* bytes */ + +#define IPA_ENDPOINT_RESET_AGGR_RETRY_MAX 3 +#define IPA_AGGR_TIME_LIMIT_DEFAULT 1 /* milliseconds */ + +/** enum ipa_status_opcode - status element opcode hardware values */ +enum ipa_status_opcode { + IPA_STATUS_OPCODE_PACKET = 0x01, + IPA_STATUS_OPCODE_NEW_FRAG_RULE = 0x02, + IPA_STATUS_OPCODE_DROPPED_PACKET = 0x04, + IPA_STATUS_OPCODE_SUSPENDED_PACKET = 0x08, + IPA_STATUS_OPCODE_LOG = 0x10, + IPA_STATUS_OPCODE_DCMP = 0x20, + IPA_STATUS_OPCODE_PACKET_2ND_PASS = 0x40, +}; + +/** enum ipa_status_exception - status element exception type */ +enum ipa_status_exception { + IPA_STATUS_EXCEPTION_NONE, + IPA_STATUS_EXCEPTION_DEAGGR, + IPA_STATUS_EXCEPTION_IPTYPE, + IPA_STATUS_EXCEPTION_PACKET_LENGTH, + IPA_STATUS_EXCEPTION_PACKET_THRESHOLD, + IPA_STATUS_EXCEPTION_FRAG_RULE_MISS, + IPA_STATUS_EXCEPTION_SW_FILT, + IPA_STATUS_EXCEPTION_NAT, + IPA_STATUS_EXCEPTION_IPV6CT, + IPA_STATUS_EXCEPTION_MAX, +}; + +/** + * struct ipa_status - Abstracted IPA status element + * @opcode: Status element type + * @exception: The first exception that took place + * @pkt_len: Payload length + * @dst_endpoint: Destination endpoint + * @metadata: 32-bit metadata value used by packet + * @rt_miss: Flag; if 1, indicates there was a routing rule miss + * + * Note that the hardware status element supplies additional information + * that is currently unused. 
+ */ +struct ipa_status { + enum ipa_status_opcode opcode; + enum ipa_status_exception exception; + u32 pkt_len; + u32 dst_endpoint; + u32 metadata; + u32 rt_miss; +}; + +/* Field masks for struct ipa_status_raw structure fields */ + +#define IPA_STATUS_SRC_IDX_FMASK GENMASK(4, 0) + +#define IPA_STATUS_DST_IDX_FMASK GENMASK(4, 0) + +#define IPA_STATUS_FLAGS1_FLT_LOCAL_FMASK GENMASK(0, 0) +#define IPA_STATUS_FLAGS1_FLT_HASH_FMASK GENMASK(1, 1) +#define IPA_STATUS_FLAGS1_FLT_GLOBAL_FMASK GENMASK(2, 2) +#define IPA_STATUS_FLAGS1_FLT_RET_HDR_FMASK GENMASK(3, 3) +#define IPA_STATUS_FLAGS1_FLT_RULE_ID_FMASK GENMASK(13, 4) +#define IPA_STATUS_FLAGS1_RT_LOCAL_FMASK GENMASK(14, 14) +#define IPA_STATUS_FLAGS1_RT_HASH_FMASK GENMASK(15, 15) +#define IPA_STATUS_FLAGS1_UCP_FMASK GENMASK(16, 16) +#define IPA_STATUS_FLAGS1_RT_TBL_IDX_FMASK GENMASK(21, 17) +#define IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK GENMASK(31, 22) + +#define IPA_STATUS_FLAGS2_NAT_HIT_FMASK GENMASK_ULL(0, 0) +#define IPA_STATUS_FLAGS2_NAT_ENTRY_IDX_FMASK GENMASK_ULL(13, 1) +#define IPA_STATUS_FLAGS2_NAT_TYPE_FMASK GENMASK_ULL(15, 14) +#define IPA_STATUS_FLAGS2_TAG_INFO_FMASK GENMASK_ULL(63, 16) + +#define IPA_STATUS_FLAGS3_SEQ_NUM_FMASK GENMASK(7, 0) +#define IPA_STATUS_FLAGS3_TOD_CTR_FMASK GENMASK(31, 8) + +#define IPA_STATUS_FLAGS4_HDR_LOCAL_FMASK GENMASK(0, 0) +#define IPA_STATUS_FLAGS4_HDR_OFFSET_FMASK GENMASK(10, 1) +#define IPA_STATUS_FLAGS4_FRAG_HIT_FMASK GENMASK(11, 11) +#define IPA_STATUS_FLAGS4_FRAG_RULE_FMASK GENMASK(15, 12) +#define IPA_STATUS_FLAGS4_HW_SPECIFIC_FMASK GENMASK(31, 16) + +/* Status element provided by hardware */ +struct ipa_status_raw { + u8 opcode; + u8 exception; + u16 mask; + u16 pkt_len; + u8 endp_src_idx; /* Only bottom 5 bits valid */ + u8 endp_dst_idx; /* Only bottom 5 bits valid */ + u32 metadata; + u32 flags1; + u64 flags2; + u32 flags3; + u32 flags4; +}; + +static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one); + +/* suspend_delay represents suspend for RX, delay for TX endpoints */ +bool ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay) +{ + u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id); + u32 mask; + u32 val; + + mask = endpoint->toward_ipa ? 
ENDP_DELAY_FMASK : ENDP_SUSPEND_FMASK; + + val = ioread32(endpoint->ipa->reg_virt + offset); + if (suspend_delay == !!(val & mask)) + return false; /* Already set to desired state */ + + val ^= mask; + iowrite32(val, endpoint->ipa->reg_virt + offset); + + return true; +} + +static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + /* FRAG_OFFLOAD_EN is 0 */ + if (endpoint->data->config.checksum) { + if (endpoint->toward_ipa) { + u32 checksum_offset; + + val |= u32_encode_bits(IPA_CS_OFFLOAD_UL, + CS_OFFLOAD_EN_FMASK); + /* Checksum header offset is in 4-byte units */ + checksum_offset = sizeof(struct rmnet_map_header); + checksum_offset /= sizeof(u32); + val |= u32_encode_bits(checksum_offset, + CS_METADATA_HDR_OFFSET_FMASK); + } else { + val |= u32_encode_bits(IPA_CS_OFFLOAD_DL, + CS_OFFLOAD_EN_FMASK); + } + } else { + val |= u32_encode_bits(IPA_CS_OFFLOAD_NONE, + CS_OFFLOAD_EN_FMASK); + } + /* CS_GEN_QMB_MASTER_SEL is 0 */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + if (endpoint->data->config.qmap) { + size_t header_size = sizeof(struct rmnet_map_header); + + if (endpoint->toward_ipa && endpoint->data->config.checksum) + header_size += sizeof(struct rmnet_map_ul_csum_header); + + val |= u32_encode_bits(header_size, HDR_LEN_FMASK); + /* metadata is the 4 byte rmnet_map header itself */ + val |= HDR_OFST_METADATA_VALID_FMASK; + val |= u32_encode_bits(0, HDR_OFST_METADATA_FMASK); + /* HDR_ADDITIONAL_CONST_LEN is 0; (IPA->AP only) */ + if (!endpoint->toward_ipa) { + u32 size_offset = offsetof(struct rmnet_map_header, + pkt_len); + + val |= HDR_OFST_PKT_SIZE_VALID_FMASK; + val |= u32_encode_bits(size_offset, + HDR_OFST_PKT_SIZE_FMASK); + } + /* HDR_A5_MUX is 0 */ + /* HDR_LEN_INC_DEAGG_HDR is 0 */ + /* HDR_METADATA_REG_VALID is 0; (AP->IPA only) */ + } + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id); + u32 pad_align = endpoint->data->config.rx.pad_align; + u32 val = 0; + + val |= HDR_ENDIANNESS_FMASK; /* big endian */ + val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK; + /* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */ + /* HDR_PAYLOAD_LEN_INC_PADDING is 0 */ + /* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */ + if (!endpoint->toward_ipa) + val |= u32_encode_bits(pad_align, HDR_PAD_TO_ALIGNMENT_FMASK); + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/** + * Generate a metadata mask value that will select only the mux_id + * field in an rmnet_map header structure. The mux_id is at offset + * 1 byte from the beginning of the structure, but the metadata + * value is treated as a 4-byte unit. So this mask must be computed + * with endianness in mind. Note that ipa_endpoint_init_hdr_metadata_mask() + * will convert this value to the proper byte order. + * + * Marked __always_inline because this is really computing a + * constant value. 
+ */ +static __always_inline __be32 ipa_rmnet_mux_id_metadata_mask(void) +{ + size_t mux_id_offset = offsetof(struct rmnet_map_header, mux_id); + u32 mux_id_mask = 0; + u8 *bytes; + + bytes = (u8 *)&mux_id_mask; + bytes[mux_id_offset] = 0xff; /* mux_id is 1 byte */ + + return cpu_to_be32(mux_id_mask); +} + +static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint) +{ + u32 endpoint_id = endpoint->endpoint_id; + u32 val = 0; + u32 offset; + + offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id); + + if (!endpoint->toward_ipa && endpoint->data->config.qmap) + val = ipa_rmnet_mux_id_metadata_mask(); + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/* Compute the aggregation size value to use for a given buffer size */ +static u32 ipa_aggr_size_kb(u32 rx_buffer_size) +{ + BUILD_BUG_ON(IPA_RX_BUFFER_SIZE > + field_max(AGGR_BYTE_LIMIT_FMASK) * SZ_1K + + IPA_MTU + IPA_RX_BUFFER_OVERHEAD); + + /* Because we don't have the "hard byte limit" enabled, we + * need to make sure there's enough space in the buffer to + * receive a complete MTU (plus normal skb overhead) beyond + * the aggregated size limit we specify. + */ + rx_buffer_size -= IPA_MTU + IPA_RX_BUFFER_OVERHEAD; + + return rx_buffer_size / SZ_1K; +}
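+/* A worked example of the sizing above, assuming 4 KB pages and typical
+ * 64-bit skb overhead (the exact overhead depends on kernel configuration):
+ * IPA_RX_BUFFER_SIZE is 8192 bytes, and subtracting IPA_MTU (1500) plus
+ * IPA_RX_BUFFER_OVERHEAD (NET_SKB_PAD plus the struct skb_shared_info
+ * tail, a few hundred bytes) leaves roughly 6 KB, so ipa_aggr_size_kb()
+ * returns 6.  An aggregated frame may then exceed the 6 KB limit by up
+ * to one MTU (plus overhead) and still fit in the receive buffer.
+ */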
 +static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint) +{ + const struct ipa_endpoint_config_data *config = &endpoint->data->config; + u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + if (config->aggregation) { + if (!endpoint->toward_ipa) { + u32 aggr_size = ipa_aggr_size_kb(IPA_RX_BUFFER_SIZE); + + val |= u32_encode_bits(IPA_ENABLE_AGGR, AGGR_EN_FMASK); + val |= u32_encode_bits(IPA_GENERIC, AGGR_TYPE_FMASK); + val |= u32_encode_bits(aggr_size, + AGGR_BYTE_LIMIT_FMASK); + val |= u32_encode_bits(IPA_AGGR_TIME_LIMIT_DEFAULT, + AGGR_TIME_LIMIT_FMASK); + val |= u32_encode_bits(0, AGGR_PKT_LIMIT_FMASK); + if (config->rx.aggr_close_eof) + val |= AGGR_SW_EOF_ACTIVE_FMASK; + /* AGGR_HARD_BYTE_LIMIT_ENABLE is 0 */ + } else { + val |= u32_encode_bits(IPA_ENABLE_DEAGGR, + AGGR_EN_FMASK); + val |= u32_encode_bits(IPA_QCMAP, AGGR_TYPE_FMASK); + /* other fields ignored */ + } + /* AGGR_FORCE_CLOSE is 0 */ + } else { + val |= u32_encode_bits(IPA_BYPASS_AGGR, AGGR_EN_FMASK); + /* other fields ignored */ + } + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + if (endpoint->toward_ipa && endpoint->data->config.dma_mode) { + u32 dma_endpoint_id = endpoint->data->config.dma_endpoint; + + val |= u32_encode_bits(IPA_DMA, MODE_FMASK); + val |= u32_encode_bits(dma_endpoint_id, DEST_PIPE_INDEX_FMASK); + } else { + val |= u32_encode_bits(IPA_BASIC, MODE_FMASK); + } + /* Other bitfields unspecified (and 0) */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + /* DEAGGR_HDR_LEN is 0 */ + /* PACKET_OFFSET_VALID is 0 */ + /* PACKET_OFFSET_LOCATION is ignored (not valid) */ + /* MAX_PACKET_LEN is 0 (not enforced) */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_SEQ_N_OFFSET(endpoint->endpoint_id); + u32 seq_type = endpoint->data->seq_type; + u32 val = 0; + + val |= u32_encode_bits(seq_type & 0xf, HPS_SEQ_TYPE_FMASK); + val |= u32_encode_bits((seq_type >> 4) & 0xf, DPS_SEQ_TYPE_FMASK); + /* HPS_REP_SEQ_TYPE is 0 */ + /* DPS_REP_SEQ_TYPE is 0 */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/* Complete transaction initiated in ipa_endpoint_skb_tx() */ +void ipa_endpoint_skb_tx_complete(struct gsi_trans *trans) +{ + struct sk_buff *skb = trans->data; + + dev_kfree_skb_any(skb); +} + +/** + * ipa_endpoint_skb_tx() - Transmit a socket buffer + * @endpoint: Endpoint pointer + * @skb: Socket buffer to send + * + * Return: 0 if successful, or a negative error code + */ +int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb) +{ + struct gsi_trans *trans; + u32 nr_frags; + int ret; + + /* Make sure source endpoint's TLV FIFO has enough entries to + * hold the linear portion of the skb and all its fragments. + * If not, see if we can linearize it before giving up. + */ + nr_frags = skb_shinfo(skb)->nr_frags; + if (1 + nr_frags > endpoint->trans_tre_max) { + if (skb_linearize(skb)) + return -ENOMEM; + nr_frags = 0; + } + + trans = gsi_channel_trans_alloc(&endpoint->ipa->gsi, + endpoint->channel_id, nr_frags + 1); + if (!trans) + return -EBUSY; + trans->data = skb; + + ret = skb_to_sgvec(skb, trans->sgl, 0, skb->len); + if (ret < 0) + goto err_trans_free; + trans->sgc = ret; + + ret = gsi_trans_commit(trans, !netdev_xmit_more()); + if (ret) + goto err_trans_free; + return 0; + +err_trans_free: + gsi_trans_free(trans); + + return ret; +} + +static void ipa_endpoint_status(struct ipa_endpoint *endpoint) +{ + const struct ipa_endpoint_config_data *config = &endpoint->data->config; + enum ipa_endpoint_id endpoint_id = endpoint->endpoint_id; + u32 val = 0; + u32 offset; + + offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id); + + if (endpoint->data->config.status_enable) { + val |= STATUS_EN_FMASK; + if (endpoint->toward_ipa) { + u32 status_endpoint_id = config->tx.status_endpoint; + + val |= u32_encode_bits(status_endpoint_id, + STATUS_ENDP_FMASK); + } + /* STATUS_LOCATION is 0 (status element precedes packet) */ + /* STATUS_PKT_SUPPRESS is 0 */ + } + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_skb_copy(struct ipa_endpoint *endpoint, + void *data, u32 len, u32 extra) +{ + struct sk_buff *skb; + + skb = __dev_alloc_skb(len, GFP_ATOMIC); + if (skb) { + skb_put(skb, len); + memcpy(skb->data, data, len); + skb->truesize += extra; + } + + /* Now receive it, or drop it if there's no netdev */ + if (endpoint->netdev) + ipa_netdev_skb_rx(endpoint->netdev, skb); + else if (skb) + dev_kfree_skb_any(skb); +} + +static void ipa_endpoint_skb_build(struct ipa_endpoint *endpoint, + struct page *page, u32 len) +{ + struct sk_buff *skb; + + /* assert(len <= SKB_WITH_OVERHEAD(IPA_RX_BUFFER_SIZE-NET_SKB_PAD)); */ + skb = build_skb(page_address(page), IPA_RX_BUFFER_SIZE); + if (skb) { + /* Reserve the headroom and account for the data */ + skb_reserve(skb, NET_SKB_PAD); + skb_put(skb, len); + } + + /* Now receive it, or drop it if there's no netdev */ + if (endpoint->netdev) + ipa_netdev_skb_rx(endpoint->netdev, skb); + else if (skb) + dev_kfree_skb_any(skb); + + /* If no socket buffer took the pages, free them */ + if (!skb) + __free_pages(page, IPA_RX_BUFFER_ORDER); +} + +/* Maps an exception type returned in an ipa_status_raw structure + * to the ipa_status_exception value that represents it in + * the exception field of an ipa_status structure. Returns + * IPA_STATUS_EXCEPTION_MAX for an unrecognized value.
+ */ +static enum ipa_status_exception exception_map(u8 exception, bool is_ipv6) +{ + switch (exception) { + case 0x00: return IPA_STATUS_EXCEPTION_NONE; + case 0x01: return IPA_STATUS_EXCEPTION_DEAGGR; + case 0x04: return IPA_STATUS_EXCEPTION_IPTYPE; + case 0x08: return IPA_STATUS_EXCEPTION_PACKET_LENGTH; + case 0x10: return IPA_STATUS_EXCEPTION_FRAG_RULE_MISS; + case 0x20: return IPA_STATUS_EXCEPTION_SW_FILT; + case 0x40: return is_ipv6 ? IPA_STATUS_EXCEPTION_IPV6CT + : IPA_STATUS_EXCEPTION_NAT; + default: return IPA_STATUS_EXCEPTION_MAX; + } +} + +/* A rule miss is indicated as an all-1's value in the rt_rule_id + * or flt_rule_id field of the ipa_status structure. + */ +static bool ipa_rule_miss_id(u32 id) +{ + return id == field_max(IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK); +} + +size_t ipa_status_parse(struct ipa_status *status, void *data, u32 count) +{ + const struct ipa_status_raw *status_raw = data; + bool is_ipv6; + u32 val; + + BUILD_BUG_ON(sizeof(*status_raw) % 4); + if (WARN_ON(count < sizeof(*status_raw))) + return 0; + + status->opcode = status_raw->opcode; + is_ipv6 = status_raw->mask & BIT(7) ? false : true; + status->exception = exception_map(status_raw->exception, is_ipv6); + status->pkt_len = status_raw->pkt_len; + val = u32_get_bits(status_raw->endp_dst_idx, IPA_STATUS_DST_IDX_FMASK); + status->dst_endpoint = val; + status->metadata = status_raw->metadata; + val = u32_get_bits(status_raw->flags1, + IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK); + status->rt_miss = ipa_rule_miss_id(val) ? 1 : 0; + + return sizeof(*status_raw); +} + +/* The format of a packet status element is the same for several status + * types (opcodes). The NEW_FRAG_RULE, LOG, DCMP (decompression) types + * aren't currently supported. + */ +static bool ipa_status_format_packet(enum ipa_status_opcode opcode) +{ + switch (opcode) { + case IPA_STATUS_OPCODE_PACKET: + case IPA_STATUS_OPCODE_DROPPED_PACKET: + case IPA_STATUS_OPCODE_SUSPENDED_PACKET: + case IPA_STATUS_OPCODE_PACKET_2ND_PASS: + return true; + default: + return false; + } +} + +static bool ipa_endpoint_status_skip(struct ipa_endpoint *endpoint, + struct ipa_status *status) +{ + if (!ipa_status_format_packet(status->opcode)) + return true; + if (!status->pkt_len) + return true; + if (status->dst_endpoint != endpoint->endpoint_id) + return true; + + return false; /* Don't skip this packet, process it */ +}
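+/* To make the parsing loop below concrete, a worked example with assumed
+ * values: struct ipa_status_raw occupies 32 bytes on a typical ABI, so
+ * status_size is 32.  For a 1400-byte packet on an endpoint with pad_align
+ * of 4 and checksum offload enabled, len = 32 + ALIGN(1400, 4) + the size
+ * of the checksum trailer, and the next status element starts that many
+ * bytes further into the buffer; the loop repeats until resid reaches zero.
+ */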
 +static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint, + struct page *page, u32 total_len) +{ + void *data = page_address(page) + NET_SKB_PAD; + u32 unused = IPA_RX_BUFFER_SIZE - total_len; + u32 resid = total_len; + + while (resid) { + struct ipa_status status; + bool drop_packet = false; + size_t status_size; + u32 align; + u32 len; + + status_size = ipa_status_parse(&status, data, resid); + + /* Skip over status packets that lack packet data */ + if (ipa_endpoint_status_skip(endpoint, &status)) { + data += status_size; + resid -= status_size; + continue; + } + + /* Packet data follows the status structure. Unless + * the packet failed to match a routing rule, or it + * had a deaggregation exception, we'll consume it. + */ + if (status.exception == IPA_STATUS_EXCEPTION_NONE) { + if (status.rt_miss) + drop_packet = true; + } else if (status.exception == IPA_STATUS_EXCEPTION_DEAGGR) { + drop_packet = true; + } + + /* Compute the amount of buffer space consumed by the + * packet, including the status element. If the hardware + * is configured to pad packet data to an aligned boundary, + * account for that. And if checksum offload is enabled, + * a trailer containing computed checksum information will + * be appended. + */ + align = endpoint->data->config.rx.pad_align ? : 1; + len = status_size + ALIGN(status.pkt_len, align); + if (endpoint->data->config.checksum) + len += sizeof(struct rmnet_map_dl_csum_trailer); + + /* Charge the new packet with a proportional fraction of + * the unused space in the original receive buffer. + * XXX Charge a proportion of the *whole* receive buffer? + */ + if (!drop_packet) { + u32 extra = unused * len / total_len; + void *data2 = data + status_size; + u32 len2 = status.pkt_len; + + /* Client receives only packet data (no status) */ + ipa_endpoint_skb_copy(endpoint, data2, len2, extra); + } + + /* Consume status and the full packet it describes */ + data += len; + resid -= len; + } + + __free_pages(page, IPA_RX_BUFFER_ORDER); +} + +/* Complete transaction initiated in ipa_endpoint_replenish_one() */ +void ipa_endpoint_rx_complete(struct gsi_trans *trans) +{ + struct page *page = trans->data; + struct ipa_endpoint *endpoint; + struct ipa *ipa; + + ipa = container_of(trans->gsi, struct ipa, gsi); + endpoint = ipa->endpoint_map[trans->channel_id]; + + ipa_endpoint_replenish(endpoint, true); + + if (trans->result == -ECANCELED) { + __free_pages(page, IPA_RX_BUFFER_ORDER); + return; + } + + /* Parse or build a socket buffer using the actual received length */ + if (endpoint->data->config.status_enable) + ipa_endpoint_status_parse(endpoint, page, trans->len); + else + ipa_endpoint_skb_build(endpoint, page, trans->len); +} + +static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint) +{ + struct gsi_trans *trans; + bool doorbell = false; + struct page *page; + u32 offset; + u32 len; + + page = dev_alloc_pages(IPA_RX_BUFFER_ORDER); + if (!page) + return -ENOMEM; + offset = NET_SKB_PAD; + len = IPA_RX_BUFFER_SIZE - offset; + + trans = gsi_channel_trans_alloc(&endpoint->ipa->gsi, + endpoint->channel_id, 1); + if (!trans) + goto err_page_free; + trans->data = page; + + /* Set up and map a scatterlist entry representing the buffer */ + sg_init_table(trans->sgl, trans->sgc); + sg_set_page(trans->sgl, page, len, offset); + + if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH) { + doorbell = true; + endpoint->replenish_ready = 0; + } + + if (!gsi_trans_commit(trans, doorbell)) + return 0; + + /* Committing failed; free the (uncommitted) transaction */ + gsi_trans_free(trans); +err_page_free: + __free_pages(page, IPA_RX_BUFFER_ORDER); + + return -ENOMEM; +} + +/** + * ipa_endpoint_replenish() - Replenish the Rx packets cache. + * @endpoint: Endpoint to be replenished + * @add_one: Whether to add an extra buffer to the backlog first + * + * Allocate RX packet wrapper structures with maximal socket buffers + * for an endpoint. These are supplied to the hardware, which fills + * them with incoming data. + */ +static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one) +{ + struct gsi *gsi; + u32 backlog; + + if (add_one) { + if (endpoint->replenish_enabled) + atomic_inc(&endpoint->replenish_backlog); + else + atomic_inc(&endpoint->replenish_saved); + } + + if (!endpoint->replenish_enabled) + return; + + while (atomic_dec_not_zero(&endpoint->replenish_backlog)) + if (ipa_endpoint_replenish_one(endpoint)) + goto try_again_later; + + return; + +try_again_later: + /* The last one didn't succeed, so fix the backlog */ + backlog = atomic_inc_return(&endpoint->replenish_backlog); + + /* Whenever a receive buffer transaction completes we'll try to + * replenish again. It's unlikely, but if we fail to supply even + * one buffer, nothing will trigger another replenish attempt. + * If this happens, schedule work to try again.
+ */ + gsi = &endpoint->ipa->gsi; + if (backlog == gsi_channel_trans_max(gsi, endpoint->channel_id)) + schedule_delayed_work(&endpoint->replenish_work, + msecs_to_jiffies(1)); +} + +static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint) +{ + struct gsi *gsi = &endpoint->ipa->gsi; + u32 max_backlog; + u32 saved; + + endpoint->replenish_enabled = true; + while ((saved = atomic_xchg(&endpoint->replenish_saved, 0))) + atomic_add(saved, &endpoint->replenish_backlog); + + /* Start replenishing if hardware currently has no buffers */ + max_backlog = gsi_channel_trans_max(gsi, endpoint->channel_id); + if (atomic_read(&endpoint->replenish_backlog) == max_backlog) + ipa_endpoint_replenish(endpoint, false); +} + +static void ipa_endpoint_replenish_disable(struct ipa_endpoint *endpoint) +{ + endpoint->replenish_enabled = false; +} + +static void ipa_endpoint_replenish_work(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct ipa_endpoint *endpoint; + + endpoint = container_of(dwork, struct ipa_endpoint, replenish_work); + + ipa_endpoint_replenish(endpoint, false); +}
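+/* Replenish accounting, by way of a worked example (counts assumed for
+ * illustration): suppose the channel can hold 64 receive transactions.
+ * At setup time replenish_saved is 64 and replenish_backlog is 0; enabling
+ * replenishing moves the saved count into the backlog.  Posting a buffer
+ * to hardware decrements the backlog, and each completed (or cancelled)
+ * buffer increments it again via the add_one path.  The invariant is that
+ * the backlog plus the buffers held by hardware equals the channel's
+ * transaction limit, which is why a backlog equal to that limit (checked
+ * above) means the hardware currently holds no buffers at all.
+ */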
 +static bool ipa_endpoint_set_up(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + + return ipa && (ipa->set_up & BIT(endpoint->endpoint_id)); +} + +static void ipa_endpoint_default_route_set(struct ipa *ipa, + enum ipa_endpoint_id endpoint_id) +{ + u32 val; + + /* ROUTE_DIS is 0 */ + val = u32_encode_bits(endpoint_id, ROUTE_DEF_PIPE_FMASK); + val |= ROUTE_DEF_HDR_TABLE_FMASK; + val |= u32_encode_bits(0, ROUTE_DEF_HDR_OFST_FMASK); + val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK); + val |= ROUTE_DEF_RETAIN_HDR_FMASK; + + iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET); +} + +/** + * ipa_endpoint_default_route_setup() - Configure IPA default route + * @endpoint: Endpoint to which exceptions should be directed + */ +void ipa_endpoint_default_route_setup(struct ipa_endpoint *endpoint) +{ + ipa_endpoint_default_route_set(endpoint->ipa, endpoint->endpoint_id); +} + +/** + * ipa_endpoint_default_route_teardown() - + * Inverse of ipa_endpoint_default_route_setup() + * @endpoint: Endpoint pointer + */ +void ipa_endpoint_default_route_teardown(struct ipa_endpoint *endpoint) +{ + ipa_endpoint_default_route_set(endpoint->ipa, 0); +} + +/** + * ipa_endpoint_stop() - Stop a GSI channel in IPA + * @endpoint: Endpoint whose GSI channel should be stopped + * + * This function implements the sequence to stop a GSI channel + * in IPA, and returns when the channel is in STOP state. + * + * Return: 0 on success, or a negative error code + */ +int ipa_endpoint_stop(struct ipa_endpoint *endpoint) +{ + struct device *dev = &endpoint->ipa->pdev->dev; + size_t size = IPA_ENDPOINT_STOP_RX_SIZE; + struct gsi *gsi = &endpoint->ipa->gsi; + void *virt = NULL; + dma_addr_t addr; + int ret; + int i; + + /* An RX endpoint might not stop right away. In that case we issue + * a small (1-byte) DMA command, delay for a bit (1-2 milliseconds), + * and try again. Allocate the DMA buffer in case this is needed. + */ + if (!endpoint->toward_ipa) { + virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + } + + for (i = 0; i < IPA_ENDPOINT_STOP_RETRY_MAX; i++) { + ret = gsi_channel_stop(gsi, endpoint->channel_id); + if (ret != -EAGAIN) + break; + + if (endpoint->toward_ipa) + continue; + + /* Send a 1 byte 32-bit DMA task and try again after a delay */ + ret = ipa_cmd_dma_task_32(endpoint->ipa, size, addr); + if (ret) + break; + + usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC); + } + if (i >= IPA_ENDPOINT_STOP_RETRY_MAX) + ret = -EIO; + + if (!endpoint->toward_ipa) + dma_free_coherent(dev, size, virt, addr); + + return ret; +} + +bool ipa_endpoint_enabled(struct ipa_endpoint *endpoint) +{ + return !!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)); +} + +int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + int ret; + + if (WARN_ON(!ipa_endpoint_set_up(endpoint))) + return -EINVAL; + + ret = gsi_channel_start(&ipa->gsi, endpoint->channel_id); + if (ret) + return ret; + + ipa_interrupt_suspend_enable(ipa->interrupt, endpoint->endpoint_id); + + if (!endpoint->toward_ipa) + ipa_endpoint_replenish_enable(endpoint); + + ipa->enabled |= BIT(endpoint->endpoint_id); + + return 0; +} + +void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + int ret; + + if (WARN_ON(!ipa_endpoint_enabled(endpoint))) + return; + + if (!endpoint->toward_ipa) + ipa_endpoint_replenish_disable(endpoint); + + ipa_interrupt_suspend_disable(ipa->interrupt, endpoint->endpoint_id); + + ret = ipa_endpoint_stop(endpoint); + WARN(ret, "error %d attempting to stop endpoint %u\n", ret, + endpoint->endpoint_id); + + if (!ret) + endpoint->ipa->enabled &= ~BIT(endpoint->endpoint_id); +} + +static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint) +{ + u32 mask = BIT(endpoint->endpoint_id); + struct ipa *ipa = endpoint->ipa; + u32 val; + + val = ioread32(ipa->reg_virt + IPA_REG_STATE_AGGR_ACTIVE_OFFSET); + + return !!(val & mask); +} + +static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint) +{ + u32 mask = BIT(endpoint->endpoint_id); + struct ipa *ipa = endpoint->ipa; + u32 val; + + val = u32_encode_bits(mask, PIPE_BITMAP_FMASK); + iowrite32(val, ipa->reg_virt + IPA_REG_AGGR_FORCE_CLOSE_OFFSET); +} + +/** + * ipa_endpoint_reset_rx_aggr() - Reset RX endpoint with aggregation active + * @endpoint: Endpoint to be reset + * + * If aggregation is active on an RX endpoint when a reset is performed + * on its underlying GSI channel, a special sequence of actions must be + * taken to ensure the IPA pipeline is properly cleared. + * + * Return: 0 if successful, or a negative error code + */ +static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint) +{ + struct device *dev = &endpoint->ipa->pdev->dev; + struct ipa *ipa = endpoint->ipa; + bool endpoint_suspended = false; + struct gsi *gsi = &ipa->gsi; + dma_addr_t addr; + u32 len = 1; + void *virt; + int ret; + int i; + + virt = kzalloc(len, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + addr = dma_map_single(dev, virt, len, DMA_FROM_DEVICE); + if (dma_mapping_error(dev, addr)) { + ret = -ENOMEM; + goto out_free_virt; + } + + /* Force close aggregation before issuing the reset */ + ipa_endpoint_force_close(endpoint); + + /* Reset and reconfigure the channel with the doorbell engine + * disabled. Then poll until we know aggregation is no longer + * active. We'll re-enable the doorbell when we reset below.
+ */ + ret = gsi_channel_reset(gsi, endpoint->channel_id, false); + if (ret) + goto out_unmap_addr; + + if (ipa_endpoint_init_ctrl(endpoint, false)) + endpoint_suspended = true; + + /* Start channel and do a 1 byte read */ + ret = gsi_channel_start(gsi, endpoint->channel_id); + if (ret) + goto out_suspend_again; + + ret = gsi_trans_read_byte(gsi, endpoint->channel_id, addr); + if (ret) + goto err_stop_channel; + + /* Wait for aggregation to be closed on the channel */ + for (i = 0; i < IPA_ENDPOINT_RESET_AGGR_RETRY_MAX; i++) { + if (!ipa_endpoint_aggr_active(endpoint)) + break; + usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC); + } + WARN_ON(ipa_endpoint_aggr_active(endpoint)); + gsi_trans_read_byte_done(gsi, endpoint->channel_id); + + ret = ipa_endpoint_stop(endpoint); + if (ret) + goto out_suspend_again; + + /* Finally, reset and reconfigure the channel again (this time with + * the doorbell engine enabled). Sleep for 1 millisecond to complete + * the channel reset sequence. Finish by suspending the channel + * again (if necessary). + */ + ret = gsi_channel_reset(gsi, endpoint->channel_id, true); + + usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC); + + goto out_suspend_again; + +err_stop_channel: + ipa_endpoint_stop(endpoint); +out_suspend_again: + if (endpoint_suspended) + (void)ipa_endpoint_init_ctrl(endpoint, true); +out_unmap_addr: + dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE); +out_free_virt: + kfree(virt); + + return ret; +} + +static void ipa_endpoint_reset(struct ipa_endpoint *endpoint) +{ + u32 channel_id = endpoint->channel_id; + struct ipa *ipa = endpoint->ipa; + struct gsi *gsi = &ipa->gsi; + int ret; + + /* An RX endpoint with aggregation configured and currently active + * needs the special reset sequence; anything else just resets the + * underlying GSI channel. + */ + if (!endpoint->toward_ipa && endpoint->data->config.aggregation) { + if (ipa_endpoint_aggr_active(endpoint)) + ret = ipa_endpoint_reset_rx_aggr(endpoint); + else + ret = gsi_channel_reset(gsi, channel_id, true); + } else { + ret = gsi_channel_reset(gsi, channel_id, true); + } + WARN(ret, "error %d attempting to reset channel %u\n", ret, + endpoint->channel_id); +} + +static bool ipa_endpoint_suspended(struct ipa_endpoint *endpoint) +{ + return !!(endpoint->ipa->suspended & BIT(endpoint->endpoint_id)); +} + +/** + * ipa_endpoint_suspend_aggr() - Emulate suspend interrupt + * @endpoint: Endpoint on which to emulate a suspend + * + * Emulate suspend IPA interrupt to unsuspend an endpoint suspended + * with an open aggregation frame. This is to work around a hardware + * issue where the suspend interrupt will not be generated when it + * should be. + */ +static void ipa_endpoint_suspend_aggr(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + + /* Nothing to do if the endpoint doesn't have aggregation open */ + if (!ipa_endpoint_aggr_active(endpoint)) + return; + + /* Force close aggregation */ + ipa_endpoint_force_close(endpoint); + + ipa_interrupt_simulate_suspend(ipa->interrupt); +} + +void ipa_endpoint_suspend(struct ipa_endpoint *endpoint) +{ + struct gsi *gsi = &endpoint->ipa->gsi; + + if (!ipa_endpoint_enabled(endpoint)) + return; + + if (!endpoint->toward_ipa) { + if (!ipa_endpoint_init_ctrl(endpoint, true)) + return; + + ipa_endpoint_replenish_disable(endpoint); + + /* Due to a hardware bug, a client suspended with an open + * aggregation frame will not generate a SUSPEND IPA interrupt. + * We work around this by force-closing the aggregation frame, + * then simulating the arrival of such an interrupt.
+ */ + if (endpoint->data->config.aggregation) + ipa_endpoint_suspend_aggr(endpoint); + } + + gsi_channel_trans_quiesce(gsi, endpoint->channel_id); + + endpoint->ipa->suspended |= BIT(endpoint->endpoint_id); +} + +void ipa_endpoint_resume(struct ipa_endpoint *endpoint) +{ + if (!ipa_endpoint_suspended(endpoint)) + return; + + if (!endpoint->toward_ipa) { + /* Mirror the suspend path: take the endpoint out of + * suspend, then restart receive buffer replenishing + * (which was disabled when it was suspended). + */ + WARN_ON(!ipa_endpoint_init_ctrl(endpoint, false)); + ipa_endpoint_replenish_enable(endpoint); + } + + endpoint->ipa->suspended &= ~BIT(endpoint->endpoint_id); +} + +static void ipa_endpoint_program(struct ipa_endpoint *endpoint) +{ + if (endpoint->toward_ipa) { + bool delay_mode = !!endpoint->data->config.tx.delay; + + (void)ipa_endpoint_init_ctrl(endpoint, delay_mode); + ipa_endpoint_init_hdr_ext(endpoint); + ipa_endpoint_init_aggr(endpoint); + ipa_endpoint_init_deaggr(endpoint); + ipa_endpoint_init_seq(endpoint); + } else { + (void)ipa_endpoint_init_ctrl(endpoint, false); + ipa_endpoint_init_hdr_ext(endpoint); + ipa_endpoint_init_aggr(endpoint); + } + ipa_endpoint_init_cfg(endpoint); + ipa_endpoint_init_hdr(endpoint); + ipa_endpoint_init_hdr_metadata_mask(endpoint); + ipa_endpoint_init_mode(endpoint); + ipa_endpoint_status(endpoint); +} + +static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint) +{ + struct gsi *gsi = &endpoint->ipa->gsi; + u32 channel_id = endpoint->channel_id; + + /* Only AP endpoints get configured */ + if (endpoint->ee_id != GSI_EE_AP) + return; + + endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id); + if (!endpoint->toward_ipa) { + endpoint->replenish_enabled = false; + atomic_set(&endpoint->replenish_saved, + gsi_channel_trans_max(gsi, endpoint->channel_id)); + atomic_set(&endpoint->replenish_backlog, 0); + INIT_DELAYED_WORK(&endpoint->replenish_work, + ipa_endpoint_replenish_work); + } + + ipa_endpoint_program(endpoint); + + endpoint->ipa->set_up |= BIT(endpoint->endpoint_id); +} + +static void ipa_endpoint_teardown_one(struct ipa_endpoint *endpoint) +{ + if (!endpoint->toward_ipa) + cancel_delayed_work_sync(&endpoint->replenish_work); + + ipa_endpoint_reset(endpoint); + + endpoint->ipa->set_up &= ~BIT(endpoint->endpoint_id); +} + +void ipa_endpoint_setup(struct ipa *ipa) +{ + u32 initialized = ipa->initialized; + + ipa->set_up = 0; + while (initialized) { + enum ipa_endpoint_id endpoint_id = __ffs(initialized); + + initialized ^= BIT(endpoint_id); + + ipa_endpoint_setup_one(&ipa->endpoint[endpoint_id]); + } +} + +void ipa_endpoint_teardown(struct ipa *ipa) +{ + u32 set_up = ipa->set_up; + + while (set_up) { + enum ipa_endpoint_id endpoint_id = __fls(set_up); + + set_up ^= BIT(endpoint_id); + + ipa_endpoint_teardown_one(&ipa->endpoint[endpoint_id]); + } +}
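+/* The setup/teardown loops above (and the init/exit code below) share a
+ * common pattern for visiting each endpoint recorded in a bitmask.  A
+ * worked example with an assumed mask: if ipa->set_up is 0x00000620
+ * (endpoints 5, 9 and 10 set up), __fls() yields 10, the XOR with BIT(10)
+ * clears that bit leaving 0x00000220, and the loop repeats until the mask
+ * is empty.  Setup walks upward with __ffs(); teardown uses __fls() so
+ * endpoints are torn down in the reverse of the order they were set up.
+ */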
ipa_endpoint_init(struct ipa *ipa, u32 data_count,
+		      const struct gsi_ipa_endpoint_data *data)
+{
+	u32 initialized;
+	int ret;
+	u32 i;
+
+	ipa->initialized = 0;
+
+	ipa->filter_support = 0;
+	for (i = 0; i < data_count; i++) {
+		ret = ipa_endpoint_init_one(ipa, &data[i]);
+		if (ret)
+			goto err_endpoint_unwind;
+	}
+	dev_dbg(&ipa->pdev->dev, "initialized 0x%08x\n", ipa->initialized);
+
+	/* Verify the bitmap of endpoints that support filtering. */
+	dev_dbg(&ipa->pdev->dev, "filter_support 0x%08x\n",
+		ipa->filter_support);
+	ret = -EINVAL;
+	if (!ipa->filter_support)
+		goto err_endpoint_unwind;
+	if (hweight32(ipa->filter_support) > IPA_SMEM_FLT_COUNT)
+		goto err_endpoint_unwind;
+
+	return 0;
+
+err_endpoint_unwind:
+	initialized = ipa->initialized;
+	while (initialized) {
+		enum ipa_endpoint_id endpoint_id = __fls(initialized);
+
+		initialized ^= BIT(endpoint_id);
+
+		ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]);
+	}
+
+	return ret;
+}
+
+void ipa_endpoint_exit(struct ipa *ipa)
+{
+	u32 initialized = ipa->initialized;
+
+	while (initialized) {
+		enum ipa_endpoint_id endpoint_id = __fls(initialized);
+
+		initialized ^= BIT(endpoint_id);
+
+		ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]);
+	}
+}
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
new file mode 100644
index 000000000000..c9f7ccc59a5a
--- /dev/null
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_ENDPOINT_H_
+#define _IPA_ENDPOINT_H_
+
+#include <linux/types.h>
+#include <linux/workqueue.h>
+#include <linux/if_ether.h>
+
+#include "gsi.h"
+#include "ipa_reg.h"
+
+struct net_device;
+struct sk_buff;
+
+struct ipa;
+struct gsi_ipa_endpoint_data;
+
+#define IPA_MTU			ETH_DATA_LEN
+
+enum ipa_endpoint_id {
+	IPA_ENDPOINT_INVALID		= 0,
+	IPA_ENDPOINT_AP_MODEM_TX	= 2,
+	IPA_ENDPOINT_MODEM_LAN_TX	= 3,
+	IPA_ENDPOINT_MODEM_COMMAND_TX	= 4,
+	IPA_ENDPOINT_AP_COMMAND_TX	= 5,
+	IPA_ENDPOINT_MODEM_AP_TX	= 6,
+	IPA_ENDPOINT_AP_LAN_RX		= 9,
+	IPA_ENDPOINT_AP_MODEM_RX	= 10,
+	IPA_ENDPOINT_MODEM_AP_RX	= 12,
+	IPA_ENDPOINT_MODEM_LAN_RX	= 13,
+};
+
+#define IPA_ENDPOINT_MAX		32	/* Max supported */
+
+/**
+ * struct ipa_endpoint - IPA endpoint information
+ * @ipa:		IPA this endpoint belongs to
+ * @channel_id:		EP's GSI channel
+ * @evt_ring_id:	EP's GSI channel event ring
+ */
+struct ipa_endpoint {
+	struct ipa *ipa;
+	enum ipa_seq_type seq_type;
+	enum gsi_ee_id ee_id;
+	u32 channel_id;
+	enum ipa_endpoint_id endpoint_id;
+	u32 toward_ipa;		/* Boolean */
+	const struct ipa_endpoint_data *data;
+
+	u32 trans_tre_max;	/* maximum descriptors per transaction */
+	u32 evt_ring_id;
+
+	/* Net device this endpoint is associated with, if any */
+	struct net_device *netdev;
+
+	/* Receive buffer replenishing for RX endpoints */
+	u32 replenish_enabled;	/* Boolean */
+	u32 replenish_ready;
+	atomic_t replenish_saved;
+	atomic_t replenish_backlog;
+	struct delayed_work replenish_work;	/* global wq */
+};
+
+bool ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay);
+
+int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb);
+
+int ipa_endpoint_stop(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint);
+
+bool ipa_endpoint_enabled(struct ipa_endpoint *endpoint);
+int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint);
+void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_default_route_setup(struct ipa_endpoint *endpoint);
+void ipa_endpoint_default_route_teardown(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_suspend(struct ipa_endpoint *endpoint);
+void ipa_endpoint_resume(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_setup(struct ipa *ipa);
+void ipa_endpoint_teardown(struct ipa *ipa);
+
+int ipa_endpoint_init(struct ipa *ipa, u32 data_count,
+		      const struct gsi_ipa_endpoint_data *data);
+void ipa_endpoint_exit(struct ipa *ipa);
+
+void ipa_endpoint_skb_tx_complete(struct gsi_trans *trans);
+void ipa_endpoint_rx_complete(struct gsi_trans *trans);
+
+
+#endif /* _IPA_ENDPOINT_H_ */

From patchwork Fri May 31 03:53:42 2019
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Subject: [PATCH v2 11/17] soc: qcom: ipa: immediate commands
Date: Thu, 30 May 2019 22:53:42 -0500
Message-Id: <20190531035348.7194-12-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>

One TX endpoint (per EE) is used for issuing immediate commands to
the IPA.  These commands request that the IPA hardware perform
activities beyond simple data transfers.  For example, the IPA is
able to manage routing packets among endpoints, and immediate
commands are used to configure tables used for that routing.

Immediate commands are built on top of GSI transactions.  They are
different from normal transfers (in that they use a special endpoint,
and their "payload" is interpreted differently), so separate
functions are used to issue immediate command transactions.
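To make the field-encoding checks easier to follow, here is a minimal
sketch of the validation pattern ipa_cmd.c applies before encoding an
offset into a masked field.  The mask name and function here are
illustrative only; field_max() and the overflow-safe two-step
comparison match what the code below actually does:

	#include <linux/bits.h>
	#include <linux/bitfield.h>
	#include <linux/errno.h>
	#include <linux/types.h>

	/* Hypothetical stand-in for masks such as the HDR_ADDR field mask */
	#define EXAMPLE_ADDR_FMASK	GENMASK(27, 12)

	static int example_check_offset(u32 offset, u32 shared_offset)
	{
		u32 max = field_max(EXAMPLE_ADDR_FMASK);

		/* Together these guarantee offset + shared_offset fits in
		 * the field, without overflowing the unsigned addition.
		 */
		if (offset > max || shared_offset > max - offset)
			return -EINVAL;

		return 0;
	}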
Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_cmd.c | 377 ++++++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_cmd.h | 116 ++++++++++++ 2 files changed, 493 insertions(+) create mode 100644 drivers/net/ipa/ipa_cmd.c create mode 100644 drivers/net/ipa/ipa_cmd.h diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c new file mode 100644 index 000000000000..32b11941436d --- /dev/null +++ b/drivers/net/ipa/ipa_cmd.c @@ -0,0 +1,377 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ + +#include +#include +#include +#include + +#include "gsi.h" +#include "gsi_trans.h" +#include "ipa.h" +#include "ipa_endpoint.h" +#include "ipa_cmd.h" +#include "ipa_mem.h" + +/** + * DOC: IPA Immediate Commands + * + * The AP command TX endpoint is used to issue immediate commands to the IPA. + * An immediate command is generally used to request the IPA do something + * other than data transfer to another endpoint. + * + * Immediate commands are represented by GSI transactions just like other + * transfer requests, represented by a single GSI TRE. Each immediate + * command has a well-defined format, having a payload of a known length. + * This allows the transfer element's length field to be used to hold an + * immediate command's opcode. The payload for a command resides in DRAM + * and is described by a single scatterlist entry in its transaction. + * Commands do not require a transaction completion callback. To commit + * an immediate command transaction, either gsi_trans_commit_command() or + * gsi_trans_commit_command_timeout() is used. + */ + +#define IPA_GSI_DMA_TASK_TIMEOUT 15 /* milliseconds */ + +/** + * __ipa_cmd_timeout() - Send an immediate command with timeout + * @ipa: IPA structure + * @opcode: Immediate command opcode (must not be IPA_CMD_NONE) + * @payload: Pointer to command payload + * @size: Size of payload + * @timeout: Milliseconds to wait for completion (0 waits indefinitely) + * + * This common function implements ipa_cmd() and ipa_cmd_timeout(). It + * allocates, initializes, and commits a transaction for the immediate + * command. The transaction is committed using gsi_trans_commit_command(), + * or if a non-zero timeout is supplied, gsi_trans_commit_command_timeout(). 
+ *
+ * Return: 0 if successful, or a negative error code
+ */
+static int __ipa_cmd_timeout(struct ipa *ipa, enum ipa_cmd_opcode opcode,
+			     void *payload, size_t size, u32 timeout)
+{
+	struct ipa_endpoint *endpoint = ipa->command_endpoint;
+	struct gsi_trans *trans;
+	int ret;
+
+	/* assert(opcode != IPA_CMD_NONE) */
+	trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id, 1);
+	if (!trans)
+		return -EBUSY;
+
+	sg_init_one(trans->sgl, payload, size);
+
+	if (timeout)
+		ret = gsi_trans_commit_command_timeout(trans, opcode, timeout);
+	else
+		ret = gsi_trans_commit_command(trans, opcode);
+	if (ret)
+		goto err_trans_free;
+
+	return 0;
+
+err_trans_free:
+	gsi_trans_free(trans);
+
+	return ret;
+}
+
+static int
+ipa_cmd(struct ipa *ipa, enum ipa_cmd_opcode opcode, void *payload, size_t size)
+{
+	return __ipa_cmd_timeout(ipa, opcode, payload, size, 0);
+}
+
+static int ipa_cmd_timeout(struct ipa *ipa, enum ipa_cmd_opcode opcode,
+			   void *payload, size_t size)
+{
+	return __ipa_cmd_timeout(ipa, opcode, payload, size,
+				 IPA_GSI_DMA_TASK_TIMEOUT);
+}
+
+/* Field masks for ipa_imm_cmd_hw_hdr_init_local structure fields */
+#define IPA_CMD_HDR_INIT_FLAGS_TABLE_SIZE_FMASK		GENMASK(11, 0)
+#define IPA_CMD_HDR_INIT_FLAGS_HDR_ADDR_FMASK		GENMASK(27, 12)
+#define IPA_CMD_HDR_INIT_FLAGS_RESERVED_FMASK		GENMASK(31, 28)
+
+struct ipa_imm_cmd_hw_hdr_init_local {
+	u64 hdr_table_addr;
+	u32 flags;
+	u32 reserved;
+};
+
+/* Initialize header space in IPA-local memory */
+int ipa_cmd_hdr_init_local(struct ipa *ipa, u32 offset, u32 size)
+{
+	struct ipa_imm_cmd_hw_hdr_init_local *payload;
+	struct device *dev = &ipa->pdev->dev;
+	dma_addr_t addr;
+	void *virt;
+	u32 flags;
+	u32 max;
+	int ret;
+
+	if (size > field_max(IPA_CMD_HDR_INIT_FLAGS_TABLE_SIZE_FMASK))
+		return -EINVAL;
+
+	max = field_max(IPA_CMD_HDR_INIT_FLAGS_HDR_ADDR_FMASK);
+	if (offset > max || ipa->shared_offset > max - offset)
+		return -EINVAL;
+	offset += ipa->shared_offset;
+
+	/* With this command we tell the IPA where in its local memory the
+	 * header tables reside.  We also supply a (host) buffer whose
+	 * content is copied via DMA into that table space.  We just want
+	 * to zero-fill it, so a zeroed DMA buffer is all that's required.
+	 * The IPA owns the table, but the AP must initialize it.
+ */ + virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + payload = kzalloc(sizeof(*payload), GFP_KERNEL); + if (!payload) { + ret = -ENOMEM; + goto out_dma_free; + } + + payload->hdr_table_addr = addr; + flags = u32_encode_bits(size, IPA_CMD_HDR_INIT_FLAGS_TABLE_SIZE_FMASK); + flags |= u32_encode_bits(offset, IPA_CMD_HDR_INIT_FLAGS_HDR_ADDR_FMASK); + payload->flags = flags; + + ret = ipa_cmd(ipa, IPA_CMD_HDR_INIT_LOCAL, payload, sizeof(*payload)); + + kfree(payload); +out_dma_free: + dma_free_coherent(dev, size, virt, addr); + + return ret; +} + +enum ipahal_pipeline_clear_option { + IPAHAL_HPS_CLEAR = 0, + IPAHAL_SRC_GRP_CLEAR = 1, + IPAHAL_FULL_PIPELINE_CLEAR = 2, +}; + +/* Field masks for ipa_imm_cmd_hw_dma_shared_mem structure fields */ +#define IPA_CMD_DMA_SHARED_FLAGS_DIRECTION_FMASK GENMASK(0, 0) +#define IPA_CMD_DMA_SHARED_FLAGS_SKIP_CLEAR_FMASK GENMASK(1, 1) +#define IPA_CMD_DMA_SHARED_FLAGS_CLEAR_OPTIONS_FMASK GENMASK(3, 2) + +struct ipa_imm_cmd_hw_dma_shared_mem { + u16 sw_reserved; + u16 size; + u16 local_addr; + u16 flags; + u64 system_addr; +}; + +/* Use a DMA command to zero a block of memory */ +int ipa_cmd_smem_dma_zero(struct ipa *ipa, u32 offset, u32 size) +{ + struct ipa_imm_cmd_hw_dma_shared_mem *payload; + struct device *dev = &ipa->pdev->dev; + dma_addr_t addr; + void *virt; + u32 val; + int ret; + + /* size must be non-zero, and must fit in a 16 bit field */ + if (!size || size > U16_MAX) + return -EINVAL; + + /* offset must fit in a 16 bit local_addr field */ + if (offset > U16_MAX || ipa->shared_offset > U16_MAX - offset) + return -EINVAL; + offset += ipa->shared_offset; + + /* A zero-filled buffer of the right size is all that's required */ + virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + payload = kzalloc(sizeof(*payload), GFP_KERNEL); + if (!payload) { + ret = -ENOMEM; + goto out_dma_free; + } + + payload->size = size; + payload->local_addr = offset; + /* direction: 0 = write to IPA; skip clear: 0 = don't wait */ + val = u16_encode_bits(IPAHAL_HPS_CLEAR, + IPA_CMD_DMA_SHARED_FLAGS_CLEAR_OPTIONS_FMASK); + payload->flags = val; + payload->system_addr = addr; + + ret = ipa_cmd(ipa, IPA_CMD_DMA_SHARED_MEM, payload, sizeof(*payload)); + + kfree(payload); +out_dma_free: + dma_free_coherent(dev, size, virt, addr); + + return ret; +} + +/* Field masks for ipa_imm_cmd_hw_ip_fltrt_init structure fields */ +#define IPA_CMD_IP_FLTRT_FLAGS_HASH_SIZE_FMASK GENMASK_ULL(11, 0) +#define IPA_CMD_IP_FLTRT_FLAGS_HASH_ADDR_FMASK GENMASK_ULL(27, 12) +#define IPA_CMD_IP_FLTRT_FLAGS_NHASH_SIZE_FMASK GENMASK_ULL(39, 28) +#define IPA_CMD_IP_FLTRT_FLAGS_NHASH_ADDR_FMASK GENMASK_ULL(55, 40) + +struct ipa_imm_cmd_hw_ip_fltrt_init { + u64 hash_rules_addr; + u64 flags; + u64 nhash_rules_addr; +}; + +/* Configure a routing or filter table, for IPv4 or IPv6 */ +static int ipa_cmd_table_config(struct ipa *ipa, enum ipa_cmd_opcode opcode, + dma_addr_t addr, size_t size, u32 hash_offset, + u32 nhash_offset) +{ + struct ipa_imm_cmd_hw_ip_fltrt_init *payload; + u64 val; + u32 max; + int ret; + + if (size > field_max(IPA_CMD_IP_FLTRT_FLAGS_HASH_SIZE_FMASK)) + return -EINVAL; + if (size > field_max(IPA_CMD_IP_FLTRT_FLAGS_NHASH_SIZE_FMASK)) + return -EINVAL; + + max = field_max(IPA_CMD_IP_FLTRT_FLAGS_HASH_ADDR_FMASK); + if (hash_offset > max || ipa->shared_offset > max - hash_offset) + return -EINVAL; + hash_offset += ipa->shared_offset; + + max = field_max(IPA_CMD_IP_FLTRT_FLAGS_NHASH_ADDR_FMASK); + if 
(nhash_offset > max || ipa->shared_offset > max - nhash_offset) + return -EINVAL; + nhash_offset += ipa->shared_offset; + + payload = kzalloc(sizeof(*payload), GFP_KERNEL); + if (!payload) + return -ENOMEM; + + payload->hash_rules_addr = addr; + val = u64_encode_bits(size, IPA_CMD_IP_FLTRT_FLAGS_HASH_SIZE_FMASK); + val |= u64_encode_bits(hash_offset, + IPA_CMD_IP_FLTRT_FLAGS_HASH_ADDR_FMASK); + val |= u64_encode_bits(size, IPA_CMD_IP_FLTRT_FLAGS_NHASH_SIZE_FMASK); + val |= u64_encode_bits(nhash_offset, + IPA_CMD_IP_FLTRT_FLAGS_NHASH_ADDR_FMASK); + payload->flags = val; + payload->nhash_rules_addr = addr; + + ret = ipa_cmd(ipa, opcode, payload, sizeof(*payload)); + + kfree(payload); + + return ret; +} + +/* Configure IPv4 routing table */ +int ipa_cmd_route_config_ipv4(struct ipa *ipa, size_t size) +{ + enum ipa_cmd_opcode opcode = IPA_CMD_IP_V4_ROUTING_INIT; + u32 nhash_offset = IPA_SMEM_V4_RT_NHASH_OFFSET; + u32 hash_offset = IPA_SMEM_V4_RT_HASH_OFFSET; + dma_addr_t addr = ipa->route_addr; + + return ipa_cmd_table_config(ipa, opcode, addr, size, hash_offset, + nhash_offset); +} + +/* Configure IPv6 routing table */ +int ipa_cmd_route_config_ipv6(struct ipa *ipa, size_t size) +{ + enum ipa_cmd_opcode opcode = IPA_CMD_IP_V6_ROUTING_INIT; + u32 nhash_offset = IPA_SMEM_V6_RT_NHASH_OFFSET; + u32 hash_offset = IPA_SMEM_V6_RT_HASH_OFFSET; + dma_addr_t addr = ipa->route_addr; + + return ipa_cmd_table_config(ipa, opcode, addr, size, hash_offset, + nhash_offset); +} + +/* Configure IPv4 filter table */ +int ipa_cmd_filter_config_ipv4(struct ipa *ipa, size_t size) +{ + enum ipa_cmd_opcode opcode = IPA_CMD_IP_V4_FILTER_INIT; + u32 nhash_offset = IPA_SMEM_V4_FLT_NHASH_OFFSET; + u32 hash_offset = IPA_SMEM_V4_FLT_HASH_OFFSET; + dma_addr_t addr = ipa->filter_addr; + + return ipa_cmd_table_config(ipa, opcode, addr, size, hash_offset, + nhash_offset); +} + +/* Configure IPv6 filter table */ +int ipa_cmd_filter_config_ipv6(struct ipa *ipa, size_t size) +{ + enum ipa_cmd_opcode opcode = IPA_CMD_IP_V6_FILTER_INIT; + u32 nhash_offset = IPA_SMEM_V6_FLT_NHASH_OFFSET; + u32 hash_offset = IPA_SMEM_V6_FLT_HASH_OFFSET; + dma_addr_t addr = ipa->filter_addr; + + return ipa_cmd_table_config(ipa, opcode, addr, size, hash_offset, + nhash_offset); +} + +/* Field masks for ipa_imm_cmd_hw_dma_task_32b_addr structure fields */ +#define IPA_CMD_DMA32_TASK_SW_RSVD_FMASK GENMASK(10, 0) +#define IPA_CMD_DMA32_TASK_CMPLT_FMASK GENMASK(11, 11) +#define IPA_CMD_DMA32_TASK_EOF_FMASK GENMASK(12, 12) +#define IPA_CMD_DMA32_TASK_FLSH_FMASK GENMASK(13, 13) +#define IPA_CMD_DMA32_TASK_LOCK_FMASK GENMASK(14, 14) +#define IPA_CMD_DMA32_TASK_UNLOCK_FMASK GENMASK(15, 15) +#define IPA_CMD_DMA32_SIZE1_FMASK GENMASK(31, 16) +#define IPA_CMD_DMA32_PACKET_SIZE_FMASK GENMASK(15, 0) + +struct ipa_imm_cmd_hw_dma_task_32b_addr { + u32 size1_flags; + u32 addr1; + u32 packet_size; + u32 reserved; +}; + +/* Use a 32-bit DMA command to zero a block of memory */ +int ipa_cmd_dma_task_32(struct ipa *ipa, size_t size, dma_addr_t addr) +{ + struct ipa_imm_cmd_hw_dma_task_32b_addr *payload; + u32 size1_flags; + int ret; + + if (size > field_max(IPA_CMD_DMA32_SIZE1_FMASK)) + return -EINVAL; + if (size > field_max(IPA_CMD_DMA32_PACKET_SIZE_FMASK)) + return -EINVAL; + + payload = kzalloc(sizeof(*payload), GFP_KERNEL); + if (!payload) + return -ENOMEM; + + /* complete: 0 = don't interrupt; eof: 0 = don't assert eot */ + size1_flags = IPA_CMD_DMA32_TASK_FLSH_FMASK; + /* lock: 0 = don't lock endpoint; unlock: 0 = don't unlock */ + size1_flags |= 
u32_encode_bits(size, IPA_CMD_DMA32_SIZE1_FMASK);
+
+	payload->size1_flags = size1_flags;
+	payload->addr1 = addr;
+	payload->packet_size =
+		u32_encode_bits(size, IPA_CMD_DMA32_PACKET_SIZE_FMASK);
+
+	ret = ipa_cmd_timeout(ipa, IPA_CMD_DMA_TASK_32B_ADDR, payload,
+			      sizeof(*payload));
+
+	kfree(payload);
+
+	return ret;
+}
diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
new file mode 100644
index 000000000000..f69d2eaddd53
--- /dev/null
+++ b/drivers/net/ipa/ipa_cmd.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_CMD_H_
+#define _IPA_CMD_H_
+
+#include <linux/types.h>
+
+struct sk_buff;
+
+struct ipa;
+
+/**
+ * enum ipa_cmd_opcode: IPA immediate commands
+ *
+ * All immediate commands are issued using the AP command TX endpoint.
+ * The numeric values here are the opcodes for IPA v3.5.1 hardware.
+ *
+ * IPA_CMD_NONE is a special (invalid) value that's used to indicate
+ * a request is *not* an immediate command.
+ */
+enum ipa_cmd_opcode {
+	IPA_CMD_NONE			= 0,
+	IPA_CMD_IP_V4_FILTER_INIT	= 3,
+	IPA_CMD_IP_V6_FILTER_INIT	= 4,
+	IPA_CMD_IP_V4_ROUTING_INIT	= 7,
+	IPA_CMD_IP_V6_ROUTING_INIT	= 8,
+	IPA_CMD_HDR_INIT_LOCAL		= 9,
+	IPA_CMD_DMA_TASK_32B_ADDR	= 17,
+	IPA_CMD_DMA_SHARED_MEM		= 19,
+};
+
+/**
+ * ipa_cmd_hdr_init_local() - Initialize header space in IPA-local memory
+ * @ipa:	IPA structure
+ * @offset:	Offset of memory to be initialized
+ * @size:	Size of memory to be initialized
+ *
+ * Defines the location of a block of local memory to use for
+ * headers and fills it with zeroes.
+ *
+ * Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_hdr_init_local(struct ipa *ipa, u32 offset, u32 size);
+
+/**
+ * ipa_cmd_smem_dma_zero() - Use a DMA command to zero a block of memory
+ * @ipa:	IPA structure
+ * @offset:	Offset of memory to be zeroed
+ * @size:	Size in bytes of memory to be zeroed
+ *
+ * Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_smem_dma_zero(struct ipa *ipa, u32 offset, u32 size);
+
+/**
+ * ipa_cmd_route_config_ipv4() - Configure IPv4 routing table
+ * @ipa:	IPA structure
+ * @size:	Size in bytes of table
+ *
+ * Defines the location and size of the IPv4 routing table and
+ * zeroes its content.
+ *
+ * Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_route_config_ipv4(struct ipa *ipa, size_t size);
+
+/**
+ * ipa_cmd_route_config_ipv6() - Configure IPv6 routing table
+ * @ipa:	IPA structure
+ * @size:	Size in bytes of table
+ *
+ * Defines the location and size of the IPv6 routing table and
+ * zeroes its content.
+ *
+ * Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_route_config_ipv6(struct ipa *ipa, size_t size);
+
+/**
+ * ipa_cmd_filter_config_ipv4() - Configure IPv4 filter table
+ * @ipa:	IPA structure
+ * @size:	Size in bytes of table
+ *
+ * Defines the location and size of the IPv4 filter table and
+ * zeroes its content.
+ *
+ * Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_filter_config_ipv4(struct ipa *ipa, size_t size);
+
+/**
+ * ipa_cmd_filter_config_ipv6() - Configure IPv6 filter table
+ * @ipa:	IPA structure
+ * @size:	Size in bytes of table
+ *
+ * Defines the location and size of the IPv6 filter table and
+ * zeroes its content.
+ *
+ * Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_filter_config_ipv6(struct ipa *ipa, size_t size);
+
+/**
+ * ipa_cmd_dma_task_32() - Use a 32-bit DMA command to zero a block of memory
+ * @ipa:	IPA structure
+ * @size:	Size of memory to be zeroed
+ * @addr:	DMA address defining start of range to be zeroed
+ *
+ * Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_dma_task_32(struct ipa *ipa, size_t size, dma_addr_t addr);
+
+#endif /* _IPA_CMD_H_ */

From patchwork Fri May 31 03:53:43 2019
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Subject: [PATCH v2 12/17] soc: qcom: ipa: IPA network device and microcontroller
Date: Thu, 30 May 2019 22:53:43 -0500
Message-Id: <20190531035348.7194-13-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>

This patch includes the code that implements a Linux network device,
using one TX and one RX IPA endpoint.  It is used to implement the
network device representing the modem and its connection to wireless
networks.  There are only a few things that are really modem-specific
though, and they aren't clearly called out here.  Such distinctions
will be made clearer if we wish to support a network device for
anything other than the modem.

Somewhat unrelated, this patch also includes the code supporting the
microcontroller CPU present on the IPA.  The microcontroller can be
used to implement special handling of packets, but at this time we
don't support that.  Still, it is a component that needs to be
initialized, and in the event of a crash we need to do some
synchronization between the AP and the microcontroller.
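As background for the transmit path below: ndo_start_xmit must either
consume the skb and return NETDEV_TX_OK, or return NETDEV_TX_BUSY
without freeing it, so the core can retry the same skb later.  A
minimal sketch of that contract, with hypothetical helpers standing
in for the IPA-specific pieces:

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/* Hypothetical helpers; the real driver consults its GSI channel */
	bool example_hw_queue_full(struct net_device *dev);
	void example_hw_post(struct net_device *dev, struct sk_buff *skb);

	static netdev_tx_t example_xmit(struct sk_buff *skb,
					struct net_device *dev)
	{
		/* NETDEV_TX_BUSY hands the same skb back to the core
		 * for a later retry; the driver must not free it here.
		 */
		if (example_hw_queue_full(dev))
			return NETDEV_TX_BUSY;

		/* With NETDEV_TX_OK the driver has taken ownership */
		example_hw_post(dev, skb);
		dev->stats.tx_packets++;
		dev->stats.tx_bytes += skb->len;

		return NETDEV_TX_OK;
	}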
Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_netdev.c | 251 +++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_netdev.h | 24 ++++ drivers/net/ipa/ipa_uc.c | 208 +++++++++++++++++++++++++++++ drivers/net/ipa/ipa_uc.h | 32 +++++ 4 files changed, 515 insertions(+) create mode 100644 drivers/net/ipa/ipa_netdev.c create mode 100644 drivers/net/ipa/ipa_netdev.h create mode 100644 drivers/net/ipa/ipa_uc.c create mode 100644 drivers/net/ipa/ipa_uc.h diff --git a/drivers/net/ipa/ipa_netdev.c b/drivers/net/ipa/ipa_netdev.c new file mode 100644 index 000000000000..19c73c4da02b --- /dev/null +++ b/drivers/net/ipa/ipa_netdev.c @@ -0,0 +1,251 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ + +/* Modem Transport Network Driver. */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_data.h" +#include "ipa_endpoint.h" +#include "ipa_mem.h" +#include "ipa_netdev.h" +#include "ipa_qmi.h" + +#define IPA_NETDEV_NAME "rmnet_ipa%d" + +#define TAILROOM 0 /* for padding by mux layer */ + +#define IPA_NETDEV_TIMEOUT 10 /* seconds */ + +/** struct ipa_priv - IPA network device private data */ +struct ipa_priv { + struct ipa_endpoint *tx_endpoint; + struct ipa_endpoint *rx_endpoint; +}; + +/** ipa_netdev_open() - Opens the modem network interface */ +static int ipa_netdev_open(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + int ret; + + ret = ipa_endpoint_enable_one(priv->tx_endpoint); + if (ret) + return ret; + ret = ipa_endpoint_enable_one(priv->rx_endpoint); + if (ret) + goto err_disable_tx; + + netif_start_queue(netdev); + + return 0; + +err_disable_tx: + ipa_endpoint_disable_one(priv->tx_endpoint); + + return ret; +} + +/** ipa_netdev_stop() - Stops the modem network interface. */ +static int ipa_netdev_stop(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + + netif_stop_queue(netdev); + + ipa_endpoint_disable_one(priv->rx_endpoint); + ipa_endpoint_disable_one(priv->tx_endpoint); + + return 0; +} + +/** ipa_netdev_xmit() - Transmits an skb. + * @skb: skb to be transmitted + * @dev: network device + * + * Return codes: + * NETDEV_TX_OK: Success + * NETDEV_TX_BUSY: Error while transmitting the skb. 
Try again later + */ +static int ipa_netdev_xmit(struct sk_buff *skb, struct net_device *netdev) +{ + struct net_device_stats *stats = &netdev->stats; + struct ipa_priv *priv = netdev_priv(netdev); + struct ipa_endpoint *endpoint; + u32 skb_len = skb->len; + + if (!skb_len) + goto err_drop; + + endpoint = priv->tx_endpoint; + if (endpoint->data->config.qmap && skb->protocol != htons(ETH_P_MAP)) + goto err_drop; + + if (ipa_endpoint_skb_tx(endpoint, skb)) + return NETDEV_TX_BUSY; + + stats->tx_packets++; + stats->tx_bytes += skb_len; + + return NETDEV_TX_OK; + +err_drop: + dev_kfree_skb_any(skb); + stats->tx_dropped++; + + return NETDEV_TX_OK; +} + +void ipa_netdev_skb_rx(struct net_device *netdev, struct sk_buff *skb) +{ + struct net_device_stats *stats = &netdev->stats; + + if (skb) { + skb->dev = netdev; + skb->protocol = htons(ETH_P_MAP); + stats->rx_packets++; + stats->rx_bytes += skb->len; + + (void)netif_receive_skb(skb); + } else { + stats->rx_dropped++; + } +} + +static const struct net_device_ops ipa_netdev_ops = { + .ndo_open = ipa_netdev_open, + .ndo_stop = ipa_netdev_stop, + .ndo_start_xmit = ipa_netdev_xmit, +}; + +/** netdev_setup() - netdev setup function */ +static void netdev_setup(struct net_device *netdev) +{ + netdev->netdev_ops = &ipa_netdev_ops; + ether_setup(netdev); + /* No header ops (override value set by ether_setup()) */ + netdev->header_ops = NULL; + netdev->type = ARPHRD_RAWIP; + netdev->hard_header_len = 0; + netdev->max_mtu = IPA_MTU; + netdev->mtu = netdev->max_mtu; + netdev->addr_len = 0; + netdev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST); + /* The endpoint is configured for QMAP */ + netdev->needed_headroom = sizeof(struct rmnet_map_header); + netdev->needed_tailroom = TAILROOM; + netdev->watchdog_timeo = IPA_NETDEV_TIMEOUT * HZ; + netdev->hw_features = NETIF_F_SG; +} + +/** ipa_netdev_suspend() - suspend callback for runtime_pm + * @dev: pointer to device + * + * This callback will be invoked by the runtime_pm framework when an AP suspend + * operation is invoked, usually by pressing a suspend button. + * + * Returns -EAGAIN to runtime_pm framework in case there are pending packets + * in the Tx queue. This will postpone the suspend operation until all the + * pending packets will be transmitted. + * + * In case there are no packets to send, releases the WWAN0_PROD entity. + * As an outcome, the number of IPA active clients should be decremented + * until IPA clocks can be gated. + */ +void ipa_netdev_suspend(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + + netif_stop_queue(netdev); + + ipa_endpoint_suspend(priv->tx_endpoint); + ipa_endpoint_suspend(priv->rx_endpoint); +} + +/** ipa_netdev_resume() - resume callback for runtime_pm + * @dev: pointer to device + * + * This callback will be invoked by the runtime_pm framework when an AP resume + * operation is invoked. + * + * Enables the network interface queue and returns success to the + * runtime_pm framework. 
+ */ +void ipa_netdev_resume(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + + ipa_endpoint_resume(priv->rx_endpoint); + ipa_endpoint_resume(priv->tx_endpoint); + + netif_wake_queue(netdev); +} + +struct net_device *ipa_netdev_setup(struct ipa *ipa, + struct ipa_endpoint *rx_endpoint, + struct ipa_endpoint *tx_endpoint) +{ + struct net_device *netdev; + struct ipa_priv *priv; + int ret; + + /* Zero modem shared memory before we begin */ + ret = ipa_smem_zero_modem(ipa); + if (ret) + return ERR_PTR(ret); + + /* Start QMI communication with the modem */ + ret = ipa_qmi_setup(ipa); + if (ret) + return ERR_PTR(ret); + + netdev = alloc_netdev(sizeof(struct ipa_priv), IPA_NETDEV_NAME, + NET_NAME_UNKNOWN, netdev_setup); + if (!netdev) { + ret = -ENOMEM; + goto err_qmi_exit; + } + + rx_endpoint->netdev = netdev; + tx_endpoint->netdev = netdev; + + priv = netdev_priv(netdev); + priv->tx_endpoint = tx_endpoint; + priv->rx_endpoint = rx_endpoint; + + ret = register_netdev(netdev); + if (ret) + goto err_free_netdev; + + return netdev; + +err_free_netdev: + free_netdev(netdev); +err_qmi_exit: + ipa_qmi_teardown(ipa); + + return ERR_PTR(ret); +} + +void ipa_netdev_teardown(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + struct ipa *ipa = priv->tx_endpoint->ipa; + + if (!netif_queue_stopped(netdev)) + (void)ipa_netdev_stop(netdev); + + unregister_netdev(netdev); + + free_netdev(netdev); + + ipa_qmi_teardown(ipa); +} diff --git a/drivers/net/ipa/ipa_netdev.h b/drivers/net/ipa/ipa_netdev.h new file mode 100644 index 000000000000..8ab1e8ea0b4a --- /dev/null +++ b/drivers/net/ipa/ipa_netdev.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ +#ifndef _IPA_NETDEV_H_ +#define _IPA_NETDEV_H_ + +struct ipa; +struct ipa_endpoint; +struct net_device; +struct sk_buff; + +struct net_device *ipa_netdev_setup(struct ipa *ipa, + struct ipa_endpoint *rx_endpoint, + struct ipa_endpoint *tx_endpoint); +void ipa_netdev_teardown(struct net_device *netdev); + +void ipa_netdev_skb_rx(struct net_device *netdev, struct sk_buff *skb); + +void ipa_netdev_suspend(struct net_device *netdev); +void ipa_netdev_resume(struct net_device *netdev); + +#endif /* _IPA_NETDEV_H_ */ diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c new file mode 100644 index 000000000000..57256d1c3b90 --- /dev/null +++ b/drivers/net/ipa/ipa_uc.c @@ -0,0 +1,208 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ + +#include +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_uc.h" + +/** + * DOC: The IPA embedded microcontroller + * + * The IPA incorporates a microcontroller that is able to do some additional + * handling/offloading of network activity. The current code makes + * essentially no use of the microcontroller, but it still requires some + * initialization. It needs to be notified in the event the AP crashes. + * + * The microcontroller can generate two interrupts to the AP. One interrupt + * is used to indicate that a response to a request from the AP is available. + * The other is used to notify the AP of the occurrence of an event. In + * addition, the AP can interrupt the microcontroller by writing a register. 
+ * + * A 128 byte block of structured memory within the IPA SRAM is used together + * with these interrupts to implement the communication interface between the + * AP and the IPA microcontroller. Each side writes data to the shared area + * before interrupting its peer, which will read the written data in response + * to the interrupt. Some information found in the shared area is currently + * unused. All remaining space in the shared area is reserved, and must not + * be read or written by the AP. + */ +/* Supports hardware interface version 0x2000 */ + +/* Offset relative to the base of the IPA shared address space of the + * shared region used for communication with the microcontroller. The + * region is 128 bytes in size, but only the first 40 bytes are used. + */ +#define IPA_SMEM_UC_OFFSET 0x0000 + +/* Delay to allow a the microcontroller to save state when crashing */ +#define IPA_SEND_DELAY 100 /* microseconds */ + +/** + * struct ipa_uc_shared_area - AP/microcontroller shared memory area + * @command: command code (AP->microcontroller) + * @command_param: low 32 bits of command parameter (AP->microcontroller) + * @command_param_hi: high 32 bits of command parameter (AP->microcontroller) + * + * @response: response code (microcontroller->AP) + * @response_param: response parameter (microcontroller->AP) + * + * @event: event code (microcontroller->AP) + * @event_param: event parameter (microcontroller->AP) + * + * @first_error_address: address of first error-source on SNOC + * @hw_state: state of hardware (including error type information) + * @warning_counter: counter of non-fatal hardware errors + * @interface_version: hardware-reported interface version + */ +struct ipa_uc_shared_area { + u8 command; /* enum ipa_uc_command */ + u8 reserved0[3]; + __le32 command_param; + __le32 command_param_hi; + u8 response; /* enum ipa_uc_response */ + u8 reserved1[3]; + __le32 response_param; + u8 event; /* enum ipa_uc_event */ + u8 reserved2[3]; + + __le32 event_param; + __le32 first_error_address; + u8 hw_state; + u8 warning_counter; + __le16 reserved3; + __le16 interface_version; + __le16 reserved4; +}; + +/** enum ipa_uc_command - commands from the AP to the microcontroller */ +enum ipa_uc_command { + IPA_UC_COMMAND_NO_OP = 0, + IPA_UC_COMMAND_UPDATE_FLAGS = 1, + IPA_UC_COMMAND_DEBUG_RUN_TEST = 2, + IPA_UC_COMMAND_DEBUG_GET_INFO = 3, + IPA_UC_COMMAND_ERR_FATAL = 4, + IPA_UC_COMMAND_CLK_GATE = 5, + IPA_UC_COMMAND_CLK_UNGATE = 6, + IPA_UC_COMMAND_MEMCPY = 7, + IPA_UC_COMMAND_RESET_PIPE = 8, + IPA_UC_COMMAND_REG_WRITE = 9, + IPA_UC_COMMAND_GSI_CH_EMPTY = 10, +}; + +/** enum ipa_uc_response - microcontroller response codes */ +enum ipa_uc_response { + IPA_UC_RESPONSE_NO_OP = 0, + IPA_UC_RESPONSE_INIT_COMPLETED = 1, + IPA_UC_RESPONSE_CMD_COMPLETED = 2, + IPA_UC_RESPONSE_DEBUG_GET_INFO = 3, +}; + +/** enum ipa_uc_event - common cpu events reported by the microcontroller */ +enum ipa_uc_event { + IPA_UC_EVENT_NO_OP = 0, + IPA_UC_EVENT_ERROR = 1, + IPA_UC_EVENT_LOG_INFO = 2, +}; + +/* Microcontroller event IPA interrupt handler */ +static void ipa_uc_event_handler(struct ipa *ipa, + enum ipa_interrupt_id interrupt_id) +{ + struct ipa_uc_shared_area *shared; + + shared = ipa->shared_virt + IPA_SMEM_UC_OFFSET; + dev_err(&ipa->pdev->dev, "unsupported microcontroller event %hhu\n", + shared->event); + WARN_ON(shared->event == IPA_UC_EVENT_ERROR); +} + +/* Microcontroller response IPA interrupt handler */ +static void ipa_uc_response_hdlr(struct ipa *ipa, + enum ipa_interrupt_id interrupt_id) +{ + 
struct ipa_uc_shared_area *shared; + + /* An INIT_COMPLETED response message is sent to the AP by the + * microcontroller when it is operational. Other than this, the AP + * should only receive responses from the microntroller when it has + * sent it a request message. + * + * We can drop the clock reference taken in ipa_uc_init() once we + * know the microcontroller has finished its initialization. + */ + shared = ipa->shared_virt + IPA_SMEM_UC_OFFSET; + switch (shared->response) { + case IPA_UC_RESPONSE_INIT_COMPLETED: + ipa->uc_loaded = 1; + ipa_clock_put(ipa->clock); + break; + default: + dev_warn(&ipa->pdev->dev, + "unsupported microcontroller response %hhu\n", + shared->response); + break; + } +} + +/* ipa_uc_setup() - Set up the microcontroller */ +void ipa_uc_setup(struct ipa *ipa) +{ + /* The microcontroller needs the IPA clock running until it has + * completed its initialization. It signals this by sending an + * INIT_COMPLETED response message to the AP. This could occur after + * we have finished doing the rest of the IPA initialization, so we + * need to take an extra "proxy" reference, and hold it until we've + * received that signal. (This reference is dropped in + * ipa_uc_response_hdlr(), above.) + */ + ipa_clock_get(ipa->clock); + + ipa->uc_loaded = 0; + ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_UC_0, + ipa_uc_event_handler); + ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_UC_1, + ipa_uc_response_hdlr); +} + +/* Inverse of ipa_uc_setup() */ +void ipa_uc_teardown(struct ipa *ipa) +{ + ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_UC_1); + ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_UC_0); + if (!ipa->uc_loaded) + ipa_clock_put(ipa->clock); +} + +/* Send a command to the microcontroller */ +static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param) +{ + struct ipa_uc_shared_area *shared; + + shared = ipa->shared_virt + IPA_SMEM_UC_OFFSET; + shared->command = command; + shared->command_param = cpu_to_le32(command_param); + shared->command_param_hi = 0; + shared->response = 0; + shared->response_param = 0; + + iowrite32(1, ipa->reg_virt + IPA_REG_IRQ_UC_OFFSET); +} + +/* Tell the microcontroller the AP is shutting down */ +void ipa_uc_panic_notifier(struct ipa *ipa) +{ + if (!ipa->uc_loaded) + return; + + send_uc_command(ipa, IPA_UC_COMMAND_ERR_FATAL, 0); + + /* give uc enough time to save state */ + udelay(IPA_SEND_DELAY); +} diff --git a/drivers/net/ipa/ipa_uc.h b/drivers/net/ipa/ipa_uc.h new file mode 100644 index 000000000000..c258cb6e1161 --- /dev/null +++ b/drivers/net/ipa/ipa_uc.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ +#ifndef _IPA_UC_H_ +#define _IPA_UC_H_ + +struct ipa; + +/** + * ipa_uc_setup() - set up the IPA microcontroller subsystem + * @ipa: IPA pointer + */ +void ipa_uc_setup(struct ipa *ipa); + +/** + * ipa_uc_teardown() - inverse of ipa_uc_setup() + * @ipa: IPA pointer + */ +void ipa_uc_teardown(struct ipa *ipa); + +/** + * ipa_uc_panic_notifier() + * @ipa: IPA pointer + * + * Notifier function called when the system crashes, to inform the + * microcontroller of the event. 
+ */
+void ipa_uc_panic_notifier(struct ipa *ipa);
+
+#endif /* _IPA_UC_H_ */

From patchwork Fri May 31 03:53:44 2019
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Subject: [PATCH v2 13/17] soc: qcom: ipa: AP/modem communications
Date: Thu, 30 May 2019 22:53:44 -0500
Message-Id: <20190531035348.7194-14-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>

This patch implements two forms of out-of-band communication between
the AP and modem.

 - QMI is a mechanism that allows clients running on the AP to
   interact with services running on the modem (and vice-versa).
   The AP IPA driver uses QMI to communicate with the corresponding
   IPA driver resident on the modem, to agree on parameters used
   with the IPA hardware and to ensure both sides are ready before
   entering operational mode.

 - SMP2P is a more primitive mechanism available for the modem and
   AP to communicate with each other.  It provides a means for
   either the AP or modem to interrupt the other, and furthermore,
   to provide 32 bits worth of information.  The IPA driver uses
   SMP2P to tell the modem what the state of the IPA clock was in
   the event of a crash.  This allows the modem to safely access
   the IPA hardware (or avoid doing so) when a crash occurs, for
   example, to access information within the IPA hardware.
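For a concrete picture of the SMP2P side, a driver typically publishes
such state bits through the kernel's qcom_smem_state interface,
roughly as sketched below.  The bit assignments shown are
hypothetical, not the ones this patch defines:

	#include <linux/bits.h>
	#include <linux/types.h>
	#include <linux/soc/qcom/smem_state.h>

	/* Hypothetical layout of the 32-bit shared state word */
	#define EXAMPLE_IPA_CLOCK_ON	BIT(0)
	#define EXAMPLE_IPA_STATE_VALID	BIT(1)

	/* Publish IPA clock state; updating the bits interrupts the peer */
	static int example_notify_clock_state(struct qcom_smem_state *state,
					      bool clock_on)
	{
		u32 mask = EXAMPLE_IPA_STATE_VALID | EXAMPLE_IPA_CLOCK_ON;
		u32 value = EXAMPLE_IPA_STATE_VALID;

		if (clock_on)
			value |= EXAMPLE_IPA_CLOCK_ON;

		return qcom_smem_state_update_bits(state, mask, value);
	}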
Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_qmi.c | 402 +++++++++++++++++++++++ drivers/net/ipa/ipa_qmi.h | 35 ++ drivers/net/ipa/ipa_qmi_msg.c | 583 ++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_qmi_msg.h | 238 ++++++++++++++ drivers/net/ipa/ipa_smp2p.c | 304 ++++++++++++++++++ drivers/net/ipa/ipa_smp2p.h | 47 +++ 6 files changed, 1609 insertions(+) create mode 100644 drivers/net/ipa/ipa_qmi.c create mode 100644 drivers/net/ipa/ipa_qmi.h create mode 100644 drivers/net/ipa/ipa_qmi_msg.c create mode 100644 drivers/net/ipa/ipa_qmi_msg.h create mode 100644 drivers/net/ipa/ipa_smp2p.c create mode 100644 drivers/net/ipa/ipa_smp2p.h diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c new file mode 100644 index 000000000000..e94437508f6c --- /dev/null +++ b/drivers/net/ipa/ipa_qmi.c @@ -0,0 +1,402 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_endpoint.h" +#include "ipa_mem.h" +#include "ipa_qmi_msg.h" + +#define QMI_INIT_DRIVER_TIMEOUT 60000 /* A minute in milliseconds */ + +/** + * DOC: AP/Modem QMI Handshake + * + * The AP and modem perform a "handshake" at initialization time to ensure + * each side knows the other side is ready. Two QMI handles (endpoints) are + * used for this; one provides service on the modem for AP requests, and the + * other is on the AP to service modem requests (and to supply an indication + * from the AP). + * + * The QMI service on the modem expects to receive an INIT_DRIVER request from + * the AP, which contains parameters used by the modem during initialization. + * The AP sends this request using the client handle as soon as it is knows + * the modem side service is available. The modem responds to this request + * immediately. + * + * When the modem learns the AP service is available, it is able to + * communicate its status to the AP. The modem uses this to tell + * the AP when it is ready to receive an indication, sending an + * INDICATION_REGISTER request to the handle served by the AP. This + * is independent of the modem's initialization of its driver. + * + * When the modem has completed the driver initialization requested by the + * AP, it sends a DRIVER_INIT_COMPLETE request to the AP. This request + * could arrive at the AP either before or after the INDICATION_REGISTER + * request. + * + * The final step in the handshake occurs after the AP has received both + * requests from the modem. The AP completes the handshake by sending an + * INIT_COMPLETE_IND indication message to the modem. + */ + +#define IPA_HOST_SERVICE_SVC_ID 0x31 +#define IPA_HOST_SVC_VERS 1 +#define IPA_HOST_SERVICE_INS_ID 1 + +#define IPA_MODEM_SERVICE_SVC_ID 0x31 +#define IPA_MODEM_SERVICE_INS_ID 2 +#define IPA_MODEM_SVC_VERS 1 + +/* Send an INIT_COMPLETE_IND indication message to the modem */ +static int ipa_send_master_driver_init_complete_ind(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq) +{ + struct ipa_init_complete_ind ind = { }; + + ind.status.result = QMI_RESULT_SUCCESS_V01; + ind.status.error = QMI_ERR_NONE_V01; + + return qmi_send_indication(qmi, sq, IPA_QMI_INIT_COMPLETE_IND, + IPA_QMI_INIT_COMPLETE_IND_SZ, + ipa_init_complete_ind_ei, &ind); +} + +/* This function is called to determine whether to complete the handshake by + * sending an INIT_COMPLETE_IND indication message to the modem. 
+
+/* This function is called to determine whether to complete the handshake by
+ * sending an INIT_COMPLETE_IND indication message to the modem. The
+ * "init_driver" parameter is false when we've received an INDICATION_REGISTER
+ * request message from the modem, or true when we've received the response
+ * from the INIT_DRIVER request message we send. If this function decides the
+ * message should be sent, it calls ipa_send_master_driver_init_complete_ind()
+ * to send it.
+ */
+static void ipa_handshake_complete(struct qmi_handle *qmi,
+				   struct sockaddr_qrtr *sq, bool init_driver)
+{
+	struct ipa *ipa;
+	bool send_it;
+	int ret;
+
+	if (init_driver) {
+		ipa = container_of(qmi, struct ipa, qmi.client_handle);
+		ipa->qmi.init_driver_response_received = true;
+		send_it = !!ipa->qmi.indication_register_received;
+	} else {
+		ipa = container_of(qmi, struct ipa, qmi.server_handle);
+		ipa->qmi.indication_register_received = 1;
+		send_it = !!ipa->qmi.init_driver_response_received;
+	}
+	if (!send_it)
+		return;
+
+	ret = ipa_send_master_driver_init_complete_ind(qmi, sq);
+	WARN(ret, "error %d sending init complete indication\n", ret);
+}
+
+/* Callback function to handle an INDICATION_REGISTER request message from the
+ * modem. This informs the AP that the modem is now ready to receive the
+ * INIT_COMPLETE_IND indication message.
+ */
+static void ipa_indication_register_fn(struct qmi_handle *qmi,
+				       struct sockaddr_qrtr *sq,
+				       struct qmi_txn *txn,
+				       const void *decoded)
+{
+	struct ipa_indication_register_rsp rsp = { };
+	int ret;
+
+	rsp.rsp.result = QMI_RESULT_SUCCESS_V01;
+	rsp.rsp.error = QMI_ERR_NONE_V01;
+
+	ret = qmi_send_response(qmi, sq, txn, IPA_QMI_INDICATION_REGISTER,
+				IPA_QMI_INDICATION_REGISTER_RSP_SZ,
+				ipa_indication_register_rsp_ei, &rsp);
+	if (!WARN(ret, "error %d sending response\n", ret))
+		ipa_handshake_complete(qmi, sq, false);
+}
+
+/* Callback function to handle a DRIVER_INIT_COMPLETE request message from the
+ * modem. This informs the AP that the modem has completed the initialization
+ * of its driver.
+ */
+static void ipa_driver_init_complete_fn(struct qmi_handle *qmi,
+					struct sockaddr_qrtr *sq,
+					struct qmi_txn *txn,
+					const void *decoded)
+{
+	struct ipa_driver_init_complete_rsp rsp = { };
+	int ret;
+
+	rsp.rsp.result = QMI_RESULT_SUCCESS_V01;
+	rsp.rsp.error = QMI_ERR_NONE_V01;
+
+	ret = qmi_send_response(qmi, sq, txn, IPA_QMI_DRIVER_INIT_COMPLETE,
+				IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ,
+				ipa_driver_init_complete_rsp_ei, &rsp);
+
+	WARN(ret, "error %d sending response\n", ret);
+}
+
+/* The server handles two request message types sent by the modem. */
+static struct qmi_msg_handler ipa_server_msg_handlers[] = {
+	{
+		.type = QMI_REQUEST,
+		.msg_id = IPA_QMI_INDICATION_REGISTER,
+		.ei = ipa_indication_register_req_ei,
+		.decoded_size = IPA_QMI_INDICATION_REGISTER_REQ_SZ,
+		.fn = ipa_indication_register_fn,
+	},
+	{
+		.type = QMI_REQUEST,
+		.msg_id = IPA_QMI_DRIVER_INIT_COMPLETE,
+		.ei = ipa_driver_init_complete_req_ei,
+		.decoded_size = IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ,
+		.fn = ipa_driver_init_complete_fn,
+	},
+};
+
+/* Callback function to handle an IPA_QMI_INIT_DRIVER response message from
+ * the modem. This only acknowledges that the modem received the request.
+ * The modem will eventually report that it has completed its modem
+ * initialization by sending an IPA_QMI_DRIVER_INIT_COMPLETE request.
+ */ +static void ipa_init_driver_rsp_fn(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, + const void *decoded) +{ + txn->result = 0; /* IPA_QMI_INIT_DRIVER request was successful */ + complete(&txn->completion); + + ipa_handshake_complete(qmi, sq, true); +} + +/* The client handles one response message type sent by the modem. */ +static struct qmi_msg_handler ipa_client_msg_handlers[] = { + { + .type = QMI_RESPONSE, + .msg_id = IPA_QMI_INIT_DRIVER, + .ei = ipa_init_modem_driver_rsp_ei, + .decoded_size = IPA_QMI_INIT_DRIVER_RSP_SZ, + .fn = ipa_init_driver_rsp_fn, + }, +}; + +/* Return a pointer to an init modem driver request structure, which contains + * configuration parameters for the modem. The modem may be started multiple + * times, but generally these parameters don't change so we can reuse the + * request structure once it's initialized. The only exception is the + * skip_uc_load field, which will be set only after the microcontroller has + * reported it has completed its initialization. + */ +static const struct ipa_init_modem_driver_req * +init_modem_driver_req(struct ipa_qmi *ipa_qmi) +{ + struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi); + static struct ipa_init_modem_driver_req req; + + /* This is not the first boot if the microcontroller is loaded */ + req.skip_uc_load = ipa->uc_loaded; + req.skip_uc_load_valid = true; + + /* We only have to initialize most of it once */ + if (req.platform_type_valid) + return &req; + + req.platform_type_valid = true; + req.platform_type = IPA_QMI_PLATFORM_TYPE_MSM_ANDROID; + + req.hdr_tbl_info_valid = IPA_SMEM_MODEM_HDR_SIZE ? 1 : 0; + req.hdr_tbl_info.start = ipa_qmi->base + IPA_SMEM_MODEM_HDR_OFFSET; + req.hdr_tbl_info.end = req.hdr_tbl_info.start + + IPA_SMEM_MODEM_HDR_SIZE - 1; + + req.v4_route_tbl_info_valid = true; + req.v4_route_tbl_info.start = + ipa_qmi->base + IPA_SMEM_V4_RT_NHASH_OFFSET; + req.v4_route_tbl_info.count = IPA_SMEM_MODEM_RT_COUNT; + + req.v6_route_tbl_info_valid = true; + req.v6_route_tbl_info.start = + ipa_qmi->base + IPA_SMEM_V6_RT_NHASH_OFFSET; + req.v6_route_tbl_info.count = IPA_SMEM_MODEM_RT_COUNT; + + req.v4_filter_tbl_start_valid = true; + req.v4_filter_tbl_start = ipa_qmi->base + IPA_SMEM_V4_FLT_NHASH_OFFSET; + + req.v6_filter_tbl_start_valid = true; + req.v6_filter_tbl_start = ipa_qmi->base + IPA_SMEM_V6_FLT_NHASH_OFFSET; + + req.modem_mem_info_valid = IPA_SMEM_MODEM_SIZE ? 1 : 0; + req.modem_mem_info.start = ipa_qmi->base + IPA_SMEM_MODEM_OFFSET; + req.modem_mem_info.size = IPA_SMEM_MODEM_SIZE; + + req.ctrl_comm_dest_end_pt_valid = true; + req.ctrl_comm_dest_end_pt = IPA_ENDPOINT_AP_MODEM_RX; + + req.hdr_proc_ctx_tbl_info_valid = + IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE ? 
1 : 0; + req.hdr_proc_ctx_tbl_info.start = + ipa_qmi->base + IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET; + req.hdr_proc_ctx_tbl_info.end = req.hdr_proc_ctx_tbl_info.start + + IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE - 1; + + req.v4_hash_route_tbl_info_valid = true; + req.v4_hash_route_tbl_info.start = + ipa_qmi->base + IPA_SMEM_V4_RT_HASH_OFFSET; + req.v4_hash_route_tbl_info.count = IPA_SMEM_MODEM_RT_COUNT; + + req.v6_hash_route_tbl_info_valid = true; + req.v6_hash_route_tbl_info.start = + ipa_qmi->base + IPA_SMEM_V6_RT_HASH_OFFSET; + req.v6_hash_route_tbl_info.count = IPA_SMEM_MODEM_RT_COUNT; + + req.v4_hash_filter_tbl_start_valid = true; + req.v4_hash_filter_tbl_start = + ipa_qmi->base + IPA_SMEM_V4_FLT_HASH_OFFSET; + + req.v6_hash_filter_tbl_start_valid = true; + req.v6_hash_filter_tbl_start = + ipa_qmi->base + IPA_SMEM_V6_FLT_HASH_OFFSET; + + return &req; +} + +/* The modem service we requested is now available via the client handle. + * Send an INIT_DRIVER request to the modem. + */ +static int +ipa_client_new_server(struct qmi_handle *qmi, struct qmi_service *svc) +{ + const struct ipa_init_modem_driver_req *req; + struct ipa *ipa; + struct sockaddr_qrtr sq; + struct qmi_txn *txn; + int ret; + + ipa = container_of(qmi, struct ipa, qmi.client_handle); + req = init_modem_driver_req(&ipa->qmi); + + txn = kzalloc(sizeof(*txn), GFP_KERNEL); + if (!txn) + return -ENOMEM; + + ret = qmi_txn_init(qmi, txn, NULL, NULL); + if (ret) { + kfree(txn); + return ret; + } + + sq.sq_family = AF_QIPCRTR; + sq.sq_node = svc->node; + sq.sq_port = svc->port; + + ret = qmi_send_request(qmi, &sq, txn, IPA_QMI_INIT_DRIVER, + IPA_QMI_INIT_DRIVER_REQ_SZ, + ipa_init_modem_driver_req_ei, req); + if (!ret) + ret = qmi_txn_wait(txn, MAX_SCHEDULE_TIMEOUT); + if (ret) + qmi_txn_cancel(txn); + kfree(txn); + + return ret; +} + +/* The only callback we supply for the client handle is notification that the + * service on the modem has become available. + */ +static struct qmi_ops ipa_client_ops = { + .new_server = ipa_client_new_server, +}; + +static int ipa_qmi_initialize(struct ipa *ipa) +{ + struct ipa_qmi *ipa_qmi = &ipa->qmi; + int ret; + + /* The only handle operation that might be interesting for the server + * would be del_client, to find out when the modem side client has + * disappeared. But other than reporting the event, we wouldn't do + * anything about that. So we just pass a null pointer for its handle + * operations. All the real work is done by the message handlers. + */ + ret = qmi_handle_init(&ipa_qmi->server_handle, + IPA_QMI_SERVER_MAX_RCV_SZ, NULL, + ipa_server_msg_handlers); + if (ret) + return ret; + + ret = qmi_add_server(&ipa_qmi->server_handle, IPA_HOST_SERVICE_SVC_ID, + IPA_HOST_SVC_VERS, IPA_HOST_SERVICE_INS_ID); + if (ret) + goto err_release_server_handle; + + /* The client handle is only used for sending an INIT_DRIVER request + * to the modem, and receiving its response message. 
+ */
+	ret = qmi_handle_init(&ipa_qmi->client_handle,
+			      IPA_QMI_CLIENT_MAX_RCV_SZ, &ipa_client_ops,
+			      ipa_client_msg_handlers);
+	if (ret)
+		goto err_release_server_handle;
+
+	ret = qmi_add_lookup(&ipa_qmi->client_handle, IPA_MODEM_SERVICE_SVC_ID,
+			     IPA_MODEM_SVC_VERS, IPA_MODEM_SERVICE_INS_ID);
+	if (ret)
+		goto err_release_client_handle;
+
+	/* All QMI offsets are relative to the start of IPA shared memory */
+	ipa_qmi->base = ipa->shared_offset;
+	ipa_qmi->initialized = 1;
+
+	return 0;
+
+err_release_client_handle:
+	/* Releasing the handle also removes registered lookups */
+	qmi_handle_release(&ipa_qmi->client_handle);
+	memset(&ipa_qmi->client_handle, 0, sizeof(ipa_qmi->client_handle));
+err_release_server_handle:
+	/* Releasing the handle also removes registered services */
+	qmi_handle_release(&ipa_qmi->server_handle);
+	memset(&ipa_qmi->server_handle, 0, sizeof(ipa_qmi->server_handle));
+
+	return ret;
+}
+
+/* This is called by ipa_netdev_setup(). We can be informed via remoteproc
+ * that the modem has shut down, in which case this function will be called
+ * again to prepare for it coming back up again.
+ */
+int ipa_qmi_setup(struct ipa *ipa)
+{
+	ipa->qmi.init_driver_response_received = 0;
+	ipa->qmi.indication_register_received = 0;
+
+	if (!ipa->qmi.initialized)
+		return ipa_qmi_initialize(ipa);
+
+	return 0;
+}
+
+void ipa_qmi_teardown(struct ipa *ipa)
+{
+	if (!ipa->qmi.initialized)
+		return;
+
+	qmi_handle_release(&ipa->qmi.client_handle);
+	memset(&ipa->qmi.client_handle, 0, sizeof(ipa->qmi.client_handle));
+
+	qmi_handle_release(&ipa->qmi.server_handle);
+	memset(&ipa->qmi.server_handle, 0, sizeof(ipa->qmi.server_handle));
+
+	ipa->qmi.initialized = 0;
+}
diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
new file mode 100644
index 000000000000..cfdafa23cf8f
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_QMI_H_
+#define _IPA_QMI_H_
+
+#include
+#include
+
+struct ipa;
+
+/**
+ * struct ipa_qmi - QMI state associated with an IPA
+ * @initialized - whether QMI initialization has completed
+ * @base - base offset added to memory offsets carried in QMI messages
+ * @client_handle - used to send QMI requests to the modem
+ * @server_handle - used to handle QMI requests from the modem
+ * @indication_register_received - tracks modem request receipt
+ * @init_driver_response_received - tracks modem response receipt
+ */
+struct ipa_qmi {
+	u32 initialized;
+	u32 base;
+	struct qmi_handle client_handle;
+	struct qmi_handle server_handle;
+	u32 indication_register_received;
+	u32 init_driver_response_received;
+};
+
+int ipa_qmi_setup(struct ipa *ipa);
+void ipa_qmi_teardown(struct ipa *ipa);
+
+#endif /* !_IPA_QMI_H_ */
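A brief usage sketch of the two entry points just declared; the caller shown is hypothetical (the real caller in this series is ipa_netdev_setup(), which is in another patch):

#include "ipa_qmi.h"

/* Hypothetical caller: per the comments above, ipa_qmi_setup() is
 * called each time the modem boots -- it re-arms the handshake state,
 * and only creates the QMI handles on first use.
 */
static int ipa_example_modem_start(struct ipa *ipa)
{
	int ret;

	ret = ipa_qmi_setup(ipa);	/* safe to call on every modem boot */
	if (ret)
		return ret;

	/* ... remaining AP-side setup; the QMI handshake then proceeds
	 * asynchronously once the modem-side service appears ...
	 */
	return 0;
}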
diff --git a/drivers/net/ipa/ipa_qmi_msg.c b/drivers/net/ipa/ipa_qmi_msg.c
new file mode 100644
index 000000000000..b6b278dff6fb
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi_msg.c
@@ -0,0 +1,583 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#include
+#include
+
+#include "ipa_qmi_msg.h"
+
+/* QMI message structure definition for struct ipa_indication_register_req */
+struct qmi_elem_info ipa_indication_register_req_ei[] = {
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     master_driver_init_complete_valid),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   master_driver_init_complete_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     master_driver_init_complete),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   master_driver_init_complete),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     data_usage_quota_reached_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   data_usage_quota_reached_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     data_usage_quota_reached),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   data_usage_quota_reached),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_indication_register_rsp */
+struct qmi_elem_info ipa_indication_register_rsp_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_rsp,
+				     rsp),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_indication_register_rsp,
+					   rsp),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_driver_init_complete_req */
+struct qmi_elem_info ipa_driver_init_complete_req_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_driver_init_complete_req,
+				     status),
+		.tlv_type	= 0x01,
+		.offset		= offsetof(struct ipa_driver_init_complete_req,
+					   status),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_driver_init_complete_rsp */
+struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_driver_init_complete_rsp,
+				     rsp),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_driver_init_complete_rsp,
+					   rsp),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_init_complete_ind */
+struct qmi_elem_info ipa_init_complete_ind_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_complete_ind,
+				     status),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_init_complete_ind,
+					   status),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_mem_bounds */
+struct qmi_elem_info ipa_mem_bounds_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_bounds, start),
+		.offset		= offsetof(struct ipa_mem_bounds, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_bounds, end),
+		.offset		= offsetof(struct ipa_mem_bounds, end),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
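Every optional field in the larger messages that follow is encoded the same way: a QMI_OPT_FLAG entry for the field's _valid flag, then an entry for the value itself, both carrying the same tlv_type. The flag tells the encoder whether the value TLV should be emitted at all. A minimal, hypothetical message illustrates the pattern (names are illustrative only):

/* Hypothetical message with a single optional byte-sized field */
struct example_req {
	u8 foo_valid;		/* nonzero if foo should be encoded */
	u8 foo;
};

static struct qmi_elem_info example_req_ei[] = {
	{
		.data_type	= QMI_OPT_FLAG,
		.elem_len	= 1,
		.elem_size	= sizeof_field(struct example_req, foo_valid),
		.tlv_type	= 0x10,
		.offset		= offsetof(struct example_req, foo_valid),
	},
	{
		.data_type	= QMI_UNSIGNED_1_BYTE,
		.elem_len	= 1,
		.elem_size	= sizeof_field(struct example_req, foo),
		.tlv_type	= 0x10,	/* same TLV type as its _valid flag */
		.offset		= offsetof(struct example_req, foo),
	},
	{
		.data_type	= QMI_EOTI,	/* terminates the array */
	},
};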
+
+/* QMI message structure definition for struct ipa_mem_array */
+struct qmi_elem_info ipa_mem_array_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_array, start),
+		.offset		= offsetof(struct ipa_mem_array, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_array, count),
+		.offset		= offsetof(struct ipa_mem_array, count),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_mem_range */
+struct qmi_elem_info ipa_mem_range_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_range, start),
+		.offset		= offsetof(struct ipa_mem_range, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_range, size),
+		.offset		= offsetof(struct ipa_mem_range, size),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_init_modem_driver_req */
+struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     platform_type_valid),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   platform_type_valid),
+	},
+	{
+		.data_type	= QMI_SIGNED_4_BYTE_ENUM,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     platform_type),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   platform_type),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hdr_tbl_info_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hdr_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hdr_tbl_info),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hdr_tbl_info),
+		.ei_array	= ipa_mem_bounds_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_route_tbl_info_valid),
+		.tlv_type	= 0x12,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_route_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_route_tbl_info),
+		.tlv_type	= 0x12,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_route_tbl_info),
+		.ei_array	= ipa_mem_array_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_route_tbl_info_valid),
+		.tlv_type	= 0x13,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_route_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_route_tbl_info),
+		.tlv_type	= 0x13,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_route_tbl_info),
+		.ei_array	= ipa_mem_array_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_filter_tbl_start_valid),
+		.tlv_type	= 0x14,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_filter_tbl_start_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_filter_tbl_start),
+		.tlv_type	= 0x14,
+		.offset		= offsetof(struct
ipa_init_modem_driver_req, + v4_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_filter_tbl_start_valid), + .tlv_type = 0x15, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_filter_tbl_start), + .tlv_type = 0x15, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + modem_mem_info_valid), + .tlv_type = 0x16, + .offset = offsetof(struct ipa_init_modem_driver_req, + modem_mem_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + modem_mem_info), + .tlv_type = 0x16, + .offset = offsetof(struct ipa_init_modem_driver_req, + modem_mem_info), + .ei_array = ipa_mem_range_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt_valid), + .tlv_type = 0x17, + .offset = offsetof(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt), + .tlv_type = 0x17, + .offset = offsetof(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + skip_uc_load_valid), + .tlv_type = 0x18, + .offset = offsetof(struct ipa_init_modem_driver_req, + skip_uc_load_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + skip_uc_load), + .tlv_type = 0x18, + .offset = offsetof(struct ipa_init_modem_driver_req, + skip_uc_load), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info_valid), + .tlv_type = 0x19, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info), + .tlv_type = 0x19, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info), + .ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + zip_tbl_info_valid), + .tlv_type = 0x1a, + .offset = offsetof(struct ipa_init_modem_driver_req, + zip_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + zip_tbl_info), + .tlv_type = 0x1a, + .offset = offsetof(struct ipa_init_modem_driver_req, + zip_tbl_info), + .ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info_valid), + .tlv_type = 0x1b, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info), + .tlv_type = 0x1b, + .offset = offsetof(struct 
ipa_init_modem_driver_req, + v4_hash_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info_valid), + .tlv_type = 0x1c, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info), + .tlv_type = 0x1c, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start_valid), + .tlv_type = 0x1d, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start), + .tlv_type = 0x1d, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start_valid), + .tlv_type = 0x1e, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start), + .tlv_type = 0x1e, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_init_modem_driver_rsp */ +struct qmi_elem_info ipa_init_modem_driver_rsp_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + rsp), + .tlv_type = 0x02, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + rsp), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt_valid), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + default_end_pt_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + default_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + default_end_pt), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + default_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending_valid), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending), + .tlv_type = 0x12, + .offset = 
offsetof(struct ipa_init_modem_driver_rsp,
+					   modem_driver_init_pending),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
diff --git a/drivers/net/ipa/ipa_qmi_msg.h b/drivers/net/ipa/ipa_qmi_msg.h
new file mode 100644
index 000000000000..cf3cda3bddae
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi_msg.h
@@ -0,0 +1,238 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_QMI_MSG_H_
+#define _IPA_QMI_MSG_H_
+
+/* === Only "ipa_qmi" and "ipa_qmi_msg.c" should include this file === */
+
+#include
+#include
+
+/* Request/response/indication QMI message ids used for IPA. Receiving
+ * end issues a response for requests; indications require no response.
+ */
+#define IPA_QMI_INDICATION_REGISTER	0x20	/* modem -> AP request */
+#define IPA_QMI_INIT_DRIVER		0x21	/* AP -> modem request */
+#define IPA_QMI_INIT_COMPLETE_IND	0x22	/* AP -> modem indication */
+#define IPA_QMI_DRIVER_INIT_COMPLETE	0x35	/* modem -> AP request */
+
+/* The maximum size required for message types. These sizes include
+ * the message data, along with type (1 byte) and length (2 byte)
+ * information for each field. The qmi_send_*() interfaces require
+ * the message size to be provided.
+ */
+#define IPA_QMI_INDICATION_REGISTER_REQ_SZ	8	/* -> server handle */
+#define IPA_QMI_INDICATION_REGISTER_RSP_SZ	7	/* <- server handle */
+#define IPA_QMI_INIT_DRIVER_REQ_SZ		134	/* client handle -> */
+#define IPA_QMI_INIT_DRIVER_RSP_SZ		25	/* client handle <- */
+#define IPA_QMI_INIT_COMPLETE_IND_SZ		7	/* server handle -> */
+#define IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ	4	/* -> server handle */
+#define IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ	7	/* <- server handle */
+
+/* Maximum size of messages we expect the AP to receive (max of above) */
+#define IPA_QMI_SERVER_MAX_RCV_SZ		8
+#define IPA_QMI_CLIENT_MAX_RCV_SZ		25
+
+/* Request message for the IPA_QMI_INDICATION_REGISTER request */
+struct ipa_indication_register_req {
+	u8 master_driver_init_complete_valid;
+	u8 master_driver_init_complete;
+	u8 data_usage_quota_reached_valid;
+	u8 data_usage_quota_reached;
+};
+
+/* The response to an IPA_QMI_INDICATION_REGISTER request consists only of
+ * a standard QMI response.
+ */
+struct ipa_indication_register_rsp {
+	struct qmi_response_type_v01 rsp;
+};
+
+/* Request message for the IPA_QMI_DRIVER_INIT_COMPLETE request */
+struct ipa_driver_init_complete_req {
+	u8 status;
+};
+
+/* The response to an IPA_QMI_DRIVER_INIT_COMPLETE request consists only
+ * of a standard QMI response.
+ */
+struct ipa_driver_init_complete_rsp {
+	struct qmi_response_type_v01 rsp;
+};
+
+/* The message for the IPA_QMI_INIT_COMPLETE_IND indication consists
+ * only of a standard QMI response.
+ */
+struct ipa_init_complete_ind {
+	struct qmi_response_type_v01 status;
+};
+
+/* The AP tells the modem its platform type. We assume Android. */
+enum ipa_platform_type {
+	IPA_QMI_PLATFORM_TYPE_INVALID		= 0,	/* Invalid */
+	IPA_QMI_PLATFORM_TYPE_TN		= 1,	/* Data card */
+	IPA_QMI_PLATFORM_TYPE_LE		= 2,	/* Data router */
+	IPA_QMI_PLATFORM_TYPE_MSM_ANDROID	= 3,	/* Android MSM */
+	IPA_QMI_PLATFORM_TYPE_MSM_WINDOWS	= 4,	/* Windows MSM */
+	IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01	= 5,	/* QNX MSM */
+};
+
+/* This defines the start and end offset of a range of memory. Both
+ * fields are offsets relative to the start of IPA shared memory.
+ * The end value is the last addressable byte *within* the range.
+ */
+struct ipa_mem_bounds {
+	u32 start;
+	u32 end;
+};
+
+/* This defines the location and size of an array. The start value
+ * is an offset relative to the start of IPA shared memory. The
+ * size of the array is implied by the number of entries (the entry
+ * size is assumed to be known).
+ */
+struct ipa_mem_array {
+	u32 start;
+	u32 count;
+};
+
+/* This defines the location and size of a range of memory. The
+ * start is an offset relative to the start of IPA shared memory.
+ * This differs from the ipa_mem_bounds structure in that the size
+ * (in bytes) of the memory region is specified rather than the
+ * offset of its last byte.
+ */
+struct ipa_mem_range {
+	u32 start;
+	u32 size;
+};
+
+/* The message for the IPA_QMI_INIT_DRIVER request contains information
+ * from the AP that affects modem initialization.
+ */
+struct ipa_init_modem_driver_req {
+	u8 platform_type_valid;
+	u32 platform_type;	/* enum ipa_platform_type */
+
+	/* Modem header table information. This defines the IPA shared
+	 * memory in which the modem may insert header table entries.
+	 */
+	u8 hdr_tbl_info_valid;
+	struct ipa_mem_bounds hdr_tbl_info;
+
+	/* Routing table information. These define the location and size of
+	 * non-hashable IPv4 and IPv6 route tables. The start values are
+	 * offsets relative to the start of IPA shared memory.
+	 */
+	u8 v4_route_tbl_info_valid;
+	struct ipa_mem_array v4_route_tbl_info;
+	u8 v6_route_tbl_info_valid;
+	struct ipa_mem_array v6_route_tbl_info;
+
+	/* Filter table information. These define the location and size of
+	 * non-hashable IPv4 and IPv6 filter tables. The start values are
+	 * offsets relative to the start of IPA shared memory.
+	 */
+	u8 v4_filter_tbl_start_valid;
+	u32 v4_filter_tbl_start;
+	u8 v6_filter_tbl_start_valid;
+	u32 v6_filter_tbl_start;
+
+	/* Modem memory information. This defines the location and
+	 * size of memory available for the modem to use.
+	 */
+	u8 modem_mem_info_valid;
+	struct ipa_mem_range modem_mem_info;
+
+	/* This defines the destination endpoint on the AP to which
+	 * the modem driver can send control commands. IPA supports
+	 * 20 endpoints, so this must be 19 or less.
+	 */
+	u8 ctrl_comm_dest_end_pt_valid;
+	u32 ctrl_comm_dest_end_pt;
+
+	/* This defines whether the modem should load the microcontroller
+	 * or not. It is unnecessary to reload it if the modem is being
+	 * restarted.
+	 *
+	 * NOTE: this field is named "is_ssr_bootup" elsewhere.
+	 */
+	u8 skip_uc_load_valid;
+	u8 skip_uc_load;
+
+	/* Processing context memory information. This defines the memory in
+	 * which the modem may insert header processing context table entries.
+	 */
+	u8 hdr_proc_ctx_tbl_info_valid;
+	struct ipa_mem_bounds hdr_proc_ctx_tbl_info;
+
+	/* Compression command memory information. This defines the memory
+	 * in which the modem may insert compression/decompression commands.
+	 */
+	u8 zip_tbl_info_valid;
+	struct ipa_mem_bounds zip_tbl_info;
+
+	/* Routing table information. These define the location and size
+	 * of hashable IPv4 and IPv6 route tables. The start values are
+	 * offsets relative to the start of IPA shared memory.
+	 */
+	u8 v4_hash_route_tbl_info_valid;
+	struct ipa_mem_array v4_hash_route_tbl_info;
+	u8 v6_hash_route_tbl_info_valid;
+	struct ipa_mem_array v6_hash_route_tbl_info;
+
+	/* Filter table information. These define the location and size
+	 * of hashable IPv4 and IPv6 filter tables. The start values are
+	 * offsets relative to the start of IPA shared memory.
+	 */
+	u8 v4_hash_filter_tbl_start_valid;
+	u32 v4_hash_filter_tbl_start;
+	u8 v6_hash_filter_tbl_start_valid;
+	u32 v6_hash_filter_tbl_start;
+};
+
+/* The response to an IPA_QMI_INIT_DRIVER request begins with a standard
+ * QMI response, but contains other information as well. Currently we
+ * simply wait for the INIT_DRIVER transaction to complete and
+ * ignore any other data that might be returned.
+ */
+struct ipa_init_modem_driver_rsp {
+	struct qmi_response_type_v01 rsp;
+
+	/* This defines the destination endpoint on the modem to which
+	 * the AP driver can send control commands. IPA supports
+	 * 20 endpoints, so this must be 19 or less.
+	 */
+	u8 ctrl_comm_dest_end_pt_valid;
+	u32 ctrl_comm_dest_end_pt;
+
+	/* This defines the default endpoint. The AP driver is not
+	 * required to configure the hardware with this value. IPA
+	 * supports 20 endpoints, so this must be 19 or less.
+	 */
+	u8 default_end_pt_valid;
+	u32 default_end_pt;
+
+	/* This defines whether a second handshake is required to complete
+	 * initialization.
+	 */
+	u8 modem_driver_init_pending_valid;
+	u8 modem_driver_init_pending;
+};
+
+/* Message structure definitions (see "ipa_qmi_msg.c") */
+extern struct qmi_elem_info ipa_indication_register_req_ei[];
+extern struct qmi_elem_info ipa_indication_register_rsp_ei[];
+extern struct qmi_elem_info ipa_driver_init_complete_req_ei[];
+extern struct qmi_elem_info ipa_driver_init_complete_rsp_ei[];
+extern struct qmi_elem_info ipa_init_complete_ind_ei[];
+extern struct qmi_elem_info ipa_mem_bounds_ei[];
+extern struct qmi_elem_info ipa_mem_array_ei[];
+extern struct qmi_elem_info ipa_mem_range_ei[];
+extern struct qmi_elem_info ipa_init_modem_driver_req_ei[];
+extern struct qmi_elem_info ipa_init_modem_driver_rsp_ei[];
+
+#endif /* !_IPA_QMI_MSG_H_ */
diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c
new file mode 100644
index 000000000000..c59f358b44b4
--- /dev/null
+++ b/drivers/net/ipa/ipa_smp2p.c
@@ -0,0 +1,304 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa_smp2p.h"
+#include "ipa.h"
+#include "ipa_uc.h"
+#include "ipa_clock.h"
+
+/**
+ * DOC: IPA SMP2P communication with the modem
+ *
+ * SMP2P is a primitive communication mechanism available between the AP and
+ * the modem. The IPA driver uses this for two purposes: to enable the modem
+ * to state that the GSI hardware is ready to use; and to communicate the
+ * state of the IPA clock in the event of a crash.
+ *
+ * GSI needs to have early initialization completed before it can be used.
+ * This initialization is done either by Trust Zone or by the modem. In the
+ * latter case, the modem uses an SMP2P interrupt to tell the AP IPA driver
+ * when the GSI is ready to use.
+ *
+ * The modem is also able to inquire about the current state of the IPA
+ * clock by triggering another SMP2P interrupt to the AP. We communicate
+ * whether the clock is enabled using two SMP2P state bits--one to
+ * indicate the clock state (on or off), and a second to indicate the
+ * clock state bit is valid. The modem will poll the valid bit until it
+ * is set, and at that time records whether the AP has the IPA clock enabled.
+ *
+ * Finally, if the AP kernel panics, we update the SMP2P state bits even if
+ * we never receive an interrupt from the modem requesting this.
+ */
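The two-bit scheme described above, reduced to a sketch. This condenses what ipa_smp2p_notify() does below; the function shown is illustrative, not part of the patch. Writing the clock-state bit before the valid bit is what lets the modem trust the value once it sees the valid bit set.

#include <linux/bits.h>
#include <linux/soc/qcom/smem_state.h>

/* Illustrative: publish the clock state, then mark it valid */
static void example_clock_state_publish(struct qcom_smem_state *enabled_state,
					struct qcom_smem_state *valid_state,
					u32 enabled_bit, u32 valid_bit,
					bool clock_on)
{
	u32 mask;

	/* First record whether the clock is on... */
	mask = BIT(enabled_bit);
	qcom_smem_state_update_bits(enabled_state, mask, clock_on ? mask : 0);

	/* ...then set the valid bit, which the modem polls */
	mask = BIT(valid_bit);
	qcom_smem_state_update_bits(valid_state, mask, mask);
}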
+
+/**
+ * struct ipa_smp2p - IPA SMP2P information
+ * @ipa: IPA pointer
+ * @valid_state: SMEM state indicating enabled state is valid
+ * @enabled_state: SMEM state to indicate clock is enabled
+ * @valid_bit: Valid bit in 32-bit SMEM state mask
+ * @enabled_bit: Enabled bit in 32-bit SMEM state mask
+ * @clock_query_irq: IPA interrupt triggered by modem for clock query
+ * @setup_ready_irq: IPA interrupt triggered by modem to signal GSI ready
+ * @clock_on: Whether IPA clock is on
+ * @notified: Whether modem has been notified of clock state
+ * @disabled: Whether setup ready interrupt handling is disabled
+ * @mutex: Mutex protecting ready interrupt/shutdown interlock
+ * @panic_notifier: Panic notifier structure
+ */
+struct ipa_smp2p {
+	struct ipa *ipa;
+	struct qcom_smem_state *valid_state;
+	struct qcom_smem_state *enabled_state;
+	u32 valid_bit;
+	u32 enabled_bit;
+	u32 clock_query_irq;
+	u32 setup_ready_irq;
+	u32 clock_on;
+	u32 notified;
+	u32 disabled;
+	struct mutex mutex;
+	struct notifier_block panic_notifier;
+};
+
+/**
+ * ipa_smp2p_notify() - use SMP2P to tell modem about IPA clock state
+ * @smp2p: SMP2P information
+ *
+ * This is called either when the modem has requested it (by triggering
+ * the modem clock query IPA interrupt) or whenever the AP is shutting down
+ * (via a panic notifier). It sets the two SMP2P state bits--one saying
+ * whether the IPA clock is running, and the other indicating the first bit
+ * is valid.
+ */
+static void ipa_smp2p_notify(struct ipa_smp2p *smp2p)
+{
+	u32 value;
+	u32 mask;
+
+	if (smp2p->notified)
+		return;
+
+	smp2p->clock_on = ipa_clock_get_additional(smp2p->ipa->clock) ? 1 : 0;
+
+	/* Signal whether the clock is enabled */
+	mask = BIT(smp2p->enabled_bit);
+	value = smp2p->clock_on ?
mask : 0; + qcom_smem_state_update_bits(smp2p->enabled_state, mask, value); + + /* Now indicate that the enabled flag is valid */ + mask = BIT(smp2p->valid_bit); + value = mask; + qcom_smem_state_update_bits(smp2p->valid_state, mask, value); + + smp2p->notified = 1; +} + +/* Threaded IRQ handler for modem "ipa-clock-query" SMP2P interrupt */ +static irqreturn_t ipa_smp2p_modem_clk_query_isr(int irq, void *dev_id) +{ + struct ipa_smp2p *smp2p = dev_id; + + ipa_smp2p_notify(smp2p); + + return IRQ_HANDLED; +} + +static int ipa_smp2p_panic_notifier(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct ipa_smp2p *smp2p; + + smp2p = container_of(nb, struct ipa_smp2p, panic_notifier); + + ipa_smp2p_notify(smp2p); + + if (smp2p->clock_on) + ipa_uc_panic_notifier(smp2p->ipa); + + return NOTIFY_DONE; +} + +static int ipa_smp2p_panic_notifier_register(struct ipa_smp2p *smp2p) +{ + /* IPA panic handler needs to run before modem shuts down */ + smp2p->panic_notifier.notifier_call = ipa_smp2p_panic_notifier; + smp2p->panic_notifier.priority = INT_MAX; /* Do it early */ + + return atomic_notifier_chain_register(&panic_notifier_list, + &smp2p->panic_notifier); +} + +static void ipa_smp2p_panic_notifier_unregister(struct ipa_smp2p *smp2p) +{ + atomic_notifier_chain_unregister(&panic_notifier_list, + &smp2p->panic_notifier); +} + +/* Threaded IRQ handler for modem "ipa-setup-ready" SMP2P interrupt */ +static irqreturn_t ipa_smp2p_modem_setup_ready_isr(int irq, void *dev_id) +{ + struct ipa_smp2p *smp2p = dev_id; + int ret; + + mutex_lock(&smp2p->mutex); + if (!smp2p->disabled) { + ret = ipa_setup(smp2p->ipa); + WARN(ret, "error %d from IPA setup\n", ret); + } + mutex_unlock(&smp2p->mutex); + + return IRQ_HANDLED; +} + +/* Initialize SMP2P interrupts */ +static int ipa_smp2p_irq_init(struct ipa_smp2p *smp2p, const char *name, + irq_handler_t handler) +{ + unsigned int irq; + int ret; + + ret = platform_get_irq_byname(smp2p->ipa->pdev, name); + if (ret < 0) + return ret; + if (!ret) + return -EINVAL; /* IRQ mapping failure */ + irq = ret; + + ret = request_threaded_irq(irq, NULL, handler, 0, name, smp2p); + if (ret) + return ret; + + return irq; +} + +static void ipa_smp2p_irq_exit(struct ipa_smp2p *smp2p, u32 irq) +{ + free_irq(irq, smp2p); +} + +/* Initialize the IPA SMP2P subsystem */ +struct ipa_smp2p *ipa_smp2p_init(struct ipa *ipa, bool modem_init) +{ + struct qcom_smem_state *enabled_state; + struct device *dev = &ipa->pdev->dev; + struct qcom_smem_state *valid_state; + struct ipa_smp2p *smp2p; + u32 enabled_bit; + u32 valid_bit; + int ret; + + valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid", + &valid_bit); + if (IS_ERR(valid_state)) + return ERR_CAST(valid_state); + if (valid_bit >= BITS_PER_LONG) + return ERR_PTR(-EINVAL); + + enabled_state = qcom_smem_state_get(dev, "ipa-clock-enabled", + &enabled_bit); + if (IS_ERR(enabled_state)) + return ERR_CAST(enabled_state); + if (enabled_bit >= BITS_PER_LONG) + return ERR_PTR(-EINVAL); + + smp2p = kzalloc(sizeof(*smp2p), GFP_KERNEL); + if (!smp2p) + return ERR_PTR(-ENOMEM); + + smp2p->ipa = ipa; + + /* These fields are needed by the clock query interrupt + * handler, so initialize them now. 
+ */ + mutex_init(&smp2p->mutex); + smp2p->valid_state = valid_state; + smp2p->valid_bit = valid_bit; + smp2p->enabled_state = enabled_state; + smp2p->enabled_bit = enabled_bit; + + ret = ipa_smp2p_irq_init(smp2p, "ipa-clock-query", + ipa_smp2p_modem_clk_query_isr); + if (ret < 0) + goto err_mutex_destroy; + smp2p->clock_query_irq = ret; + + ret = ipa_smp2p_panic_notifier_register(smp2p); + if (ret) + goto err_irq_exit; + + if (modem_init) { + /* Result will be non-zero (negative for error) */ + ret = ipa_smp2p_irq_init(smp2p, "ipa-setup-ready", + ipa_smp2p_modem_setup_ready_isr); + if (ret < 0) + goto err_notifier_unregister; + smp2p->setup_ready_irq = ret; + } + + return smp2p; + +err_notifier_unregister: + ipa_smp2p_panic_notifier_unregister(smp2p); +err_irq_exit: + ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq); +err_mutex_destroy: + mutex_destroy(&smp2p->mutex); + kfree(smp2p); + + return ERR_PTR(ret); +} + +void ipa_smp2p_exit(struct ipa_smp2p *smp2p) +{ + if (smp2p->setup_ready_irq) + ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq); + ipa_smp2p_panic_notifier_unregister(smp2p); + ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq); + mutex_destroy(&smp2p->mutex); + kfree(smp2p); +} + +void ipa_smp2p_disable(struct ipa_smp2p *smp2p) +{ + if (smp2p->setup_ready_irq) { + mutex_lock(&smp2p->mutex); + smp2p->disabled = 1; + mutex_unlock(&smp2p->mutex); + } +} + +/* Reset state tracking whether we have notified the modem */ +void ipa_smp2p_notify_reset(struct ipa_smp2p *smp2p) +{ + u32 mask; + + if (!smp2p->notified) + return; + + /* Drop the clock reference if it was taken above */ + if (smp2p->clock_on) { + ipa_clock_put(smp2p->ipa->clock); + smp2p->clock_on = 0; + } + + /* Reset the clock enabled valid flag */ + mask = BIT(smp2p->valid_bit); + qcom_smem_state_update_bits(smp2p->valid_state, mask, 0); + + /* Mark the clock disabled for good measure... */ + mask = BIT(smp2p->enabled_bit); + qcom_smem_state_update_bits(smp2p->enabled_state, mask, 0); + + smp2p->notified = 0; +} diff --git a/drivers/net/ipa/ipa_smp2p.h b/drivers/net/ipa/ipa_smp2p.h new file mode 100644 index 000000000000..9c7e4339a7b0 --- /dev/null +++ b/drivers/net/ipa/ipa_smp2p.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ +#ifndef _IPA_SMP2P_H_ +#define _IPA_SMP2P_H_ + +#include + +struct ipa; + +/** + * ipa_smp2p_init() - Initialize the IPA SMP2P subsystem + * @ipa: IPA pointer + * @modem_init: Whether the modem is responsible for GSI initialization + * + * @Return: Pointer to IPA SMP2P info, or a pointer-coded error + */ +struct ipa_smp2p *ipa_smp2p_init(struct ipa *ipa, bool modem_init); + +/** + * ipa_smp2p_exit() - Inverse of ipa_smp2p_init() + * @smp2p: SMP2P information pointer + */ +void ipa_smp2p_exit(struct ipa_smp2p *smp2p); + +/** + * ipa_smp2p_disable() - Prevent "ipa-setup-ready" interrupt handling + * @smp2p: SMP2P information pointer + * + * Prevent handling of the "setup ready" interrupt from the modem. + * This is used before initiating shutdown of the driver. + */ +void ipa_smp2p_disable(struct ipa_smp2p *smp2p); + +/** + * ipa_smp2p_notify_reset() - Reset modem notification state + * @smp2p: SMP2P information pointer + * + * If the modem crashes it queries the IPA clock state. In cleaning + * up after such a crash this is used to reset some state maintained + * for managing this notification. 
+ */
+void ipa_smp2p_notify_reset(struct ipa_smp2p *smp2p);
+
+#endif /* _IPA_SMP2P_H_ */

From patchwork Fri May 31 03:53:45 2019
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Subject: [PATCH v2 14/17] soc: qcom: ipa: support build of IPA code
Date: Thu, 30 May 2019 22:53:45 -0500
Message-Id: <20190531035348.7194-15-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

Add build and Kconfig support for the Qualcomm IPA driver.
Signed-off-by: Alex Elder --- drivers/net/Kconfig | 2 ++ drivers/net/Makefile | 1 + drivers/net/ipa/Kconfig | 16 ++++++++++++++++ drivers/net/ipa/Makefile | 7 +++++++ 4 files changed, 26 insertions(+) create mode 100644 drivers/net/ipa/Kconfig create mode 100644 drivers/net/ipa/Makefile diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 48e209e55843..d87fe174eb9f 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -388,6 +388,8 @@ source "drivers/net/fddi/Kconfig" source "drivers/net/hippi/Kconfig" +source "drivers/net/ipa/Kconfig" + config NET_SB1000 tristate "General Instruments Surfboard 1000" depends on PNP diff --git a/drivers/net/Makefile b/drivers/net/Makefile index 0d3ba056cda3..ff8918fe09b0 100644 --- a/drivers/net/Makefile +++ b/drivers/net/Makefile @@ -45,6 +45,7 @@ obj-$(CONFIG_ETHERNET) += ethernet/ obj-$(CONFIG_FDDI) += fddi/ obj-$(CONFIG_HIPPI) += hippi/ obj-$(CONFIG_HAMRADIO) += hamradio/ +obj-$(CONFIG_IPA) += ipa/ obj-$(CONFIG_PLIP) += plip/ obj-$(CONFIG_PPP) += ppp/ obj-$(CONFIG_PPP_ASYNC) += ppp/ diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig new file mode 100644 index 000000000000..b1e3f7405992 --- /dev/null +++ b/drivers/net/ipa/Kconfig @@ -0,0 +1,16 @@ +config IPA + tristate "Qualcomm IPA support" + depends on NET + select QCOM_QMI_HELPERS + select QCOM_MDT_LOADER + default n + help + Choose Y here to include support for the Qualcomm IP Accelerator + (IPA), a hardware block present in some Qualcomm SoCs. The IPA + is a programmable protocol processor that is capable of generic + hardware handling of IP packets, including routing, filtering, + and NAT. Currently the IPA driver supports only basic transport + of network traffic between the AP and modem, on the Qualcomm + SDM845 SoC. + + If unsure, say N. 
diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile new file mode 100644 index 000000000000..a43039c09a25 --- /dev/null +++ b/drivers/net/ipa/Makefile @@ -0,0 +1,7 @@ +obj-$(CONFIG_IPA) += ipa.o + +ipa-y := ipa_main.o ipa_clock.o ipa_mem.o \ + ipa_interrupt.o gsi.o gsi_trans.o \ + ipa_gsi.o ipa_smp2p.o ipa_uc.o \ + ipa_endpoint.o ipa_cmd.o ipa_netdev.o \ + ipa_qmi.o ipa_qmi_msg.o ipa_data-sdm845.o From patchwork Fri May 31 03:53:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 10969613 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4E49A14DB for ; Fri, 31 May 2019 03:58:00 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3D7AC28BB9 for ; Fri, 31 May 2019 03:58:00 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 30E4A28BCA; Fri, 31 May 2019 03:58:00 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 7308728BB9 for ; Fri, 31 May 2019 03:57:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=xEqNGfM3m5lEYIJK7qTQIHHirV1FOSL80LEOXHu5mPo=; b=UydlqW6Y/q/CAb xRO96hRIRfdQ8R+dMudKHdZ23KKx7pW9s2p2fYB/y+rVP0Sx5wkW3I6+Yxv+SNWTI07r8Q21Ecjxj 5rrANGnPRmV7VeiJB+ST6S7m82xBKYDKx0dZE1tk5ruHCuh5LPlSj81cKY6MweO8ZVdMJo9elYpFU 3Zd972uuj0E/a4RGB5YixsLhww4o0F9HglBEISWkRqDIQWWCD367iK+hWtqvcpYADJK47HsE9lG/m LhxwIwprpK+epv3caTsqa3w3kH/raX3AJWUvbIK24C2jbvPTSysim1A62Q8GOKNY5R2jqnCGQPCxy 1W4WCaHj7J2slClDl6Fw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hWYfy-0007Mw-H9; Fri, 31 May 2019 03:57:54 +0000 Received: from mail-io1-xd43.google.com ([2607:f8b0:4864:20::d43]) by bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1hWYcS-0002sP-Ar for linux-arm-kernel@lists.infradead.org; Fri, 31 May 2019 03:54:27 +0000 Received: by mail-io1-xd43.google.com with SMTP id x24so7020681ion.5 for ; Thu, 30 May 2019 20:54:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=AD2v4+YBh23vSrSakkquxEV0xAfLkc+oDW+gsObgm4g=; b=v1Im+dSykIZ/o17BF2dZBN6qKUORfLiTIwGOO21jMWRdRFVgvCgye8wMMCE2pTy9mO dzHZGVmPOV+yUJSAChQU8I1hW74bryP1A8PNhrYJyXRDQHkatWnUORM7hhHV//8JIVqq 1vJu4XMk1QmOluxXtFqS/p16+wp7r+E82bnLZNBw5zk9lMr988zMbe44q+gzQgxsqULY hUWB3Asp1j5ktucxfoyRCUrr6xbSwwX1rUURVIB0V5NIosbfr7BARxFvkJb9TYnCkFMa 
Tm3ZpQq7ApNQPFkYHvQ+uWJjGJDXjch4E2gN7wC/sYMAOaJZVC3PpOeVFaLRpsxkfm3B J//Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=AD2v4+YBh23vSrSakkquxEV0xAfLkc+oDW+gsObgm4g=; b=PJbL8b+aCwvKn2ClKMBcLvJColQ5c23w0izhV7PKBULE2P3GLwFrwIqgmRn8SNuVTB uEtlpK2mMS4QEAxNWApiZ6hIIj67wZkt/oI6AxOFYNlK+4AP2XEC6pvxT+iSg6aFqi9H 4z8DpM4kQtv8I2g8dVwGU257tUw1/DYHif56P0ahzVwea2eCMp1zvlx0CVtaX3XazXLz uphQU/cawQWdE8rRKCv2vwSWdAmip75wdPGixhj4mZliEppLebp2p8/7M4YdylMWiO7z fytmGsY5BV4eu2dayhCYhYBvhk1nfu2iJhltHD8aCbEA71uexhy+lcmtX79Uc8N6nXF0 4QoQ== X-Gm-Message-State: APjAAAVQh5Emi5JCJ+7guiPYY5jVZObyOaWsIbLVUV63eBKVLq52RGV+ 4LX7QWPvwA/gnloS7TEX61tgJg== X-Google-Smtp-Source: APXvYqycI8C7GJZADcTmt8i31xiUOJTKNoLAX+58tPg2SEvn+jL2s46fJt29gnMhTlelRCo9iy5FmA== X-Received: by 2002:a05:6602:50:: with SMTP id z16mr5477434ioz.302.1559274855582; Thu, 30 May 2019 20:54:15 -0700 (PDT) Received: from localhost.localdomain (c-71-195-29-92.hsd1.mn.comcast.net. [71.195.29.92]) by smtp.gmail.com with ESMTPSA id q15sm1626947ioi.15.2019.05.30.20.54.14 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 30 May 2019 20:54:15 -0700 (PDT) From: Alex Elder To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org Subject: [PATCH v2 15/17] MAINTAINERS: add entry for the Qualcomm IPA driver Date: Thu, 30 May 2019 22:53:46 -0500 Message-Id: <20190531035348.7194-16-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190531035348.7194-1-elder@linaro.org> References: <20190531035348.7194-1-elder@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190530_205416_570410_24B9E4E5 X-CRM114-Status: UNSURE ( 9.86 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: devicetree@vger.kernel.org, syadagir@codeaurora.org, ejcaruso@google.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, evgreen@chromium.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, subashab@codeaurora.org, linux-soc@vger.kernel.org, abhishek.esse@gmail.com, cpratapa@codeaurora.org, benchan@google.com Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Add an entry in the MAINTAINERS file for the Qualcomm IPA driver Signed-off-by: Alex Elder --- MAINTAINERS | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 429c6c624861..a2dece647641 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -12872,6 +12872,12 @@ L: alsa-devel@alsa-project.org (moderated for non-subscribers) S: Supported F: sound/soc/qcom/ +QCOM IPA DRIVER +M: Alex Elder +L: netdev@vger.kernel.org +S: Supported +F: drivers/net/ipa/ + QEMU MACHINE EMULATOR AND VIRTUALIZER SUPPORT M: Gabriel Somlo M: "Michael S. 
Tsirkin" From patchwork Fri May 31 03:53:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 10969615 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4A51314C0 for ; Fri, 31 May 2019 03:58:13 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 363CC28BB9 for ; Fri, 31 May 2019 03:58:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 277A128BCA; Fri, 31 May 2019 03:58:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id CF26628BB9 for ; Fri, 31 May 2019 03:58:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=hr7Dwgnd05kWbzEurvvcR209XouspurgAaCxL8USXRU=; b=faofaE2dCx5FON kROFJoHOglKkqqnpDQITL7jXhPUvlFUawqlPSTvbmS2FB98R1NpwdKRRg00LDqY4itVmpKyzaYFhq +uCFsqhUm8kVFLc4bz1eYqF8pcXQyYDXSXy/wfdakB4xOPibB/CWPsjGyqEo2SdYgVd3ZVdbfPf53 eQiM9W6/ObxEu+ApADqXEmVqYOIT4JhKi9T+DSehB65dtVFfiGYhB+7zCQ11lLr/0o+aIsZwaLaQF L0Igondy/d6ICqkWwH8Ymy+a1cVUBkQ7FDx3bEmyrX50CSxGckWM2ybO78fqvftkyYj7Xg/lMqVZb eYkSPtPvctpNcYduyapg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hWYg9-0007be-II; Fri, 31 May 2019 03:58:05 +0000 Received: from mail-it1-x12a.google.com ([2607:f8b0:4864:20::12a]) by bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1hWYcT-0002tA-Rn for linux-arm-kernel@lists.infradead.org; Fri, 31 May 2019 03:54:30 +0000 Received: by mail-it1-x12a.google.com with SMTP id t184so13549264itf.2 for ; Thu, 30 May 2019 20:54:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kagssQd3QuWugHezIJ6doapoPYga+HnLQRv+8NMyX9g=; b=aVaU2Uk/PSvFOR2ZTUEkpLv50UR082xEm5HsKlrw7QgZckzEHtXA7yO/Wn/WOBbJvJ Dc+RwCg9LLpK+kEt8JnxklzRSCovp7PitZw296lYP8Jd1UwgpjL6wVVrUTMb+9Bc9n+N t3Maw5oZUPapO1nFVOsnDqMKzA6i4fmK+kL1wanbzG8ogV628MK9QBHGEZKqK77sc8Wg gEHiQzbr0xcVtCqK6elbQQ1PkYtX5ZPymVlQtCfCogAJWOQT7ZhIg+V/xQd8Q9pgDbYZ zifSkJyuzPAChGKjgwm6JomDh11WZ8yM0/5ykVmjJzQLsMFc391cvHSKIpBnUCPJl/x6 CBFQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kagssQd3QuWugHezIJ6doapoPYga+HnLQRv+8NMyX9g=; b=leyyYmPWnPBI0gQ9gbrqxNKI7icKc98252psgiKY+Pk2ILVDRCRArDrBXrsMYLklX5 
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: devicetree@vger.kernel.org, syadagir@codeaurora.org, ejcaruso@google.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org, evgreen@chromium.org,
	linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	subashab@codeaurora.org, linux-soc@vger.kernel.org, abhishek.esse@gmail.com,
	cpratapa@codeaurora.org, benchan@google.com
Subject: [PATCH v2 16/17] arm64: dts: sdm845: add IPA information
Date: Thu, 30 May 2019 22:53:47 -0500
Message-Id: <20190531035348.7194-17-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

Add IPA-related nodes and definitions to "sdm845.dtsi".
Signed-off-by: Alex Elder
---
 arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index fcb93300ca62..985479925af8 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 
 / {
 	interrupt-parent = <&intc>;
@@ -517,6 +518,17 @@
 			interrupt-controller;
 			#interrupt-cells = <2>;
 		};
+
+		ipa_smp2p_out: ipa-ap-to-modem {
+			qcom,entry-name = "ipa";
+			#qcom,smem-state-cells = <1>;
+		};
+
+		ipa_smp2p_in: ipa-modem-to-ap {
+			qcom,entry-name = "ipa";
+			interrupt-controller;
+			#interrupt-cells = <2>;
+		};
 	};
 
 	smp2p-slpi {
@@ -1268,6 +1280,45 @@
 		};
 	};
 
+	ipa@1e40000 {
+		compatible = "qcom,sdm845-ipa";
+
+		modem-init;
+
+		reg = <0 0x1e40000 0 0x7000>,
+		      <0 0x1e47000 0 0x2000>,
+		      <0 0x1e04000 0 0x2c000>;
+		reg-names = "ipa-reg",
+			    "ipa-shared",
+			    "gsi";
+
+		interrupts-extended =
+			<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+			<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+			<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+			<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+		interrupt-names = "ipa",
+				  "gsi",
+				  "ipa-clock-query",
+				  "ipa-setup-ready";
+
+		clocks = <&rpmhcc RPMH_IPA_CLK>;
+		clock-names = "core";
+
+		interconnects =
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+			<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+		interconnect-names = "memory",
+				     "imem",
+				     "config";
+
+		qcom,smem-states = <&ipa_smp2p_out 0>,
+				   <&ipa_smp2p_out 1>;
+		qcom,smem-state-names = "ipa-clock-enabled-valid",
+					"ipa-clock-enabled";
+	};
+
 	tcsr_mutex_regs: syscon@1f40000 {
 		compatible = "syscon";
 		reg = <0 0x01f40000 0 0x40000>;
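The name strings in this node ("ipa-reg", "gsi", "ipa", "core", and so on) are the handles a driver uses at probe time. What follows is a hedged sketch under that assumption, not the actual IPA probe code; ipa_sketch_get_resources() is an invented helper showing the standard named lookups.

/*
 * Hedged sketch, not taken from the IPA driver: fetch the named
 * resources that the "ipa@1e40000" node above provides, using
 * standard kernel helpers.
 */
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int ipa_sketch_get_resources(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *ipa_reg;
	struct clk *core;
	int irq;

	/* "reg-names" entries become named memory resources */
	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ipa-reg");
	ipa_reg = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(ipa_reg))
		return PTR_ERR(ipa_reg);

	/* "interrupt-names" entries become named interrupts */
	irq = platform_get_irq_byname(pdev, "ipa");
	if (irq < 0)
		return irq;

	/* "clock-names" entries become named clocks */
	core = devm_clk_get(&pdev->dev, "core");
	if (IS_ERR(core))
		return PTR_ERR(core);

	return 0;
}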
From patchwork Fri May 31 03:53:48 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 10969621
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: devicetree@vger.kernel.org, syadagir@codeaurora.org, ejcaruso@google.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org, evgreen@chromium.org,
	linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	subashab@codeaurora.org, linux-soc@vger.kernel.org, abhishek.esse@gmail.com,
	cpratapa@codeaurora.org, benchan@google.com
Subject: [PATCH v2 17/17] arm64: defconfig: enable build of IPA code
Date: Thu, 30 May 2019 22:53:48 -0500
Message-Id: <20190531035348.7194-18-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

Add CONFIG_IPA to the 64-bit Arm defconfig.

Signed-off-by: Alex Elder
---
 arch/arm64/configs/defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 4d583514258c..6ed86cb6b597 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -261,6 +261,7 @@ CONFIG_SMSC911X=y
 CONFIG_SNI_AVE=y
 CONFIG_SNI_NETSEC=y
 CONFIG_STMMAC_ETH=m
+CONFIG_IPA=m
 CONFIG_MDIO_BUS_MUX_MMIOREG=y
 CONFIG_AT803X_PHY=m
 CONFIG_MARVELL_PHY=m
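With CONFIG_IPA=m, the objects listed in drivers/net/ipa/Makefile are linked into a single loadable module, ipa.ko, so the driver can be brought in at runtime (for example with "modprobe ipa") rather than built into the kernel image, leaving the defconfig image size unchanged.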