From patchwork Thu Nov 21 13:53:33 2019
From: Sergey Matyukevich
To: "linux-wireless@vger.kernel.org"
CC: Igor Mitsyanko, Mikhail Karpenko, Sergey Matyukevich
Subject: [PATCH 1/2] qtnfmac: prepare for the next chip revision
Date: Thu, 21 Nov 2019 13:53:33 +0000
Message-ID: <20191121135324.21715-2-sergey.matyukevich.os@quantenna.com>
In-Reply-To: <20191121135324.21715-1-sergey.matyukevich.os@quantenna.com>
Data path operations may differ between chip revisions. Extract such
operations and settings into a separate structure in order to support
multiple QSR10G chip revisions with a single module. Remove data path
counters specific to a single chip revision.

Signed-off-by: Sergey Matyukevich
---
 drivers/net/wireless/quantenna/qtnfmac/bus.h       |   3 +-
 drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c |   3 +-
 .../wireless/quantenna/qtnfmac/pcie/pearl_pcie.c   | 356 +++++++++++++--------
 3 files changed, 220 insertions(+), 142 deletions(-)

diff --git a/drivers/net/wireless/quantenna/qtnfmac/bus.h b/drivers/net/wireless/quantenna/qtnfmac/bus.h
index 87d048df09d1..b8e1049e7e21 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/bus.h
+++ b/drivers/net/wireless/quantenna/qtnfmac/bus.h
@@ -52,8 +52,7 @@ struct qtnf_bus_ops {
 struct qtnf_bus {
 	struct device *dev;
 	enum qtnf_fw_state fw_state;
-	u32 chip;
-	u32 chiprev;
+	u32 chipid;
 	struct qtnf_bus_ops *bus_ops;
 	struct qtnf_wmac *mac[QTNF_MAX_MAC];
 	struct qtnf_qlink_transport trans;
diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
index 5337e67092ca..1a1896c4c042 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
@@ -335,10 +335,11 @@ static int qtnf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (!bus)
 		return -ENOMEM;
 
+	bus->fw_state = QTNF_FW_STATE_DETACHED;
+	bus->chipid = chipid;
 	pcie_priv = get_bus_priv(bus);
 	pci_set_drvdata(pdev, bus);
 	bus->dev = &pdev->dev;
-	bus->fw_state = QTNF_FW_STATE_DETACHED;
 	pcie_priv->pdev = pdev;
 	pcie_priv->tx_stopped = 0;
 	pcie_priv->flashboot = flashboot;
diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c
index 8e0d8018208a..32506f700cca 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c
@@ -23,9 +23,6 @@
 #include "shm_ipc.h"
 #include "debug.h"
 
-#define PEARL_TX_BD_SIZE_DEFAULT	32
-#define PEARL_RX_BD_SIZE_DEFAULT	256
-
 struct qtnf_pearl_bda {
 	__le16 bda_len;
 	__le16 bda_version;
@@ -73,8 +70,28 @@ struct qtnf_pearl_fw_hdr {
 	__le32 crc;
 } __packed;
 
+struct qtnf_pcie_pearl_state;
+
+struct qtnf_pcie_pearl_hdp_ops {
+	u16 hdp_rx_bd_size_default;
+	u16 hdp_tx_bd_size_default;
+	int (*hdp_alloc_bd_table)(struct qtnf_pcie_pearl_state *ps);
+	void (*hdp_init)(struct qtnf_pcie_pearl_state *ps);
+	void (*hdp_hhbm_init)(struct qtnf_pcie_pearl_state *ps);
+	void (*hdp_set_queues)(struct qtnf_pcie_pearl_state *ps,
+			       unsigned int tx_bd_size,
+			       unsigned int rx_bd_size);
+	void (*hdp_rbd_attach)(struct qtnf_pcie_pearl_state *ps, u16 index,
+			       dma_addr_t paddr);
+	u32 (*hdp_get_tx_done_index)(struct qtnf_pcie_pearl_state *ps);
+	void (*hdp_tx_hw_push)(struct qtnf_pcie_pearl_state *ps, int index,
+			       dma_addr_t paddr);
+
+};
+
 struct qtnf_pcie_pearl_state {
 	struct qtnf_pcie_bus_priv base;
+	const struct qtnf_pcie_pearl_hdp_ops *hdp_ops;
 
 	/* lock for irq configuration changes */
 	spinlock_t irq_lock;
@@ -97,6 +114,180 @@ struct qtnf_pcie_pearl_state {
 	u32 pcie_irq_uf_count;
 };
 
+/* HDP common ops */
+
+static void hdp_set_queues_common(struct qtnf_pcie_pearl_state *ps,
+				  unsigned int tx_bd_size,
+				  unsigned int rx_bd_size)
+{
+	struct qtnf_pcie_bus_priv *priv = &ps->base;
+
+	if (tx_bd_size == 0) {
+		tx_bd_size = ps->hdp_ops->hdp_tx_bd_size_default;
+	} else if (!is_power_of_2(tx_bd_size)) {
+		pr_warn("invalid tx_bd_size value %u, use default %u\n",
+			tx_bd_size, ps->hdp_ops->hdp_tx_bd_size_default);
+		tx_bd_size = ps->hdp_ops->hdp_tx_bd_size_default;
+	}
+
+	if (rx_bd_size == 0) {
+		rx_bd_size = ps->hdp_ops->hdp_rx_bd_size_default;
+	} else if (!is_power_of_2(rx_bd_size)) {
+		pr_warn("invalid rx_bd_size value %u, use default %u\n",
+			rx_bd_size, ps->hdp_ops->hdp_rx_bd_size_default);
+		rx_bd_size = ps->hdp_ops->hdp_rx_bd_size_default;
+	}
+
+	priv->tx_bd_num = tx_bd_size;
+	priv->rx_bd_num = rx_bd_size;
+}
+
+/* HDP ops: rev B */
+
+static int hdp_alloc_bd_table_rev_b(struct qtnf_pcie_pearl_state *ps)
+{
+	struct qtnf_pcie_bus_priv *priv = &ps->base;
+	dma_addr_t paddr;
+	void *vaddr;
+	int len;
+
+	len = priv->tx_bd_num * sizeof(struct qtnf_pearl_tx_bd) +
+		priv->rx_bd_num * sizeof(struct qtnf_pearl_rx_bd);
+
+	vaddr = dmam_alloc_coherent(&priv->pdev->dev, len, &paddr, GFP_KERNEL);
+	if (!vaddr)
+		return -ENOMEM;
+
+	/* tx bd */
+
+	ps->bd_table_vaddr = vaddr;
+	ps->bd_table_paddr = paddr;
+	ps->bd_table_len = len;
+
+	ps->tx_bd_vbase = vaddr;
+	ps->tx_bd_pbase = paddr;
+
+	pr_debug("TX descriptor table: vaddr=0x%p paddr=%pad\n", vaddr, &paddr);
+
+	/* rx bd */
+
+	vaddr = ((struct qtnf_pearl_tx_bd *)vaddr) + priv->tx_bd_num;
+	paddr += priv->tx_bd_num * sizeof(struct qtnf_pearl_tx_bd);
+
+	ps->rx_bd_vbase = vaddr;
+	ps->rx_bd_pbase = paddr;
+
+	pr_debug("RX descriptor table: vaddr=0x%p paddr=%pad\n", vaddr, &paddr);
+
+	return 0;
+}
+
+static void hdp_rbd_attach_rev_b(struct qtnf_pcie_pearl_state *ps, u16 index,
+				 dma_addr_t paddr)
+{
+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+	writel(QTN_HOST_HI32(paddr),
+	       PCIE_HDP_HHBM_BUF_PTR_H(ps->pcie_reg_base));
+#endif
+	writel(QTN_HOST_LO32(paddr),
+	       PCIE_HDP_HHBM_BUF_PTR(ps->pcie_reg_base));
+
+	writel(index, PCIE_HDP_TX_HOST_Q_WR_PTR(ps->pcie_reg_base));
+}
+
+static void hdp_hhbm_init_rev_b(struct qtnf_pcie_pearl_state *ps)
+{
+	u32 val;
+
+	val = readl(PCIE_HHBM_CONFIG(ps->pcie_reg_base));
+	val |= HHBM_CONFIG_SOFT_RESET;
+	writel(val, PCIE_HHBM_CONFIG(ps->pcie_reg_base));
+	usleep_range(50, 100);
+	val &= ~HHBM_CONFIG_SOFT_RESET;
+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+	val |= HHBM_64BIT;
+#endif
+	writel(val, PCIE_HHBM_CONFIG(ps->pcie_reg_base));
+	writel(ps->base.rx_bd_num, PCIE_HHBM_Q_LIMIT_REG(ps->pcie_reg_base));
+}
+
+static void hdp_init_rev_b(struct qtnf_pcie_pearl_state *ps)
+{
+	struct qtnf_pcie_bus_priv *priv = &ps->base;
+
+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+	writel(QTN_HOST_HI32(ps->rx_bd_pbase),
+	       PCIE_HDP_TX_HOST_Q_BASE_H(ps->pcie_reg_base));
+#endif
+	writel(QTN_HOST_LO32(ps->rx_bd_pbase),
+	       PCIE_HDP_TX_HOST_Q_BASE_L(ps->pcie_reg_base));
+	writel(priv->rx_bd_num | (sizeof(struct qtnf_pearl_rx_bd)) << 16,
+	       PCIE_HDP_TX_HOST_Q_SZ_CTRL(ps->pcie_reg_base));
+}
+
+static void hdp_set_queues_rev_b(struct qtnf_pcie_pearl_state *ps,
+				 unsigned int tx_bd_size,
+				 unsigned int rx_bd_size)
+{
+	struct qtnf_pcie_bus_priv *priv = &ps->base;
+	u32 val;
+
+	hdp_set_queues_common(ps, tx_bd_size, rx_bd_size);
+
+	val = tx_bd_size * sizeof(struct qtnf_pearl_tx_bd);
+	if (val > PCIE_HHBM_MAX_SIZE) {
+		pr_warn("invalid tx_bd_size value %u, use default %u\n",
+			tx_bd_size, ps->hdp_ops->hdp_tx_bd_size_default);
+		tx_bd_size = ps->hdp_ops->hdp_tx_bd_size_default;
+	}
+
+	val = rx_bd_size * sizeof(dma_addr_t);
+	if (val > PCIE_HHBM_MAX_SIZE) {
+		pr_warn("invalid rx_bd_size value %u, use default %u\n",
+			rx_bd_size, ps->hdp_ops->hdp_rx_bd_size_default);
+		rx_bd_size = ps->hdp_ops->hdp_rx_bd_size_default;
+	}
+
+	priv->tx_bd_num = tx_bd_size;
+	priv->rx_bd_num = rx_bd_size;
+}
+
+static u32 hdp_get_tx_done_index_rev_b(struct qtnf_pcie_pearl_state *ps)
+{
+	struct qtnf_pcie_bus_priv *priv = &ps->base;
+	u32 v;
+
+	v = readl(PCIE_HDP_RX0DMA_CNT(ps->pcie_reg_base))
+		& (priv->tx_bd_num - 1);
+
+	return v;
+}
+
+static void hdp_tx_hw_push_rev_b(struct qtnf_pcie_pearl_state *ps, int index,
+				 dma_addr_t paddr)
+{
+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+	writel(QTN_HOST_HI32(paddr),
+	       PCIE_HDP_HOST_WR_DESC0_H(ps->pcie_reg_base));
+#endif
+	writel(QTN_HOST_LO32(paddr),
+	       PCIE_HDP_HOST_WR_DESC0(ps->pcie_reg_base));
+}
+
+static const struct qtnf_pcie_pearl_hdp_ops hdp_ops_rev_b = {
+	.hdp_tx_bd_size_default = 32,
+	.hdp_rx_bd_size_default = 256,
+	.hdp_alloc_bd_table = hdp_alloc_bd_table_rev_b,
+	.hdp_init = hdp_init_rev_b,
+	.hdp_hhbm_init = hdp_hhbm_init_rev_b,
+	.hdp_set_queues = hdp_set_queues_rev_b,
+	.hdp_rbd_attach = hdp_rbd_attach_rev_b,
+	.hdp_get_tx_done_index = hdp_get_tx_done_index_rev_b,
+	.hdp_tx_hw_push = hdp_tx_hw_push_rev_b,
+};
+
+/* common */
+
 static inline void qtnf_init_hdp_irqs(struct qtnf_pcie_pearl_state *ps)
 {
 	unsigned long flags;
@@ -229,56 +420,6 @@ static int qtnf_poll_state(__le32 __iomem *reg, u32 state, u32 delay_in_ms)
 	return 0;
 }
 
-static int pearl_alloc_bd_table(struct qtnf_pcie_pearl_state *ps)
-{
-	struct qtnf_pcie_bus_priv *priv = &ps->base;
-	dma_addr_t paddr;
-	void *vaddr;
-	int len;
-
-	len = priv->tx_bd_num * sizeof(struct qtnf_pearl_tx_bd) +
-		priv->rx_bd_num * sizeof(struct qtnf_pearl_rx_bd);
-
-	vaddr = dmam_alloc_coherent(&priv->pdev->dev, len, &paddr, GFP_KERNEL);
-	if (!vaddr)
-		return -ENOMEM;
-
-	/* tx bd */
-
-	ps->bd_table_vaddr = vaddr;
-	ps->bd_table_paddr = paddr;
-	ps->bd_table_len = len;
-
-	ps->tx_bd_vbase = vaddr;
-	ps->tx_bd_pbase = paddr;
-
-	pr_debug("TX descriptor table: vaddr=0x%p paddr=%pad\n", vaddr, &paddr);
-
-	priv->tx_bd_r_index = 0;
-	priv->tx_bd_w_index = 0;
-
-	/* rx bd */
-
-	vaddr = ((struct qtnf_pearl_tx_bd *)vaddr) + priv->tx_bd_num;
-	paddr += priv->tx_bd_num * sizeof(struct qtnf_pearl_tx_bd);
-
-	ps->rx_bd_vbase = vaddr;
-	ps->rx_bd_pbase = paddr;
-
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
-	writel(QTN_HOST_HI32(paddr),
-	       PCIE_HDP_TX_HOST_Q_BASE_H(ps->pcie_reg_base));
-#endif
-	writel(QTN_HOST_LO32(paddr),
-	       PCIE_HDP_TX_HOST_Q_BASE_L(ps->pcie_reg_base));
-	writel(priv->rx_bd_num | (sizeof(struct qtnf_pearl_rx_bd)) << 16,
-	       PCIE_HDP_TX_HOST_Q_SZ_CTRL(ps->pcie_reg_base));
-
-	pr_debug("RX descriptor table: vaddr=0x%p paddr=%pad\n", vaddr, &paddr);
-
-	return 0;
-}
-
 static int pearl_skb2rbd_attach(struct qtnf_pcie_pearl_state *ps, u16 index)
 {
 	struct qtnf_pcie_bus_priv *priv = &ps->base;
@@ -312,14 +453,8 @@ static int pearl_skb2rbd_attach(struct qtnf_pcie_pearl_state *ps, u16 index)
 	/* sync up all descriptor updates */
 	wmb();
 
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
-	writel(QTN_HOST_HI32(paddr),
-	       PCIE_HDP_HHBM_BUF_PTR_H(ps->pcie_reg_base));
-#endif
-	writel(QTN_HOST_LO32(paddr),
-	       PCIE_HDP_HHBM_BUF_PTR(ps->pcie_reg_base));
+	ps->hdp_ops->hdp_rbd_attach(ps, index, paddr);
 
-	writel(index, PCIE_HDP_TX_HOST_Q_WR_PTR(ps->pcie_reg_base));
 	return 0;
 }
 
@@ -379,66 +514,15 @@ static void qtnf_pearl_free_xfer_buffers(struct qtnf_pcie_pearl_state *ps)
 	}
 }
 
-static int pearl_hhbm_init(struct qtnf_pcie_pearl_state *ps)
-{
-	u32 val;
-
-	val = readl(PCIE_HHBM_CONFIG(ps->pcie_reg_base));
-	val |= HHBM_CONFIG_SOFT_RESET;
-	writel(val, PCIE_HHBM_CONFIG(ps->pcie_reg_base));
-	usleep_range(50, 100);
-	val &= ~HHBM_CONFIG_SOFT_RESET;
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
-	val |= HHBM_64BIT;
-#endif
-	writel(val, PCIE_HHBM_CONFIG(ps->pcie_reg_base));
-	writel(ps->base.rx_bd_num, PCIE_HHBM_Q_LIMIT_REG(ps->pcie_reg_base));
-
-	return 0;
-}
-
 static int qtnf_pcie_pearl_init_xfer(struct qtnf_pcie_pearl_state *ps,
 				     unsigned int tx_bd_size,
 				     unsigned int rx_bd_size)
 {
 	struct qtnf_pcie_bus_priv *priv = &ps->base;
 	int ret;
-	u32 val;
 
-	if (tx_bd_size == 0)
-		tx_bd_size = PEARL_TX_BD_SIZE_DEFAULT;
-
-	val = tx_bd_size * sizeof(struct qtnf_pearl_tx_bd);
-
-	if (!is_power_of_2(tx_bd_size) || val > PCIE_HHBM_MAX_SIZE) {
-		pr_warn("invalid tx_bd_size value %u, use default %u\n",
-			tx_bd_size, PEARL_TX_BD_SIZE_DEFAULT);
-		priv->tx_bd_num = PEARL_TX_BD_SIZE_DEFAULT;
-	} else {
-		priv->tx_bd_num = tx_bd_size;
-	}
-
-	if (rx_bd_size == 0)
-		rx_bd_size = PEARL_RX_BD_SIZE_DEFAULT;
-
-	val = rx_bd_size * sizeof(dma_addr_t);
-
-	if (!is_power_of_2(rx_bd_size) || val > PCIE_HHBM_MAX_SIZE) {
-		pr_warn("invalid rx_bd_size value %u, use default %u\n",
-			rx_bd_size, PEARL_RX_BD_SIZE_DEFAULT);
-		priv->rx_bd_num = PEARL_RX_BD_SIZE_DEFAULT;
-	} else {
-		priv->rx_bd_num = rx_bd_size;
-	}
-
-	priv->rx_bd_w_index = 0;
-	priv->rx_bd_r_index = 0;
-
-	ret = pearl_hhbm_init(ps);
-	if (ret) {
-		pr_err("failed to init h/w queues\n");
-		return ret;
-	}
+	ps->hdp_ops->hdp_set_queues(ps, tx_bd_size, rx_bd_size);
+	ps->hdp_ops->hdp_hhbm_init(ps);
 
 	ret = qtnf_pcie_alloc_skb_array(priv);
 	if (ret) {
@@ -446,7 +530,7 @@ static int qtnf_pcie_pearl_init_xfer(struct qtnf_pcie_pearl_state *ps,
 		return ret;
 	}
 
-	ret = pearl_alloc_bd_table(ps);
+	ret = ps->hdp_ops->hdp_alloc_bd_table(ps);
 	if (ret) {
 		pr_err("failed to allocate bd table\n");
 		return ret;
@@ -458,6 +542,8 @@ static int qtnf_pcie_pearl_init_xfer(struct qtnf_pcie_pearl_state *ps,
 		return ret;
 	}
 
+	ps->hdp_ops->hdp_init(ps);
+
 	return ret;
 }
 
@@ -474,9 +560,7 @@ static void qtnf_pearl_data_tx_reclaim(struct qtnf_pcie_pearl_state *ps)
 
 	spin_lock_irqsave(&priv->tx_reclaim_lock, flags);
 
-	tx_done_index = readl(PCIE_HDP_RX0DMA_CNT(ps->pcie_reg_base))
-			& (priv->tx_bd_num - 1);
-
+	tx_done_index = ps->hdp_ops->hdp_get_tx_done_index(ps);
 	i = priv->tx_bd_r_index;
 
 	while (CIRC_CNT(tx_done_index, i, priv->tx_bd_num)) {
@@ -580,18 +664,13 @@ static int qtnf_pcie_skb_send(struct qtnf_bus *bus, struct sk_buff *skb)
 	/* write new TX descriptor to PCIE_RX_FIFO on EP */
 	txbd_paddr = ps->tx_bd_pbase + i * sizeof(struct qtnf_pearl_tx_bd);
 
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
-	writel(QTN_HOST_HI32(txbd_paddr),
-	       PCIE_HDP_HOST_WR_DESC0_H(ps->pcie_reg_base));
-#endif
-	writel(QTN_HOST_LO32(txbd_paddr),
-	       PCIE_HDP_HOST_WR_DESC0(ps->pcie_reg_base));
-
 	if (++i >= priv->tx_bd_num)
 		i = 0;
 
 	priv->tx_bd_w_index = i;
 
+	ps->hdp_ops->hdp_tx_hw_push(ps, i, txbd_paddr);
+
 tx_done:
 	if (ret && skb) {
 		pr_err_ratelimited("drop skb\n");
@@ -739,7 +818,7 @@ static int qtnf_pcie_pearl_rx_poll(struct napi_struct *napi, int budget)
 			consume = 0;
 		}
 
-		if (skb && (skb_tailroom(skb) < psize)) {
+		if (skb && (skb_tailroom(skb) < psize)) {
 			pr_err("skip packet with invalid length: %u > %u\n",
 			       psize, skb_tailroom(skb));
 			consume = 0;
@@ -777,7 +856,7 @@ static int qtnf_pcie_pearl_rx_poll(struct napi_struct *napi, int budget)
 
 		priv->rx_bd_r_index = r_idx;
 
-		/* repalce processed buffer by a new one */
+		/* replace processed buffer by a new one */
 		w_idx = priv->rx_bd_w_index;
 		while (CIRC_SPACE(priv->rx_bd_w_index, priv->rx_bd_r_index,
 				  priv->rx_bd_num) > 0) {
@@ -884,22 +963,10 @@ static int qtnf_dbg_hdp_stats(struct seq_file *s, void *data)
 	seq_printf(s, "tx_reclaim_req(%u)\n", priv->tx_reclaim_req);
 
 	seq_printf(s, "tx_bd_r_index(%u)\n", priv->tx_bd_r_index);
-	seq_printf(s, "tx_bd_p_index(%u)\n",
-		   readl(PCIE_HDP_RX0DMA_CNT(ps->pcie_reg_base))
-			& (priv->tx_bd_num - 1));
 	seq_printf(s, "tx_bd_w_index(%u)\n", priv->tx_bd_w_index);
-	seq_printf(s, "tx queue len(%u)\n",
-		   CIRC_CNT(priv->tx_bd_w_index, priv->tx_bd_r_index,
-			    priv->tx_bd_num));
 
 	seq_printf(s, "rx_bd_r_index(%u)\n", priv->rx_bd_r_index);
-	seq_printf(s, "rx_bd_p_index(%u)\n",
-		   readl(PCIE_HDP_TX0DMA_CNT(ps->pcie_reg_base))
-			& (priv->rx_bd_num - 1));
 	seq_printf(s, "rx_bd_w_index(%u)\n", priv->rx_bd_w_index);
-	seq_printf(s, "rx alloc queue len(%u)\n",
-		   CIRC_SPACE(priv->rx_bd_w_index, priv->rx_bd_r_index,
-			      priv->rx_bd_num));
 
 	return 0;
 }
@@ -1108,7 +1175,8 @@ static u64 qtnf_pearl_dma_mask_get(void)
 #endif
 }
 
-static int qtnf_pcie_pearl_probe(struct qtnf_bus *bus, unsigned int tx_bd_size,
+static int qtnf_pcie_pearl_probe(struct qtnf_bus *bus,
+				 unsigned int tx_bd_size,
 				 unsigned int rx_bd_size)
 {
 	struct qtnf_shm_ipc_int ipc_int;
@@ -1120,6 +1188,16 @@ static int qtnf_pcie_pearl_probe(struct qtnf_bus *bus, unsigned int tx_bd_size,
 	spin_lock_init(&ps->irq_lock);
 	INIT_WORK(&bus->fw_work, qtnf_pearl_fw_work_handler);
 
+	switch (bus->chipid) {
+	case QTN_CHIP_ID_PEARL:
+	case QTN_CHIP_ID_PEARL_B:
+		ps->hdp_ops = &hdp_ops_rev_b;
+		break;
+	default:
+		pr_err("unsupported PEARL chip ID 0x%x\n", bus->chipid);
+		return -ENOTSUPP;
+	}
+
 	ps->pcie_reg_base = ps->base.dmareg_bar;
 	ps->bda = ps->base.epmem_bar;
 	writel(ps->base.msi_enabled, &ps->bda->bda_rc_msi_enabled);

From patchwork Thu Nov 21 13:53:35 2019
From: Sergey Matyukevich
To: "linux-wireless@vger.kernel.org"
CC: Igor Mitsyanko, Mikhail Karpenko, Sergey Matyukevich
Subject: [PATCH 2/2] qtnfmac: add support for the new revision of QSR10g chip
Date: Thu, 21 Nov 2019 13:53:35 +0000
Message-ID: <20191121135324.21715-3-sergey.matyukevich.os@quantenna.com>
In-Reply-To: <20191121135324.21715-1-sergey.matyukevich.os@quantenna.com>
Add support for the new minor revision of the QSR10g chip. Major changes
from the driver perspective include PCIe data path modifications. Setup
is now more involved, but more work has been offloaded to hardware. As a
result, fewer boilerplate driver operations are needed after the Tx/Rx
descriptor queues have been configured. In addition, restrictions on
descriptor queue lengths have been relaxed.

Signed-off-by: Sergey Matyukevich
---
 drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c |   1 +
 .../wireless/quantenna/qtnfmac/pcie/pearl_pcie.c   | 273 +++++++++++++++++++--
 .../quantenna/qtnfmac/pcie/pearl_pcie_ipc.h        |   3 +
 .../quantenna/qtnfmac/pcie/pearl_pcie_regs.h       |  33 ++-
 .../net/wireless/quantenna/qtnfmac/qtn_hw_ids.h    |   1 +
 drivers/net/wireless/quantenna/qtnfmac/util.c      |   2 +
 6 files changed, 297 insertions(+), 16 deletions(-)

diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
index 1a1896c4c042..45bb84007bd5 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
@@ -322,6 +322,7 @@ static int qtnf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	case QTN_CHIP_ID_PEARL:
 	case QTN_CHIP_ID_PEARL_B:
 	case QTN_CHIP_ID_PEARL_C:
+	case QTN_CHIP_ID_PEARL_C1:
 		bus = qtnf_pcie_pearl_alloc(pdev);
 		break;
 	case QTN_CHIP_ID_TOPAZ:
diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c
index 32506f700cca..7b01fa7fab1c 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c
@@ -57,8 +57,6 @@ struct qtnf_pearl_rx_bd {
 	__le32 addr_h;
 	__le32 info;
 	__le32 info_h;
-	__le32 next_ptr;
-	__le32 next_ptr_h;
 } __packed;
 
 struct qtnf_pearl_fw_hdr {
@@ -78,12 +76,15 @@ struct qtnf_pcie_pearl_hdp_ops {
 	int (*hdp_alloc_bd_table)(struct qtnf_pcie_pearl_state *ps);
 	void (*hdp_init)(struct qtnf_pcie_pearl_state *ps);
 	void (*hdp_hhbm_init)(struct qtnf_pcie_pearl_state *ps);
+	void (*hdp_enable)(struct qtnf_pcie_pearl_state *ps);
+	void (*hdp_disable)(struct qtnf_pcie_pearl_state *ps);
 	void (*hdp_set_queues)(struct qtnf_pcie_pearl_state *ps,
 			       unsigned int tx_bd_size,
 			       unsigned int rx_bd_size);
 	void (*hdp_rbd_attach)(struct qtnf_pcie_pearl_state *ps, u16 index,
 			       dma_addr_t paddr);
 	u32 (*hdp_get_tx_done_index)(struct qtnf_pcie_pearl_state *ps);
+	void (*hdp_tx_done_wrap)(struct qtnf_pcie_pearl_state *ps);
 	void (*hdp_tx_hw_push)(struct qtnf_pcie_pearl_state *ps, int index,
 			       dma_addr_t paddr);
@@ -105,8 +106,19 @@ struct qtnf_pcie_pearl_state {
 	struct qtnf_pearl_rx_bd *rx_bd_vbase;
 	dma_addr_t rx_bd_pbase;
 
+	dma_addr_t rx_dma_cnt_paddr;
+	void *rx_dma_cnt_vaddr;
+
+	dma_addr_t tx_dma_cnt_paddr;
+	void *tx_dma_cnt_vaddr;
+
 	dma_addr_t bd_table_paddr;
 	void *bd_table_vaddr;
+
+	u32 tx_bd_ack_wrap;
+	u16 rx_bd_h_index;
+	u16 tx_bd_h_index;
+
 	u32 bd_table_len;
 
 	u32 pcie_irq_mask;
 	u32 pcie_irq_rx_count;
@@ -280,12 +292,234 @@ static const struct qtnf_pcie_pearl_hdp_ops hdp_ops_rev_b = {
 	.hdp_alloc_bd_table = hdp_alloc_bd_table_rev_b,
 	.hdp_init = hdp_init_rev_b,
 	.hdp_hhbm_init = hdp_hhbm_init_rev_b,
+	.hdp_enable = NULL,
+	.hdp_disable = NULL,
 	.hdp_set_queues = hdp_set_queues_rev_b,
 	.hdp_rbd_attach = hdp_rbd_attach_rev_b,
 	.hdp_get_tx_done_index = hdp_get_tx_done_index_rev_b,
+	.hdp_tx_done_wrap = NULL,
 	.hdp_tx_hw_push = hdp_tx_hw_push_rev_b,
 };
 
+/* HDP ops: rev C */
+
+static int hdp_alloc_bd_table_rev_c(struct qtnf_pcie_pearl_state *ps)
+{
+	struct qtnf_pcie_bus_priv *priv = &ps->base;
+	dma_addr_t paddr;
+	void *vaddr;
+	int len;
+
+	len = priv->tx_bd_num * sizeof(struct qtnf_pearl_tx_bd) +
+		priv->rx_bd_num * sizeof(struct qtnf_pearl_rx_bd) +
+		2 * QTN_HDP_DMA_PTR_SIZE;
+
+	vaddr = dmam_alloc_coherent(&priv->pdev->dev, len, &paddr, GFP_KERNEL);
+	if (!vaddr)
+		return -ENOMEM;
+
+	/* tx bd */
+
+	ps->bd_table_vaddr = vaddr;
+	ps->bd_table_paddr = paddr;
+	ps->bd_table_len = len;
+
+	ps->tx_bd_vbase = vaddr;
+	ps->tx_bd_pbase = paddr;
+
+	pr_debug("TX descriptor table: vaddr=0x%p paddr=%pad\n", vaddr, &paddr);
+
+	/* rx bd */
+
+	vaddr = ((struct qtnf_pearl_tx_bd *)vaddr) + priv->tx_bd_num;
+	paddr += priv->tx_bd_num * sizeof(struct qtnf_pearl_tx_bd);
+
+	ps->rx_bd_vbase = vaddr;
+	ps->rx_bd_pbase = paddr;
+
+	pr_debug("RX descriptor table: vaddr=0x%p paddr=%pad\n", vaddr, &paddr);
+
+	/* dma completion counters */
+
+	vaddr = ((struct qtnf_pearl_rx_bd *)vaddr) + priv->rx_bd_num;
+	paddr += priv->rx_bd_num * sizeof(struct qtnf_pearl_rx_bd);
+
+	ps->rx_dma_cnt_vaddr = vaddr;
+	ps->rx_dma_cnt_paddr = paddr;
+
+	vaddr += QTN_HDP_DMA_PTR_SIZE;
+	paddr += QTN_HDP_DMA_PTR_SIZE;
+
+	ps->tx_dma_cnt_vaddr = vaddr;
+	ps->tx_dma_cnt_paddr = paddr;
+
+	return 0;
+}
+
+static void hdp_rbd_attach_rev_c(struct qtnf_pcie_pearl_state *ps, u16 index,
+				 dma_addr_t paddr)
+{
+	u16 ihw;
+
+	ihw = index | (ps->rx_bd_h_index & QTN_HDP_BD_WRAP);
+	if (ihw < ps->rx_bd_h_index)
+		ihw ^= QTN_HDP_BD_WRAP;
+
+	writel(ihw | ((ihw ^ QTN_HDP_BD_WRAP) << 16),
+	       PCIE_HDP_TX0_DESC_Q_WR_PTR(ps->pcie_reg_base));
+
+	ps->rx_bd_h_index = ihw;
+}
+
+static void hdp_hhbm_init_rev_c(struct qtnf_pcie_pearl_state *ps)
+{
+	u32 val;
+
+	val = readl(PCIE_HHBM_CONFIG(ps->pcie_reg_base));
+	val |= HHBM_CONFIG_SOFT_RESET;
+	writel(val, PCIE_HHBM_CONFIG(ps->pcie_reg_base));
+	usleep_range(50, 100);
+}
+
+static void hdp_init_rev_c(struct qtnf_pcie_pearl_state *ps)
+{
+	struct qtnf_pcie_bus_priv *priv = &ps->base;
+	int mrrs = pcie_get_readrq(priv->pdev);
+
int mps = pcie_get_mps(priv->pdev); + u32 val; + + val = readl(PCIE_HDP_AXI_MASTER_CTRL(ps->pcie_reg_base)); + + if (mrrs > PCIE_HDP_AXI_BURST32_SIZE) + val |= PCIE_HDP_AXI_EN_BURST32_READ; + else + val &= ~PCIE_HDP_AXI_EN_BURST32_READ; + + if (mps > PCIE_HDP_AXI_BURST32_SIZE) + val |= PCIE_HDP_AXI_EN_BURST32_WRITE; + else + val &= ~PCIE_HDP_AXI_EN_BURST32_WRITE; + + writel(val, PCIE_HDP_AXI_MASTER_CTRL(ps->pcie_reg_base)); + + /* HDP Tx init */ + + writel(PCIE_HDP_RXDMA_INTERLEAVE | PCIE_HDP_RXDMA_NEW | + PCIE_HDP_RXDMA_WPTR, PCIE_HDP_RXDMA_CTRL(ps->pcie_reg_base)); + writel(PCIE_HDP_TXDMA_NEW, PCIE_HDP_TX_DMA_CTRL(ps->pcie_reg_base)); + + writel(QTN_HOST_LO32(ps->tx_bd_pbase), + PCIE_HDP_RX2_DESC_BASE_ADDR(ps->pcie_reg_base)); +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT + writel(QTN_HOST_HI32(ps->tx_bd_pbase), + PCIE_HDP_RX2_DESC_BASE_ADDR_H(ps->pcie_reg_base)); +#endif + + writel(priv->tx_bd_num | (sizeof(struct qtnf_pearl_tx_bd) << 16), + PCIE_HDP_RX2_DESC_Q_CTRL(ps->pcie_reg_base)); + + writel(QTN_HOST_LO32(ps->tx_dma_cnt_paddr), + PCIE_HDP_RX2_DEV_PTR_ADDR(ps->pcie_reg_base)); +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT + writel(QTN_HOST_HI32(ps->tx_dma_cnt_paddr), + PCIE_HDP_RX2_DEV_PTR_ADDR_H(ps->pcie_reg_base)); +#endif + writel(ps->tx_bd_h_index, + PCIE_HDP_RX2_DESC_Q_WR_PTR(ps->pcie_reg_base)); + + /* HDP Rx init */ + + writel(QTN_HOST_LO32(ps->rx_bd_pbase), + PCIE_HDP_TX0_DESC_BASE_ADDR(ps->pcie_reg_base)); +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT + writel(QTN_HOST_HI32(ps->rx_bd_pbase), + PCIE_HDP_TX0_DESC_BASE_ADDR_H(ps->pcie_reg_base)); +#endif + writel(priv->rx_bd_num | (sizeof(struct qtnf_pearl_rx_bd) << 16), + PCIE_HDP_TX0_DESC_Q_CTRL(ps->pcie_reg_base)); + + writel(QTN_HOST_LO32(ps->rx_dma_cnt_paddr), + PCIE_HDP_TX0_DEV_PTR_ADDR(ps->pcie_reg_base)); +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT + writel(QTN_HOST_HI32(ps->rx_dma_cnt_paddr), + PCIE_HDP_TX0_DEV_PTR_ADDR_H(ps->pcie_reg_base)); +#endif +} + +static u32 hdp_get_tx_done_index_rev_c(struct qtnf_pcie_pearl_state 
*ps) +{ + struct qtnf_pcie_bus_priv *priv = &ps->base; + u32 v; + + v = le32_to_cpu(*((__le32 *)ps->tx_dma_cnt_vaddr)) & + (priv->tx_bd_num - 1); + + return v; +} + +static void hdp_tx_done_wrap_rev_c(struct qtnf_pcie_pearl_state *ps) +{ + ps->tx_bd_ack_wrap ^= (QTN_HDP_BD_WRAP << 16); +} + +static void hdp_tx_hw_push_rev_c(struct qtnf_pcie_pearl_state *ps, int index, + dma_addr_t paddr) +{ + struct qtnf_pcie_bus_priv *priv = &ps->base; + u32 ihw; + + ihw = index | (ps->tx_bd_h_index & QTN_HDP_BD_WRAP); + + if (ihw < ps->tx_bd_h_index) + ihw ^= QTN_HDP_BD_WRAP; + + writel(ihw | (priv->tx_bd_r_index << 16) | ps->tx_bd_ack_wrap, + PCIE_HDP_RX2_DESC_Q_WR_PTR(ps->pcie_reg_base)); + + ps->tx_bd_h_index = ihw; +} + +static void hdp_enable_rev_c(struct qtnf_pcie_pearl_state *ps) +{ + u32 val; + + val = readl(PCIE_HDP_RX2_DESC_Q_CTRL(ps->pcie_reg_base)); + val |= PCIE_HDP_DESC_FETCH_EN; + writel(val, PCIE_HDP_RX2_DESC_Q_CTRL(ps->pcie_reg_base)); + + val = readl(PCIE_HDP_TX0_DESC_Q_CTRL(ps->pcie_reg_base)); + val |= PCIE_HDP_DESC_FETCH_EN; + writel(val, PCIE_HDP_TX0_DESC_Q_CTRL(ps->pcie_reg_base)); +} + +static void hdp_disable_rev_c(struct qtnf_pcie_pearl_state *ps) +{ + u32 val; + + val = readl(PCIE_HDP_RX2_DESC_Q_CTRL(ps->pcie_reg_base)); + val &= ~PCIE_HDP_DESC_FETCH_EN; + writel(val, PCIE_HDP_RX2_DESC_Q_CTRL(ps->pcie_reg_base)); + + val = readl(PCIE_HDP_TX0_DESC_Q_CTRL(ps->pcie_reg_base)); + val &= ~PCIE_HDP_DESC_FETCH_EN; + writel(val, PCIE_HDP_TX0_DESC_Q_CTRL(ps->pcie_reg_base)); +} + +static const struct qtnf_pcie_pearl_hdp_ops hdp_ops_rev_c = { + .hdp_rx_bd_size_default = 512, + .hdp_tx_bd_size_default = 512, + .hdp_alloc_bd_table = hdp_alloc_bd_table_rev_c, + .hdp_init = hdp_init_rev_c, + .hdp_hhbm_init = hdp_hhbm_init_rev_c, + .hdp_enable = hdp_enable_rev_c, + .hdp_disable = hdp_disable_rev_c, + .hdp_set_queues = hdp_set_queues_common, + .hdp_rbd_attach = hdp_rbd_attach_rev_c, + .hdp_get_tx_done_index = hdp_get_tx_done_index_rev_c, + .hdp_tx_done_wrap = 
hdp_tx_done_wrap_rev_c, + .hdp_tx_hw_push = hdp_tx_hw_push_rev_c, +}; + /* common */ static inline void qtnf_init_hdp_irqs(struct qtnf_pcie_pearl_state *ps) @@ -586,8 +820,11 @@ static void qtnf_pearl_data_tx_reclaim(struct qtnf_pcie_pearl_state *ps) priv->tx_skb[i] = NULL; count++; - if (++i >= priv->tx_bd_num) + if (++i >= priv->tx_bd_num) { + if (ps->hdp_ops->hdp_tx_done_wrap) + ps->hdp_ops->hdp_tx_done_wrap(ps); i = 0; + } } priv->tx_reclaim_done += count; @@ -727,11 +964,17 @@ static irqreturn_t qtnf_pcie_pearl_interrupt(int irq, void *data) u32 status; priv->pcie_irq_count++; - status = readl(PCIE_HDP_INT_STATUS(ps->pcie_reg_base)); qtnf_shm_ipc_irq_handler(&priv->shm_ipc_ep_in); qtnf_shm_ipc_irq_handler(&priv->shm_ipc_ep_out); + writel(0x0, PCIE_HDP_INT_EN(ps->pcie_reg_base)); + status = readl(PCIE_HDP_INT_STATUS(ps->pcie_reg_base)); + writel(status & ps->pcie_irq_mask, + PCIE_HDP_INT_STATUS(ps->pcie_reg_base)); + writel(ps->pcie_irq_mask & (~status), + PCIE_HDP_INT_EN(ps->pcie_reg_base)); + if (!(status & ps->pcie_irq_mask)) goto irq_done; @@ -744,20 +987,13 @@ static irqreturn_t qtnf_pcie_pearl_interrupt(int irq, void *data) if (status & PCIE_HDP_INT_HHBM_UF) ps->pcie_irq_uf_count++; - if (status & PCIE_HDP_INT_RX_BITS) { - qtnf_dis_rxdone_irq(ps); + if (status & PCIE_HDP_INT_RX_BITS) napi_schedule(&bus->mux_napi); - } - if (status & PCIE_HDP_INT_TX_BITS) { - qtnf_dis_txdone_irq(ps); + if (status & PCIE_HDP_INT_TX_BITS) tasklet_hi_schedule(&priv->reclaim_tq); - } irq_done: - /* H/W workaround: clean all bits, not only enabled */ - qtnf_non_posted_write(~0U, PCIE_HDP_INT_STATUS(ps->pcie_reg_base)); - if (!priv->msi_enabled) qtnf_deassert_intx(ps); @@ -896,6 +1132,8 @@ static void qtnf_pcie_data_rx_start(struct qtnf_bus *bus) struct qtnf_pcie_pearl_state *ps = (void *)get_bus_priv(bus); qtnf_enable_hdp_irqs(ps); + if (ps->hdp_ops->hdp_enable) + ps->hdp_ops->hdp_enable(ps); napi_enable(&bus->mux_napi); } @@ -904,6 +1142,8 @@ static void 
qtnf_pcie_data_rx_stop(struct qtnf_bus *bus) struct qtnf_pcie_pearl_state *ps = (void *)get_bus_priv(bus); napi_disable(&bus->mux_napi); + if (ps->hdp_ops->hdp_disable) + ps->hdp_ops->hdp_disable(ps); qtnf_disable_hdp_irqs(ps); } @@ -1124,7 +1364,8 @@ static void qtnf_pearl_fw_work_handler(struct work_struct *work) } else { pr_info("starting firmware upload: %s\n", fwname); - + if (ps->hdp_ops->hdp_enable) + ps->hdp_ops->hdp_enable(ps); ret = qtnf_ep_fw_load(ps, fw->data, fw->size); release_firmware(fw); if (ret) { @@ -1193,6 +1434,10 @@ static int qtnf_pcie_pearl_probe(struct qtnf_bus *bus, case QTN_CHIP_ID_PEARL_B: ps->hdp_ops = &hdp_ops_rev_b; break; + case QTN_CHIP_ID_PEARL_C: + case QTN_CHIP_ID_PEARL_C1: + ps->hdp_ops = &hdp_ops_rev_c; + break; default: pr_err("unsupported PEARL chip ID 0x%x\n", bus->chipid); return -ENOTSUPP; diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie_ipc.h b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie_ipc.h index 634480fe6a64..42a67d66d9e8 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie_ipc.h +++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie_ipc.h @@ -61,6 +61,9 @@ #define QTN_ENET_ADDR_LENGTH 6 +#define QTN_HDP_BD_WRAP 0x8000 +#define QTN_HDP_DMA_PTR_SIZE (4 * sizeof(u64)) + #define QTN_TXDONE_MASK ((u32)0x80000000) #define QTN_GET_LEN(x) ((x) & 0xFFFF) diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie_regs.h b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie_regs.h index 6e9a5c61d46f..945d27b36852 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie_regs.h +++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie_regs.h @@ -4,7 +4,7 @@ #ifndef __PEARL_PCIE_H #define __PEARL_PCIE_H -/* Pearl PCIe HDP registers */ +/* Pearl rev B PCIe HDP registers */ #define PCIE_HDP_CTRL(base) ((base) + 0x2c00) #define PCIE_HDP_AXI_CTRL(base) ((base) + 0x2c04) #define PCIE_HDP_HOST_WR_DESC0(base) ((base) + 0x2c10) @@ -60,7 +60,6 @@ #define 
PCIE_HDP_RX3DMA_CNT(base) ((base) + 0x2d1c) #define PCIE_HDP_TX0DMA_CNT(base) ((base) + 0x2d20) #define PCIE_HDP_TX1DMA_CNT(base) ((base) + 0x2d24) -#define PCIE_HDP_RXDMA_CTRL(base) ((base) + 0x2d28) #define PCIE_HDP_TX_HOST_Q_SZ_CTRL(base) ((base) + 0x2d2c) #define PCIE_HDP_TX_HOST_Q_BASE_L(base) ((base) + 0x2d30) #define PCIE_HDP_TX_HOST_Q_BASE_H(base) ((base) + 0x2d34) @@ -68,6 +67,36 @@ #define PCIE_HDP_TX_HOST_Q_RD_PTR(base) ((base) + 0x2d3c) #define PCIE_HDP_TX_HOST_Q_STS(base) ((base) + 0x2d40) +#define PCIE_HDP_TX_DMA_CTRL(base) ((base) + 0x2dcc) +#define PCIE_HDP_TXDMA_NEW (BIT(8)) + +#define PCIE_HDP_RXDMA_CTRL(base) ((base) + 0x2d28) +#define PCIE_HDP_RXDMA_WPTR (BIT(27)) +#define PCIE_HDP_RXDMA_NEW (BIT(29)) +#define PCIE_HDP_RXDMA_INTERLEAVE (BIT(30)) + +/* Pearl rev C PCIe HDP registers */ +#define PCIE_HDP_TX0_DEV_PTR_ADDR(base) ((base) + 0x2db0) +#define PCIE_HDP_TX0_DEV_PTR_ADDR_H(base) ((base) + 0x2db4) +#define PCIE_HDP_TX0_DESC_Q_WR_PTR(base) ((base) + 0x2da4) +#define PCIE_HDP_TX0_DESC_BASE_ADDR(base) ((base) + 0x2dac) +#define PCIE_HDP_TX0_DESC_BASE_ADDR_H(base) ((base) + 0x2da8) + +#define PCIE_HDP_RX2_DESC_BASE_ADDR(base) ((base) + 0x2c20) +#define PCIE_HDP_RX2_DESC_BASE_ADDR_H(base) ((base) + 0x2c24) +#define PCIE_HDP_RX2_DESC_Q_WR_PTR(base) ((base) + 0x2d84) +#define PCIE_HDP_RX2_DEV_PTR_ADDR(base) ((base) + 0x2dd8) +#define PCIE_HDP_RX2_DEV_PTR_ADDR_H(base) ((base) + 0x2ddc) + +#define PCIE_HDP_TX0_DESC_Q_CTRL(base) ((base) + 0x2da0) +#define PCIE_HDP_RX2_DESC_Q_CTRL(base) ((base) + 0x2d80) +#define PCIE_HDP_DESC_FETCH_EN (BIT(31)) + +#define PCIE_HDP_AXI_MASTER_CTRL(base) ((base) + 0x2de0) +#define PCIE_HDP_AXI_EN_BURST32_READ (BIT(3) | BIT(7)) +#define PCIE_HDP_AXI_EN_BURST32_WRITE BIT(11) +#define PCIE_HDP_AXI_BURST32_SIZE (32 * 8) + /* Pearl PCIe HBM pool registers */ #define PCIE_HHBM_CSR_REG(base) ((base) + 0x2e00) #define PCIE_HHBM_Q_BASE_REG(base) ((base) + 0x2e04) diff --git a/drivers/net/wireless/quantenna/qtnfmac/qtn_hw_ids.h 
b/drivers/net/wireless/quantenna/qtnfmac/qtn_hw_ids.h index 82d879950b62..d962126602cd 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/qtn_hw_ids.h +++ b/drivers/net/wireless/quantenna/qtnfmac/qtn_hw_ids.h @@ -18,6 +18,7 @@ #define QTN_CHIP_ID_PEARL 0x50 #define QTN_CHIP_ID_PEARL_B 0x60 #define QTN_CHIP_ID_PEARL_C 0x70 +#define QTN_CHIP_ID_PEARL_C1 0x80 /* FW names */ diff --git a/drivers/net/wireless/quantenna/qtnfmac/util.c b/drivers/net/wireless/quantenna/qtnfmac/util.c index cda6f5f3f38a..afad12ce3ba5 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/util.c +++ b/drivers/net/wireless/quantenna/qtnfmac/util.c @@ -116,6 +116,8 @@ const char *qtnf_chipid_to_string(unsigned long chip_id) return "Pearl revB"; case QTN_CHIP_ID_PEARL_C: return "Pearl revC"; + case QTN_CHIP_ID_PEARL_C1: + return "Pearl revC1"; default: return "unknown"; }