From patchwork Thu Nov 7 12:51:53 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Nemanov, Michael"
X-Patchwork-Id: 13866383
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Nemanov
To: Kalle Valo, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley
CC: Sabeeh Khan, Michael Nemanov
Subject: [PATCH v5 01/17] dt-bindings: net: wireless: cc33xx: Add ti,cc33xx.yaml
Date: Thu, 7 Nov 2024 14:51:53 +0200
Message-ID: <20241107125209.1736277-2-michael.nemanov@ti.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com>
References: <20241107125209.1736277-1-michael.nemanov@ti.com>
X-Mailing-List: netdev@vger.kernel.org

Add device-tree bindings for the CC33xx family.

Signed-off-by: Michael Nemanov
---
 .../bindings/net/wireless/ti,cc33xx.yaml      | 59 +++++++++++++++++++
 1 file changed, 59 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/wireless/ti,cc33xx.yaml

diff --git a/Documentation/devicetree/bindings/net/wireless/ti,cc33xx.yaml b/Documentation/devicetree/bindings/net/wireless/ti,cc33xx.yaml
new file mode 100644
index 000000000000..fd6e1ee8426e
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/wireless/ti,cc33xx.yaml
@@ -0,0 +1,59 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/wireless/ti,cc33xx.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Texas Instruments CC33xx Wireless LAN Controller
+
+maintainers:
+  - Michael Nemanov
+
+description:
+  The CC33xx is a family of IEEE 802.11ax chips from Texas Instruments.
+  These chips must be connected via SDIO and support in-band / out-of-band IRQ.
+
+properties:
+  $nodename:
+    pattern: "^wifi@2"
+
+  compatible:
+    oneOf:
+      - const: ti,cc3300
+      - items:
+          - enum:
+              - ti,cc3301
+              - ti,cc3350
+              - ti,cc3351
+          - const: ti,cc3300
+
+  reg:
+    const: 2
+
+  interrupts:
+    description:
+      The out-of-band interrupt line.
+      Can be IRQ_TYPE_EDGE_RISING or IRQ_TYPE_LEVEL_HIGH.
+      If this property is omitted, the SDIO in-band IRQ will be used.
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+
+    mmc {
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        wifi@2 {
+            compatible = "ti,cc3300";
+            reg = <2>;
+            interrupts = <19 IRQ_TYPE_EDGE_RISING>;
+        };
+    };

From patchwork Thu Nov 7 12:51:54 2024
X-Patchwork-Submitter: "Nemanov, Michael"
X-Patchwork-Id: 13866385
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Nemanov
To: Kalle Valo, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley
CC: Sabeeh Khan, Michael Nemanov
Subject: [PATCH v5 02/17] wifi: cc33xx: Add cc33xx.h, cc33xx_i.h
Date: Thu, 7 Nov 2024 14:51:54 +0200
Message-ID: <20241107125209.1736277-3-michael.nemanov@ti.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com>
References: <20241107125209.1736277-1-michael.nemanov@ti.com>
X-Mailing-List: netdev@vger.kernel.org

These are header files with definitions common to the entire driver.

Signed-off-by: Michael Nemanov
---
 drivers/net/wireless/ti/cc33xx/cc33xx.h   | 483 ++++++++++++++++++++++
 drivers/net/wireless/ti/cc33xx/cc33xx_i.h | 459 ++++++++++++++++++++
 2 files changed, 942 insertions(+)
 create mode 100644 drivers/net/wireless/ti/cc33xx/cc33xx.h
 create mode 100644 drivers/net/wireless/ti/cc33xx/cc33xx_i.h

diff --git a/drivers/net/wireless/ti/cc33xx/cc33xx.h b/drivers/net/wireless/ti/cc33xx/cc33xx.h
new file mode 100644
index 000000000000..e756dc6a7d41
--- /dev/null
+++ b/drivers/net/wireless/ti/cc33xx/cc33xx.h
@@ -0,0 +1,483 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#ifndef __CC33XX_H__
+#define __CC33XX_H__
+
+#include "cc33xx_i.h"
+#include "rx.h"
+
+/* The maximum number of Tx descriptors in all chip families */
+#define CC33XX_MAX_TX_DESCRIPTORS 32
+
+#define CC33XX_CMD_MAX_SIZE (896)
+#define CC33XX_INI_PARAM_COMMAND_SIZE (16UL)
+#define CC33XX_INI_CMD_MAX_SIZE (CC33X_CONF_SIZE + CC33XX_INI_PARAM_COMMAND_SIZE + sizeof(int))
+
+#define CC33XX_CMD_BUFFER_SIZE ((CC33XX_INI_CMD_MAX_SIZE > CC33XX_CMD_MAX_SIZE)\
+				? CC33XX_INI_CMD_MAX_SIZE : CC33XX_CMD_MAX_SIZE)
+
+#define CC33XX_NUM_MAC_ADDRESSES 3
+
+#define CC33XX_AGGR_BUFFER_SIZE (8 * PAGE_SIZE)
+
+#define CC33XX_NUM_TX_DESCRIPTORS 32
+#define CC33XX_NUM_RX_DESCRIPTORS 32
+
+#define CC33XX_RX_BA_MAX_SESSIONS 13
+
+#define CC33XX_MAX_AP_STATIONS 16
+
+struct cc33xx_tx_hw_descr;
+struct cc33xx_rx_descriptor;
+struct partial_rx_frame;
+struct core_fw_status;
+struct core_status;
+
+enum wl_rx_buf_align;
+
+struct cc33xx_stats {
+	void *fw_stats;
+	unsigned long fw_stats_update;
+	unsigned int retry_count;
+	unsigned int excessive_retries;
+};
+
+struct cc33xx_ant_diversity {
+	u8 diversity_enable;
+	s8 rssi_threshold;
+	u8 default_antenna;
+	u8 padding;
+};
+
+struct cc33xx {
+	bool initialized;
+	struct ieee80211_hw *hw;
+	bool mac80211_registered;
+
+	struct device *dev;
+	struct platform_device *pdev;
+
+	const struct cc33xx_if_operations *if_ops;
+
+	int wakeirq;
+
+	/* Protects all mac80211 operations and parts of Rx, Tx and IRQ
+	 * handling. All functions postfixed with "_locked"
+	 * (i.e. cc33xx_irq_locked) assume this lock is held by the caller.
+	 */
+	struct mutex mutex;
+
+	/* Used independently from the above mutex to protect short critical
+	 * sections. Though many of these sections may theoretically execute
+	 * in parallel, a single lock is used for simplicity.
+	 */
+	spinlock_t cc_lock;
+
+	enum cc33xx_state state;
+	bool plt;
+	enum plt_mode plt_mode;
+	u8 plt_role_id;
+	u8 fem_manuf;
+	u8 last_vif_count;
+
+	struct core_status *core_status;
+	u8 last_fw_rls_idx;
+	u8 command_result[CC33XX_CMD_MAX_SIZE];
+	u16 result_length;
+	struct partial_rx_frame partial_rx;
+
+	unsigned long flags;
+
+	void *nvs_mac_addr;
+	size_t nvs_mac_addr_len;
+	struct cc33xx_fw_download *fw_download;
+
+	struct mac_address addresses[CC33XX_NUM_MAC_ADDRESSES];
+
+	unsigned long links_map[BITS_TO_LONGS(CC33XX_MAX_LINKS)];
+	unsigned long roles_map[BITS_TO_LONGS(CC33XX_MAX_ROLES)];
+	unsigned long roc_map[BITS_TO_LONGS(CC33XX_MAX_ROLES)];
+	unsigned long rate_policies_map[BITS_TO_LONGS(CC33XX_MAX_RATE_POLICIES)];
+
+	u8 session_ids[CC33XX_MAX_LINKS];
+
+	struct list_head wlvif_list;
+
+	u8 sta_count;
+	u8 ap_count;
+
+	struct cc33xx_acx_mem_map *target_mem_map;
+
+	/* Accounting for allocated / available TX blocks on HW */
+	u32 tx_blocks_available;
+	u32 tx_allocated_blocks;
+
+	/* Accounting for allocated / available Tx packets in HW */
+	u32 tx_allocated_pkts[NUM_TX_QUEUES];
+
+	/* Time-offset between host and chipset clocks */
+
+	/* Frames scheduled for transmission, not handled yet */
+	int tx_queue_count[NUM_TX_QUEUES];
+	unsigned long queue_stop_reasons[NUM_TX_QUEUES * CC33XX_NUM_MAC_ADDRESSES];
+
+	/* Frames received, not handled yet by mac80211 */
+	struct sk_buff_head deferred_rx_queue;
+
+	/* Frames sent, not returned yet to mac80211 */
+	struct sk_buff_head deferred_tx_queue;
+
+	struct work_struct tx_work;
+	struct workqueue_struct *freezable_wq;
+
+	/* Freezable wq for netstack_work */
+	struct workqueue_struct *freezable_netstack_wq;
+
+	/* Pending TX frames */
+	unsigned long tx_frames_map[BITS_TO_LONGS(CC33XX_MAX_TX_DESCRIPTORS)];
+	struct sk_buff *tx_frames[CC33XX_MAX_TX_DESCRIPTORS];
+	int tx_frames_cnt;
+
+	/* FW Rx counter */
+	u32 rx_counter;
+
+	/* Intermediate buffer, used for packet aggregation */
+	u8 *aggr_buf;
+	u32 aggr_buf_size;
+	size_t max_transaction_len;
+
+	/* Reusable dummy packet template */
+	struct sk_buff *dummy_packet;
+
+	/* Network stack work */
+	struct work_struct netstack_work;
+
+	/* FW log buffer */
+	u8 *fwlog;
+
+	/* Number of valid bytes in the FW log buffer */
+	ssize_t fwlog_size;
+
+	/* Hardware recovery work */
+	struct work_struct recovery_work;
+
+	struct work_struct irq_deferred_work;
+
+	/* Reg domain last configuration */
+	DECLARE_BITMAP(reg_ch_conf_last, 64);
+	/* Reg domain pending configuration */
+	DECLARE_BITMAP(reg_ch_conf_pending, 64);
+
+	/* Lock-less list for deferred event handling */
+	struct llist_head event_list;
+	/* The mbox event mask */
+	u32 event_mask;
+	/* Events to unmask only when the AP interface is up */
+	u32 ap_event_mask;
+
+	/* Are we currently scanning */
+	struct cc33xx_vif *scan_wlvif;
+	struct cc33xx_scan scan;
+	struct delayed_work scan_complete_work;
+
+	struct ieee80211_vif *roc_vif;
+	struct delayed_work roc_complete_work;
+
+	struct cc33xx_vif *sched_vif;
+
+	u8 mac80211_scan_stopped;
+
+	/* The current band */
+	enum nl80211_band band;
+
+	/* in dBm */
+	int power_level;
+
+	struct cc33xx_stats stats;
+
+	__le32 *buffer_32;
+
+	/* Current chipset configuration */
+	struct cc33xx_conf_file conf;
+
+	bool enable_11a;
+
+	/* Bands supported by this instance of cc33xx */
+	struct ieee80211_supported_band bands[CC33XX_NUM_BANDS];
+
+	/* A wowlan trigger was configured during suspend.
+	 * (Currently, only the "ANY" and "PATTERN" triggers are supported.)
+	 */
+	bool keep_device_power;
+
+	/* AP-mode - links indexed by HLID. The global and broadcast links
+	 * are always active.
+	 */
+	struct cc33xx_link links[CC33XX_MAX_LINKS];
+
+	/* Number of currently active links */
+	int active_link_count;
+
+	/* AP-mode - a bitmap of links currently in PS mode according to FW */
+	unsigned long ap_fw_ps_map;
+
+	/* AP-mode - a bitmap of links currently in PS mode in mac80211 */
+	unsigned long ap_ps_map;
+
+	/* Quirks of specific hardware revisions */
+	unsigned int quirks;
+
+	/* Number of currently active RX BA sessions */
+	int ba_rx_session_count;
+
+	/* AP-mode - number of currently connected stations */
+	int active_sta_count;
+
+	/* The last wlvif we transmitted from */
+	struct cc33xx_vif *last_wlvif;
+
+	/* Work to fire when Tx is stuck */
+	struct delayed_work tx_watchdog_work;
+
+	/* HW HT (11n) capabilities */
+	struct ieee80211_sta_ht_cap ht_cap[CC33XX_NUM_BANDS];
+
+	/* The current DFS region */
+	enum nl80211_dfs_regions dfs_region;
+	bool radar_debug_mode;
+
+	/* RX data filter rule state - enabled/disabled.
+	 * Used in CONFIG_PM and W8 code.
+	 */
+	unsigned long rx_filter_enabled[BITS_TO_LONGS(CC33XX_MAX_RX_FILTERS)];
+
+	/* Mutex for protecting the tx_flush function */
+	struct mutex flush_mutex;
+
+	/* Sleep auth value currently configured to FW */
+	int sleep_auth;
+
+	/* ble_enable value - 1=enabled, 0=disabled */
+	int ble_enable;
+
+	/* Parameters for joining a TWT agreement */
+	int min_wake_duration_usec;
+	int min_wake_interval_mantissa;
+	int min_wake_interval_exponent;
+	int max_wake_interval_mantissa;
+	int max_wake_interval_exponent;
+
+	/* The number of allocated MAC addresses in this chip */
+	int num_mac_addr;
+
+	/* STA role index - if 0 - wlan0, primary station interface;
+	 * if 1 - wlan2, secondary station interface
+	 */
+	u8 sta_role_idx;
+
+	u16 max_cmd_size;
+
+	struct completion nvs_loading_complete;
+	struct completion command_complete;
+
+	/* Dynamic fw traces */
+	u32 dynamic_fw_traces;
+
+	/* Buffer for sending commands to FW */
+	u8 cmd_buf[CC33XX_CMD_BUFFER_SIZE];
+
+	/* Number of keys requiring extra spare mem-blocks */
+	int extra_spare_key_count;
+
+	u8 efuse_mac_address[ETH_ALEN];
+
+	u32 fuse_rom_structure_version;
+	u32 device_part_number;
+	u32 pg_version;
+	u8 disable_5g;
+	u8 disable_6g;
+
+	struct cc33xx_acx_fw_versions *fw_ver;
+
+	u8 antenna_selection;
+
+	/* Burst mode cfg */
+	u8 burst_disable;
+
+	struct cc33xx_ant_diversity diversity;
+};
+
+void cc33xx_update_inconn_sta(struct cc33xx *cc, struct cc33xx_vif *wlvif,
+			      struct cc33xx_station *wl_sta, bool in_conn);
+
+void cc33xx_irq(void *cookie);
+
+/* Quirks */
+
+/* The first start_role(sta) sometimes doesn't work on wl12xx */
+#define CC33XX_QUIRK_START_STA_FAILS BIT(1)
+
+/* wl127x and SPI don't support SDIO block size alignment */
+#define CC33XX_QUIRK_TX_BLOCKSIZE_ALIGN BIT(2)
+
+/* Means aggregated Rx packets are aligned to an SDIO block */
+#define CC33XX_QUIRK_RX_BLOCKSIZE_ALIGN BIT(3)
+
+/* Pad only the last frame in the aggregate buffer */
+#define CC33XX_QUIRK_TX_PAD_LAST_FRAME BIT(7)
+
+/* Extra header space is required for TKIP */
+#define CC33XX_QUIRK_TKIP_HEADER_SPACE BIT(8)
+
+/* Some firmwares do not support sched scans while connected */
+#define CC33XX_QUIRK_NO_SCHED_SCAN_WHILE_CONN BIT(9)
+
+/* Separate probe response templates for one-shot and sched scans */
+#define CC33XX_QUIRK_DUAL_PROBE_TMPL BIT(10)
+
+/* Firmware requires reg domain configuration for active calibration */
+#define CC33XX_QUIRK_REGDOMAIN_CONF BIT(11)
+
+/* The FW only supports a zero session id for AP */
+#define CC33XX_QUIRK_AP_ZERO_SESSION_ID BIT(12)
+
+/* TODO: move all these common registers and values elsewhere */
+#define HW_ACCESS_ELP_CTRL_REG 0x1FFFC
+
+enum CC33xx_FRAME_FORMAT {
+	CC33xx_B_SHORT = 0,
+	CC33xx_B_LONG,
+	CC33xx_LEGACY_OFDM,
+	CC33xx_HT_MF,
+	CC33xx_HT_GF,
+	CC33xx_HE_SU,
+	CC33xx_HE_MU,
+	CC33xx_HE_SU_ER,
+	CC33xx_HE_TB,
+	CC33xx_HE_TB_NDP_FB,
+	CC33xx_VHT
+};
+
+/* CC33xx HW Common Definitions */
+
+#define HOST_SYNC_PATTERN 0x5C5C5C5C
+#define DEVICE_SYNC_PATTERN 0xABCDDCBA
+#define NAB_DATA_ADDR 0x0000BFF0
+#define NAB_CONTROL_ADDR 0x0000BFF8
+#define NAB_STATUS_ADDR 0x0000BFFC
+
+#define NAB_SEND_CMD 0x940d
+#define NAB_SEND_FLAGS 0x08
+#define CC33xx_INTERNAL_DESC_SIZE 200
+#define NAB_EXTRA_BYTES 4
+
+#define TX_RESULT_QUEUE_SIZE 108
+
+struct control_info_descriptor {
+	__le16 type_and_length;
+};
+
+enum control_message_type {
+	CTRL_MSG_NONE = 0,
+	CTRL_MSG_EVENT = 1,
+	CTRL_MSG_COMMND_COMPLETE = 2
+};
+
+struct core_fw_status {
+	u8 tx_result_queue_index;
+	u8 reserved1[3];
+	u8 tx_result_queue[TX_RESULT_QUEUE_SIZE];
+
+	/* A bitmap (where each bit represents a single HLID)
+	 * to indicate PS/Active mode of the link
+	 */
+	__le32 link_ps_bitmap;
+
+	/* A bitmap (where each bit represents a single HLID)
+	 * to indicate if the station is in Fast mode
+	 */
+	__le32 link_fast_bitmap;
+
+	/* A bitmap (where each bit represents a single HLID)
+	 * to indicate if a link is suspended/about to be suspended
+	 */
+	__le32 link_suspend_bitmap;
+
+	/* Host TX Flow Control descriptor per-AC threshold */
+	u8 tx_flow_control_ac_threshold;
+
+	/* Host TX Flow Control descriptor PS link threshold */
+	u8 tx_ps_threshold;
+
+	/* Host TX Flow Control descriptor Suspended link threshold */
+	u8 tx_suspend_threshold;
+
+	/* Host TX Flow Control descriptor Slow link threshold */
+	u8 tx_slow_link_prio_threshold;
+
+	/* Host TX Flow Control descriptor Fast link threshold */
+	u8 tx_fast_link_prio_threshold;
+
+	/* Host TX Flow Control descriptor Stop Slow link threshold */
+	u8 tx_slow_stop_threshold;
+
+	/* Host TX Flow Control descriptor Stop Fast link threshold */
+	u8 tx_fast_stop_threshold;
+
+	u8 reserved2;
+	/* Additional information can be added here */
+} __packed;
+
+struct core_status {
+	__le32 block_pad[28];
+	__le32 host_interrupt_status;
+	__le32 rx_status;
+	struct core_fw_status fw_info;
+	__le32 tsf;
+} __packed;
+
+struct NAB_header {
+	__le32 sync_pattern;
+	__le16 opcode;
+	__le16 len;
+};
+
+/* rx_status lower bytes hold the rx byte count */
+#define RX_BYTE_COUNT_MASK 0xFFFF
+
+#define HINT_NEW_TX_RESULT 0x1
+#define HINT_COMMAND_COMPLETE 0x2
+#define HINT_ROM_LOADER_INIT_COMPLETE 0x8
+#define HINT_SECOND_LOADER_INIT_COMPLETE 0x10
+#define HINT_FW_WAKEUP_COMPLETE 0x20
+#define HINT_FW_INIT_COMPLETE 0x40
+#define HINT_GENERAL_ERROR 0x80000000
+
+#define BOOT_TIME_INTERRUPTS (\
+	HINT_ROM_LOADER_INIT_COMPLETE | \
+	HINT_SECOND_LOADER_INIT_COMPLETE | \
+	HINT_FW_WAKEUP_COMPLETE | \
+	HINT_FW_INIT_COMPLETE)
+
+struct NAB_tx_header {
+	__le32 sync;
+	__le16 opcode;
+	__le16 len;
+	__le16 desc_length;
+	u8 sd;
+	u8 flags;
+} __packed;
+
+struct NAB_rx_header {
+	__le32 cnys;
+	__le16 opcode;
+	__le16 len;
+	__le32 rx_desc;
+	__le32 reserved;
+} __packed;
+
+#endif /* __CC33XX_H__ */
diff --git a/drivers/net/wireless/ti/cc33xx/cc33xx_i.h b/drivers/net/wireless/ti/cc33xx/cc33xx_i.h
new file mode 100644
index 000000000000..99a296a5e94e
--- /dev/null
+++ b/drivers/net/wireless/ti/cc33xx/cc33xx_i.h
@@ -0,0 +1,459 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#ifndef __CC33XX_I_H__
+#define __CC33XX_I_H__
+
+#include 
+#include 
+
+#include "conf.h"
+
+struct cc33xx_family_data {
+	const char *name;
+	const char *nvs_name; /* nvs file */
+	const char *cfg_name; /* cfg file */
+};
+
+#define CC33XX_TX_SECURITY_LO16(s) ((u16)((s) & 0xffff))
+#define CC33XX_TX_SECURITY_HI32(s) ((u32)(((s) >> 16) & 0xffffffff))
+#define CC33XX_TX_SQN_POST_RECOVERY_PADDING 0xff
+/* Use smaller padding for GEM, as some APs have issues when it's too big */
+#define CC33XX_TX_SQN_POST_RECOVERY_PADDING_GEM 0x20
+
+#define CC33XX_CIPHER_SUITE_GEM 0x00147201
+
+#define CC33XX_BUSY_WORD_LEN (sizeof(u32))
+
+#define CC33XX_DEFAULT_BEACON_INT 100
+
+#define CC33XX_MAX_ROLES 4
+#define CC33XX_INVALID_ROLE_ID 0xff
+#define CC33XX_INVALID_LINK_ID 0xff
+
+#define CC33XX_MAX_LINKS 21
+
+/* The driver supports the 2.4 GHz and 5 GHz bands */
+#define CC33XX_NUM_BANDS 2
+
+#define CC33XX_MAX_RATE_POLICIES 16
+
+/* Defined by FW as 0. Will not be freed or allocated. */
+#define CC33XX_SYSTEM_HLID 0
+
+/* When in AP-mode, we allow (at least) this number of packets
+ * to be transmitted to FW for a STA in PS-mode. Only when packets are
+ * present in the FW buffers will it wake the sleeping STA. We want to put
+ * enough packets for the driver to transmit all of its buffered data before
+ * the STA goes to sleep again. But we don't want to take too much memory,
+ * as it might hurt the throughput of active STAs.
+ */
+#define CC33XX_PS_STA_MAX_PACKETS 2
+
+#define CC33XX_AP_BSS_INDEX 0
+
+enum cc33xx_state {
+	CC33XX_STATE_OFF,
+	CC33XX_STATE_RESTARTING,
+	CC33XX_STATE_ON,
+};
+
+struct cc33xx;
+
+#define NUM_TX_QUEUES 4
+
+#define CC33XX_MAX_CHANNELS 64
+struct cc33xx_scan {
+	struct cfg80211_scan_request *req;
+	unsigned long scanned_ch[BITS_TO_LONGS(CC33XX_MAX_CHANNELS)];
+	bool failed;
+	u8 state;
+	u8 ssid[IEEE80211_MAX_SSID_LEN + 1];
+	size_t ssid_len;
+};
+
+struct cc33xx_if_operations {
+	void (*interface_claim)(struct device *child);
+	void (*interface_release)(struct device *child);
+	int __must_check (*read)(struct device *child, int addr, void *buf,
+				 size_t len, bool fixed);
+	int __must_check (*write)(struct device *child, int addr, void *buf,
+				  size_t len, bool fixed);
+	void (*reset)(struct device *child);
+	void (*init)(struct device *child);
+	int (*power)(struct device *child, bool enable);
+	void (*set_block_size)(struct device *child, unsigned int blksz);
+	size_t (*get_max_transaction_len)(struct device *child);
+	void (*set_irq_handler)(struct device *child, void *irq_handler);
+	void (*enable_irq)(struct device *child);
+	void (*disable_irq)(struct device *child);
+};
+
+struct cc33xx_platdev_data {
+	const struct cc33xx_if_operations *if_ops;
+	const struct cc33xx_family_data *family;
+	void (*irq_handler)(struct platform_device *pdev);
+	int gpio_irq_num;
+
+	bool ref_clock_xtal; /* specify whether the clock is XTAL or not */
+	bool pwr_in_suspend;
+};
+
+#define MAX_NUM_KEYS 14
+#define MAX_KEY_SIZE 32
+
+struct cc33xx_ap_key {
+	u8 id;
+	u8 key_type;
+	u8 key_size;
+	u8 key[MAX_KEY_SIZE];
+	u8 hlid;
+	u32 tx_seq_32;
+	u16 tx_seq_16;
+};
+
+enum cc33xx_flags {
+	CC33XX_FLAG_GPIO_POWER,
+	CC33XX_FLAG_TX_PENDING,
+	CC33XX_FLAG_IN_ELP,
+	CC33XX_FLAG_FW_TX_BUSY,
+	CC33XX_FLAG_DUMMY_PACKET_PENDING,
+	CC33XX_FLAG_SUSPENDED,
+	CC33XX_FLAG_PENDING_WORK,
+	CC33XX_FLAG_SOFT_GEMINI,
+	CC33XX_FLAG_DRIVER_REMOVED,
+	CC33XX_FLAG_RECOVERY_IN_PROGRESS,
+	CC33XX_FLAG_VIF_CHANGE_IN_PROGRESS,
+	CC33XX_FLAG_IO_FAILED,
+	CC33XX_FLAG_REINIT_TX_WDOG,
+};
+
+enum cc33xx_vif_flags {
+	WLVIF_FLAG_INITIALIZED,
+	WLVIF_FLAG_STA_ASSOCIATED,
+	WLVIF_FLAG_STA_AUTHORIZED,
+	WLVIF_FLAG_IBSS_JOINED,
+	WLVIF_FLAG_AP_STARTED,
+	WLVIF_FLAG_IN_PS,
+	WLVIF_FLAG_STA_STATE_SENT,
+	WLVIF_FLAG_PSPOLL_FAILURE,
+	WLVIF_FLAG_CS_PROGRESS,
+	WLVIF_FLAG_AP_PROBE_RESP_SET,
+	WLVIF_FLAG_IN_USE,
+	WLVIF_FLAG_ACTIVE,
+	WLVIF_FLAG_BEACON_DISABLED,
+};
+
+struct cc33xx_vif;
+
+struct cc33xx_link {
+	/* AP-mode - TX queue per AC in link */
+	struct sk_buff_head tx_queue[NUM_TX_QUEUES];
+
+	/* Accounting for allocated / freed packets in FW */
+	u8 allocated_pkts;
+	u8 prev_freed_pkts;
+
+	u8 addr[ETH_ALEN];
+
+	/* Bitmap of TIDs where RX BA sessions are active for this link */
+	u8 ba_bitmap;
+
+	/* The last fw rate index we used for this link */
+	u8 fw_rate_idx;
+
+	/* The last fw rate [Mbps] we used for this link */
+	u8 fw_rate_mbps;
+
+	/* The wlvif this link belongs to. Might be NULL for global links */
+	struct cc33xx_vif *wlvif;
+
+	/* Total freed FW packets on the link - used for tracking the
+	 * AES/TKIP PN across recoveries. Re-initialized each time
+	 * from the cc33xx_station structure.
+	 */
+	u64 total_freed_pkts;
+};
+
+#define CC33XX_MAX_RX_FILTERS 5
+#define CC33XX_RX_FILTER_MAX_FIELDS 8
+
+#define CC33XX_RX_FILTER_ETH_HEADER_SIZE 14
+#define CC33XX_RX_FILTER_MAX_FIELDS_SIZE 95
+#define RX_FILTER_FIELD_OVERHEAD \
+	(sizeof(struct cc33xx_rx_filter_field) - sizeof(u8 *))
+#define CC33XX_RX_FILTER_MAX_PATTERN_SIZE \
+	(CC33XX_RX_FILTER_MAX_FIELDS_SIZE - RX_FILTER_FIELD_OVERHEAD)
+
+#define CC33XX_RX_FILTER_FLAG_IP_HEADER 0
+#define CC33XX_RX_FILTER_FLAG_ETHERNET_HEADER BIT(1)
+
+struct ieee80211_header {
+	__le16 frame_ctl;
+	__le16 duration_id;
+	u8 da[ETH_ALEN];
+	u8 sa[ETH_ALEN];
+	u8 bssid[ETH_ALEN];
+	__le16 seq_ctl;
+	u8 payload[];
+} __packed;
+
+enum rx_filter_action {
+	FILTER_DROP = 0,
+	FILTER_SIGNAL = 1,
+	FILTER_FW_HANDLE = 2
+};
+
+enum plt_mode {
+	PLT_OFF = 0,
+	PLT_ON = 1,
+	PLT_FEM_DETECT = 2,
+	PLT_CHIP_AWAKE = 3
+};
+
+struct cc33xx_rx_filter_field {
+	__le16 offset;
+	u8 len;
+	u8 flags;
+	u8 *pattern;
+} __packed;
+
+struct cc33xx_rx_filter {
+	u8 action;
+	int num_fields;
+	struct cc33xx_rx_filter_field fields[CC33XX_RX_FILTER_MAX_FIELDS];
+};
+
+struct cc33xx_station {
+	u8 hlid;
+	bool in_connection;
+
+	/* Total freed FW packets on the link to the STA - used for tracking
+	 * the AES/TKIP PN across recoveries. Re-initialized each time from
+	 * the cc33xx_station structure.
+	 * Used in both AP and STA mode.
+	 */
+	u64 total_freed_pkts;
+};
+
+struct cc33xx_vif {
+	struct cc33xx *cc;
+	struct list_head list;
+	unsigned long flags;
+	u8 bss_type;
+	u8 p2p; /* we are using p2p role */
+	u8 role_id;
+
+	/* sta/ibss specific */
+	u8 dev_role_id;
+	u8 dev_hlid;
+
+	union {
+		struct {
+			u8 hlid;
+
+			u8 basic_rate_idx;
+			u8 ap_rate_idx;
+			u8 p2p_rate_idx;
+
+			bool qos;
+			/* Channel type we started the STA role with */
+			enum nl80211_channel_type role_chan_type;
+		} sta;
+		struct {
+			u8 global_hlid;
+			u8 bcast_hlid;
+
+			/* HLIDs bitmap of associated stations */
+			unsigned long sta_hlid_map[BITS_TO_LONGS(CC33XX_MAX_LINKS)];
+
+			/* Recorded keys - set here before AP startup */
+			struct cc33xx_ap_key *recorded_keys[MAX_NUM_KEYS];
+
+			u8 mgmt_rate_idx;
+			u8 bcast_rate_idx;
+			u8 ucast_rate_idx[CONF_TX_MAX_AC_COUNT];
+		} ap;
+	};
+
+	/* The hlid of the last transmitted skb */
+	int last_tx_hlid;
+
+	/* Counters of packets per AC, across all links in the vif */
+	int tx_queue_count[NUM_TX_QUEUES];
+
+	unsigned long links_map[BITS_TO_LONGS(CC33XX_MAX_LINKS)];
+
+	u8 ssid[IEEE80211_MAX_SSID_LEN + 1];
+	u8 ssid_len;
+
+	/* The current band */
+	enum nl80211_band band;
+	int channel;
+	enum nl80211_channel_type channel_type;
+
+	u32 bitrate_masks[CC33XX_NUM_BANDS];
+	u32 basic_rate_set;
+
+	/* Currently configured rate set:
+	 * bits 0-15  - 802.11abg rates
+	 * bits 16-23 - 802.11n MCS index mask
+	 * We support only 1 stream, thus only 8 bits for the MCS rates (0-7).
+	 */
+	u32 basic_rate;
+	u32 rate_set;
+
+	/* Probe-req template for the current AP */
+	struct sk_buff *probereq;
+
+	/* Beaconing interval (needed for ad-hoc) */
+	u32 beacon_int;
+
+	/* Default key (for WEP) */
+	u32 default_key;
+
+	/* Our association ID */
+	u16 aid;
+
+	/* Retry counter for PSM entries */
+	u8 psm_entry_retry;
+
+	/* in dBm */
+	int power_level;
+
+	int rssi_thold;
+	int last_rssi_event;
+
+	/* Save the current encryption type for auto-arp config */
+	u8 encryption_type;
+	__be32 ip_addr;
+
+	/* RX BA constraint value */
+	bool ba_support;
+	bool ba_allowed;
+
+	bool wmm_enabled;
+
+	bool radar_enabled;
+
+	struct delayed_work channel_switch_work;
+	struct delayed_work connection_loss_work;
+
+	/* Number of in-connection stations */
+	int inconn_count;
+
+	/* This vif's queues are mapped to mac80211 HW queues as:
+	 * VO - hw_queue_base
+	 * VI - hw_queue_base + 1
+	 * BE - hw_queue_base + 2
+	 * BK - hw_queue_base + 3
+	 */
+	int hw_queue_base;
+
+	/* Do we have a pending auth reply? (and ROC) */
+	bool ap_pending_auth_reply;
+
+	/* Time when we sent the pending auth reply */
+	unsigned long pending_auth_reply_time;
+
+	/* Work for canceling ROC after a pending auth reply */
+	struct delayed_work pending_auth_complete_work;
+
+	struct delayed_work roc_timeout_work;
+
+	/* Update rate control */
+	enum ieee80211_sta_rx_bandwidth rc_update_bw;
+	struct ieee80211_sta_ht_cap rc_ht_cap;
+	struct work_struct rc_update_work;
+
+	/* Total freed FW packets on the link.
+	 * For STA this holds the PN of the link to the AP.
+	 * For AP this holds the PN of the broadcast link.
+ */ + u64 total_freed_pkts; + + /* for MBSSID: this BSS is a nontransmitted BSS profile + * Relevant for STA role + */ + bool nontransmitted; + + /* for MBSSID: update transmitter BSSID */ + u8 transmitter_bssid[ETH_ALEN]; + + /* for MBSSID: BSSID index */ + u8 bssid_index; + + /* for MBSSID: BSSID indicator */ + u8 bssid_indicator; + + /* for STA: if connection is established and has HE support */ + u8 sta_has_he; + + /* This struct must be last! + * Data that has to be saved across reconfigs (e.g. recovery) + * should be declared in this struct. + */ + u8 persistent[]; +}; + +static inline struct cc33xx_vif *cc33xx_vif_to_data(struct ieee80211_vif *vif) +{ + WARN_ON(!vif); + return (struct cc33xx_vif *)vif->drv_priv; +} + +static inline +struct ieee80211_vif *cc33xx_wlvif_to_vif(struct cc33xx_vif *wlvif) +{ + return container_of((void *)wlvif, struct ieee80211_vif, drv_priv); +} + +static inline bool cc33xx_is_p2p_mgmt(struct cc33xx_vif *wlvif) +{ + return cc33xx_wlvif_to_vif(wlvif)->type == NL80211_IFTYPE_P2P_DEVICE; +} + +#define cc33xx_for_each_wlvif(cc, wlvif) \ + list_for_each_entry(wlvif, &(cc)->wlvif_list, list) + +#define cc33xx_for_each_wlvif_continue(cc, wlvif) \ + list_for_each_entry_continue(wlvif, &(cc)->wlvif_list, list) + +#define cc33xx_for_each_wlvif_bss_type(cc, wlvif, _bss_type) \ + cc33xx_for_each_wlvif((cc), (wlvif)) \ + if ((wlvif)->bss_type == (_bss_type)) + +#define cc33xx_for_each_wlvif_sta(cc, wlvif) \ + cc33xx_for_each_wlvif_bss_type(cc, wlvif, BSS_TYPE_STA_BSS) + +#define cc33xx_for_each_wlvif_ap(cc, wlvif) \ + cc33xx_for_each_wlvif_bss_type(cc, wlvif, BSS_TYPE_AP_BSS) + +int cc33xx_plt_start(struct cc33xx *cc, const enum plt_mode plt_mode); +int cc33xx_plt_stop(struct cc33xx *cc); +void cc33xx_queue_recovery_work(struct cc33xx *cc); +void cc33xx_flush_deferred_work(struct cc33xx *cc); + +#define SESSION_COUNTER_INVALID 7 /* used with dummy_packet */ + +#define CC33XX_MAX_TXPWR 21 /* maximum power limit is 21dBm */ +#define 
CC33XX_MIN_TXPWR -10 /* minimum power limit is -10dBm */ + +#define CC33XX_TX_QUEUE_LOW_WATERMARK 32 +#define CC33XX_TX_QUEUE_HIGH_WATERMARK 256 + +#define CC33XX_RX_QUEUE_MAX_LEN 256 + +/* cc33xx needs a 200ms sleep after power on, and a 20ms sleep before power + * on in case it has been shut down shortly before + */ +#define CC33XX_PRE_POWER_ON_SLEEP 20 /* in milliseconds */ +#define CC33XX_POWER_ON_SLEEP 200 /* in milliseconds */ + +/* Macros to handle cc33xx.sta_rate_set */ +#define HW_HT_RATES_OFFSET 16 +#define HW_MIMO_RATES_OFFSET 24 + +#endif /* __CC33XX_I_H__ */ From patchwork Thu Nov 7 12:51:55 2024 X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866418 From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 03/17] wifi: cc33xx: Add debug.h Date: Thu, 7 Nov 2024 14:51:55 +0200 Message-ID: <20241107125209.1736277-4-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> These are the trace macros used throughout the driver. Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/debug.h | 92 ++++++++++++++++++++++++++ 1 file changed, 92 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/debug.h diff --git a/drivers/net/wireless/ti/cc33xx/debug.h b/drivers/net/wireless/ti/cc33xx/debug.h new file mode 100644 index 000000000000..53b2168c04c7 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/debug.h @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __DEBUG_H__ +#define __DEBUG_H__ + +#define DRIVER_NAME "cc33xx" +#define DRIVER_PREFIX DRIVER_NAME ": " + +enum { + DEBUG_NONE = 0, + DEBUG_IRQ = BIT(0), + DEBUG_SPI = BIT(1), + DEBUG_BOOT = BIT(2), + DEBUG_CORE_STATUS = BIT(3), + DEBUG_TESTMODE = BIT(4), + DEBUG_EVENT = BIT(5), + DEBUG_TX = BIT(6), + DEBUG_RX = BIT(7), + DEBUG_SCAN = BIT(8), + DEBUG_CRYPT = BIT(9), + DEBUG_PSM = BIT(10), + DEBUG_MAC80211 = BIT(11), + DEBUG_CMD = BIT(12), + DEBUG_ACX = BIT(13), + DEBUG_SDIO = BIT(14), + DEBUG_FILTERS = BIT(15), + DEBUG_ADHOC = BIT(16), + DEBUG_AP = BIT(17), + DEBUG_PROBE = BIT(18), + DEBUG_IO = BIT(19), + DEBUG_MASTER = (DEBUG_ADHOC | DEBUG_AP), + DEBUG_CC33xx = BIT(20), + DEBUG_ALL = GENMASK(31, 0), 
DEBUG_NO_DATAPATH = (DEBUG_ALL & ~DEBUG_IRQ & ~DEBUG_TX & ~DEBUG_RX & ~DEBUG_CORE_STATUS), +}; + +extern u32 cc33xx_debug_level; + +#define DEBUG_DUMP_LIMIT 1024 + +#define cc33xx_error(fmt, arg...) \ + pr_err(DRIVER_PREFIX "ERROR " fmt "\n", ##arg) + +#define cc33xx_warning(fmt, arg...) \ + pr_warn(DRIVER_PREFIX "WARNING " fmt "\n", ##arg) + +#define cc33xx_notice(fmt, arg...) \ + pr_info(DRIVER_PREFIX fmt "\n", ##arg) + +#define cc33xx_info(fmt, arg...) \ + pr_info(DRIVER_PREFIX fmt "\n", ##arg) + +/* define the debug macro differently if dynamic debug is supported */ +#if defined(CONFIG_DYNAMIC_DEBUG) +#define cc33xx_debug(level, fmt, arg...) \ + do { \ + if (unlikely((level) & cc33xx_debug_level)) \ + dynamic_pr_debug(DRIVER_PREFIX fmt "\n", ##arg); \ + } while (0) +#else +#define cc33xx_debug(level, fmt, arg...) \ + do { \ + if (unlikely((level) & cc33xx_debug_level)) \ + pr_debug(pr_fmt(DRIVER_PREFIX fmt "\n"), \ + ##arg); \ + } while (0) +#endif /* CONFIG_DYNAMIC_DEBUG */ + +#define cc33xx_dump(level, prefix, buf, len) \ + do { \ + if ((level) & cc33xx_debug_level) \ + print_hex_dump_debug(DRIVER_PREFIX prefix, \ + DUMP_PREFIX_OFFSET, 16, 1, \ + buf, \ + min_t(size_t, len, DEBUG_DUMP_LIMIT), \ + 0); \ + } while (0) + +#define cc33xx_dump_ascii(level, prefix, buf, len) \ + do { \ + if ((level) & cc33xx_debug_level) \ + print_hex_dump_debug(DRIVER_PREFIX prefix, \ + DUMP_PREFIX_OFFSET, 16, 1, \ + buf, \ + min_t(size_t, len, DEBUG_DUMP_LIMIT), \ + true); \ + } while (0) + +#endif /* __DEBUG_H__ */ From patchwork Thu Nov 7 12:51:56 2024 X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866386 From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 04/17] wifi: cc33xx: Add sdio.c, io.c, io.h Date: Thu, 7 Nov 2024 14:51:56 +0200 Message-ID: <20241107125209.1736277-5-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> sdio.c implements the SDIO transport functions. These are bound into struct cc33xx_if_operations and accessed via io.h in order to abstract multiple transport interfaces such as SPI in the future. The CC33xx driver supports the SDIO in-band IRQ option, so the IRQ from the device is received here as well. Unlike wl1xxx products, there is no longer any mapping between the HW and the SDIO / SPI address space. 
There are only 3 valid addresses for control, data and status transactions each with a predefined structure. Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/io.c | 129 +++++++ drivers/net/wireless/ti/cc33xx/io.h | 26 ++ drivers/net/wireless/ti/cc33xx/sdio.c | 530 ++++++++++++++++++++++++++ 3 files changed, 685 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/io.c create mode 100644 drivers/net/wireless/ti/cc33xx/io.h create mode 100644 drivers/net/wireless/ti/cc33xx/sdio.c diff --git a/drivers/net/wireless/ti/cc33xx/io.c b/drivers/net/wireless/ti/cc33xx/io.c new file mode 100644 index 000000000000..59696004efe9 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/io.c @@ -0,0 +1,129 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include "cc33xx.h" +#include "debug.h" +#include "io.h" +#include "tx.h" + +bool cc33xx_set_block_size(struct cc33xx *cc) +{ + if (cc->if_ops->set_block_size) { + cc->if_ops->set_block_size(cc->dev, CC33XX_BUS_BLOCK_SIZE); + cc33xx_debug(DEBUG_CC33xx, + "Set BLKsize to %d", CC33XX_BUS_BLOCK_SIZE); + return true; + } + + cc33xx_debug(DEBUG_CC33xx, "Could not set BLKsize"); + return false; +} + +void cc33xx_disable_interrupts_nosync(struct cc33xx *cc) +{ + cc->if_ops->disable_irq(cc->dev); +} + +void cc33xx_irq(void *cookie); +void cc33xx_enable_interrupts(struct cc33xx *cc) +{ + cc->if_ops->enable_irq(cc->dev); + + cc33xx_irq(cc); +} + +void cc33xx_io_reset(struct cc33xx *cc) +{ + if (cc->if_ops->reset) + cc->if_ops->reset(cc->dev); +} + +void cc33xx_io_init(struct cc33xx *cc) +{ + if (cc->if_ops->init) + cc->if_ops->init(cc->dev); +} + +/* Raw target IO, address is not translated */ +static int __must_check cc33xx_raw_write(struct cc33xx *cc, int addr, + void *buf, size_t len, bool fixed) +{ + int ret; + + if (test_bit(CC33XX_FLAG_IO_FAILED, &cc->flags) || + WARN_ON((test_bit(CC33XX_FLAG_IN_ELP, &cc->flags) && + addr != 
HW_ACCESS_ELP_CTRL_REG))) + return -EIO; + + ret = cc->if_ops->write(cc->dev, addr, buf, len, fixed); + if (ret && cc->state != CC33XX_STATE_OFF) + set_bit(CC33XX_FLAG_IO_FAILED, &cc->flags); + + return ret; +} + +int __must_check cc33xx_raw_read(struct cc33xx *cc, int addr, + void *buf, size_t len, bool fixed) +{ + int ret; + + if (test_bit(CC33XX_FLAG_IO_FAILED, &cc->flags) || + WARN_ON((test_bit(CC33XX_FLAG_IN_ELP, &cc->flags) && + addr != HW_ACCESS_ELP_CTRL_REG))) + return -EIO; + + ret = cc->if_ops->read(cc->dev, addr, buf, len, fixed); + if (ret && cc->state != CC33XX_STATE_OFF) + set_bit(CC33XX_FLAG_IO_FAILED, &cc->flags); + + return ret; +} + +int __must_check cc33xx_write(struct cc33xx *cc, int addr, + void *buf, size_t len, bool fixed) +{ + return cc33xx_raw_write(cc, addr, buf, len, fixed); +} + +void claim_core_status_lock(struct cc33xx *cc) +{ + /* When accessing core-status data (read or write) the transport lock + * should be held. + */ + cc->if_ops->interface_claim(cc->dev); +} + +void release_core_status_lock(struct cc33xx *cc) +{ + /* After accessing core-status data (read or write) the transport lock + * should be released. 
+ */ + cc->if_ops->interface_release(cc->dev); +} + +void cc33xx_power_off(struct cc33xx *cc) +{ + int ret = 0; + + if (!test_bit(CC33XX_FLAG_GPIO_POWER, &cc->flags)) + return; + + if (cc->if_ops->power) + ret = cc->if_ops->power(cc->dev, false); + if (!ret) + clear_bit(CC33XX_FLAG_GPIO_POWER, &cc->flags); +} + +int cc33xx_power_on(struct cc33xx *cc) +{ + int ret = 0; + + if (cc->if_ops->power) + ret = cc->if_ops->power(cc->dev, true); + if (ret == 0) + set_bit(CC33XX_FLAG_GPIO_POWER, &cc->flags); + + return ret; +} diff --git a/drivers/net/wireless/ti/cc33xx/io.h b/drivers/net/wireless/ti/cc33xx/io.h new file mode 100644 index 000000000000..cc5abd428d99 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/io.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __IO_H__ +#define __IO_H__ + +struct cc33xx; + +void cc33xx_disable_interrupts_nosync(struct cc33xx *cc); +void cc33xx_enable_interrupts(struct cc33xx *cc); +void cc33xx_io_reset(struct cc33xx *cc); +void cc33xx_io_init(struct cc33xx *cc); +int __must_check cc33xx_raw_read(struct cc33xx *cc, int addr, + void *buf, size_t len, bool fixed); +int __must_check cc33xx_write(struct cc33xx *cc, int addr, + void *buf, size_t len, bool fixed); +void claim_core_status_lock(struct cc33xx *cc); +void release_core_status_lock(struct cc33xx *cc); +void cc33xx_power_off(struct cc33xx *cc); +int cc33xx_power_on(struct cc33xx *cc); +int cc33xx_translate_addr(struct cc33xx *cc, int addr); +bool cc33xx_set_block_size(struct cc33xx *cc); + +#endif /* __IO_H__ */ diff --git a/drivers/net/wireless/ti/cc33xx/sdio.c b/drivers/net/wireless/ti/cc33xx/sdio.c new file mode 100644 index 000000000000..835e627e2514 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/sdio.c @@ -0,0 +1,530 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include 
+#include +#include +#include +#include + +#include "cc33xx.h" +#include "io.h" + +#ifndef SDIO_VENDOR_ID_TI +#define SDIO_VENDOR_ID_TI 0x0097 +#endif + +#define SDIO_DEVICE_ID_CC33XX_NO_EFUSE 0x4076 +#define SDIO_DEVICE_ID_TI_CC33XX 0x4077 + +struct cc33xx_sdio_glue { + struct device *dev; + struct platform_device *core; +}; + +static void cc33xx_sdio_claim(struct device *child) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct sdio_func *func = dev_to_sdio_func(glue->dev); + + sdio_claim_host(func); +} + +static void cc33xx_sdio_release(struct device *child) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct sdio_func *func = dev_to_sdio_func(glue->dev); + + sdio_release_host(func); +} + +static void cc33xx_sdio_set_block_size(struct device *child, + unsigned int blksz) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct sdio_func *func = dev_to_sdio_func(glue->dev); + + sdio_claim_host(func); + sdio_set_block_size(func, blksz); + sdio_release_host(func); +} + +static int __must_check cc33xx_sdio_raw_read(struct device *child, int addr, + void *buf, size_t len, bool fixed) +{ + int ret; + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct sdio_func *func = dev_to_sdio_func(glue->dev); + + sdio_claim_host(func); + + if (unlikely(addr == HW_ACCESS_ELP_CTRL_REG)) { + ((u8 *)buf)[0] = sdio_f0_readb(func, addr, &ret); + dev_dbg(child->parent, "sdio read 52 addr 0x%x, byte 0x%02x\n", + addr, ((u8 *)buf)[0]); + } else { + if (fixed) + ret = sdio_readsb(func, buf, addr, len); + else + ret = sdio_memcpy_fromio(func, buf, addr, len); + + dev_dbg(child->parent, "sdio read 53 addr 0x%x, %zu bytes\n", + addr, len); + } + + sdio_release_host(func); + + if (WARN_ON(ret)) + dev_err(child->parent, "sdio read failed (%d)\n", ret); + + return ret; +} + +static int __must_check cc33xx_sdio_raw_write(struct device *child, int addr, + void *buf, size_t len, bool fixed) +{ + int ret; 
+ struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct sdio_func *func = dev_to_sdio_func(glue->dev); + + sdio_claim_host(func); + + if (unlikely(addr == HW_ACCESS_ELP_CTRL_REG)) { + sdio_f0_writeb(func, ((u8 *)buf)[0], addr, &ret); + } else { + if (fixed) + ret = sdio_writesb(func, addr, buf, len); + else + ret = sdio_memcpy_toio(func, addr, buf, len); + } + + sdio_release_host(func); + + if (WARN_ON(ret)) + dev_err(child->parent, "sdio write failed (%d)\n", ret); + + return ret; +} + +static int cc33xx_sdio_power_on(struct cc33xx_sdio_glue *glue) +{ + int ret; + struct sdio_func *func = dev_to_sdio_func(glue->dev); + struct mmc_card *card = func->card; + + ret = pm_runtime_get_sync(&card->dev); + if (ret < 0) { + pm_runtime_put_noidle(&card->dev); + dev_err(glue->dev, "%s: failed to get_sync(%d)\n", + __func__, ret); + + return ret; + } + + sdio_claim_host(func); + sdio_enable_func(func); + sdio_release_host(func); + + return 0; +} + +static int cc33xx_sdio_power_off(struct cc33xx_sdio_glue *glue) +{ + struct sdio_func *func = dev_to_sdio_func(glue->dev); + struct mmc_card *card = func->card; + + sdio_claim_host(func); + sdio_disable_func(func); + sdio_release_host(func); + + /* Let runtime PM know the card is powered off */ + pm_runtime_put(&card->dev); + return 0; +} + +static int cc33xx_sdio_set_power(struct device *child, bool enable) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + + if (enable) + return cc33xx_sdio_power_on(glue); + else + return cc33xx_sdio_power_off(glue); +} + +/** + * inband_irq_handler - Called from the MMC subsystem when the + * function's IRQ is signaled. + * @func: an SDIO function of the card + * + * Note that the host is already claimed when handler is invoked. 
+ */ +static void inband_irq_handler(struct sdio_func *func) +{ + struct cc33xx_sdio_glue *glue = sdio_get_drvdata(func); + struct platform_device *pdev = glue->core; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + + if (WARN_ON(!pdev_data->irq_handler)) + return; + + pdev_data->irq_handler(pdev); +} + +static void cc33xx_enable_async_interrupt(struct sdio_func *func) +{ + u8 reg_val; + const int CCCR_REG_16_ADDR = 0x16; + const int ENABLE_ASYNC_IRQ_BIT = BIT(1); + + reg_val = sdio_f0_readb(func, CCCR_REG_16_ADDR, NULL); + reg_val |= ENABLE_ASYNC_IRQ_BIT; + sdio_f0_writeb(func, reg_val, CCCR_REG_16_ADDR, NULL); +} + +static void cc33xx_sdio_enable_irq(struct device *child) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct sdio_func *func = dev_to_sdio_func(glue->dev); + + sdio_claim_host(func); + cc33xx_enable_async_interrupt(func); + sdio_claim_irq(func, inband_irq_handler); + sdio_release_host(func); +} + +static void cc33xx_sdio_disable_irq(struct device *child) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct sdio_func *func = dev_to_sdio_func(glue->dev); + + sdio_claim_host(func); + sdio_release_irq(func); + sdio_release_host(func); +} + +static void cc33xx_enable_line_irq(struct device *child) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct platform_device *pdev = glue->core; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + + enable_irq(pdev_data->gpio_irq_num); +} + +static void cc33xx_disable_line_irq(struct device *child) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct platform_device *pdev = glue->core; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + + disable_irq_nosync(pdev_data->gpio_irq_num); +} + +static void cc33xx_set_irq_handler(struct device *child, void *handler) +{ + struct cc33xx_sdio_glue *glue = dev_get_drvdata(child->parent); + struct platform_device 
*pdev = glue->core; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + + pdev_data->irq_handler = handler; +} + +static const struct cc33xx_if_operations sdio_ops_gpio_irq = { + .interface_claim = cc33xx_sdio_claim, + .interface_release = cc33xx_sdio_release, + .read = cc33xx_sdio_raw_read, + .write = cc33xx_sdio_raw_write, + .power = cc33xx_sdio_set_power, + .set_block_size = cc33xx_sdio_set_block_size, + .set_irq_handler = cc33xx_set_irq_handler, + .disable_irq = cc33xx_disable_line_irq, + .enable_irq = cc33xx_enable_line_irq, +}; + +static const struct cc33xx_if_operations sdio_ops_inband_irq = { + .interface_claim = cc33xx_sdio_claim, + .interface_release = cc33xx_sdio_release, + .read = cc33xx_sdio_raw_read, + .write = cc33xx_sdio_raw_write, + .power = cc33xx_sdio_set_power, + .set_block_size = cc33xx_sdio_set_block_size, + .set_irq_handler = cc33xx_set_irq_handler, + .disable_irq = cc33xx_sdio_disable_irq, + .enable_irq = cc33xx_sdio_enable_irq, +}; + +#ifdef CONFIG_OF +static const struct cc33xx_family_data cc33xx_data = { + .name = "cc33xx", + .cfg_name = "ti-connectivity/cc33xx-conf.bin", + .nvs_name = "ti-connectivity/cc33xx-nvs.bin", +}; + +static const struct of_device_id cc33xx_sdio_of_match_table[] = { + { .compatible = "ti,cc3300", .data = &cc33xx_data }, + { } +}; + +static int cc33xx_probe_of(struct device *dev, int *irq, int *wakeirq, + struct cc33xx_platdev_data *pdev_data) +{ + struct device_node *np = dev->of_node; + const struct of_device_id *of_id; + + of_id = of_match_node(cc33xx_sdio_of_match_table, np); + if (!of_id) + return -ENODEV; + + pdev_data->family = of_id->data; + + *irq = irq_of_parse_and_map(np, 0); + + *wakeirq = irq_of_parse_and_map(np, 1); + + return 0; +} +#else +static int cc33xx_probe_of(struct device *dev, int *irq, int *wakeirq, + struct cc33xx_platdev_data *pdev_data) +{ + return -ENODATA; +} +#endif /* CONFIG_OF */ + +static irqreturn_t gpio_irq_hard_handler(int irq, void *cookie) +{ + return 
IRQ_WAKE_THREAD; +} + +static irqreturn_t gpio_irq_thread_handler(int irq, void *cookie) +{ + struct sdio_func *func = cookie; + struct cc33xx_sdio_glue *glue = sdio_get_drvdata(func); + struct platform_device *pdev = glue->core; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + + if (WARN_ON(!pdev_data->irq_handler)) + return IRQ_HANDLED; + + pdev_data->irq_handler(pdev); + + return IRQ_HANDLED; +} + +static int sdio_cc33xx_probe(struct sdio_func *func, + const struct sdio_device_id *id) +{ + struct cc33xx_platdev_data *pdev_data; + struct cc33xx_sdio_glue *glue; + struct resource res[1]; + mmc_pm_flag_t mmcflags; + int ret = -ENOMEM; + int gpio_irq, wakeirq, irq_flags; + + /* We are only able to handle the wlan function */ + if (func->num != 0x02) + return -ENODEV; + + pdev_data = devm_kzalloc(&func->dev, sizeof(*pdev_data), GFP_KERNEL); + if (!pdev_data) + return -ENOMEM; + + glue = devm_kzalloc(&func->dev, sizeof(*glue), GFP_KERNEL); + if (!glue) + return -ENOMEM; + + glue->dev = &func->dev; + + /* Grab access to FN0 for ELP reg. 
*/ + func->card->quirks |= MMC_QUIRK_LENIENT_FN0; + + /* Use block mode for transferring over one block size of data */ + func->card->quirks |= MMC_QUIRK_BLKSZ_FOR_BYTE_MODE; + + ret = cc33xx_probe_of(&func->dev, &gpio_irq, &wakeirq, pdev_data); + if (ret) + goto out; + + /* if sdio can keep power while host is suspended, enable wow */ + mmcflags = sdio_get_host_pm_caps(func); + + sdio_set_drvdata(func, glue); + + /* Tell PM core that we don't need the card to be powered now */ + pm_runtime_put_noidle(&func->dev); + + glue->core = platform_device_alloc("cc33xx", PLATFORM_DEVID_AUTO); + if (!glue->core) { + dev_err(glue->dev, "can't allocate platform_device"); + ret = -ENOMEM; + goto out; + } + + glue->core->dev.parent = &func->dev; + + if (gpio_irq) { + irq_flags = irqd_get_trigger_type(irq_get_irq_data(gpio_irq)); + + irq_set_status_flags(gpio_irq, IRQ_NOAUTOEN); + + if (irq_flags & (IRQF_TRIGGER_HIGH | IRQF_TRIGGER_LOW)) + irq_flags |= IRQF_ONESHOT; + + ret = request_threaded_irq(gpio_irq, gpio_irq_hard_handler, + gpio_irq_thread_handler, + irq_flags, glue->core->name, func); + if (ret) { + dev_err(glue->dev, "can't register GPIO IRQ handler\n"); + goto out_dev_put; + } + + pdev_data->gpio_irq_num = gpio_irq; + + if ((mmcflags & MMC_PM_KEEP_POWER) && + (enable_irq_wake(gpio_irq) == 0)) + pdev_data->pwr_in_suspend = true; + + pdev_data->if_ops = &sdio_ops_gpio_irq; + } else { + pdev_data->if_ops = &sdio_ops_inband_irq; + } + + if (wakeirq > 0) { + res[0].start = wakeirq; + res[0].flags = IORESOURCE_IRQ | + irqd_get_trigger_type(irq_get_irq_data(wakeirq)); + res[0].name = "wakeirq"; + + ret = platform_device_add_resources(glue->core, res, 1); + if (ret) { + dev_err(glue->dev, "can't add resources\n"); + goto out_dev_put; + } + } + + ret = platform_device_add_data(glue->core, pdev_data, + sizeof(*pdev_data)); + if (ret) { + dev_err(glue->dev, "can't add platform data\n"); + goto out_dev_put; + } + + ret = platform_device_add(glue->core); + if (ret) { + 
dev_err(glue->dev, "can't add platform device\n"); + goto out_dev_put; + } + return 0; + +out_dev_put: + platform_device_put(glue->core); + + if (pdev_data->gpio_irq_num) + free_irq(pdev_data->gpio_irq_num, func); + +out: + return ret; +} + +static void sdio_cc33xx_remove(struct sdio_func *func) +{ + struct cc33xx_sdio_glue *glue = sdio_get_drvdata(func); + struct platform_device *pdev = glue->core; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + + /* Undo decrement done above in sdio_cc33xx_probe */ + pm_runtime_get_noresume(&func->dev); + + platform_device_unregister(glue->core); + + if (pdev_data->gpio_irq_num) { + free_irq(pdev_data->gpio_irq_num, func); + if (pdev_data->pwr_in_suspend) + disable_irq_wake(pdev_data->gpio_irq_num); + } else { + sdio_claim_host(func); + sdio_release_irq(func); + sdio_release_host(func); + } +} + +#ifdef CONFIG_PM +static int cc33xx_suspend(struct device *dev) +{ + /* Tell MMC/SDIO core it's OK to power down the card + * (if it isn't already), but not to remove it completely + */ + struct sdio_func *func = dev_to_sdio_func(dev); + struct cc33xx_sdio_glue *glue = sdio_get_drvdata(func); + struct cc33xx *cc = platform_get_drvdata(glue->core); + mmc_pm_flag_t sdio_flags; + int ret = 0; + + if (!cc) { + dev_err(dev, "no wilink module was probed\n"); + goto out; + } + + dev_dbg(dev, "cc33xx suspend. 
keep_device_power: %d\n", + cc->keep_device_power); + + if (cc->keep_device_power) { + sdio_flags = sdio_get_host_pm_caps(func); + + if (!(sdio_flags & MMC_PM_KEEP_POWER)) { + dev_err(dev, "can't keep power while host is suspended\n"); + ret = -EINVAL; + goto out; + } + + /* keep power while host suspended */ + ret = sdio_set_host_pm_flags(func, MMC_PM_KEEP_POWER); + if (ret) { + dev_err(dev, "error while trying to keep power\n"); + goto out; + } + } +out: + return ret; +} + +static int cc33xx_resume(struct device *dev) +{ + return 0; +} + +static const struct dev_pm_ops cc33xx_sdio_pm_ops = { + .suspend = cc33xx_suspend, + .resume = cc33xx_resume, +}; +#endif + +static const struct sdio_device_id cc33xx_devices[] = { + { SDIO_DEVICE(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_CC33XX) }, + { SDIO_DEVICE(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_CC33XX_NO_EFUSE) }, + {} +}; + +MODULE_DEVICE_TABLE(sdio, cc33xx_devices); + +static struct sdio_driver cc33xx_sdio_driver = { + .name = "cc33xx_sdio", + .id_table = cc33xx_devices, + .probe = sdio_cc33xx_probe, + .remove = sdio_cc33xx_remove, +#ifdef CONFIG_PM + .drv = { + .pm = &cc33xx_sdio_pm_ops, + }, +#endif /* CONFIG_PM */ +}; + +module_sdio_driver(cc33xx_sdio_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("SDIO transport for Texas Instruments CC33xx WLAN driver"); +MODULE_AUTHOR("Michael Nemanov "); +MODULE_AUTHOR("Sabeeh Khan "); From patchwork Thu Nov 7 12:51:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866419 X-Patchwork-Delegate: kuba@kernel.org Received: from fllv0016.ext.ti.com (fllv0016.ext.ti.com [198.47.19.142]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 52CA8212D1C; Thu, 7 Nov 2024 12:52:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none 
From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 05/17] wifi: cc33xx: Add cmd.c, cmd.h Date: Thu, 7 Nov 2024 14:51:57 +0200 Message-ID: <20241107125209.1736277-6-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> X-Mailing-List: netdev@vger.kernel.org This is the command infrastructure for the CC33xx. As in wlcore, all commands eventually reach __cc33xx_cmd_send, which fills in a generic command header and sends the buffer via the IO abstraction layer. Unlike wlcore, there is no polling for command completion; instead, completion is reported by an interrupt that signals cc->command_complete. 
Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/cmd.c | 1920 ++++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/cmd.h | 700 ++++++++++ 2 files changed, 2620 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/cmd.c create mode 100644 drivers/net/wireless/ti/cc33xx/cmd.h diff --git a/drivers/net/wireless/ti/cc33xx/cmd.c b/drivers/net/wireless/ti/cc33xx/cmd.c new file mode 100644 index 000000000000..bcdca2435dfd --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/cmd.c @@ -0,0 +1,1920 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include "acx.h" +#include "event.h" +#include "io.h" +#include "tx.h" + +#define CC33XX_REBOOT_TIMEOUT_MSEC 100 + +static void init_cmd_header(struct cc33xx_cmd_header *header, + size_t cmd_len, u16 id) +{ + header->NAB_header.len = cpu_to_le16(cmd_len); + WARN_ON(le16_to_cpu(header->NAB_header.len) != cmd_len); + + header->NAB_header.sync_pattern = cpu_to_le32(HOST_SYNC_PATTERN); + header->NAB_header.opcode = cpu_to_le16(id); +} + +int cc33xx_set_max_buffer_size(struct cc33xx *cc, enum buffer_size max_buffer_size) +{ + switch (max_buffer_size) { + case INI_MAX_BUFFER_SIZE: + /* INI FILE PAYLOAD SIZE + INI CMD PARAM + INT */ + cc->max_cmd_size = CC33XX_INI_CMD_MAX_SIZE; + cc->max_cmd_size += sizeof(struct cc33xx_cmd_ini_params_download); + cc->max_cmd_size += sizeof(u32); + break; + + case CMD_MAX_BUFFER_SIZE: + cc->max_cmd_size = CC33XX_CMD_MAX_SIZE; + break; + + default: + cc33xx_warning("max_buffer_size invalid, not changing buffer size"); + break; + } + + return 0; +} + +static int send_buffer(struct cc33xx *cc, int cmd_box_addr, + void *buf, size_t len) +{ + size_t max_cmd_size_align; + + memcpy(cc->cmd_buf, buf, len); + + memset(cc->cmd_buf + len, 0, (CC33XX_CMD_BUFFER_SIZE) - len); + + max_cmd_size_align = __ALIGN_MASK(cc->max_cmd_size, + CC33XX_BUS_BLOCK_SIZE * 2 - 1); + + return cc33xx_write(cc, 
cmd_box_addr, cc->cmd_buf, + max_cmd_size_align, true); +} + +/* send command to firmware + * + * @cc: cc struct + * @id: command id + * @buf: buffer containing the command, must work with dma + * @len: length of the buffer + * @res_len: expected response length, 0 to discard the response + * @sync: whether to wait for command completion + * return the cmd status code on success. + */ +static int __cc33xx_cmd_send(struct cc33xx *cc, u16 id, void *buf, + size_t len, size_t res_len, bool sync) +{ + struct cc33xx_cmd_header *cmd; + unsigned long timeout; + int ret; + + if (id >= CMD_LAST_SUPPORTED_COMMAND) { + cc33xx_debug(DEBUG_CMD, "command ID: %d, blocked", id); + return CMD_STATUS_SUCCESS; + } + + if (WARN_ON(len < sizeof(*cmd)) || + WARN_ON(len > cc->max_cmd_size) || + WARN_ON(len % 4 != 0)) + return -EIO; + + cmd = buf; + cmd->id = cpu_to_le16(id); + cmd->status = 0; + + init_cmd_header(cmd, len, id); + init_completion(&cc->command_complete); + ret = send_buffer(cc, NAB_DATA_ADDR, buf, len); + + if (ret < 0) + return ret; + + if (unlikely(!sync)) + return CMD_STATUS_SUCCESS; + + timeout = msecs_to_jiffies(CC33XX_COMMAND_TIMEOUT); + ret = wait_for_completion_timeout(&cc->command_complete, timeout); + + if (ret < 1) { + cc33xx_debug(DEBUG_CMD, "command timeout"); + return -EIO; + } + + switch (id) { + case CMD_INTERROGATE: + case CMD_DEBUG_READ: + case CMD_TEST_MODE: + case CMD_BM_READ_DEVICE_INFO: + if (!res_len) + break; /* Response should be discarded */ + + if (WARN_ON(cc->result_length > res_len)) { + cc33xx_error("Error, insufficient response buffer"); + break; + } + + memcpy(buf + sizeof(struct NAB_header), + cc->command_result, + cc->result_length); + + break; + + default: + break; + } + + return CMD_STATUS_SUCCESS; +} + +/* send command to fw and return cmd status on success + * valid_rets contains a bitmap of allowed error codes + */ +static int cc33xx_cmd_send_failsafe(struct cc33xx *cc, u16 id, void *buf, + size_t len, size_t res_len, + unsigned long valid_rets) +{ + int ret = __cc33xx_cmd_send(cc, id, buf, len, res_len, true); + + if (ret < 0) + goto fail; + + /* 
success is always a valid status */ + valid_rets |= BIT(CMD_STATUS_SUCCESS); + + if (ret >= MAX_COMMAND_STATUS || !test_bit(ret, &valid_rets)) { + cc33xx_error("command execute failure %d", ret); + ret = -EIO; + } + + return ret; +fail: + cc33xx_queue_recovery_work(cc); + return ret; +} + +/* wrapper for cc33xx_cmd_send_failsafe that accepts only CMD_STATUS_SUCCESS. + * return 0 on success. + */ +int cc33xx_cmd_send(struct cc33xx *cc, u16 id, void *buf, + size_t len, size_t res_len) +{ + int ret; + + /* These command IDs are not forwarded to the firmware */ + switch ((enum cc33xx_cmd)id) { + case CMD_EMPTY: + case CMD_START_DHCP_MGMT_SEQ: + case CMD_STOP_DHCP_MGMT_SEQ: + case CMD_START_SECURITY_MGMT_SEQ: + case CMD_STOP_SECURITY_MGMT_SEQ: + case CMD_START_ARP_MGMT_SEQ: + case CMD_STOP_ARP_MGMT_SEQ: + case CMD_START_DNS_MGMT_SEQ: + case CMD_STOP_DNS_MGMT_SEQ: + case CMD_SEND_DEAUTH_DISASSOC: + case CMD_SCHED_STATE_EVENT: + return 0; + default: + if ((enum cc33xx_cmd)id >= CMD_LAST_SUPPORTED_COMMAND) + return 0; + break; + } + + ret = cc33xx_cmd_send_failsafe(cc, id, buf, len, res_len, 0); + if (ret < 0) + return ret; + return 0; +} + +static int cc33xx_count_role_set_bits(unsigned long role_map) +{ + int count = 0; + + /* ROLE_DEVICE (BIT(2)) is not counted, so clear it from the map */ + role_map &= ~BIT(2); + + while (role_map != 0) { + count += role_map & 1; + role_map >>= 1; + } + + return count; +} + +int cc33xx_cmd_role_enable(struct cc33xx *cc, u8 *addr, + u8 role_type, u8 *role_id) +{ + struct cc33xx_cmd_role_enable *cmd; + + int ret; + unsigned long role_count; + + struct cc33xx_cmd_complete_role_enable *command_complete = + (struct cc33xx_cmd_complete_role_enable *)&cc->command_result; + + role_count = *cc->roles_map; + ret = cc33xx_count_role_set_bits(role_count); + + /* do not enable more than 2 roles at once, exception is device role */ + if (ret >= 2 && role_type != CC33XX_ROLE_DEVICE) { + cc33xx_error("failed to initiate cmd role 
enable"); + ret = -EBUSY; + goto out; + } + + if (WARN_ON(*role_id != CC33XX_INVALID_ROLE_ID)) + return -EBUSY; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + memcpy(cmd->mac_address, addr, ETH_ALEN); + cmd->role_type = role_type; + + ret = cc33xx_cmd_send(cc, CMD_ROLE_ENABLE, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role enable"); + goto out_free; + } + __set_bit(command_complete->role_id, cc->roles_map); + *role_id = command_complete->role_id; + +out_free: + kfree(cmd); +out: + return ret; +} + +int cc33xx_cmd_role_disable(struct cc33xx *cc, u8 *role_id) +{ + struct cc33xx_cmd_role_disable *cmd; + int ret; + + if (WARN_ON(*role_id == CC33XX_INVALID_ROLE_ID)) + return -ENOENT; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + cmd->role_id = *role_id; + + ret = cc33xx_cmd_send(cc, CMD_ROLE_DISABLE, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role disable"); + goto out_free; + } + + __clear_bit(*role_id, cc->roles_map); + *role_id = CC33XX_INVALID_ROLE_ID; + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_set_link(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 link) +{ + unsigned long flags; + + /* these bits are used by op_tx */ + spin_lock_irqsave(&cc->cc_lock, flags); + __set_bit(link, cc->links_map); + __set_bit(link, wlvif->links_map); + spin_unlock_irqrestore(&cc->cc_lock, flags); + cc->links[link].wlvif = wlvif; + + /* Take saved value for total freed packets from wlvif, in case this is + * recovery/resume + */ + if (wlvif->bss_type != BSS_TYPE_AP_BSS) + cc->links[link].total_freed_pkts = wlvif->total_freed_pkts; + + cc->active_link_count++; + return 0; +} + +void cc33xx_clear_link(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 *hlid) +{ + unsigned long flags; + + if (*hlid == CC33XX_INVALID_LINK_ID) + return; + + /* these bits are used by op_tx */ + 
spin_lock_irqsave(&cc->cc_lock, flags); + __clear_bit(*hlid, cc->links_map); + __clear_bit(*hlid, wlvif->links_map); + spin_unlock_irqrestore(&cc->cc_lock, flags); + + cc->links[*hlid].prev_freed_pkts = 0; + cc->links[*hlid].ba_bitmap = 0; + eth_zero_addr(cc->links[*hlid].addr); + + /* At this point op_tx() will not add more packets to the queues. We + * can purge them. + */ + cc33xx_tx_reset_link_queues(cc, *hlid); + cc->links[*hlid].wlvif = NULL; + + if (wlvif->bss_type == BSS_TYPE_AP_BSS && + *hlid == wlvif->ap.bcast_hlid) { + u32 sqn_padding = CC33XX_TX_SQN_POST_RECOVERY_PADDING; + /* save the total freed packets in the wlvif, in case this is + * recovery or suspend + */ + wlvif->total_freed_pkts = cc->links[*hlid].total_freed_pkts; + + /* increment the initial seq number on recovery to account for + * transmitted packets that we haven't yet got in the FW status + */ + if (wlvif->encryption_type == KEY_GEM) + sqn_padding = CC33XX_TX_SQN_POST_RECOVERY_PADDING_GEM; + + if (test_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags)) + wlvif->total_freed_pkts += sqn_padding; + } + + cc->links[*hlid].total_freed_pkts = 0; + + *hlid = CC33XX_INVALID_LINK_ID; + cc->active_link_count--; + WARN_ON_ONCE(cc->active_link_count < 0); +} + +static u8 cc33xx_get_native_channel_type(u8 nl_channel_type) +{ + switch (nl_channel_type) { + case NL80211_CHAN_NO_HT: + return CC33XX_CHAN_NO_HT; + case NL80211_CHAN_HT20: + return CC33XX_CHAN_HT20; + case NL80211_CHAN_HT40MINUS: + return CC33XX_CHAN_HT40MINUS; + case NL80211_CHAN_HT40PLUS: + return CC33XX_CHAN_HT40PLUS; + default: + WARN_ON(1); + return CC33XX_CHAN_NO_HT; + } +} + +static int cc33xx_cmd_role_start_dev(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum nl80211_band band, int channel) +{ + struct cc33xx_cmd_role_start *cmd; + int ret; + + struct cc33xx_cmd_complete_role_start *command_complete = + (struct cc33xx_cmd_complete_role_start *)&cc->command_result; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = 
-ENOMEM; + goto out; + } + + cmd->role_id = wlvif->dev_role_id; + cmd->role_type = CC33XX_ROLE_DEVICE; + if (band == NL80211_BAND_5GHZ) + cmd->band = CC33XX_BAND_5GHZ; + cmd->channel = channel; + cmd->channel_type = cc33xx_get_native_channel_type(wlvif->channel_type); + + ret = cc33xx_cmd_send(cc, CMD_ROLE_START, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role device start"); + goto err_hlid; + } + + wlvif->dev_hlid = command_complete->sta.hlid; + cc->links[wlvif->dev_hlid].allocated_pkts = 0; + cc->session_ids[wlvif->dev_hlid] = command_complete->sta.session; + cc33xx_debug(DEBUG_CMD, "role start: roleid=%d, hlid=%d, session=%d ", + wlvif->dev_role_id, command_complete->sta.hlid, + command_complete->sta.session); + ret = cc33xx_set_link(cc, wlvif, wlvif->dev_hlid); + goto out_free; + +err_hlid: + /* clear links on error */ + cc33xx_clear_link(cc, wlvif, &wlvif->dev_hlid); + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_cmd_role_stop_transceiver(struct cc33xx *cc) +{ + struct cc33xx_cmd_role_stop *cmd; + int ret; + + if (unlikely(cc->state != CC33XX_STATE_ON || !cc->plt)) { + ret = -EINVAL; + goto out; + } + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = cc->plt_role_id; + + ret = cc33xx_cmd_send(cc, CMD_ROLE_STOP, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("transceiver - failed to initiate cmd role stop"); + goto out_free; + } + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_cmd_plt_disable(struct cc33xx *cc) +{ + struct cc33xx_cmd_PLT_disable *cmd; + int ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + ret = cc33xx_cmd_send(cc, CMD_PLT_DISABLE, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("transceiver: failed to disable Transceiver mode"); + goto out_free; + } + +out_free: + kfree(cmd); + +out: + return ret; +} + +static int cc333xx_cmd_role_stop_dev(struct 
cc33xx *cc, + struct cc33xx_vif *wlvif) +{ + struct cc33xx_cmd_role_stop *cmd; + int ret; + + if (WARN_ON(wlvif->dev_hlid == CC33XX_INVALID_LINK_ID)) + return -EINVAL; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = wlvif->dev_role_id; + + ret = cc33xx_cmd_send(cc, CMD_ROLE_STOP, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role stop"); + goto out_free; + } + + cc33xx_clear_link(cc, wlvif, &wlvif->dev_hlid); + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_cmd_plt_enable(struct cc33xx *cc, u8 role_id) +{ + struct cc33xx_cmd_PLT_enable *cmd; + s32 ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + ret = cc33xx_cmd_send(cc, CMD_PLT_ENABLE, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("Failed to send CMD_PLT_ENABLE"); + goto out_free; + } + +out_free: + kfree(cmd); +out: + return ret; +} + +int cc33xx_cmd_role_start_transceiver(struct cc33xx *cc, u8 role_id) +{ + struct cc33xx_cmd_role_start *cmd; + s32 ret; + u8 role_type = ROLE_TRANSCEIVER; + + /* Default values */ + u8 band = NL80211_BAND_2GHZ; + u8 channel = 6; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_type = role_type; + cmd->role_id = role_id; + cmd->channel = channel; + cmd->band = band; + + ret = cc33xx_cmd_send(cc, CMD_ROLE_START, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role start PLT"); + goto out_free; + } + +out_free: + kfree(cmd); +out: + return ret; +} + +int cc33xx_cmd_role_start_sta(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + struct cc33xx_cmd_role_start *cmd; + + u32 supported_rates; + int ret; + + struct cc33xx_cmd_complete_role_start *command_complete = + (struct cc33xx_cmd_complete_role_start *)&cc->command_result; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if 
(!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = wlvif->role_id; + cmd->role_type = CC33XX_ROLE_STA; + cmd->channel = wlvif->channel; + if (wlvif->band == NL80211_BAND_5GHZ) { + cmd->band = CC33XX_BAND_5GHZ; + cmd->sta.basic_rate_set = cpu_to_le32(wlvif->basic_rate_set + & ~CONF_TX_CCK_RATES); + } else { + cmd->sta.basic_rate_set = cpu_to_le32(wlvif->basic_rate_set); + } + cmd->sta.beacon_interval = cpu_to_le16(wlvif->beacon_int); + cmd->sta.ssid_type = CC33XX_SSID_TYPE_ANY; + cmd->sta.ssid_len = wlvif->ssid_len; + memcpy(cmd->sta.ssid, wlvif->ssid, wlvif->ssid_len); + memcpy(cmd->sta.bssid, vif->bss_conf.bssid, ETH_ALEN); + + supported_rates = CONF_TX_ENABLED_RATES | CONF_TX_MCS_RATES | wlvif->rate_set; + if (wlvif->band == NL80211_BAND_5GHZ) + supported_rates &= ~CONF_TX_CCK_RATES; + + if (wlvif->p2p) + supported_rates &= ~CONF_TX_CCK_RATES; + + cmd->sta.local_rates = cpu_to_le32(supported_rates); + + cmd->channel_type = cc33xx_get_native_channel_type(wlvif->channel_type); + + /* We don't have the correct remote rates in this stage. The + * rates will be reconfigured later, after association, if the + * firmware supports ACX_PEER_CAP. Otherwise, there's nothing + * we can do, so use all supported_rates here. 
+ */ + cmd->sta.remote_rates = cpu_to_le32(supported_rates); + + ret = cc33xx_cmd_send(cc, CMD_ROLE_START, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role start sta"); + goto err_hlid; + } + + wlvif->sta.role_chan_type = wlvif->channel_type; + + wlvif->sta.hlid = command_complete->sta.hlid; + cc->links[wlvif->sta.hlid].allocated_pkts = 0; + cc->session_ids[wlvif->sta.hlid] = command_complete->sta.session; + cc33xx_debug(DEBUG_CMD, "role start: roleid=%d, hlid=%d, session=%d basic_rate_set: 0x%x, remote_rates: 0x%x", + wlvif->role_id, + command_complete->sta.hlid, command_complete->sta.session, + wlvif->basic_rate_set, wlvif->rate_set); + ret = cc33xx_set_link(cc, wlvif, wlvif->sta.hlid); + + goto out_free; + +err_hlid: + cc33xx_clear_link(cc, wlvif, &wlvif->sta.hlid); + +out_free: + kfree(cmd); +out: + return ret; +} + +/* use this function to stop ibss as well */ +int cc33xx_cmd_role_stop_sta(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct cc33xx_cmd_role_stop *cmd; + int ret; + + if (WARN_ON(wlvif->sta.hlid == CC33XX_INVALID_LINK_ID)) + return -EINVAL; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = wlvif->role_id; + + ret = cc33xx_cmd_send(cc, CMD_ROLE_STOP, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role stop sta"); + goto out_free; + } + + cc33xx_clear_link(cc, wlvif, &wlvif->sta.hlid); + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_cmd_role_start_ap(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct cc33xx_cmd_role_start *cmd; + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; + u32 supported_rates; + int ret; + + struct cc33xx_cmd_complete_role_start *command_complete = + (struct cc33xx_cmd_complete_role_start *)&cc->command_result; + + /* If MESH --> ssid_len is always 0 */ + if (!ieee80211_vif_is_mesh(vif)) { + /* trying to use 
hidden SSID with an old hostapd version */ + if (wlvif->ssid_len == 0 && !bss_conf->hidden_ssid) { + cc33xx_error("got a null SSID from beacon/bss"); + ret = -EINVAL; + goto out; + } + } + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = wlvif->role_id; + cmd->role_type = CC33XX_ROLE_AP; + cmd->ap.basic_rate_set = cpu_to_le32(wlvif->basic_rate_set); + cmd->ap.beacon_interval = cpu_to_le16(wlvif->beacon_int); + cmd->ap.dtim_interval = bss_conf->dtim_period; + cmd->ap.wmm = wlvif->wmm_enabled; + cmd->channel = wlvif->channel; + cmd->channel_type = cc33xx_get_native_channel_type(wlvif->channel_type); + + supported_rates = CONF_TX_ENABLED_RATES | CONF_TX_MCS_RATES; + if (wlvif->p2p) + supported_rates &= ~CONF_TX_CCK_RATES; + + cmd->ap.local_rates = cpu_to_le32(supported_rates); + + switch (wlvif->band) { + case NL80211_BAND_2GHZ: + cmd->band = CC33XX_BAND_2_4GHZ; + break; + case NL80211_BAND_5GHZ: + cmd->band = CC33XX_BAND_5GHZ; + break; + default: + cc33xx_warning("ap start - unknown band: %d", (int)wlvif->band); + cmd->band = CC33XX_BAND_2_4GHZ; + break; + } + + ret = cc33xx_cmd_send(cc, CMD_ROLE_START, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role start ap"); + goto out_free_bcast; + } + + wlvif->ap.global_hlid = command_complete->ap.global_hlid; + wlvif->ap.bcast_hlid = command_complete->ap.broadcast_hlid; + cc->session_ids[wlvif->ap.global_hlid] = command_complete->ap.global_session_id; + cc->session_ids[wlvif->ap.bcast_hlid] = command_complete->ap.bcast_session_id; + + cc33xx_debug(DEBUG_CMD, "role start: roleid=%d, global_hlid=%d, broadcast_hlid=%d, global_session_id=%d, bcast_session_id=%d, basic_rate_set: 0x%x, remote_rates: 0x%x", + wlvif->role_id, + command_complete->ap.global_hlid, + command_complete->ap.broadcast_hlid, + command_complete->ap.global_session_id, + command_complete->ap.bcast_session_id, + wlvif->basic_rate_set, wlvif->rate_set); + + ret = 
cc33xx_set_link(cc, wlvif, wlvif->ap.global_hlid); + ret = cc33xx_set_link(cc, wlvif, wlvif->ap.bcast_hlid); + + goto out_free; + +out_free_bcast: + cc33xx_clear_link(cc, wlvif, &wlvif->ap.bcast_hlid); + cc33xx_clear_link(cc, wlvif, &wlvif->ap.global_hlid); + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_cmd_role_stop_ap(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct cc33xx_cmd_role_stop *cmd; + int ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = wlvif->role_id; + + ret = cc33xx_cmd_send(cc, CMD_ROLE_STOP, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role stop ap"); + goto out_free; + } + + cc33xx_clear_link(cc, wlvif, &wlvif->ap.bcast_hlid); + cc33xx_clear_link(cc, wlvif, &wlvif->ap.global_hlid); + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_cmd_role_start_ibss(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + struct cc33xx_cmd_role_start *cmd; + struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; + int ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = wlvif->role_id; + cmd->role_type = CC33XX_ROLE_IBSS; + if (wlvif->band == NL80211_BAND_5GHZ) + cmd->band = CC33XX_BAND_5GHZ; + cmd->channel = wlvif->channel; + cmd->ibss.basic_rate_set = cpu_to_le32(wlvif->basic_rate_set); + cmd->ibss.beacon_interval = cpu_to_le16(wlvif->beacon_int); + cmd->ibss.dtim_interval = bss_conf->dtim_period; + cmd->ibss.ssid_type = CC33XX_SSID_TYPE_ANY; + cmd->ibss.ssid_len = wlvif->ssid_len; + memcpy(cmd->ibss.ssid, wlvif->ssid, wlvif->ssid_len); + memcpy(cmd->ibss.bssid, vif->bss_conf.bssid, ETH_ALEN); + cmd->sta.local_rates = cpu_to_le32(wlvif->rate_set); + + if (wlvif->sta.hlid == CC33XX_INVALID_LINK_ID) { + ret = cc33xx_set_link(cc, wlvif, wlvif->sta.hlid); + if (ret) + goto out_free; + } + cmd->ibss.hlid = 
wlvif->sta.hlid; + cmd->ibss.remote_rates = cpu_to_le32(wlvif->rate_set); + + ret = cc33xx_cmd_send(cc, CMD_ROLE_START, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd role start ibss"); + goto err_hlid; + } + + goto out_free; + +err_hlid: + /* clear links on error. */ + cc33xx_clear_link(cc, wlvif, &wlvif->sta.hlid); + +out_free: + kfree(cmd); + +out: + return ret; +} + +/* send test command to firmware + * + * @cc: cc struct + * @buf: buffer containing the command, with all headers, must work with dma + * @buf_len: length of the buffer + * @answer: whether an answer is needed + */ +int cc33xx_cmd_test(struct cc33xx *cc, void *buf, size_t buf_len, u8 answer) +{ + int ret; + size_t res_len = 0; + + if (answer) + res_len = buf_len; + + ret = cc33xx_cmd_send(cc, CMD_TEST_MODE, buf, buf_len, res_len); + + if (ret < 0) { + cc33xx_warning("TEST command failed"); + return ret; + } + + return ret; +} + +/* read acx from firmware + * + * @cc: cc struct + * @id: acx id + * @buf: buffer for the response, including all headers, must work with dma + * @cmd_len: length of the command + * @res_len: length of the response buffer + */ +int cc33xx_cmd_interrogate(struct cc33xx *cc, u16 id, void *buf, + size_t cmd_len, size_t res_len) +{ + struct acx_header *acx = buf; + int ret; + + acx->id = cpu_to_le16(id); + + /* response payload length, does not include any headers */ + acx->len = cpu_to_le16(res_len - sizeof(*acx)); + + ret = cc33xx_cmd_send(cc, CMD_INTERROGATE, acx, cmd_len, res_len); + if (ret < 0) + cc33xx_error("INTERROGATE command failed"); + + return ret; +} + +/* read debug acx from firmware + * + * @cc: cc struct + * @id: acx id + * @buf: buffer for the response, including all headers, must work with dma + * @cmd_len: length of the command + * @res_len: length of the response buffer + */ +int cc33xx_cmd_debug_inter(struct cc33xx *cc, u16 id, void *buf, + size_t cmd_len, size_t res_len) +{ + struct acx_header *acx = buf; + int ret; + + acx->id = cpu_to_le16(id); + + /* response payload length, does not include any headers */ + acx->len = cpu_to_le16(res_len - 
sizeof(*acx)); + + ret = cc33xx_cmd_send(cc, CMD_DEBUG_READ, acx, cmd_len, res_len); + if (ret < 0) + cc33xx_error("CMD_DEBUG_READ command failed"); + + return ret; +} + +/* write acx value to firmware + * + * @cc: cc struct + * @id: acx id + * @buf: buffer containing acx, including all headers, must work with dma + * @len: length of buf + * @valid_rets: bitmap of valid cmd status codes (i.e. return values). + * return the cmd status on success. + */ +int cc33xx_cmd_configure_failsafe(struct cc33xx *cc, u16 id, void *buf, + size_t len, unsigned long valid_rets) +{ + struct acx_header *acx = buf; + int ret; + + if (WARN_ON_ONCE(len < sizeof(*acx))) + return -EIO; + + acx->id = cpu_to_le16(id); + + /* payload length, does not include any headers */ + acx->len = cpu_to_le16(len - sizeof(*acx)); + + ret = cc33xx_cmd_send_failsafe(cc, CMD_CONFIGURE, acx, len, 0, + valid_rets); + if (ret < 0) { + cc33xx_warning("CONFIGURE command NOK"); + return ret; + } + + return ret; +} + +/* wrapper for cc33xx_cmd_configure that accepts only success status. + * return 0 on success + */ +int cc33xx_cmd_configure(struct cc33xx *cc, u16 id, void *buf, size_t len) +{ + int ret = cc33xx_cmd_configure_failsafe(cc, id, buf, len, 0); + + if (ret < 0) + return ret; + return 0; +} + +/* write acx value to firmware + * + * @cc: cc struct + * @id: acx id + * @buf: buffer containing debug, including all headers, must work with dma + * @len: length of buf + * @valid_rets: bitmap of valid cmd status codes (i.e. return values). + * return the cmd status on success. 
+ */ +static int cc33xx_cmd_debug_failsafe(struct cc33xx *cc, u16 id, void *buf, + size_t len, unsigned long valid_rets) +{ + struct debug_header *acx = buf; + int ret; + + if (WARN_ON_ONCE(len < sizeof(*acx))) + return -EIO; + + acx->id = cpu_to_le16(id); + + /* payload length, does not include any headers */ + acx->len = cpu_to_le16(len - sizeof(*acx)); + + ret = cc33xx_cmd_send_failsafe(cc, CMD_DEBUG, acx, len, 0, + valid_rets); + if (ret < 0) { + cc33xx_warning("DEBUG command NOK"); + return ret; + } + + return ret; +} + +/* wrapper for cc33xx_cmd_debug_failsafe that accepts only success status. + * return 0 on success + */ +int cc33xx_cmd_debug(struct cc33xx *cc, u16 id, void *buf, size_t len) +{ + int ret = cc33xx_cmd_debug_failsafe(cc, id, buf, len, 0); + + if (ret < 0) + return ret; + return 0; +} + +int cc33xx_cmd_ps_mode(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 ps_mode, u16 auto_ps_timeout) +{ + struct cc33xx_cmd_ps_params *ps_params = NULL; + int ret = 0; + + ps_params = kzalloc(sizeof(*ps_params), GFP_KERNEL); + if (!ps_params) { + ret = -ENOMEM; + goto out; + } + + ps_params->role_id = wlvif->role_id; + ps_params->ps_mode = ps_mode; + ps_params->auto_ps_timeout = cpu_to_le16(auto_ps_timeout); + + ret = cc33xx_cmd_send(cc, CMD_SET_PS_MODE, ps_params, + sizeof(*ps_params), 0); + if (ret < 0) { + cc33xx_error("cmd set_ps_mode failed"); + goto out; + } + +out: + kfree(ps_params); + return ret; +} + +int cc33xx_cmd_set_default_wep_key(struct cc33xx *cc, u8 id, u8 hlid) +{ + struct cc33xx_cmd_set_keys *cmd; + int ret = 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->hlid = hlid; + cmd->key_id = id; + cmd->lid_key_type = WEP_DEFAULT_LID_TYPE; + cmd->key_action = cpu_to_le16(KEY_SET_ID); + cmd->key_type = KEY_WEP; + + ret = cc33xx_cmd_send(cc, CMD_SET_KEYS, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_warning("cmd set_default_wep_key failed: %d", ret); + goto out; + } + +out: + kfree(cmd); + + 
return ret; +} + +int cc33xx_cmd_set_sta_key(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u16 action, u8 id, u8 key_type, u8 key_size, + const u8 *key, const u8 *addr, u32 tx_seq_32, + u16 tx_seq_16) +{ + struct cc33xx_cmd_set_keys *cmd; + int ret = 0; + + /* hlid might have already been deleted */ + if (wlvif->sta.hlid == CC33XX_INVALID_LINK_ID) + return 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->hlid = wlvif->sta.hlid; + + if (key_type == KEY_WEP) + cmd->lid_key_type = WEP_DEFAULT_LID_TYPE; + else if (is_broadcast_ether_addr(addr)) + cmd->lid_key_type = BROADCAST_LID_TYPE; + else + cmd->lid_key_type = UNICAST_LID_TYPE; + + cmd->key_action = cpu_to_le16(action); + cmd->key_size = key_size; + cmd->key_type = key_type; + + cmd->ac_seq_num16[0] = cpu_to_le16(tx_seq_16); + cmd->ac_seq_num32[0] = cpu_to_le32(tx_seq_32); + + cmd->key_id = id; + + if (key_type == KEY_TKIP) { + /* We get the key in the following form: + * TKIP (16 bytes) - TX MIC (8 bytes) - RX MIC (8 bytes) + * but the target is expecting: + * TKIP - RX MIC - TX MIC + */ + memcpy(cmd->key, key, 16); + memcpy(cmd->key + 16, key + 24, 8); + memcpy(cmd->key + 24, key + 16, 8); + + } else { + memcpy(cmd->key, key, key_size); + } + + ret = cc33xx_cmd_send(cc, CMD_SET_KEYS, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_warning("could not set keys"); + goto out; + } + +out: + kfree(cmd); + + return ret; +} + +int cc33xx_cmd_set_ap_key(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u16 action, u8 id, u8 key_type, u8 key_size, + const u8 *key, u8 hlid, u32 tx_seq_32, u16 tx_seq_16) +{ + struct cc33xx_cmd_set_keys *cmd; + int ret = 0; + u8 lid_type; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) + return -ENOMEM; + + if (hlid == wlvif->ap.bcast_hlid) { + if (key_type == KEY_WEP) + lid_type = WEP_DEFAULT_LID_TYPE; + else + lid_type = BROADCAST_LID_TYPE; + } else { + lid_type = UNICAST_LID_TYPE; + } + + cmd->lid_key_type = lid_type; + 
cmd->hlid = hlid; + cmd->key_action = cpu_to_le16(action); + cmd->key_size = key_size; + cmd->key_type = key_type; + cmd->key_id = id; + cmd->ac_seq_num16[0] = cpu_to_le16(tx_seq_16); + cmd->ac_seq_num32[0] = cpu_to_le32(tx_seq_32); + + if (key_type == KEY_TKIP) { + /* We get the key in the following form: + * TKIP (16 bytes) - TX MIC (8 bytes) - RX MIC (8 bytes) + * but the target is expecting: + * TKIP - RX MIC - TX MIC + */ + memcpy(cmd->key, key, 16); + memcpy(cmd->key + 16, key + 24, 8); + memcpy(cmd->key + 24, key + 16, 8); + } else { + memcpy(cmd->key, key, key_size); + } + + cc33xx_dump(DEBUG_CRYPT, "TARGET AP KEY: ", cmd, sizeof(*cmd)); + + ret = cc33xx_cmd_send(cc, CMD_SET_KEYS, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_warning("could not set ap keys"); + goto out; + } + +out: + kfree(cmd); + return ret; +} + +int cc33xx_cmd_set_peer_state(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 hlid) +{ + struct cc33xx_cmd_set_peer_state *cmd; + int ret = 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->hlid = hlid; + cmd->state = CC33XX_CMD_STA_STATE_CONNECTED; + + ret = cc33xx_cmd_send(cc, CMD_SET_LINK_CONNECTION_STATE, + cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to send set peer state command"); + goto out_free; + } + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_cmd_add_peer(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_sta *sta, u8 *hlid, u8 is_connected) +{ + struct cc33xx_cmd_add_peer *cmd; + + struct cc33xx_cmd_complete_add_peer *command_complete = + (struct cc33xx_cmd_complete_add_peer *)&cc->command_result; + + int i, ret; + u32 sta_rates; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->is_connected = is_connected; + cmd->role_id = wlvif->role_id; + cmd->role_type = CC33XX_ROLE_AP; + cmd->link_type = 1; + + memcpy(cmd->addr, sta->addr, ETH_ALEN); + cmd->bss_index = 
CC33XX_AP_BSS_INDEX; + cmd->aid = cpu_to_le16(sta->aid); + cmd->sp_len = sta->max_sp; + cmd->wmm = sta->wme ? 1 : 0; + + for (i = 0; i < NUM_ACCESS_CATEGORIES_COPY; i++) { + if (sta->wme && (sta->uapsd_queues & BIT(i))) { + cmd->psd_type[NUM_ACCESS_CATEGORIES_COPY - 1 - i] = + CC33XX_PSD_UPSD_TRIGGER; + } else { + cmd->psd_type[NUM_ACCESS_CATEGORIES_COPY - 1 - i] = + CC33XX_PSD_LEGACY; + } + } + + sta_rates = sta->deflink.supp_rates[wlvif->band]; + if (sta->deflink.ht_cap.ht_supported) { + sta_rates |= + (sta->deflink.ht_cap.mcs.rx_mask[0] << HW_HT_RATES_OFFSET) | + (sta->deflink.ht_cap.mcs.rx_mask[1] << HW_MIMO_RATES_OFFSET); + } + + cmd->supported_rates = + cpu_to_le32(cc33xx_tx_enabled_rates_get(cc, sta_rates, + wlvif->band)); + + if (!cmd->supported_rates) + cmd->supported_rates = cpu_to_le32(wlvif->basic_rate_set); + + if (sta->deflink.ht_cap.ht_supported) { + cmd->ht_capabilities = cpu_to_le32(sta->deflink.ht_cap.cap); + cmd->ht_capabilities |= cpu_to_le32(CC33XX_HT_CAP_HT_OPERATION); + cmd->ampdu_params = sta->deflink.ht_cap.ampdu_factor | + sta->deflink.ht_cap.ampdu_density; + } + + cmd->has_he = sta->deflink.he_cap.has_he; + cmd->mfp = sta->mfp; + ret = cc33xx_cmd_send(cc, CMD_ADD_PEER, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd add peer"); + goto out_free; + } + + if (hlid) { + if (le16_to_cpu(command_complete->header.status) == CMD_STATUS_SUCCESS) { + *hlid = command_complete->hlid; + cc->links[*hlid].allocated_pkts = 0; + cc->session_ids[*hlid] = command_complete->session_id; + } else { + ret = -EMLINK; + } + } +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_cmd_remove_peer(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u8 hlid) +{ + struct cc33xx_cmd_remove_peer *cmd; + int ret; + bool timeout = false; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->hlid = hlid; + + cmd->role_id = wlvif->role_id; + + ret = cc33xx_cmd_send(cc, CMD_REMOVE_PEER, cmd, 
sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to initiate cmd remove peer"); + goto out_free; + } + + ret = cc33xx_wait_for_event(cc, CC33XX_EVENT_PEER_REMOVE_COMPLETE, + &timeout); + + /* We are ok with a timeout here. The event is sometimes not sent + * due to a firmware bug. In case of another error (like SDIO timeout) + * queue a recovery. + */ + if (ret < 0) + cc33xx_queue_recovery_work(cc); + +out_free: + kfree(cmd); + +out: + return ret; +} + +static int cc33xx_get_reg_conf_ch_idx(enum nl80211_band band, u16 ch) +{ + /* map the given band/channel to the respective predefined + * bit expected by the fw + */ + switch (band) { + case NL80211_BAND_2GHZ: + /* channels 1..14 are mapped to 0..13 */ + if (ch >= 1 && ch <= 14) + return ch - 1; + break; + case NL80211_BAND_5GHZ: + switch (ch) { + case 8 ... 16: + /* channels 8,12,16 are mapped to 18,19,20 */ + return 18 + (ch - 8) / 4; + case 34 ... 48: + /* channels 34,36..48 are mapped to 21..28 */ + return 21 + (ch - 34) / 2; + case 52 ... 64: + /* channels 52,56..64 are mapped to 29..32 */ + return 29 + (ch - 52) / 4; + case 100 ... 140: + /* channels 100,104..140 are mapped to 33..43 */ + return 33 + (ch - 100) / 4; + case 149 ... 
165: + /* channels 149,153..165 are mapped to 44..48 */ + return 44 + (ch - 149) / 4; + default: + break; + } + break; + default: + break; + } + + cc33xx_error("%s: unknown band/channel: %d/%d", __func__, band, ch); + return -1; +} + +void cc33xx_set_pending_regdomain_ch(struct cc33xx *cc, u16 channel, + enum nl80211_band band) +{ + int ch_bit_idx = 0; + + if (!(cc->quirks & CC33XX_QUIRK_REGDOMAIN_CONF)) + return; + + ch_bit_idx = cc33xx_get_reg_conf_ch_idx(band, channel); + + if (ch_bit_idx >= 0 && ch_bit_idx <= CC33XX_MAX_CHANNELS) + __set_bit_le(ch_bit_idx, (long *)cc->reg_ch_conf_pending); +} + +int cc33xx_cmd_regdomain_config_locked(struct cc33xx *cc) +{ + struct cc33xx_cmd_regdomain_dfs_config *cmd = NULL; + int ret = 0, i, b, ch_bit_idx; + __le32 tmp_ch_bitmap[2] __aligned(sizeof(unsigned long)); + struct wiphy *wiphy = cc->hw->wiphy; + struct ieee80211_supported_band *band; + bool timeout = false; + + if (!(cc->quirks & CC33XX_QUIRK_REGDOMAIN_CONF)) + return 0; + + memcpy(tmp_ch_bitmap, cc->reg_ch_conf_pending, sizeof(tmp_ch_bitmap)); + + for (b = NL80211_BAND_2GHZ; b <= NL80211_BAND_5GHZ; b++) { + band = wiphy->bands[b]; + for (i = 0; i < band->n_channels; i++) { + struct ieee80211_channel *channel = &band->channels[i]; + u16 ch = channel->hw_value; + u32 flags = channel->flags; + + if (flags & (IEEE80211_CHAN_DISABLED | + IEEE80211_CHAN_NO_IR)) + continue; + + if ((flags & IEEE80211_CHAN_RADAR) && + channel->dfs_state != NL80211_DFS_AVAILABLE) + continue; + + ch_bit_idx = cc33xx_get_reg_conf_ch_idx(b, ch); + if (ch_bit_idx < 0) + continue; + + __set_bit_le(ch_bit_idx, (long *)tmp_ch_bitmap); + } + } + + if (!memcmp(tmp_ch_bitmap, cc->reg_ch_conf_last, sizeof(tmp_ch_bitmap))) + goto out; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->ch_bit_map1 = tmp_ch_bitmap[0]; + cmd->ch_bit_map2 = tmp_ch_bitmap[1]; + cmd->dfs_region = cc->dfs_region; + + ret = cc33xx_cmd_send(cc, CMD_DFS_CHANNEL_CONFIG, cmd, 
sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to send reg domain dfs config"); + goto out; + } + + ret = cc33xx_wait_for_event(cc, CC33XX_EVENT_DFS_CONFIG_COMPLETE, + &timeout); + + if (ret < 0 || timeout) { + cc33xx_error("reg domain conf %serror", + timeout ? "completion " : ""); + ret = timeout ? -ETIMEDOUT : ret; + goto out; + } + + memcpy(cc->reg_ch_conf_last, tmp_ch_bitmap, sizeof(tmp_ch_bitmap)); + memset(cc->reg_ch_conf_pending, 0, sizeof(cc->reg_ch_conf_pending)); + +out: + kfree(cmd); + return ret; +} + +int cc33xx_cmd_config_fwlog(struct cc33xx *cc) +{ + struct cc33xx_cmd_config_fwlog *cmd; + int ret = 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->logger_mode = cc->conf.host_conf.fwlog.mode; + cmd->log_severity = cc->conf.host_conf.fwlog.severity; + cmd->timestamp = cc->conf.host_conf.fwlog.timestamp; + cmd->output = cc->conf.host_conf.fwlog.output; + cmd->threshold = cc->conf.host_conf.fwlog.threshold; + + ret = cc33xx_cmd_send(cc, CMD_CONFIG_FWLOGGER, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to send config firmware logger command"); + goto out_free; + } + +out_free: + kfree(cmd); + +out: + return ret; +} + +static int cc33xx_cmd_roc(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 role_id, enum nl80211_band band, u8 channel) +{ + struct cc33xx_cmd_roc *cmd; + int ret = 0; + + if (WARN_ON(role_id == CC33XX_INVALID_ROLE_ID)) + return -EINVAL; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = role_id; + cmd->channel = channel; + switch (band) { + case NL80211_BAND_2GHZ: + cmd->band = CC33XX_BAND_2_4GHZ; + break; + case NL80211_BAND_5GHZ: + cmd->band = CC33XX_BAND_5GHZ; + break; + default: + cc33xx_error("roc - unknown band: %d", (int)wlvif->band); + ret = -EINVAL; + goto out_free; + } + + ret = cc33xx_cmd_send(cc, CMD_REMAIN_ON_CHANNEL, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to 
send ROC command"); + goto out_free; + } + +out_free: + kfree(cmd); + +out: + return ret; +} + +static int cc33xx_cmd_croc(struct cc33xx *cc, u8 role_id) +{ + struct cc33xx_cmd_croc *cmd; + int ret = 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + cmd->role_id = role_id; + + ret = cc33xx_cmd_send(cc, CMD_CANCEL_REMAIN_ON_CHANNEL, cmd, + sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to send ROC command"); + goto out_free; + } + +out_free: + kfree(cmd); + +out: + return ret; +} + +int cc33xx_roc(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 role_id, + enum nl80211_band band, u8 channel) +{ + int ret = 0; + + if (WARN_ON(test_bit(role_id, cc->roc_map))) + return 0; + + ret = cc33xx_cmd_roc(cc, wlvif, role_id, band, channel); + if (ret < 0) + goto out; + + __set_bit(role_id, cc->roc_map); +out: + return ret; +} + +int cc33xx_croc(struct cc33xx *cc, u8 role_id) +{ + int ret = 0; + + if (WARN_ON(!test_bit(role_id, cc->roc_map))) + return 0; + + ret = cc33xx_cmd_croc(cc, role_id); + if (ret < 0) + goto out; + + __clear_bit(role_id, cc->roc_map); + + /* Rearm the tx watchdog when removing the last ROC. This prevents + * recoveries due to just finished ROCs - when Tx hasn't yet had + * a chance to get out. 
+ */ + if (find_first_bit(cc->roc_map, CC33XX_MAX_ROLES) >= CC33XX_MAX_ROLES) + cc33xx_rearm_tx_watchdog_locked(cc); +out: + return ret; +} + +int cc33xx_cmd_stop_channel_switch(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct cc33xx_cmd_stop_channel_switch *cmd; + int ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = wlvif->role_id; + + ret = cc33xx_cmd_send(cc, CMD_STOP_CHANNEL_SWITCH, + cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to stop channel switch command"); + goto out_free; + } + +out_free: + kfree(cmd); + +out: + return ret; +} + +/* start dev role and roc on its channel */ +int cc33xx_start_dev(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum nl80211_band band, int channel) +{ + int ret; + + if (WARN_ON(!(wlvif->bss_type == BSS_TYPE_STA_BSS || + wlvif->bss_type == BSS_TYPE_IBSS))) + return -EINVAL; + + /* the dev role is already started for p2p mgmt interfaces */ + + if (!cc33xx_is_p2p_mgmt(wlvif)) { + ret = cc33xx_cmd_role_enable(cc, + cc33xx_wlvif_to_vif(wlvif)->addr, + CC33XX_ROLE_DEVICE, + &wlvif->dev_role_id); + if (ret < 0) + goto out; + } + + ret = cc33xx_cmd_role_start_dev(cc, wlvif, band, channel); + if (ret < 0) + goto out_disable; + + ret = cc33xx_roc(cc, wlvif, wlvif->dev_role_id, band, channel); + if (ret < 0) + goto out_stop; + + return 0; + +out_stop: + cc333xx_cmd_role_stop_dev(cc, wlvif); +out_disable: + if (!cc33xx_is_p2p_mgmt(wlvif)) + cc33xx_cmd_role_disable(cc, &wlvif->dev_role_id); +out: + return ret; +} + +/* croc dev hlid, and stop the role */ +int cc33xx_stop_dev(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int ret; + + if (WARN_ON(!(wlvif->bss_type == BSS_TYPE_STA_BSS || + wlvif->bss_type == BSS_TYPE_IBSS))) + return -EINVAL; + + /* flush all pending packets */ + ret = cc33xx_tx_work_locked(cc); + if (ret < 0) + goto out; + + if (test_bit(wlvif->dev_role_id, cc->roc_map)) { + ret = cc33xx_croc(cc, wlvif->dev_role_id); + if (ret < 
0) + goto out; + } + + ret = cc333xx_cmd_role_stop_dev(cc, wlvif); + if (ret < 0) + goto out; + + if (!cc33xx_is_p2p_mgmt(wlvif)) { + ret = cc33xx_cmd_role_disable(cc, &wlvif->dev_role_id); + if (ret < 0) + goto out; + } + +out: + return ret; +} + +int cc33xx_cmd_generic_cfg(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 feature, u8 enable, u8 value) +{ + struct cc33xx_cmd_generic_cfg *cmd; + int ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) + return -ENOMEM; + + cmd->role_id = wlvif->role_id; + cmd->feature = feature; + cmd->enable = enable; + cmd->value = value; + + ret = cc33xx_cmd_send(cc, CMD_GENERIC_CFG, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to send generic cfg command"); + goto out_free; + } +out_free: + kfree(cmd); + return ret; +} + +int cmd_channel_switch(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_channel_switch *ch_switch) +{ + struct cmd_channel_switch *cmd; + u32 supported_rates; + int ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + cmd->role_id = wlvif->role_id; + cmd->channel = ch_switch->chandef.chan->hw_value; + cmd->switch_time = ch_switch->count; + cmd->stop_tx = ch_switch->block_tx; + + switch (ch_switch->chandef.chan->band) { + case NL80211_BAND_2GHZ: + cmd->band = CC33XX_BAND_2_4GHZ; + break; + case NL80211_BAND_5GHZ: + cmd->band = CC33XX_BAND_5GHZ; + break; + default: + cc33xx_error("invalid channel switch band: %d", + ch_switch->chandef.chan->band); + ret = -EINVAL; + goto out_free; + } + + supported_rates = CONF_TX_ENABLED_RATES | CONF_TX_MCS_RATES; + supported_rates |= wlvif->rate_set; + if (wlvif->p2p) + supported_rates &= ~CONF_TX_CCK_RATES; + cmd->local_supported_rates = cpu_to_le32(supported_rates); + cmd->channel_type = wlvif->channel_type; + + ret = cc33xx_cmd_send(cc, CMD_CHANNEL_SWITCH, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to send channel switch command"); + goto out_free; + } + +out_free: 
+ kfree(cmd); +out: + return ret; +} + +int cmd_dfs_master_restart(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct cmd_dfs_master_restart *cmd; + int ret = 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) + return -ENOMEM; + + cmd->role_id = wlvif->role_id; + + ret = cc33xx_cmd_send(cc, CMD_DFS_MASTER_RESTART, + cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to send dfs master restart command"); + goto out_free; + } +out_free: + kfree(cmd); + return ret; +} + +int cmd_set_cac(struct cc33xx *cc, struct cc33xx_vif *wlvif, bool start) +{ + struct cmd_cac_start *cmd; + int ret = 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) + return -ENOMEM; + + cmd->role_id = wlvif->role_id; + cmd->channel = wlvif->channel; + if (wlvif->band == NL80211_BAND_5GHZ) + cmd->band = CC33XX_BAND_5GHZ; + cmd->bandwidth = cc33xx_get_native_channel_type(wlvif->channel_type); + + ret = cc33xx_cmd_send(cc, start ? CMD_CAC_START : CMD_CAC_STOP, + cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to send cac command"); + goto out_free; + } + +out_free: + kfree(cmd); + return ret; +} + +int cmd_set_bd_addr(struct cc33xx *cc, u8 *bd_addr) +{ + struct cmd_set_bd_addr *cmd; + int ret = 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + memcpy(cmd->bd_addr, bd_addr, sizeof(cmd->bd_addr)); + + ret = cc33xx_cmd_send(cc, CMD_SET_BD_ADDR, cmd, sizeof(*cmd), 0); + if (ret < 0) { + cc33xx_error("failed to set BD address"); + goto out_free; + } + +out_free: + kfree(cmd); +out: + return ret; +} + +int cmd_get_device_info(struct cc33xx *cc, u8 *info_buffer, size_t buffer_len) +{ + struct cc33xx_cmd_get_device_info *cmd; + int ret = 0; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) + return -ENOMEM; + + ret = cc33xx_cmd_send(cc, CMD_BM_READ_DEVICE_INFO, cmd, + sizeof(*cmd), sizeof(*cmd)); + if (ret < 0) { + cc33xx_error("Device info command failure "); + } else { + WARN_ON(buffer_len > 
sizeof(cmd->device_info)); + memcpy(info_buffer, cmd->device_info, buffer_len); + } + + kfree(cmd); + + return ret; +} + +int cmd_download_container_chunk(struct cc33xx *cc, u8 *chunk, + size_t chunk_len, bool is_last_chunk) +{ + struct cc33xx_cmd_container_download *cmd; + const size_t command_size = sizeof(*cmd) + chunk_len; + int ret; + bool is_sync_transfer = !is_last_chunk; + + cmd = kzalloc(command_size, GFP_KERNEL); + + if (!cmd) { + cc33xx_error("Chunk buffer allocation failure"); + return -ENOMEM; + } + + memcpy(cmd->payload, chunk, chunk_len); + cmd->length = cpu_to_le32(chunk_len); + + if (is_last_chunk) { + cc33xx_debug(DEBUG_BOOT, "Suspending IRQ while device reboots"); + cc33xx_disable_interrupts_nosync(cc); + } + + ret = __cc33xx_cmd_send(cc, CMD_CONTAINER_DOWNLOAD, cmd, + command_size, sizeof(u32), is_sync_transfer); + + kfree(cmd); + + if (is_last_chunk) { + msleep(CC33XX_REBOOT_TIMEOUT_MSEC); + cc33xx_enable_interrupts(cc); + } + + return ret; +} diff --git a/drivers/net/wireless/ti/cc33xx/cmd.h b/drivers/net/wireless/ti/cc33xx/cmd.h new file mode 100644 index 000000000000..f6e4877cf7b4 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/cmd.h @@ -0,0 +1,700 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __CMD_H__ +#define __CMD_H__ + +#include "cc33xx.h" + +struct acx_header; + +enum buffer_size { + INI_MAX_BUFFER_SIZE, + CMD_MAX_BUFFER_SIZE +}; + +int cc33xx_set_max_buffer_size(struct cc33xx *cc, enum buffer_size max_buffer_size); +int cc33xx_cmd_send(struct cc33xx *cc, u16 id, void *buf, + size_t len, size_t res_len); +int cc33xx_cmd_role_enable(struct cc33xx *cc, u8 *addr, + u8 role_type, u8 *role_id); +int cc33xx_cmd_role_disable(struct cc33xx *cc, u8 *role_id); +int cc33xx_cmd_role_start_sta(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cc33xx_cmd_role_stop_sta(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int 
cc33xx_cmd_role_start_ap(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cc33xx_cmd_role_stop_ap(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cc33xx_cmd_role_start_ibss(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cc33xx_start_dev(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum nl80211_band band, int channel); +int cc33xx_stop_dev(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cc33xx_cmd_test(struct cc33xx *cc, void *buf, size_t buf_len, u8 answer); +int cc33xx_cmd_interrogate(struct cc33xx *cc, u16 id, void *buf, + size_t cmd_len, size_t res_len); +int cc33xx_cmd_debug_inter(struct cc33xx *cc, u16 id, void *buf, + size_t cmd_len, size_t res_len); +int cc33xx_cmd_configure(struct cc33xx *cc, u16 id, void *buf, size_t len); +int cc33xx_cmd_debug(struct cc33xx *cc, u16 id, void *buf, size_t len); +int cc33xx_cmd_configure_failsafe(struct cc33xx *cc, u16 id, void *buf, + size_t len, unsigned long valid_rets); +int cc33xx_cmd_ps_mode(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 ps_mode, u16 auto_ps_timeout); +int cc33xx_cmd_set_default_wep_key(struct cc33xx *cc, u8 id, u8 hlid); +int cc33xx_cmd_set_sta_key(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u16 action, u8 id, u8 key_type, + u8 key_size, const u8 *key, const u8 *addr, + u32 tx_seq_32, u16 tx_seq_16); +int cc33xx_cmd_set_ap_key(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u16 action, u8 id, u8 key_type, u8 key_size, + const u8 *key, u8 hlid, u32 tx_seq_32, u16 tx_seq_16); +int cc33xx_cmd_set_peer_state(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u8 hlid); +int cc33xx_roc(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 role_id, + enum nl80211_band band, u8 channel); +int cc33xx_croc(struct cc33xx *cc, u8 role_id); +int cc33xx_cmd_add_peer(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_sta *sta, u8 *hlid, u8 is_connected); +int cc33xx_cmd_remove_peer(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u8 hlid); +void cc33xx_set_pending_regdomain_ch(struct cc33xx *cc, 
u16 channel, + enum nl80211_band band); +int cc33xx_cmd_regdomain_config_locked(struct cc33xx *cc); +int cc33xx_cmd_generic_cfg(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 feature, u8 enable, u8 value); +int cc33xx_cmd_config_fwlog(struct cc33xx *cc); +int cc33xx_cmd_stop_channel_switch(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cc33xx_set_link(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 link); +void cc33xx_clear_link(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 *hlid); +int cc33xx_cmd_role_start_transceiver(struct cc33xx *cc, u8 role_id); +int cc33xx_cmd_role_stop_transceiver(struct cc33xx *cc); +int cc33xx_cmd_plt_enable(struct cc33xx *cc, u8 role_id); +int cc33xx_cmd_plt_disable(struct cc33xx *cc); +int cmd_channel_switch(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_channel_switch *ch_switch); +int cmd_dfs_master_restart(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cmd_set_cac(struct cc33xx *cc, struct cc33xx_vif *wlvif, bool start); +int cmd_set_bd_addr(struct cc33xx *cc, u8 *bd_addr); +int cmd_get_device_info(struct cc33xx *cc, u8 *info_buffer, size_t buffer_len); +int cmd_download_container_chunk(struct cc33xx *cc, u8 *chunk, + size_t chunk_len, bool is_last_chunk); + +enum cc33xx_cmd { + CMD_EMPTY, + CMD_SET_KEYS = 1, + CMD_SET_LINK_CONNECTION_STATE = 2, + + CMD_CHANNEL_SWITCH = 3, + CMD_STOP_CHANNEL_SWITCH = 4, + + CMD_REMAIN_ON_CHANNEL = 5, + CMD_CANCEL_REMAIN_ON_CHANNEL = 6, + + CMD_START_DHCP_MGMT_SEQ = 7, + CMD_STOP_DHCP_MGMT_SEQ = 8, + + CMD_START_SECURITY_MGMT_SEQ = 9, + CMD_STOP_SECURITY_MGMT_SEQ = 10, + + CMD_START_ARP_MGMT_SEQ = 11, + CMD_STOP_ARP_MGMT_SEQ = 12, + + CMD_START_DNS_MGMT_SEQ = 13, + CMD_STOP_DNS_MGMT_SEQ = 14, + + /* Access point commands */ + CMD_ADD_PEER = 15, + CMD_REMOVE_PEER = 16, + + /* Role API */ + CMD_ROLE_ENABLE = 17, + CMD_ROLE_DISABLE = 18, + CMD_ROLE_START = 19, + CMD_ROLE_STOP = 20, + + CMD_AP_SET_BEACON_INFO = 21, /* Set AP beacon template */ + + /* Managed sequence of 
sending deauth / disassoc frame */ + CMD_SEND_DEAUTH_DISASSOC = 22, + + CMD_SCHED_STATE_EVENT = 23, + CMD_SCAN = 24, + CMD_STOP_SCAN = 25, + CMD_SET_PROBE_IE = 26, + + CMD_CONFIGURE = 27, + CMD_INTERROGATE = 28, + + CMD_DEBUG = 29, + CMD_DEBUG_READ = 30, + + CMD_TEST_MODE = 31, + CMD_PLT_ENABLE = 32, + CMD_PLT_DISABLE = 33, + CMD_CONNECTION_SCAN_SSID_CFG = 34, + CMD_BM_READ_DEVICE_INFO = 35, + CMD_CONTAINER_DOWNLOAD = 36, + CMD_DOWNLOAD_INI_PARAMS = 37, + CMD_SET_BD_ADDR = 38, + CMD_BLE_COMMANDS = 39, + + CMD_LAST_SUPPORTED_COMMAND, + + /* The following commands are legacy and are not yet supported */ + + CMD_SET_PS_MODE, + CMD_DFS_CHANNEL_CONFIG, + CMD_CONFIG_FWLOGGER, + CMD_START_FWLOGGER, + CMD_STOP_FWLOGGER, + CMD_GENERIC_CFG, + CMD_DFS_MASTER_RESTART, + CMD_CAC_START, + CMD_CAC_STOP, + CMD_DFS_RADAR_DETECTION_DEBUG, + + MAX_COMMAND_ID_CC33xx = 0x7FFF, +}; + +#define MAX_CMD_PARAMS 572 + +/* unit ms */ +#define CC33XX_COMMAND_TIMEOUT 2000 +#define CC33XX_CMD_TEMPL_MAX_SIZE 512 +#define CC33XX_EVENT_TIMEOUT 5000 + +struct cc33xx_cmd_header { + struct NAB_header NAB_header; + __le16 id; + __le16 status; + + /* payload */ + u8 data[]; +} __packed; + +#define CC33XX_CMD_MAX_PARAMS 572 + +struct cc33xx_command { + struct cc33xx_cmd_header header; + u8 parameters[CC33XX_CMD_MAX_PARAMS]; +} __packed; + +enum { + CMD_MAILBOX_IDLE = 0, + CMD_STATUS_SUCCESS = 1, + CMD_STATUS_UNKNOWN_CMD = 2, + CMD_STATUS_UNKNOWN_IE = 3, + CMD_STATUS_REJECT_MEAS_SG_ACTIVE = 11, + CMD_STATUS_RX_BUSY = 13, + CMD_STATUS_INVALID_PARAM = 14, + CMD_STATUS_TEMPLATE_TOO_LARGE = 15, + CMD_STATUS_OUT_OF_MEMORY = 16, + CMD_STATUS_STA_TABLE_FULL = 17, + CMD_STATUS_RADIO_ERROR = 18, + CMD_STATUS_WRONG_NESTING = 19, + CMD_STATUS_TIMEOUT = 21, /* Driver internal use.*/ + CMD_STATUS_FW_RESET = 22, /* Driver internal use.*/ + CMD_STATUS_TEMPLATE_OOM = 23, + CMD_STATUS_NO_RX_BA_SESSION = 24, + + MAX_COMMAND_STATUS +}; + +enum { + BSS_TYPE_IBSS = 0, + BSS_TYPE_STA_BSS = 2, + BSS_TYPE_AP_BSS = 3, + 
MAX_BSS_TYPE = 0xFF +}; + +struct cc33xx_cmd_role_enable { + struct cc33xx_cmd_header header; + + u8 role_type; + u8 mac_address[ETH_ALEN]; + u8 padding; +} __packed; + +struct command_complete_header { + __le16 id; + __le16 status; + + /* payload */ + u8 data[]; +} __packed; + +struct cc33xx_cmd_complete_role_enable { + struct command_complete_header header; + u8 role_id; + u8 padding[3]; +} __packed; + +struct cc33xx_cmd_role_disable { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 padding[3]; +} __packed; + +enum cc33xx_band { + CC33XX_BAND_2_4GHZ = 0, + CC33XX_BAND_5GHZ = 1, + CC33XX_BAND_JAPAN_4_9_GHZ = 2, + CC33XX_BAND_DEFAULT = CC33XX_BAND_2_4GHZ, + CC33XX_BAND_INVALID = 0x7E, + CC33XX_BAND_MAX_RADIO = 0x7F, +}; + +enum cc33xx_channel_type { + CC33XX_CHAN_NO_HT, + CC33XX_CHAN_HT20, + CC33XX_CHAN_HT40MINUS, + CC33XX_CHAN_HT40PLUS +}; + +struct cc33xx_cmd_role_start { + struct cc33xx_cmd_header header; + u8 role_id; + u8 role_type; + u8 band; + u8 channel; + + u8 channel_type; + + union { + struct { + u8 padding_1[54]; + } __packed device; + /* sta & p2p_cli use the same struct */ + struct { + u8 bssid[ETH_ALEN]; + + __le32 remote_rates; /* remote supported rates */ + + /* The target uses this field to determine the rate at + * which to transmit control frame responses (such as + * ACK or CTS frames). 
+ */ + __le32 basic_rate_set; + __le32 local_rates; /* local supported rates */ + + u8 ssid_type; + u8 ssid_len; + u8 ssid[IEEE80211_MAX_SSID_LEN]; + + __le16 beacon_interval; /* in TBTTs */ + } __packed sta; + struct { + u8 bssid[ETH_ALEN]; + u8 hlid; /* data hlid */ + u8 dtim_interval; + __le32 remote_rates; /* remote supported rates */ + + __le32 basic_rate_set; + __le32 local_rates; /* local supported rates */ + + u8 ssid_type; + u8 ssid_len; + u8 ssid[IEEE80211_MAX_SSID_LEN]; + + __le16 beacon_interval; /* in TBTTs */ + + u8 padding_1[2]; + } __packed ibss; + /* ap & p2p_go use the same struct */ + struct { + __le16 beacon_interval; /* in TBTTs */ + + __le32 basic_rate_set; + __le32 local_rates; /* local supported rates */ + + u8 dtim_interval; + /* ap supports wmm (note that there is additional + * per-sta wmm configuration) + */ + u8 wmm; + u8 padding_1[42]; + } __packed ap; + }; + u8 padding; +} __packed; + +struct cc33xx_cmd_complete_role_start { + struct command_complete_header header; + union { + struct { + u8 hlid; + u8 session; + u8 padding[2]; + } __packed sta; + struct { + /* The host link id for the AP's global queue */ + u8 global_hlid; + /* The host link id for the AP's broadcast queue */ + u8 broadcast_hlid; + u8 bcast_session_id; + u8 global_session_id; + } __packed ap; + }; +} __packed; + +struct cc33xx_cmd_role_stop { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 padding[3]; +} __packed; + +struct cmd_enabledisable_path { + struct cc33xx_cmd_header header; + + u8 channel; + u8 padding[3]; +} __packed; + +enum cc33xx_cmd_ps_mode_e { + STATION_AUTO_PS_MODE, /* Dynamic Power Save */ + STATION_ACTIVE_MODE, + STATION_POWER_SAVE_MODE +}; + +struct cc33xx_cmd_ps_params { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 ps_mode; /* STATION_* */ + __le16 auto_ps_timeout; +} __packed; + +/* HW encryption keys */ +#define NUM_ACCESS_CATEGORIES_COPY 4 + +enum cc33xx_cmd_key_action { + KEY_ADD_OR_REPLACE = 1, + KEY_REMOVE = 2, + KEY_SET_ID 
= 3, + MAX_KEY_ACTION = 0xffff, +}; + +enum cc33xx_cmd_lid_key_type { + UNICAST_LID_TYPE = 0, + BROADCAST_LID_TYPE = 1, + WEP_DEFAULT_LID_TYPE = 2 +}; + +enum cc33xx_cmd_key_type { + KEY_NONE = 0, + KEY_WEP = 1, + KEY_TKIP = 2, + KEY_AES = 3, /* aes_ccmp_128 */ + KEY_GEM = 4, + KEY_IGTK = 5, /* bip_cmac_128 */ + KEY_CMAC_256 = 6, + KEY_GMAC_128 = 7, + KEY_GMAC_256 = 8, + KEY_GCMP_256 = 9, + KEY_CCMP256 = 10, + KEY_GCMP128 = 11, +}; + +struct cc33xx_cmd_set_keys { + struct cc33xx_cmd_header header; + + /* Indicates whether the HLID is a unicast key set + * or broadcast key set. A special value 0xFF is + * used to indicate that the HLID is on WEP-default + * (multi-hlids). of type cc33xx_cmd_lid_key_type. + */ + u8 hlid; + + /* In WEP-default network (hlid == 0xFF) used to + * indicate which network STA/IBSS/AP role should be + * changed + */ + u8 lid_key_type; + + /* Key ID - For TKIP and AES key types, this field + * indicates the value that should be inserted into + * the KeyID field of frames transmitted using this + * key entry. For broadcast keys the index use as a + * marker for TX/RX key. + * For WEP default network (HLID=0xFF), this field + * indicates the ID of the key to add or remove. 
+ */ + u8 key_id; + u8 reserved_1; + + /* key_action_e */ + __le16 key_action; + + /* key size in bytes */ + u8 key_size; + + /* key_type_e */ + u8 key_type; + + /* This field holds the security key data to add to the STA table */ + u8 key[MAX_KEY_SIZE]; + __le16 ac_seq_num16[NUM_ACCESS_CATEGORIES_COPY]; + __le32 ac_seq_num32[NUM_ACCESS_CATEGORIES_COPY]; +} __packed; + +struct cc33xx_cmd_test_header { + u8 id; + u8 padding[3]; +} __packed; + +#define CC33XX_CMD_STA_STATE_CONNECTED 1 + +struct cc33xx_cmd_set_peer_state { + struct cc33xx_cmd_header header; + + u8 hlid; + u8 state; + u8 padding[2]; +} __packed; + +struct cc33xx_cmd_roc { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 channel; + u8 band; + u8 padding; +}; + +struct cc33xx_cmd_croc { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 padding[3]; +}; + +enum cc33xx_ssid_type { + CC33XX_SSID_TYPE_PUBLIC = 0, + CC33XX_SSID_TYPE_HIDDEN = 1, + CC33XX_SSID_TYPE_ANY = 2, +}; + +enum CC33XX_psd_type { + CC33XX_PSD_LEGACY = 0, + CC33XX_PSD_UPSD_TRIGGER = 1, + CC33XX_PSD_LEGACY_PSPOLL = 2, + CC33XX_PSD_SAPSD = 3 +}; + +#define MAX_SIZE_BEACON_TEMP (450) +struct cc33xx_cmd_set_beacon_info { + struct cc33xx_cmd_header header; + + u8 role_id; + __le16 beacon_len; + u8 beacon[MAX_SIZE_BEACON_TEMP]; + u8 padding[3]; +} __packed; + +struct cc33xx_cmd_add_peer { + struct cc33xx_cmd_header header; + + u8 is_connected; + u8 role_id; + u8 role_type; + u8 link_type; + u8 addr[ETH_ALEN]; + __le16 aid; + u8 psd_type[NUM_ACCESS_CATEGORIES_COPY]; + __le32 supported_rates; + u8 bss_index; + u8 sp_len; + u8 wmm; + __le32 ht_capabilities; + u8 ampdu_params; + + /* HE peer support */ + bool has_he; + bool mfp; + u8 padding[2]; +} __packed; + +struct cc33xx_cmd_complete_add_peer { + struct command_complete_header header; + u8 hlid; + u8 session_id; +} __packed; + +struct cc33xx_cmd_remove_peer { + struct cc33xx_cmd_header header; + u8 hlid; + u8 role_id; + u8 padding[2]; +} __packed; + +/* Continuous mode - packets are 
transferred to the host periodically + * via the data path. + * On demand - Log messages are stored in a cyclic buffer in the + * firmware, and only transferred to the host when explicitly requested + */ +enum cc33xx_fwlogger_log_mode { + CC33XX_FWLOG_CONTINUOUS, +}; + +/* Include/exclude timestamps from the log messages */ +enum cc33xx_fwlogger_timestamp { + CC33XX_FWLOG_TIMESTAMP_DISABLED, + CC33XX_FWLOG_TIMESTAMP_ENABLED +}; + +/* Logs can be routed to the debug pinouts (where available), to the host bus + * (SDIO/SPI), or dropped + */ +enum cc33xx_fwlogger_output { + CC33XX_FWLOG_OUTPUT_NONE, + CC33XX_FWLOG_OUTPUT_DBG_PINS, + CC33XX_FWLOG_OUTPUT_HOST, +}; + +struct cc33xx_cmd_regdomain_dfs_config { + struct cc33xx_cmd_header header; + + __le32 ch_bit_map1; + __le32 ch_bit_map2; + u8 dfs_region; + u8 padding[3]; +} __packed; + +enum cc33xx_generic_cfg_feature { + CC33XX_CFG_FEATURE_RADAR_DEBUG = 2, +}; + +struct cc33xx_cmd_generic_cfg { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 feature; + u8 enable; + u8 value; +} __packed; + +struct cc33xx_cmd_config_fwlog { + struct cc33xx_cmd_header header; + + /* See enum cc33xx_fwlogger_log_mode */ + u8 logger_mode; + + /* Minimum log level threshold */ + u8 log_severity; + + /* Include/exclude timestamps from the log messages */ + u8 timestamp; + + /* See enum cc33xx_fwlogger_output */ + u8 output; + + /* Regulates the frequency of log messages */ + u8 threshold; + + u8 padding[3]; +} __packed; + +struct cc33xx_cmd_stop_channel_switch { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 padding[3]; +} __packed; + +/* Used to check radio status after calibration */ +#define MAX_TLV_LENGTH 500 +#define TEST_CMD_P2G_CAL 2 /* TX BiP */ + +struct cc33xx_cmd_cal_p2g { + struct cc33xx_cmd_header header; + + struct cc33xx_cmd_test_header test; + + __le32 ver; + __le16 len; + u8 buf[MAX_TLV_LENGTH]; + u8 type; + u8 padding; + + __le16 radio_status; + + u8 sub_band_mask; + u8 padding2; +} __packed; + +struct 
cmd_channel_switch { + struct cc33xx_cmd_header header; + + u8 role_id; + + /* The new serving channel */ + u8 channel; + /* Relative time of the serving channel switch in TBTT units */ + u8 switch_time; + /* Stop the role TX, should expect it after radar detection */ + u8 stop_tx; + + __le32 local_supported_rates; + + u8 channel_type; + u8 band; + + u8 padding[2]; +} __packed; + +struct cmd_set_bd_addr { + struct cc33xx_cmd_header header; + + u8 bd_addr[ETH_ALEN]; + u8 padding[2]; +} __packed; + +struct cmd_dfs_master_restart { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 padding[3]; +} __packed; + +/* cac_start and cac_stop share the same params */ +struct cmd_cac_start { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 channel; + u8 band; + u8 bandwidth; +} __packed; + +/* PLT structs */ + +struct cc33xx_cmd_PLT_enable { + struct cc33xx_cmd_header header; + __le32 dummy; +}; + +struct cc33xx_cmd_PLT_disable { + struct cc33xx_cmd_header header; + __le32 dummy; +}; + +struct cc33xx_cmd_ini_params_download { + struct cc33xx_cmd_header header; + __le32 length; + u8 payload[]; +} __packed; + +struct cc33xx_cmd_container_download { + struct cc33xx_cmd_header header; + __le32 length; + u8 payload[]; +} __packed; + +struct cc33xx_cmd_get_device_info { + struct cc33xx_cmd_header header; + u8 device_info[700]; +} __packed; + +#endif /* __CC33XX_CMD_H__ */ From patchwork Thu Nov 7 12:51:58 2024 X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866417
From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 06/17] wifi: cc33xx: Add acx.c, acx.h Date: Thu, 7 Nov 2024 14:51:58 +0200 Message-ID: <20241107125209.1736277-7-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> These files contain various WLAN-oriented APIs. Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/acx.c | 931 +++++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/acx.h | 835 ++++++++++++++++++++++++ 2 files changed, 1766 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/acx.c create mode 100644 drivers/net/wireless/ti/cc33xx/acx.h diff --git a/drivers/net/wireless/ti/cc33xx/acx.c b/drivers/net/wireless/ti/cc33xx/acx.c new file mode 100644 index 000000000000..882a6ccbf24e --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/acx.c @@ -0,0 +1,931 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C)
2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include "acx.h" + +int cc33xx_acx_clear_statistics(struct cc33xx *cc) +{ + struct acx_header *acx; + int ret = 0; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + ret = cc33xx_cmd_configure(cc, ACX_CLEAR_STATISTICS, acx, sizeof(*acx)); + if (ret < 0) { + cc33xx_warning("failed to clear firmware statistics: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} + +int cc33xx_acx_wake_up_conditions(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 wake_up_event, u8 listen_interval) +{ + struct acx_wake_up_condition *wake_up; + int ret; + + wake_up = kzalloc(sizeof(*wake_up), GFP_KERNEL); + if (!wake_up) { + ret = -ENOMEM; + goto out; + } + + wake_up->wake_up_event = wake_up_event; + wake_up->listen_interval = listen_interval; + + ret = cc33xx_cmd_configure(cc, WAKE_UP_CONDITIONS_CFG, + wake_up, sizeof(*wake_up)); + if (ret < 0) { + cc33xx_warning("could not set wake up conditions: %d", ret); + goto out; + } + +out: + kfree(wake_up); + return ret; +} + +int cc33xx_acx_sleep_auth(struct cc33xx *cc, u8 sleep_auth) +{ + struct acx_sleep_auth *auth; + int ret; + + auth = kzalloc(sizeof(*auth), GFP_KERNEL); + if (!auth) { + ret = -ENOMEM; + goto out; + } + + auth->sleep_auth = sleep_auth; + + ret = cc33xx_cmd_configure(cc, ACX_SLEEP_AUTH, auth, sizeof(*auth)); + if (ret < 0) { + cc33xx_error("could not configure sleep_auth to %d: %d", + sleep_auth, ret); + goto out; + } + + cc->sleep_auth = sleep_auth; +out: + kfree(auth); + return ret; +} + +int cc33xx_ble_enable(struct cc33xx *cc, u8 ble_enable) +{ + struct debug_header *buf; + int ret; + + buf = kzalloc(sizeof(*buf), GFP_KERNEL); + if (!buf) { + ret = -ENOMEM; + goto out; + } + + ret = cc33xx_cmd_debug(cc, BLE_ENABLE, buf, sizeof(*buf)); + if (ret < 0) { + cc33xx_error("could not enable ble"); + goto out; + } + + cc->ble_enable = 1; +out: + kfree(buf); + return ret; +} + +int 
cc33xx_acx_tx_power(struct cc33xx *cc, struct cc33xx_vif *wlvif, + int power) +{ + struct acx_tx_power_cfg *acx; + int ret; + + if (power < CC33XX_MIN_TXPWR) { + cc33xx_warning("Configured Tx power %d dBm. Increasing to minimum %d dBm", + power, CC33XX_MIN_TXPWR); + power = CC33XX_MIN_TXPWR; + } else if (power > CC33XX_MAX_TXPWR) { + cc33xx_warning("Configured Tx power %d dBm is bigger than upper limit: %d dBm. Attenuating to max limit", + power, CC33XX_MAX_TXPWR); + power = CC33XX_MAX_TXPWR; + } + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->role_id = wlvif->role_id; + acx->tx_power = power; + + ret = cc33xx_cmd_configure(cc, TX_POWER_CFG, acx, sizeof(*acx)); + if (ret < 0) { + cc33xx_warning("Configure of tx power failed: %d", ret); + goto out; + } + + wlvif->power_level = power; + +out: + kfree(acx); + return ret; +} + +static int cc33xx_acx_mem_map(struct cc33xx *cc, + struct acx_header *memory_map, size_t len) +{ + int ret; + + ret = cc33xx_cmd_interrogate(cc, MEM_MAP_INTR, memory_map, + sizeof(struct acx_header), len); + if (ret < 0) + return ret; + + return 0; +} + +static int cc33xx_acx_get_fw_versions(struct cc33xx *cc, + struct cc33xx_acx_fw_versions *get_fw_versions, + size_t len) +{ + int ret; + + ret = cc33xx_cmd_interrogate(cc, GET_FW_VERSIONS_INTR, get_fw_versions, + sizeof(struct cc33xx_acx_fw_versions), len); + if (ret < 0) + return ret; + return 0; +} + +int cc33xx_acx_slot(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum acx_slot_type slot_time) +{ + struct acx_slot *slot; + int ret; + + slot = kzalloc(sizeof(*slot), GFP_KERNEL); + if (!slot) { + ret = -ENOMEM; + goto out; + } + + slot->role_id = wlvif->role_id; + slot->slot_time = slot_time; + ret = cc33xx_cmd_configure(cc, SLOT_CFG, slot, sizeof(*slot)); + + if (ret < 0) { + cc33xx_warning("failed to set slot time: %d", ret); + goto out; + } + +out: + kfree(slot); + return ret; +} + +int cc33xx_acx_group_address_tbl(struct cc33xx *cc,
bool enable, void *mc_list, u32 mc_list_len) +{ + struct acx_dot11_grp_addr_tbl *acx; + int ret; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->enabled = enable; + acx->num_groups = mc_list_len; + memcpy(acx->mac_table, mc_list, mc_list_len * ETH_ALEN); + + ret = cc33xx_cmd_configure(cc, DOT11_GROUP_ADDRESS_TBL, + acx, sizeof(*acx)); + if (ret < 0) { + cc33xx_warning("failed to set group addr table: %d", ret); + goto out; + } +out: + kfree(acx); + return ret; +} + +int cc33xx_acx_beacon_filter_opt(struct cc33xx *cc, struct cc33xx_vif *wlvif, + bool enable_filter) +{ + struct acx_beacon_filter_option *beacon_filter = NULL; + int ret = 0; + + if (enable_filter && + cc->conf.host_conf.conn.bcn_filt_mode == CONF_BCN_FILT_MODE_DISABLED) + goto out; + + beacon_filter = kzalloc(sizeof(*beacon_filter), GFP_KERNEL); + if (!beacon_filter) { + ret = -ENOMEM; + goto out; + } + + beacon_filter->role_id = wlvif->role_id; + beacon_filter->enable = enable_filter; + + /* When set to zero, and the filter is enabled, beacons + * without the unicast TIM bit set are dropped. 
+ */ + beacon_filter->max_num_beacons = 0; + + ret = cc33xx_cmd_configure(cc, BEACON_FILTER_OPT, + beacon_filter, sizeof(*beacon_filter)); + if (ret < 0) { + cc33xx_warning("failed to set beacon filter opt: %d", ret); + goto out; + } + +out: + kfree(beacon_filter); + return ret; +} + +int cc33xx_acx_beacon_filter_table(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct acx_beacon_filter_ie_table *ie_table; + struct conf_bcn_filt_rule *r = &cc->conf.host_conf.conn.bcn_filt_ie0; + struct conf_bcn_filt_rule *itr_end; + int idx = 0; + int ret; + bool vendor_spec = false; + + if (WARN_ON(cc->conf.host_conf.conn.bcn_filt_ie_count > CONF_MAX_BCN_FILT_IE_COUNT)) + return -EINVAL; + + itr_end = r + cc->conf.host_conf.conn.bcn_filt_ie_count; + + ie_table = kzalloc(sizeof(*ie_table), GFP_KERNEL); + if (!ie_table) { + ret = -ENOMEM; + goto out; + } + + /* configure default beacon pass-through rules */ + ie_table->role_id = wlvif->role_id; + ie_table->num_ie = 0; + + while (r < itr_end) { + ie_table->table[idx++] = r->ie; + ie_table->table[idx++] = r->rule; + + if (r->ie == WLAN_EID_VENDOR_SPECIFIC) { + /* only one vendor specific ie allowed; skip + * duplicates without stalling the loop + */ + if (vendor_spec) { + r++; + continue; + } + + /* for vendor specific rules configure the + * additional fields + */ + memcpy(&ie_table->table[idx], r->oui, + CONF_BCN_IE_OUI_LEN); + idx += CONF_BCN_IE_OUI_LEN; + ie_table->table[idx++] = r->type; + memcpy(&ie_table->table[idx], r->version, + CONF_BCN_IE_VER_LEN); + idx += CONF_BCN_IE_VER_LEN; + vendor_spec = true; + } + + ie_table->num_ie++; + r++; + } + + ret = cc33xx_cmd_configure(cc, BEACON_FILTER_TABLE, + ie_table, sizeof(*ie_table)); + if (ret < 0) { + cc33xx_warning("failed to set beacon filter table: %d", ret); + goto out; + } + +out: + kfree(ie_table); + return ret; +} + +int cc33xx_assoc_info_cfg(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_sta *sta, u16 aid) +{ + struct assoc_info_cfg *cfg; + int ret; + + cfg = kzalloc(sizeof(*cfg), GFP_KERNEL); + if (!cfg) { + ret
= -ENOMEM; + goto out; + } + + cfg->role_id = wlvif->role_id; + cfg->aid = cpu_to_le16(aid); + cfg->wmm_enabled = wlvif->wmm_enabled; + + cfg->nontransmitted = wlvif->nontransmitted; + cfg->bssid_index = wlvif->bssid_index; + cfg->bssid_indicator = wlvif->bssid_indicator; + cfg->ht_supported = sta->deflink.ht_cap.ht_supported; + cfg->vht_supported = sta->deflink.vht_cap.vht_supported; + cfg->has_he = sta->deflink.he_cap.has_he; + memcpy(cfg->transmitter_bssid, wlvif->transmitter_bssid, ETH_ALEN); + ret = cc33xx_cmd_configure(cc, ASSOC_INFO_CFG, cfg, sizeof(*cfg)); + if (ret < 0) { + cc33xx_warning("failed to set aid: %d", ret); + goto out; + } + +out: + kfree(cfg); + return ret; +} + +int cc33xx_acx_set_preamble(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum acx_preamble_type preamble) +{ + struct acx_preamble *acx; + int ret; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->role_id = wlvif->role_id; + acx->preamble = preamble; + + ret = cc33xx_cmd_configure(cc, PREAMBLE_TYPE_CFG, acx, sizeof(*acx)); + if (ret < 0) { + cc33xx_warning("Setting of preamble failed: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} + +int cc33xx_acx_cts_protect(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum acx_ctsprotect_type ctsprotect) +{ + struct acx_ctsprotect *acx; + int ret; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->role_id = wlvif->role_id; + acx->ctsprotect = ctsprotect; + + ret = cc33xx_cmd_configure(cc, CTS_PROTECTION_CFG, acx, sizeof(*acx)); + if (ret < 0) { + cc33xx_warning("Setting of ctsprotect failed: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} + +int cc33xx_acx_statistics(struct cc33xx *cc, void *stats) +{ + int ret; + + ret = cc33xx_cmd_interrogate(cc, ACX_STATISTICS, stats, + sizeof(struct acx_header), + sizeof(struct cc33xx_acx_statistics)); + if (ret < 0) { + cc33xx_warning("acx statistics failed: %d", ret); 
+ return -ENOMEM; + } + + return 0; +} + +int cc33xx_update_ap_rates(struct cc33xx *cc, u8 role_id, + u32 basic_rates_set, u32 supported_rates) +{ + struct ap_rates_class_cfg *cfg; + int ret; + + cfg = kzalloc(sizeof(*cfg), GFP_KERNEL); + + if (!cfg) { + ret = -ENOMEM; + goto out; + } + + cfg->basic_rates_set = cpu_to_le32(basic_rates_set); + cfg->supported_rates = cpu_to_le32(supported_rates); + cfg->role_id = role_id; + ret = cc33xx_cmd_configure(cc, AP_RATES_CFG, cfg, sizeof(*cfg)); + if (ret < 0) { + cc33xx_warning("Updating AP Rates failed: %d", ret); + goto out; + } + +out: + kfree(cfg); + return ret; +} + +int cc33xx_tx_param_cfg(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 ac, + u8 cw_min, u16 cw_max, u8 aifsn, u16 txop, bool acm, + u8 ps_scheme, u8 is_mu_edca, u8 mu_edca_aifs, + u8 mu_edca_ecw_min_max, u8 mu_edca_timer) +{ + struct tx_param_cfg *cfg; + int ret = 0; + + cfg = kzalloc(sizeof(*cfg), GFP_KERNEL); + + if (!cfg) { + ret = -ENOMEM; + goto out; + } + + cfg->role_id = wlvif->role_id; + cfg->ac = ac; + cfg->cw_min = cw_min; + cfg->cw_max = cpu_to_le16(cw_max); + cfg->aifsn = aifsn; + cfg->tx_op_limit = cpu_to_le16(txop); + cfg->acm = cpu_to_le16(acm); + cfg->ps_scheme = ps_scheme; + cfg->is_mu_edca = is_mu_edca; + cfg->mu_edca_aifs = mu_edca_aifs; + cfg->mu_edca_ecw_min_max = mu_edca_ecw_min_max; + cfg->mu_edca_timer = mu_edca_timer; + + ret = cc33xx_cmd_configure(cc, TX_PARAMS_CFG, cfg, sizeof(*cfg)); + if (ret < 0) { + cc33xx_warning("tx param cfg failed: %d", ret); + goto out; + } + +out: + kfree(cfg); + return ret; +} + +int cc33xx_acx_init_mem_config(struct cc33xx *cc) +{ + int ret; + + cc->target_mem_map = kzalloc(sizeof(*cc->target_mem_map), + GFP_KERNEL); + if (!cc->target_mem_map) { + cc33xx_error("couldn't allocate target memory map"); + return -ENOMEM; + } + + /* we now ask for the firmware built memory map */ + ret = cc33xx_acx_mem_map(cc, (void *)cc->target_mem_map, + sizeof(struct cc33xx_acx_mem_map)); + if (ret < 0) { + 
cc33xx_error("couldn't retrieve firmware memory map"); + kfree(cc->target_mem_map); + cc->target_mem_map = NULL; + return ret; + } + + /* initialize TX block book keeping */ + cc->tx_blocks_available = + le32_to_cpu(cc->target_mem_map->num_tx_mem_blocks); + + return 0; +} + +int cc33xx_acx_init_get_fw_versions(struct cc33xx *cc) +{ + int ret; + + cc->fw_ver = kzalloc(sizeof(*cc->fw_ver), + GFP_KERNEL); + if (!cc->fw_ver) { + cc33xx_error("couldn't allocate cc33xx_acx_fw_versions"); + return -ENOMEM; + } + + ret = cc33xx_acx_get_fw_versions(cc, (void *)cc->fw_ver, + sizeof(struct cc33xx_acx_fw_versions)); + if (ret < 0) { + cc33xx_error("couldn't retrieve firmware versions"); + kfree(cc->fw_ver); + cc->fw_ver = NULL; + return ret; + } + + return 0; +} + +int cc33xx_acx_set_ht_information(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u16 ht_operation_mode, u32 he_oper_params, + u16 he_oper_nss_set) +{ + struct cc33xx_acx_ht_information *acx; + int ret = 0; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->role_id = wlvif->role_id; + acx->ht_protection = + (u8)(ht_operation_mode & IEEE80211_HT_OP_MODE_PROTECTION); + acx->rifs_mode = 0; + acx->gf_protection = + !!(ht_operation_mode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT); + + acx->dual_cts_protection = 0; + + acx->he_operation = cpu_to_le32(he_oper_params); + acx->bss_basic_mcs_set = cpu_to_le16(he_oper_nss_set); + acx->qos_info_more_data_ack_bit = 0; + ret = cc33xx_cmd_configure(cc, BSS_OPERATION_CFG, acx, sizeof(*acx)); + + if (ret < 0) { + cc33xx_warning("acx ht information setting failed: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} + +/* setup BA session receiver setting in the FW. 
*/ +int cc33xx_acx_set_ba_receiver_session(struct cc33xx *cc, u8 tid_index, u16 ssn, + bool enable, u8 peer_hlid, u8 win_size) +{ + struct cc33xx_acx_ba_receiver_setup *acx; + int ret; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->hlid = peer_hlid; + acx->tid = tid_index; + acx->enable = enable; + acx->win_size = win_size; + acx->ssn = cpu_to_le16(ssn); + + ret = cc33xx_cmd_configure_failsafe(cc, BA_SESSION_RX_SETUP_CFG, + acx, sizeof(*acx), + BIT(CMD_STATUS_NO_RX_BA_SESSION)); + if (ret < 0) { + cc33xx_warning("acx ba receiver session failed: %d", ret); + goto out; + } + + /* sometimes we can't start the session */ + if (ret == CMD_STATUS_NO_RX_BA_SESSION) { + cc33xx_warning("no fw rx ba on tid %d", tid_index); + ret = -EBUSY; + goto out; + } + + ret = 0; +out: + kfree(acx); + return ret; +} + +int cc33xx_acx_tsf_info(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u64 *mactime) +{ + struct cc33xx_acx_fw_tsf_information *tsf_info; + int ret = 0; + + tsf_info = kzalloc(sizeof(*tsf_info), GFP_KERNEL); + if (!tsf_info) { + ret = -ENOMEM; + goto out; + } + + tsf_info->role_id = wlvif->role_id; + + *mactime = le32_to_cpu(tsf_info->current_tsf_low) | + ((u64)le32_to_cpu(tsf_info->current_tsf_high) << 32); + +out: + kfree(tsf_info); + return ret; +} + +int cc33xx_acx_config_ps(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct cc33xx_acx_config_ps *config_ps; + int ret; + + config_ps = kzalloc(sizeof(*config_ps), GFP_KERNEL); + if (!config_ps) { + ret = -ENOMEM; + goto out; + } + + config_ps->exit_retries = cc->conf.host_conf.conn.psm_exit_retries; + config_ps->enter_retries = cc->conf.host_conf.conn.psm_entry_retries; + config_ps->null_data_rate = cpu_to_le32(wlvif->basic_rate); + + ret = cc33xx_cmd_configure(cc, ACX_CONFIG_PS, config_ps, + sizeof(*config_ps)); + + if (ret < 0) { + cc33xx_warning("acx config ps failed: %d", ret); + goto out; + } + +out: + kfree(config_ps); + return ret; +} + +int 
cc33xx_acx_average_rssi(struct cc33xx *cc, + struct cc33xx_vif *wlvif, s8 *avg_rssi) +{ + struct acx_roaming_stats *acx; + int ret = 0; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->role_id = wlvif->role_id; + + ret = cc33xx_cmd_interrogate(cc, RSSI_INTR, + acx, sizeof(*acx), sizeof(*acx)); + if (ret < 0) { + cc33xx_warning("acx roaming statistics failed: %d", ret); + ret = -ENOMEM; + goto out; + } + + *avg_rssi = acx->rssi_beacon; + +out: + kfree(acx); + return ret; +} + +static const u16 cc33xx_idx_to_rate_100kbps[] = { + 10, 20, 55, 110, 60, 90, 120, 180, 240, 360, 480, 540 +}; + +int cc33xx_acx_get_tx_rate(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct station_info *sinfo) +{ + struct acx_preamble_and_tx_rate *acx; + int ret; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->role_id = wlvif->role_id; + + ret = cc33xx_cmd_interrogate(cc, GET_PREAMBLE_AND_TX_RATE_INTR, + acx, sizeof(*acx), sizeof(*acx)); + if (ret < 0) { + cc33xx_warning("acx get preamble and tx rate failed: %d", ret); + ret = -ENOMEM; + goto out; + } + + sinfo->txrate.flags = 0; + if (acx->preamble == CONF_PREAMBLE_TYPE_AC_VHT) + sinfo->txrate.flags = RATE_INFO_FLAGS_VHT_MCS; + else if ((acx->preamble >= CONF_PREAMBLE_TYPE_AX_SU) && + (acx->preamble <= CONF_PREAMBLE_TYPE_AX_TB_NDP_FB)) + sinfo->txrate.flags = RATE_INFO_FLAGS_HE_MCS; + else if ((acx->preamble == CONF_PREAMBLE_TYPE_N_MIXED_MODE) || + (acx->preamble == CONF_PREAMBLE_TYPE_GREENFIELD)) + sinfo->txrate.flags = RATE_INFO_FLAGS_MCS; + + if (acx->tx_rate >= CONF_HW_RATE_INDEX_MCS0) + sinfo->txrate.mcs = acx->tx_rate - CONF_HW_RATE_INDEX_MCS0; + else + sinfo->txrate.legacy = cc33xx_idx_to_rate_100kbps[acx->tx_rate - 1]; + + sinfo->txrate.nss = 1; + sinfo->txrate.bw = RATE_INFO_BW_20; + sinfo->txrate.he_gi = NL80211_RATE_INFO_HE_GI_3_2; + sinfo->txrate.he_dcm = 0; + sinfo->txrate.he_ru_alloc = 0; + sinfo->txrate.n_bonded_ch = 0; 
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE); + +out: + kfree(acx); + return ret; +} + +#ifdef CONFIG_PM +/* Set the global behaviour of RX filters - On/Off + default action */ +int cc33xx_acx_default_rx_filter_enable(struct cc33xx *cc, bool enable, + enum rx_filter_action action) +{ + struct acx_default_rx_filter *acx; + int ret; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) + return -ENOMEM; + + acx->enable = enable; + acx->default_action = action; + acx->special_packet_bitmask = 0; + + ret = cc33xx_cmd_configure(cc, ACX_ENABLE_RX_DATA_FILTER, acx, + sizeof(*acx)); + if (ret < 0) { + cc33xx_warning("acx default rx filter enable failed: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} + +static int cc33xx_rx_filter_get_fields_size(struct cc33xx_rx_filter *filter) +{ + int i, fields_size = 0; + + for (i = 0; i < filter->num_fields; i++) { + fields_size += filter->fields[i].len - sizeof(u8 *) + + sizeof(struct cc33xx_rx_filter_field); + } + + return fields_size; +} + +static void cc33xx_rx_filter_flatten_fields(struct cc33xx_rx_filter *filter, + u8 *buf) +{ + int i; + struct cc33xx_rx_filter_field *field; + + for (i = 0; i < filter->num_fields; i++) { + field = (struct cc33xx_rx_filter_field *)buf; + + field->offset = filter->fields[i].offset; + field->flags = filter->fields[i].flags; + field->len = filter->fields[i].len; + + memcpy(&field->pattern, filter->fields[i].pattern, field->len); + buf += sizeof(struct cc33xx_rx_filter_field) - sizeof(u8 *); + buf += field->len; + } +} + +/* Configure or disable a specific RX filter pattern */ +int cc33xx_acx_set_rx_filter(struct cc33xx *cc, u8 index, bool enable, + struct cc33xx_rx_filter *filter) +{ + struct acx_rx_filter_cfg *acx; + int fields_size = 0; + int acx_size; + int ret; + + WARN_ON(enable && !filter); + WARN_ON(index >= CC33XX_MAX_RX_FILTERS); + + if (enable) + fields_size = cc33xx_rx_filter_get_fields_size(filter); + + acx_size = ALIGN(sizeof(*acx) + fields_size, 4); + 
acx = kzalloc(acx_size, GFP_KERNEL); + + if (!acx) + return -ENOMEM; + + acx->enable = enable; + acx->index = index; + + if (enable) { + acx->num_fields = filter->num_fields; + acx->action = filter->action; + cc33xx_rx_filter_flatten_fields(filter, acx->fields); + } + + cc33xx_dump(DEBUG_ACX, "RX_FILTER: ", acx, acx_size); + + ret = cc33xx_cmd_configure(cc, ACX_SET_RX_DATA_FILTER, acx, acx_size); + if (ret < 0) { + cc33xx_warning("setting rx filter failed: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} +#endif /* CONFIG_PM */ + +/* this command is basically the same as cc33xx_acx_ht_capabilities, + * with the addition of supported rates. they should be unified in + * the next fw api change + */ +int cc33xx_acx_set_peer_cap(struct cc33xx *cc, + struct ieee80211_sta_ht_cap *ht_cap, + struct ieee80211_sta_he_cap *he_cap, + struct cc33xx_vif *wlvif, bool allow_ht_operation, + u32 rate_set, u8 hlid) +{ + struct cc33xx_acx_peer_cap *acx; + int ret = 0; + u32 ht_capabilites = 0; + u8 *cap_info = NULL; + u8 dcm_max_const_rx_mask = IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_MASK; + u8 partial_bw_ext_range = IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE; + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + if (allow_ht_operation && ht_cap->ht_supported) { + /* no need to translate capabilities - use the spec values */ + ht_capabilites = ht_cap->cap; + + /* this bit is not employed by the spec but only by FW to + * indicate peer HT support + */ + ht_capabilites |= CC33XX_HT_CAP_HT_OPERATION; + + /* get data from A-MPDU parameters field */ + acx->ampdu_max_length = ht_cap->ampdu_factor; + acx->ampdu_min_spacing = ht_cap->ampdu_density; + } + + acx->ht_capabilites = cpu_to_le32(ht_capabilites); + acx->supported_rates = cpu_to_le32(rate_set); + + acx->role_id = wlvif->role_id; + acx->has_he = he_cap->has_he; + memcpy(acx->mac_cap_info, he_cap->he_cap_elem.mac_cap_info, 6); + cap_info = he_cap->he_cap_elem.phy_cap_info; + 
acx->nominal_packet_padding = (cap_info[8] & NOMINAL_PACKET_PADDING); + /* Max DCM constelation for RX - bits [4:3] in PHY capabilities byte 3 */ + acx->dcm_max_constelation = (cap_info[3] & dcm_max_const_rx_mask) >> 3; + acx->er_upper_supported = ((cap_info[6] & partial_bw_ext_range) != 0); + ret = cc33xx_cmd_configure(cc, PEER_CAP_CFG, acx, sizeof(*acx)); + + if (ret < 0) { + cc33xx_warning("acx ht capabilities setting failed: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} + +int cc33xx_acx_trigger_fw_assert(struct cc33xx *cc) +{ + struct debug_header *buf; + int ret; + + buf = kzalloc(sizeof(*buf), GFP_KERNEL); + if (!buf) { + ret = -ENOMEM; + goto out; + } + + ret = cc33xx_cmd_debug(cc, TRIGGER_FW_ASSERT, buf, sizeof(*buf)); + if (ret < 0) { + cc33xx_error("failed to trigger firmware assert"); + goto out; + } + +out: + kfree(buf); + return ret; +} diff --git a/drivers/net/wireless/ti/cc33xx/acx.h b/drivers/net/wireless/ti/cc33xx/acx.h new file mode 100644 index 000000000000..f250758c3da5 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/acx.h @@ -0,0 +1,835 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __ACX_H__ +#define __ACX_H__ + +#include "cmd.h" +#include "debug.h" + +enum { + /* Regular PS: simple sending of packets */ + PS_SCHEME_LEGACY = 0, + /* UPSD: sending a packet triggers a UPSD downstream*/ + PS_SCHEME_UPSD_TRIGGER = 1, + /* Mixed mode is partially supported: we are not going to sleep, and + * triggers (on APSD AC's) are not sent when service period ends with + * more_data = 1. + */ + PS_SCHEME_MIXED_MODE = 2, + /* Legacy PSPOLL: a PSPOLL packet will be sent before every data packet + * transmission in this queue. + */ + PS_SCHEME_LEGACY_PSPOLL = 3, + /* Scheduled APSD mode. */ + PS_SCHEME_SAPSD = 4, + /* No PSPOLL: move to active after first packet. 
no need to send pspoll */ + PS_SCHEME_NOPSPOLL = 5, + + MAX_PS_SCHEME = PS_SCHEME_NOPSPOLL +}; + +/* Target's information element */ +struct acx_header { + struct cc33xx_cmd_header cmd; + + /* acx (or information element) header */ + __le16 id; + + /* payload length (not including headers) */ + __le16 len; +} __packed; + +struct debug_header { + struct cc33xx_cmd_header cmd; + + /* debug (or information element) header */ + __le16 id; + + /* payload length (not including headers) */ + __le16 len; +} __packed; + +enum cc33xx_role { + CC33XX_ROLE_STA = 0, + CC33XX_ROLE_IBSS, + CC33XX_ROLE_AP, + CC33XX_ROLE_DEVICE, + CC33XX_ROLE_P2P_CL, + CC33XX_ROLE_P2P_GO, + CC33XX_ROLE_MESH_POINT, + + ROLE_TRANSCEIVER = 16, + + CC33XX_INVALID_ROLE_TYPE = 0xff +}; + +enum cc33xx_psm_mode { + /* Active mode */ + CC33XX_PSM_CAM = 0, + + /* Power save mode */ + CC33XX_PSM_PS = 1, + + /* Extreme low power */ + CC33XX_PSM_ELP = 2, + + CC33XX_PSM_MAX = CC33XX_PSM_ELP, + + /* illegal out of band value of PSM mode */ + CC33XX_PSM_ILLEGAL = 0xff +}; + +struct acx_sleep_auth { + struct acx_header header; + + /* The sleep level authorization of the device.
*/ + /* 0 - Always active*/ + /* 1 - Power down mode: light / fast sleep*/ + /* 2 - ELP mode: Deep / Max sleep*/ + u8 sleep_auth; + u8 padding[3]; +} __packed; + +enum acx_slot_type { + SLOT_TIME_LONG = 0, + SLOT_TIME_SHORT = 1, + DEFAULT_SLOT_TIME = SLOT_TIME_SHORT, + MAX_SLOT_TIMES = 0xFF +}; + +struct acx_slot { + struct acx_header header; + + u8 role_id; + u8 slot_time; + u8 reserved[2]; +} __packed; + +#define ACX_MC_ADDRESS_GROUP_MAX (20) +#define ADDRESS_GROUP_MAX_LEN (ETH_ALEN * ACX_MC_ADDRESS_GROUP_MAX) + +struct acx_dot11_grp_addr_tbl { + struct acx_header header; + + u8 enabled; + u8 num_groups; + u8 pad[2]; + u8 mac_table[ADDRESS_GROUP_MAX_LEN]; +} __packed; + +struct acx_beacon_filter_option { + struct acx_header header; + + u8 role_id; + u8 enable; + /* The number of beacons without the unicast TIM + * bit set that the firmware buffers before + * signaling the host about ready frames. + * When set to 0 and the filter is enabled, beacons + * without the unicast TIM bit set are dropped. + */ + u8 max_num_beacons; + u8 pad; +} __packed; + +/* ACXBeaconFilterEntry (not 221) + * Byte Offset Size (Bytes) Definition + * =========== ============ ========== + * 0 1 IE identifier + * 1 1 Treatment bit mask + * + * ACXBeaconFilterEntry (221) + * Byte Offset Size (Bytes) Definition + * =========== ============ ========== + * 0 1 IE identifier + * 1 1 Treatment bit mask + * 2 3 OUI + * 5 1 Type + * 6 2 Version + * + * + * Treatment bit mask - The information element handling: + * bit 0 - The information element is compared and transferred + * in case of change. + * bit 1 - The information element is transferred to the host + * with each appearance or disappearance. + * Note that both bits can be set at the same time. 
+ */ + +enum { + BEACON_FILTER_TABLE_MAX_IE_NUM = 32, + BEACON_FILTER_TABLE_MAX_VENDOR_SPECIFIC_IE_NUM = 6, + BEACON_FILTER_TABLE_IE_ENTRY_SIZE = 2, + BEACON_FILTER_TABLE_EXTRA_VENDOR_SPECIFIC_IE_SIZE = 6 +}; + +#define BEACON_FILTER_TABLE_MAX_SIZE \ + ((BEACON_FILTER_TABLE_MAX_IE_NUM * \ + BEACON_FILTER_TABLE_IE_ENTRY_SIZE) + \ + (BEACON_FILTER_TABLE_MAX_VENDOR_SPECIFIC_IE_NUM * \ + BEACON_FILTER_TABLE_EXTRA_VENDOR_SPECIFIC_IE_SIZE)) + +struct acx_beacon_filter_ie_table { + struct acx_header header; + + u8 role_id; + u8 num_ie; + u8 pad[2]; + u8 table[BEACON_FILTER_TABLE_MAX_SIZE]; +} __packed; + +struct acx_energy_detection { + struct acx_header header; + + /* The RX Clear Channel Assessment threshold in the PHY */ + __le16 rx_cca_threshold; + u8 tx_energy_detection; + u8 pad; +} __packed; + +struct acx_event_mask { + struct acx_header header; + + __le32 event_mask; + __le32 high_event_mask; /* Unused */ +} __packed; + +struct acx_tx_power_cfg { + struct acx_header header; + + u8 role_id; + s8 tx_power; + u8 padding[2]; +} __packed; + +struct acx_wake_up_condition { + struct acx_header header; + + u8 wake_up_event; + u8 listen_interval; + u8 padding[2]; +} __packed; + +struct assoc_info_cfg { + struct acx_header header; + + u8 role_id; + __le16 aid; + u8 wmm_enabled; + u8 nontransmitted; + u8 bssid_index; + u8 bssid_indicator; + u8 transmitter_bssid[ETH_ALEN]; + u8 ht_supported; + u8 vht_supported; + u8 has_he; +} __packed; + +enum acx_preamble_type { + ACX_PREAMBLE_LONG = 0, + ACX_PREAMBLE_SHORT = 1 +}; + +struct acx_preamble { + struct acx_header header; + + /* When set, the WiLink transmits the frames with a short preamble and + * when cleared, the WiLink transmits the frames with a long preamble. 
+ */ + u8 role_id; + u8 preamble; + u8 padding[2]; +} __packed; + +enum acx_ctsprotect_type { + CTSPROTECT_DISABLE = 0, + CTSPROTECT_ENABLE = 1 +}; + +struct acx_ctsprotect { + struct acx_header header; + u8 role_id; + u8 ctsprotect; + u8 padding[2]; +} __packed; + +struct ap_rates_class_cfg { + struct acx_header header; + u8 role_id; + __le32 basic_rates_set; + __le32 supported_rates; + u8 padding[3]; +} __packed; + +struct tx_param_cfg { + struct acx_header header; + + u8 role_id; + u8 ac; + u8 aifsn; + u8 cw_min; + + __le16 cw_max; + __le16 tx_op_limit; + + __le16 acm; + + u8 ps_scheme; + + u8 is_mu_edca; + u8 mu_edca_aifs; + u8 mu_edca_ecw_min_max; + u8 mu_edca_timer; + + u8 reserved; +} __packed; + +struct cc33xx_acx_config_memory { + struct acx_header header; + + u8 rx_mem_block_num; + u8 tx_min_mem_block_num; + u8 num_stations; + u8 num_ssid_profiles; + __le32 total_tx_descriptors; + u8 dyn_mem_enable; + u8 tx_free_req; + u8 rx_free_req; + u8 tx_min; + u8 fwlog_blocks; + u8 padding[3]; +} __packed; + +struct cc33xx_acx_mem_map { + struct acx_header header; + + /* Number of blocks FW allocated for TX packets */ + __le32 num_tx_mem_blocks; + + /* Number of blocks FW allocated for RX packets */ + __le32 num_rx_mem_blocks; + + /* Number of TX descriptors allocated. */ + __le32 num_tx_descriptor; + + __le32 tx_result; + +} __packed; + +struct cc33xx_acx_fw_versions { + struct acx_header header; + + __le16 major_version; + __le16 minor_version; + __le16 api_version; + __le16 build_version; + + u8 phy_version[6]; + u8 padding[2]; +} __packed; + +/* special capability bit (not employed by the 802.11n spec) */ +#define CC33XX_HT_CAP_HT_OPERATION BIT(16) + +/* ACX_HT_BSS_OPERATION + * Configure HT capabilities - AP rules for behavior in the BSS.
+ */ +struct cc33xx_acx_ht_information { + struct acx_header header; + + u8 role_id; + + /* Values: 0 - RIFS not allowed, 1 - RIFS allowed */ + u8 rifs_mode; + + /* Values: 0 - 3 like in spec */ + u8 ht_protection; + + /* Values: 0 - GF protection not required, 1 - GF protection required */ + u8 gf_protection; + + /* Values: 0 - Dual CTS protection not required, + * 1 - Dual CTS Protection required + * Note: When this value is set to 1 FW will protect all TXOP with RTS + * frame and will not use CTS-to-self regardless of the value of the + * ACX_CTS_PROTECTION information element + */ + u8 dual_cts_protection; + + __le32 he_operation; + + __le16 bss_basic_mcs_set; + u8 qos_info_more_data_ack_bit; + +} __packed; + +struct cc33xx_acx_ba_receiver_setup { + struct acx_header header; + + /* Specifies link id, range 0-31 */ + u8 hlid; + + u8 tid; + + u8 enable; + + /* Window size in number of packets */ + u8 win_size; + + /* BA session starting sequence number. RANGE 0-FFF */ + __le16 ssn; + + u8 padding[2]; +} __packed; + +struct cc33xx_acx_fw_tsf_information { + struct acx_header header; + + u8 role_id; + u8 padding1[3]; + __le32 current_tsf_high; + __le32 current_tsf_low; + __le32 last_tbtt_high; + __le32 last_tbtt_low; + u8 last_dtim_count; + u8 padding2[3]; +} __packed; + +struct cc33xx_acx_config_ps { + struct acx_header header; + + u8 exit_retries; + u8 enter_retries; + u8 padding[2]; + __le32 null_data_rate; +} __packed; + +#define ACX_RATE_MGMT_ALL_PARAMS 0xff + +struct acx_default_rx_filter { + struct acx_header header; + u8 enable; + + /* action of type FILTER_XXX */ + u8 default_action; + + /* special packet bitmask - packets that are used to trigger the host */ + u8 special_packet_bitmask; + + u8 padding; +} __packed; + +struct acx_rx_filter_cfg { + struct acx_header header; + + u8 enable; + + /* 0 - WL1271_MAX_RX_FILTERS-1 */ + u8 index; + + u8 action; + + u8 num_fields; + u8 fields[]; +} __packed; + +struct acx_roaming_stats { + struct acx_header header; + +
u8 role_id; + u8 pad[3]; + __le32 missed_beacons; + u8 snr_data; + u8 snr_beacon; + s8 rssi_data; + s8 rssi_beacon; +} __packed; + +enum cfg { + CTS_PROTECTION_CFG = 0, + TX_PARAMS_CFG = 1, + ASSOC_INFO_CFG = 2, + PEER_CAP_CFG = 3, + BSS_OPERATION_CFG = 4, + SLOT_CFG = 5, + PREAMBLE_TYPE_CFG = 6, + DOT11_GROUP_ADDRESS_TBL = 7, + BA_SESSION_RX_SETUP_CFG = 8, + ACX_SLEEP_AUTH = 9, + STATIC_CALIBRATION_CFG = 10, + AP_RATES_CFG = 11, + WAKE_UP_CONDITIONS_CFG = 12, + SET_ANTENNA_SELECT_CFG = 13, + TX_POWER_CFG = 14, + VENDOR_IE_CFG = 15, + START_COEX_STATISTICS_CFG = 16, + BEACON_FILTER_OPT = 17, + BEACON_FILTER_TABLE = 18, + ACX_ENABLE_RX_DATA_FILTER = 19, + ACX_SET_RX_DATA_FILTER = 20, + ACX_GET_DATA_FILTER_STATISTICS = 21, + TWT_SETUP = 22, + TWT_TERMINATE = 23, + TWT_SUSPEND = 24, + TWT_RESUME = 25, + ANT_DIV_ENABLE = 26, + ANT_DIV_SET_RSSI_THRESHOLD = 27, + ANT_DIV_SELECT_DEFAULT_ANTENNA = 28, + + LAST_CFG_VALUE, + MAX_DOT11_CFG = LAST_CFG_VALUE, + + MAX_CFG = 0xFFFF /* force enumeration to 16 bits */ +}; + +enum cmd_debug { + UPLINK_MULTI_USER_CFG, + UPLINK_MULTI_USER_DATA_CFG, + OPERATION_MODE_CTRL_CFG, + UPLINK_POWER_HEADER_CFG, + MCS_FIXED_RATE_CFG, + GI_LTF_CFG, + TRANSMIT_OMI_CFG, + TB_ONLY_CFG, + BA_SESSION_CFG, + FORCE_PS_CFG, + RATE_OVERRIDE_CFG, + BLS_CFG, + BLE_ENABLE, + SET_TSF, + RTS_TH_CFG, + LINK_ADAPT_CFG, + CALIB_BITMAP_CFG, + PWR_PARTIAL_MODES_CFG, + TRIGGER_FW_ASSERT, + BURST_MODE_CFG, + + LAST_DEBUG_VALUE, + + MAX_DEBUG = 0xFFFF /* force enumeration to 16 bits */ + +}; + +enum interrogate_opt { + MEM_MAP_INTR = 0, + GET_FW_VERSIONS_INTR = 1, + RSSI_INTR = 2, + GET_ANTENNA_SELECT_INTR = 3, + GET_PREAMBLE_AND_TX_RATE_INTR = 4, + GET_MAC_ADDRESS = 5, + READ_COEX_STATISTICS = 6, + LAST_IE_VALUE, + MAX_DOT11_IE = LAST_IE_VALUE, + + MAX_IE = 0xFFFF /* force enumeration to 16 bits */ +}; + +enum { + ACX_STATISTICS = LAST_CFG_VALUE, + ACX_CONFIG_PS, + ACX_CLEAR_STATISTICS = 0x0054, +}; + +struct cc33xx_acx_error_stats { + __le32 error_frame_non_ctrl; + __le32
error_frame_ctrl; + __le32 error_frame_during_protection; + __le32 null_frame_tx_start; + __le32 null_frame_cts_start; + __le32 bar_retry; + __le32 num_frame_cts_nul_flid; + __le32 tx_abort_failure; + __le32 tx_resume_failure; + __le32 rx_cmplt_db_overflow_cnt; + __le32 elp_while_rx_exch; + __le32 elp_while_tx_exch; + __le32 elp_while_tx; + __le32 elp_while_nvic_pending; + __le32 rx_excessive_frame_len; + __le32 burst_mismatch; + __le32 tbc_exch_mismatch; +} __packed; + +#define NUM_OF_RATES_INDEXES 30 +struct cc33xx_acx_tx_stats { + __le32 tx_prepared_descs; + __le32 tx_cmplt; + __le32 tx_template_prepared; + __le32 tx_data_prepared; + __le32 tx_template_programmed; + __le32 tx_data_programmed; + __le32 tx_burst_programmed; + __le32 tx_starts; + __le32 tx_stop; + __le32 tx_start_templates; + __le32 tx_start_int_templates; + __le32 tx_start_fw_gen; + __le32 tx_start_data; + __le32 tx_start_null_frame; + __le32 tx_exch; + __le32 tx_retry_template; + __le32 tx_retry_data; + __le32 tx_retry_per_rate[NUM_OF_RATES_INDEXES]; + __le32 tx_exch_pending; + __le32 tx_exch_expiry; + __le32 tx_done_template; + __le32 tx_done_data; + __le32 tx_done_int_template; + __le32 tx_cfe1; + __le32 tx_cfe2; + __le32 frag_called; + __le32 frag_mpdu_alloc_failed; + __le32 frag_init_called; + __le32 frag_in_process_called; + __le32 frag_tkip_called; + __le32 frag_key_not_found; + __le32 frag_need_fragmentation; + __le32 frag_bad_mblk_num; + __le32 frag_failed; + __le32 frag_cache_hit; + __le32 frag_cache_miss; +} __packed; + +struct cc33xx_acx_rx_stats { + __le32 rx_beacon_early_term; + __le32 rx_out_of_mpdu_nodes; + __le32 rx_hdr_overflow; + __le32 rx_dropped_frame; + __le32 rx_done_stage; + __le32 rx_done; + __le32 rx_defrag; + __le32 rx_defrag_end; + __le32 rx_cmplt; + __le32 rx_pre_complt; + __le32 rx_cmplt_task; + __le32 rx_phy_hdr; + __le32 rx_timeout; + __le32 rx_rts_timeout; + __le32 rx_timeout_wa; + __le32 defrag_called; + __le32 defrag_init_called; + __le32 
defrag_in_process_called; + __le32 defrag_tkip_called; + __le32 defrag_need_defrag; + __le32 defrag_decrypt_failed; + __le32 decrypt_key_not_found; + __le32 defrag_need_decrypt; + __le32 rx_tkip_replays; + __le32 rx_xfr; +} __packed; + +struct cc33xx_acx_isr_stats { + __le32 irqs; +} __packed; + +#define PWR_STAT_MAX_CONT_MISSED_BCNS_SPREAD 10 + +struct cc33xx_acx_pwr_stats { + __le32 missing_bcns_cnt; + __le32 rcvd_bcns_cnt; + __le32 connection_out_of_sync; + __le32 cont_miss_bcns_spread[PWR_STAT_MAX_CONT_MISSED_BCNS_SPREAD]; + __le32 rcvd_awake_bcns_cnt; + __le32 sleep_time_count; + __le32 sleep_time_avg; + __le32 sleep_cycle_avg; + __le32 sleep_percent; + __le32 ap_sleep_active_conf; + __le32 ap_sleep_user_conf; + __le32 ap_sleep_counter; +} __packed; + +struct cc33xx_acx_rx_filter_stats { + __le32 beacon_filter; + __le32 arp_filter; + __le32 mc_filter; + __le32 dup_filter; + __le32 data_filter; + __le32 ibss_filter; + __le32 protection_filter; + __le32 accum_arp_pend_requests; + __le32 max_arp_queue_dep; +} __packed; + +struct cc33xx_acx_rx_rate_stats { + __le32 rx_frames_per_rates[50]; +} __packed; + +#define AGGR_STATS_TX_AGG 16 +#define AGGR_STATS_RX_SIZE_LEN 16 + +struct cc33xx_acx_aggr_stats { + __le32 tx_agg_rate[AGGR_STATS_TX_AGG]; + __le32 tx_agg_len[AGGR_STATS_TX_AGG]; + __le32 rx_size[AGGR_STATS_RX_SIZE_LEN]; +} __packed; + +#define PIPE_STATS_HW_FIFO 11 + +struct cc33xx_acx_pipeline_stats { + __le32 hs_tx_stat_fifo_int; + __le32 hs_rx_stat_fifo_int; + __le32 enc_tx_stat_fifo_int; + __le32 enc_rx_stat_fifo_int; + __le32 rx_complete_stat_fifo_int; + __le32 pre_proc_swi; + __le32 post_proc_swi; + __le32 sec_frag_swi; + __le32 pre_to_defrag_swi; + __le32 defrag_to_rx_xfer_swi; + __le32 dec_packet_in; + __le32 dec_packet_in_fifo_full; + __le32 dec_packet_out; + __le16 pipeline_fifo_full[PIPE_STATS_HW_FIFO]; + __le16 padding; +} __packed; + +#define DIVERSITY_STATS_NUM_OF_ANT 2 + +struct cc33xx_acx_diversity_stats { + __le32 
num_of_packets_per_ant[DIVERSITY_STATS_NUM_OF_ANT]; + __le32 total_num_of_toggles; +} __packed; + +struct cc33xx_acx_thermal_stats { + __le16 irq_thr_low; + __le16 irq_thr_high; + __le16 tx_stop; + __le16 tx_resume; + __le16 false_irq; + __le16 adc_source_unexpected; +} __packed; + +#define CC33XX_NUM_OF_CALIBRATIONS_ERRORS 18 +struct cc33xx_acx_calib_failure_stats { + __le16 fail_count[CC33XX_NUM_OF_CALIBRATIONS_ERRORS]; + __le32 calib_count; +} __packed; + +struct cc33xx_roaming_stats { + s32 rssi_level; +} __packed; + +struct cc33xx_dfs_stats { + __le32 num_of_radar_detections; +} __packed; + +struct cc33xx_acx_statistics { + struct acx_header header; + + struct cc33xx_acx_error_stats error; + struct cc33xx_acx_tx_stats tx; + struct cc33xx_acx_rx_stats rx; + struct cc33xx_acx_isr_stats isr; + struct cc33xx_acx_pwr_stats pwr; + struct cc33xx_acx_rx_filter_stats rx_filter; + struct cc33xx_acx_rx_rate_stats rx_rate; + struct cc33xx_acx_aggr_stats aggr_size; + struct cc33xx_acx_pipeline_stats pipeline; + struct cc33xx_acx_diversity_stats diversity; + struct cc33xx_acx_thermal_stats thermal; + struct cc33xx_acx_calib_failure_stats calib; + struct cc33xx_roaming_stats roaming; + struct cc33xx_dfs_stats dfs; +} __packed; + +/* ACX_PEER_CAP + * This struct is very similar to cc33xx_acx_ht_capabilities, with the + * addition of supported rates + */ +#define NOMINAL_PACKET_PADDING (0xC0) +struct cc33xx_acx_peer_cap { + struct acx_header header; + + u8 role_id; + + /* rates supported by the remote peer */ + __le32 supported_rates; + + /* bitmask of capability bits supported by the peer */ + __le32 ht_capabilites; + /* This is the maximum A-MPDU length supported by the AP.
The FW may not + * exceed this length when sending A-MPDUs + */ + u8 ampdu_max_length; + + /* This is the minimal spacing required when sending A-MPDUs to the AP*/ + u8 ampdu_min_spacing; + + /* HE capabilities */ + u8 mac_cap_info[8]; + + /* Nominal packet padding value, used for determining the packet extension duration */ + u8 nominal_packet_padding; + + /* HE peer support */ + bool has_he; + + u8 dcm_max_constelation; + + u8 er_upper_supported; + + u8 padding; +} __packed; + +struct acx_preamble_and_tx_rate { + struct acx_header header; + u16 tx_rate; + u8 preamble; + u8 role_id; +} __packed; + +int cc33xx_acx_wake_up_conditions(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 wake_up_event, u8 listen_interval); +int cc33xx_acx_sleep_auth(struct cc33xx *cc, u8 sleep_auth); +int cc33xx_ble_enable(struct cc33xx *cc, u8 ble_enable); +int cc33xx_acx_tx_power(struct cc33xx *cc, struct cc33xx_vif *wlvif, int power); +int cc33xx_acx_slot(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum acx_slot_type slot_time); +int cc33xx_acx_group_address_tbl(struct cc33xx *cc, bool enable, void *mc_list, u32 mc_list_len); +int cc33xx_acx_beacon_filter_opt(struct cc33xx *cc, struct cc33xx_vif *wlvif, + bool enable_filter); +int cc33xx_acx_beacon_filter_table(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cc33xx_assoc_info_cfg(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_sta *sta, u16 aid); +int cc33xx_acx_set_preamble(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum acx_preamble_type preamble); +int cc33xx_acx_cts_protect(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum acx_ctsprotect_type ctsprotect); +int cc33xx_acx_statistics(struct cc33xx *cc, void *stats); +int cc33xx_tx_param_cfg(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 ac, + u8 cw_min, u16 cw_max, u8 aifsn, u16 txop, bool acm, + u8 ps_scheme, u8 is_mu_edca, u8 mu_edca_aifs, + u8 mu_edca_ecw_min_max, u8 mu_edca_timer); +int cc33xx_update_ap_rates(struct cc33xx *cc, u8 role_id, + u32 
basic_rates_set, u32 supported_rates); +int cc33xx_acx_init_mem_config(struct cc33xx *cc); +int cc33xx_acx_init_get_fw_versions(struct cc33xx *cc); +int cc33xx_acx_set_ht_information(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u16 ht_operation_mode, u32 he_oper_params, + u16 he_oper_nss_set); +int cc33xx_acx_set_ba_receiver_session(struct cc33xx *cc, u8 tid_index, u16 ssn, + bool enable, u8 peer_hlid, u8 win_size); +int cc33xx_acx_tsf_info(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u64 *mactime); +int cc33xx_acx_config_ps(struct cc33xx *cc, struct cc33xx_vif *wlvif); +int cc33xx_acx_get_tx_rate(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct station_info *sinfo); +int cc33xx_acx_average_rssi(struct cc33xx *cc, + struct cc33xx_vif *wlvif, s8 *avg_rssi); +int cc33xx_acx_default_rx_filter_enable(struct cc33xx *cc, bool enable, + enum rx_filter_action action); +int cc33xx_acx_set_rx_filter(struct cc33xx *cc, u8 index, bool enable, + struct cc33xx_rx_filter *filter); +int cc33xx_acx_clear_statistics(struct cc33xx *cc); +int cc33xx_acx_set_peer_cap(struct cc33xx *cc, + struct ieee80211_sta_ht_cap *ht_cap, + struct ieee80211_sta_he_cap *he_cap, + struct cc33xx_vif *wlvif, bool allow_ht_operation, + u32 rate_set, u8 hlid); +int cc33xx_acx_trigger_fw_assert(struct cc33xx *cc); + +#endif /* __CC33XX_ACX_H__ */ From patchwork Thu Nov 7 12:51:59 2024 X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866415 X-Patchwork-Delegate: kuba@kernel.org From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 07/17] wifi: cc33xx: Add event.c, event.h Date: Thu, 7 Nov 2024 14:51:59 +0200 Message-ID: <20241107125209.1736277-8-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> Unlike in wlcore, events are queued on a linked list (cc->event_list) and are handled outside the IRQ context.
This will be more clear when looking at main.c Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/event.c | 362 +++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/event.h | 71 +++++ 2 files changed, 433 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/event.c create mode 100644 drivers/net/wireless/ti/cc33xx/event.h diff --git a/drivers/net/wireless/ti/cc33xx/event.c b/drivers/net/wireless/ti/cc33xx/event.c new file mode 100644 index 000000000000..099abbeddba4 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/event.c @@ -0,0 +1,362 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include "acx.h" +#include "event.h" +#include "ps.h" +#include "io.h" +#include "scan.h" + +#define CC33XX_WAIT_EVENT_FAST_POLL_COUNT 20 + +struct cc33xx_event_mailbox { + __le32 events_vector; + + u8 number_of_scan_results; + u8 number_of_sched_scan_results; + + __le16 channel_switch_role_id_bitmap; + + s8 rssi_snr_trigger_metric[NUM_OF_RSSI_SNR_TRIGGERS]; + + /* bitmap of removed links */ + __le32 hlid_removed_bitmap; + + /* rx ba constraint */ + __le16 rx_ba_role_id_bitmap; /* 0xfff means any role. 
*/ + __le16 rx_ba_allowed_bitmap; + + /* bitmap of roc completed (by role id) */ + __le16 roc_completed_bitmap; + + /* bitmap of stations (by role id) with bss loss */ + __le16 bss_loss_bitmap; + + /* bitmap of stations (by HLID) which exceeded max tx retries */ + __le16 tx_retry_exceeded_bitmap; + + /* time sync high msb*/ + __le16 time_sync_tsf_high_msb; + + /* bitmap of inactive stations (by HLID) */ + __le16 inactive_sta_bitmap; + + /* time sync high lsb*/ + __le16 time_sync_tsf_high_lsb; + + /* rx BA win size indicated by RX_BA_WIN_SIZE_CHANGE_EVENT_ID */ + u8 rx_ba_role_id; + u8 rx_ba_link_id; + u8 rx_ba_win_size; + u8 padding; + + /* smart config */ + u8 sc_ssid_len; + u8 sc_pwd_len; + u8 sc_token_len; + u8 padding1; + u8 sc_ssid[32]; + u8 sc_pwd[64]; + u8 sc_token[32]; + + /* smart config sync channel */ + u8 sc_sync_channel; + u8 sc_sync_band; + + /* time sync low msb*/ + __le16 time_sync_tsf_low_msb; + + /* radar detect */ + u8 radar_channel; + u8 radar_type; + + /* time sync low lsb*/ + __le16 time_sync_tsf_low_lsb; + + u8 ble_event[260]; + +} __packed; + +struct event_node { + struct llist_node node; + struct cc33xx_event_mailbox event_data; +}; + +void deffer_event(struct cc33xx *cc, + const void *event_payload, size_t event_length) +{ + struct event_node *event_node; + + if (WARN_ON(event_length != sizeof(event_node->event_data))) + return; + + event_node = kzalloc(sizeof(*event_node), GFP_KERNEL); + if (WARN_ON(!event_node)) + return; + + memcpy(&event_node->event_data, + event_payload, sizeof(event_node->event_data)); + + llist_add(&event_node->node, &cc->event_list); + queue_work(cc->freezable_wq, &cc->irq_deferred_work); +} + +static inline struct llist_node *get_event_list(struct cc33xx *cc) +{ + struct llist_node *node; + + node = llist_del_all(&cc->event_list); + if (!node) + return NULL; + + return llist_reverse_order(node); +} + +void flush_deferred_event_list(struct cc33xx *cc) +{ + struct event_node *event_node, *tmp; + struct llist_node 
*event_list; + + event_list = get_event_list(cc); + llist_for_each_entry_safe(event_node, tmp, event_list, node) { + kfree(event_node); + } +} + +static int wait_for_event_or_timeout(struct cc33xx *cc, u32 mask, bool *timeout) +{ + u32 event; + unsigned long timeout_time; + u16 poll_count = 0; + int ret = 0; + struct event_node *event_node, *tmp; + struct llist_node *event_list; + u32 vector; + + *timeout = false; + + timeout_time = jiffies + msecs_to_jiffies(CC33XX_EVENT_TIMEOUT); + + do { + if (time_after(jiffies, timeout_time)) { + *timeout = true; + goto out; + } + + poll_count++; + if (poll_count < CC33XX_WAIT_EVENT_FAST_POLL_COUNT) + usleep_range(50, 51); + else + usleep_range(1000, 5000); + + vector = 0; + event_list = get_event_list(cc); + llist_for_each_entry_safe(event_node, tmp, event_list, node) { + vector |= le32_to_cpu(event_node->event_data.events_vector); + } + + event = vector & mask; + } while (!event); + +out: + + return ret; +} + +int cc33xx_wait_for_event(struct cc33xx *cc, enum cc33xx_wait_event event, + bool *timeout) +{ + u32 local_event; + + switch (event) { + case CC33XX_EVENT_PEER_REMOVE_COMPLETE: + local_event = PEER_REMOVE_COMPLETE_EVENT_ID; + break; + + case CC33XX_EVENT_DFS_CONFIG_COMPLETE: + local_event = DFS_CHANNELS_CONFIG_COMPLETE_EVENT; + break; + + default: + /* event not implemented */ + return 0; + } + return wait_for_event_or_timeout(cc, local_event, timeout); +} + +static void cc33xx_event_sched_scan_completed(struct cc33xx *cc, u8 status) +{ + if (cc->mac80211_scan_stopped) { + cc->mac80211_scan_stopped = false; + } else { + if (cc->sched_vif) { + ieee80211_sched_scan_stopped(cc->hw); + cc->sched_vif = NULL; + } + } +} + +static void cc33xx_event_channel_switch(struct cc33xx *cc, + unsigned long roles_bitmap, + bool success) +{ + struct cc33xx_vif *wlvif; + struct ieee80211_vif *vif; + + cc33xx_for_each_wlvif(cc, wlvif) { + if (wlvif->role_id == CC33XX_INVALID_ROLE_ID || + !test_bit(wlvif->role_id, &roles_bitmap)) + 
continue; + + if (!test_and_clear_bit(WLVIF_FLAG_CS_PROGRESS, + &wlvif->flags)) + continue; + + vif = cc33xx_wlvif_to_vif(wlvif); + + if (wlvif->bss_type == BSS_TYPE_STA_BSS) { + ieee80211_chswitch_done(vif, success, 0); + cancel_delayed_work(&wlvif->channel_switch_work); + } else { + set_bit(WLVIF_FLAG_BEACON_DISABLED, &wlvif->flags); + ieee80211_csa_finish(vif, 0); + } + } +} + +static void cc33xx_disconnect_sta(struct cc33xx *cc, unsigned long sta_bitmap) +{ + u32 num_packets = cc->conf.host_conf.tx.max_tx_retries; + struct cc33xx_vif *wlvif; + struct ieee80211_vif *vif; + struct ieee80211_sta *sta; + const u8 *addr; + int h; + + for_each_set_bit(h, &sta_bitmap, CC33XX_MAX_LINKS) { + bool found = false; + /* find the ap vif connected to this sta */ + cc33xx_for_each_wlvif_ap(cc, wlvif) { + if (!test_bit(h, wlvif->ap.sta_hlid_map)) + continue; + found = true; + break; + } + if (!found) + continue; + + vif = cc33xx_wlvif_to_vif(wlvif); + addr = cc->links[h].addr; + + rcu_read_lock(); + sta = ieee80211_find_sta(vif, addr); + if (sta) + ieee80211_report_low_ack(sta, num_packets); + + rcu_read_unlock(); + } +} + +static void cc33xx_event_max_tx_failure(struct cc33xx *cc, + unsigned long sta_bitmap) +{ + cc33xx_disconnect_sta(cc, sta_bitmap); +} + +static void cc33xx_event_roc_complete(struct cc33xx *cc) +{ + if (cc->roc_vif) + ieee80211_ready_on_channel(cc->hw); +} + +static void cc33xx_event_beacon_loss(struct cc33xx *cc, + unsigned long roles_bitmap) +{ + /* We are HW_MONITOR device. On beacon loss - queue + * connection loss work. Cancel it on REGAINED event. 
+ */ + struct cc33xx_vif *wlvif; + struct ieee80211_vif *vif; + int delay = cc->conf.host_conf.conn.synch_fail_thold; + + delay *= cc->conf.host_conf.conn.bss_lose_timeout; + + cc33xx_for_each_wlvif_sta(cc, wlvif) { + if (wlvif->role_id == CC33XX_INVALID_ROLE_ID || + !test_bit(wlvif->role_id, &roles_bitmap)) + continue; + + vif = cc33xx_wlvif_to_vif(wlvif); + + /* don't attempt roaming in case of p2p */ + if (wlvif->p2p) { + ieee80211_connection_loss(vif); + continue; + } + + /* if the work is already queued, it should take place. + * We don't want to delay the connection loss + * indication any more. + */ + ieee80211_queue_delayed_work(cc->hw, + &wlvif->connection_loss_work, + msecs_to_jiffies(delay)); + + ieee80211_cqm_beacon_loss_notify(vif, GFP_KERNEL); + } +} + +void process_deferred_events(struct cc33xx *cc) +{ + struct event_node *event_node, *tmp; + struct llist_node *event_list; + u32 vector; + + event_list = get_event_list(cc); + + llist_for_each_entry_safe(event_node, tmp, event_list, node) { + struct cc33xx_event_mailbox *event_data; + + event_data = &event_node->event_data; + + vector = le32_to_cpu(event_node->event_data.events_vector); + + if (vector & SCAN_COMPLETE_EVENT_ID) { + if (cc->scan_wlvif) + cc33xx_scan_completed(cc, cc->scan_wlvif); + } + + if (vector & PERIODIC_SCAN_COMPLETE_EVENT_ID) + cc33xx_event_sched_scan_completed(cc, 1); + + if (vector & BSS_LOSS_EVENT_ID) { + u16 bss_loss_bitmap = le16_to_cpu(event_data->bss_loss_bitmap); + + cc33xx_event_beacon_loss(cc, bss_loss_bitmap); + } + + if (vector & MAX_TX_FAILURE_EVENT_ID) { + u16 tx_retry_exceeded_bitmap = + le16_to_cpu(event_data->tx_retry_exceeded_bitmap); + + cc33xx_event_max_tx_failure(cc, tx_retry_exceeded_bitmap); + } + + if (vector & PERIODIC_SCAN_REPORT_EVENT_ID) + cc33xx_scan_sched_scan_results(cc); + + if (vector & CHANNEL_SWITCH_COMPLETE_EVENT_ID) { + u16 channel_switch_role_id_bitmap = + le16_to_cpu(event_data->channel_switch_role_id_bitmap); + + 
cc33xx_event_channel_switch(cc, channel_switch_role_id_bitmap, true); + } + + if (vector & REMAIN_ON_CHANNEL_COMPLETE_EVENT_ID) + cc33xx_event_roc_complete(cc); + + kfree(event_node); + } +} diff --git a/drivers/net/wireless/ti/cc33xx/event.h b/drivers/net/wireless/ti/cc33xx/event.h new file mode 100644 index 000000000000..7952f0b7b4aa --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/event.h @@ -0,0 +1,71 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __EVENT_H__ +#define __EVENT_H__ + +/* Mbox events + * + * The event mechanism is based on a pair of event buffers (buffers A and + * B) at fixed locations in the target's memory. The host processes one + * buffer while the other buffer continues to collect events. If the host + * is not processing events, an interrupt is issued to signal that a buffer + * is ready. Once the host is done with processing events from one buffer, + * it signals the target (with an ACK interrupt) that the event buffer is + * free. 
+ */ + +enum { + RSSI_SNR_TRIGGER_0_EVENT_ID = BIT(0), + RSSI_SNR_TRIGGER_1_EVENT_ID = BIT(1), + RSSI_SNR_TRIGGER_2_EVENT_ID = BIT(2), + RSSI_SNR_TRIGGER_3_EVENT_ID = BIT(3), + RSSI_SNR_TRIGGER_4_EVENT_ID = BIT(4), + RSSI_SNR_TRIGGER_5_EVENT_ID = BIT(5), + RSSI_SNR_TRIGGER_6_EVENT_ID = BIT(6), + RSSI_SNR_TRIGGER_7_EVENT_ID = BIT(7), + + EVENT_MBOX_ALL_EVENT_ID = 0x7fffffff, +}; + +enum { + SCAN_COMPLETE_EVENT_ID = BIT(8), + RADAR_DETECTED_EVENT_ID = BIT(9), + CHANNEL_SWITCH_COMPLETE_EVENT_ID = BIT(10), + BSS_LOSS_EVENT_ID = BIT(11), + MAX_TX_FAILURE_EVENT_ID = BIT(12), + DUMMY_PACKET_EVENT_ID = BIT(13), + INACTIVE_STA_EVENT_ID = BIT(14), + PEER_REMOVE_COMPLETE_EVENT_ID = BIT(15), + PERIODIC_SCAN_COMPLETE_EVENT_ID = BIT(16), + BA_SESSION_RX_CONSTRAINT_EVENT_ID = BIT(17), + REMAIN_ON_CHANNEL_COMPLETE_EVENT_ID = BIT(18), + DFS_CHANNELS_CONFIG_COMPLETE_EVENT = BIT(19), + PERIODIC_SCAN_REPORT_EVENT_ID = BIT(20), + RX_BA_WIN_SIZE_CHANGE_EVENT_ID = BIT(21), + SMART_CONFIG_SYNC_EVENT_ID = BIT(22), + SMART_CONFIG_DECODE_EVENT_ID = BIT(23), + TIME_SYNC_EVENT_ID = BIT(24), + FW_LOGGER_INDICATION = BIT(25), +}; + +/* events the driver might want to wait for */ +enum cc33xx_wait_event { + CC33XX_EVENT_ROLE_STOP_COMPLETE, + CC33XX_EVENT_PEER_REMOVE_COMPLETE, + CC33XX_EVENT_DFS_CONFIG_COMPLETE +}; + +#define NUM_OF_RSSI_SNR_TRIGGERS 8 + +struct cc33xx; + +int cc33xx_wait_for_event(struct cc33xx *cc, enum cc33xx_wait_event event, + bool *timeout); +void deffer_event(struct cc33xx *cc, const void *event_payload, size_t event_length); +void process_deferred_events(struct cc33xx *cc); +void flush_deferred_event_list(struct cc33xx *cc); + +#endif /* __EVENT_H__ */ From patchwork Thu Nov 7 12:52:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866416 X-Patchwork-Delegate: kuba@kernel.org Received: from lelv0143.ext.ti.com (lelv0143.ext.ti.com [198.47.23.248]) (using TLSv1.2 
with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E7D3212D16; Thu, 7 Nov 2024 12:52:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.47.23.248 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730983978; cv=none; b=dZQvZE8Tnr412IHKGgMorEqwvksHqEW7at+6VanWJQiPD+bjzb6SgsMHnQh4scoSX0AIoD2eL90GxDm3yDskmaPtVcNfG/s0nuqkqlLWv+qDSNPW60iT+BQzcKCzhWkCG90WfQnt2MEUNs+Dyi6frO9kO/eN7KhD05DWvQ9whdo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730983978; c=relaxed/simple; bh=a0k1wa1qqreq2w28rkenT6P63WxFlhXBlGN7dCS5o9U=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=BYxlSCJRSEpEzrGjnE4Dmex2xUXSYcvlX6pW1h6Ot9Td1DN9JdDMpkyMKxrmY1OWjQ0jxB75TkqAPBqvGkj47IIsjNTL4UPPTtExcA6og+m3vLo1go6VCrTb7t9nIXTcXFUyBjNPIiua41cb9oWTVqyBvIlUpZp4BP3xnKjLAOo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=ti.com; spf=pass smtp.mailfrom=ti.com; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b=fnPh1FLb; arc=none smtp.client-ip=198.47.23.248 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=ti.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=ti.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="fnPh1FLb" Received: from fllv0035.itg.ti.com ([10.64.41.0]) by lelv0143.ext.ti.com (8.15.2/8.15.2) with ESMTP id 4A7Cqnpv090808; Thu, 7 Nov 2024 06:52:49 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1730983969; bh=k+c/Gd8lP6TLmr43PVUGJJiCdYAdoDz0yeqhgRRqme8=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=fnPh1FLbK08t4ADD6X2C6si3CIzZ+19J6kWHp9MRybNYmoJE4Q5HPnSh5JKp+9sxr 
From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 08/17] wifi: cc33xx: Add boot.c, boot.h Date: Thu, 7 Nov 2024 14:52:00 +0200 Message-ID: <20241107125209.1736277-9-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> X-Mailing-List: netdev@vger.kernel.org Implements FW download for CC33xx. The FW comes in 2 parts - a 2nd stage bootloader (cc33xx_2nd_loader.bin) and the actual FW (cc33xx_fw.bin). Each file is requested from user space, and transferred to device chunk by chunk.
A dedicated IRQ is expected after each stage (Device power-on -> 2nd stage loader -> FW). This logic is implemented in cc33xx_init_fw. Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/boot.c | 345 ++++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/boot.h | 24 ++ 2 files changed, 369 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/boot.c create mode 100644 drivers/net/wireless/ti/cc33xx/boot.h diff --git a/drivers/net/wireless/ti/cc33xx/boot.c b/drivers/net/wireless/ti/cc33xx/boot.c new file mode 100644 index 000000000000..25efbe937837 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/boot.c @@ -0,0 +1,345 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include <linux/firmware.h> +#include <linux/vmalloc.h> + +#include "boot.h" +#include "cmd.h" +#include "debug.h" +#include "init.h" +#include "io.h" + +#define CC33XX_BOOT_TIMEOUT 2000 + +struct hwinfo_bitmap { + u32 disable_5g : 1u; + u32 disable_6g : 1u; + u32 disable_ble : 1u; + u32 disable_ble_m0plus : 1u; + u32 disable_m33 : 1u; + u64 udi : 64u; + u32 pg_version : 4u; + u32 metal_version : 4u; + u32 boot_rom_version : 4u; + u32 m3_rom_version : 4u; + u32 fuse_rom_structure_version : 4u; + u64 mac_address : 48u; + u32 device_part_number : 6u; + u32 package_type : 4u; + u32 fw_rollback_protection_1 : 32u; + u32 fw_rollback_protection_2 : 32u; + u32 fw_rollback_protection_3 : 32u; + u32 reserved : 13u; +} /* Aligned with boot code, must not be __packed */; + +union hw_info { + struct hwinfo_bitmap bitmap; + u8 bytes[sizeof(struct hwinfo_bitmap)]; +}; + +/* Called from threaded irq context */ +void cc33xx_handle_boot_irqs(struct cc33xx *cc, u32 pending_interrupts) +{ + if (WARN_ON(!cc->fw_download)) + return; + + atomic_or(pending_interrupts, &cc->fw_download->pending_irqs); + complete(&cc->fw_download->wait_on_irq); +} + +static u8 *fetch_container(struct cc33xx *cc, const char *container_name, + size_t *container_len) +{
+ u8 *container_data = NULL; + const struct firmware *container; + int ret; + + ret = request_firmware(&container, container_name, cc->dev); + + if (ret < 0) { + cc33xx_error("could not get container %s: (%d)", + container_name, ret); + return NULL; + } + + if (container->size % 4) { + cc33xx_error("container size is not word-aligned: %zu", + container->size); + goto out; + } + + *container_len = container->size; + container_data = vmalloc(container->size); + + if (!container_data) { + cc33xx_error("could not allocate memory for the container"); + goto out; + } + + memcpy(container_data, container->data, container->size); + +out: + release_firmware(container); + return container_data; +} + +static int cc33xx_set_power_on(struct cc33xx *cc) +{ + int ret; + + msleep(CC33XX_PRE_POWER_ON_SLEEP); + ret = cc33xx_power_on(cc); + if (ret < 0) + goto out; + msleep(CC33XX_POWER_ON_SLEEP); + cc33xx_io_reset(cc); + cc33xx_io_init(cc); + +out: + return ret; +} + +static int cc33xx_chip_wakeup(struct cc33xx *cc) +{ + int ret = 0; + + ret = cc33xx_set_power_on(cc); + if (ret < 0) + goto out; + + if (!cc33xx_set_block_size(cc)) + cc->quirks &= ~CC33XX_QUIRK_TX_BLOCKSIZE_ALIGN; + +out: + return ret; +} + +static int wait_for_boot_irq(struct cc33xx *cc, u32 boot_irq_mask, + unsigned long timeout) +{ + int ret; + u32 pending_irqs; + struct cc33xx_fw_download *fw_download; + + fw_download = cc->fw_download; + + ret = wait_for_completion_interruptible_timeout(&fw_download->wait_on_irq, + msecs_to_jiffies(timeout)); + + /* Fetch pending IRQs while clearing them in fw_download */ + pending_irqs = atomic_fetch_and(0, &fw_download->pending_irqs); + pending_irqs &= ~HINT_COMMAND_COMPLETE; + + reinit_completion(&fw_download->wait_on_irq); + + if (ret == 0) { + cc33xx_error("boot IRQ timeout"); + return -1; + } else if (ret < 0) { + cc33xx_error("boot IRQ completion error %d", ret); + return -2; + } + + if (boot_irq_mask != pending_irqs) { + cc33xx_error("Unexpected IRQ received @ boot: 
0x%x", + pending_irqs); + return -3; + } + + return 0; +} + +static int download_container(struct cc33xx *cc, u8 *container, size_t len) +{ + int ret = 0; + u8 *current_transfer; + size_t current_transfer_size; + u8 *const container_end = container + len; + size_t max_transfer_size = cc->fw_download->max_transfer_size; + bool is_last_transfer; + + current_transfer = container; + + while (current_transfer < container_end) { + current_transfer_size = container_end - current_transfer; + current_transfer_size = + min(current_transfer_size, max_transfer_size); + + is_last_transfer = (current_transfer + current_transfer_size >= container_end); + + ret = cmd_download_container_chunk(cc, + current_transfer, + current_transfer_size, + is_last_transfer); + + current_transfer += current_transfer_size; + + if (ret < 0) { + cc33xx_error("Chunk transfer failed"); + goto out; + } + } + +out: + return ret; +} + +static int container_download_and_wait(struct cc33xx *cc, + const char *container_name, + const u32 irq_wait_mask) +{ + int ret = -1; + u8 *container_data; + size_t container_len; + + container_data = fetch_container(cc, container_name, &container_len); + if (!container_data) + return ret; + + ret = download_container(cc, container_data, container_len); + if (ret < 0) { + cc33xx_error("Transfer error while downloading %s", + container_name); + goto out; + } + + ret = wait_for_boot_irq(cc, irq_wait_mask, CC33XX_BOOT_TIMEOUT); + + if (ret < 0) { + cc33xx_error("%s boot signal timeout", container_name); + goto out; + } + + ret = 0; + +out: + vfree(container_data); + return ret; +} + +static int fw_download_alloc(struct cc33xx *cc) +{ + if (WARN_ON(cc->fw_download)) + return -EFAULT; + + cc->fw_download = kzalloc(sizeof(*cc->fw_download), GFP_KERNEL); + if (!cc->fw_download) + return -ENOMEM; + + init_completion(&cc->fw_download->wait_on_irq); + + return 0; +} + +static void fw_download_free(struct cc33xx *cc) +{ + if (WARN_ON(!cc->fw_download)) + return; + + 
kfree(cc->fw_download); + cc->fw_download = NULL; +} + +static int get_device_info(struct cc33xx *cc) +{ + int ret; + union hw_info hw_info; + u64 mac_address; + + ret = cmd_get_device_info(cc, hw_info.bytes, sizeof(hw_info.bytes)); + if (ret < 0) + return ret; + + cc->fw_download->max_transfer_size = 640; + + mac_address = hw_info.bitmap.mac_address; + + cc->fuse_rom_structure_version = hw_info.bitmap.fuse_rom_structure_version; + cc->pg_version = hw_info.bitmap.pg_version; + cc->device_part_number = hw_info.bitmap.device_part_number; + cc->disable_5g = hw_info.bitmap.disable_5g; + cc->disable_6g = hw_info.bitmap.disable_6g; + + cc->efuse_mac_address[5] = (u8)(mac_address); + cc->efuse_mac_address[4] = (u8)(mac_address >> 8); + cc->efuse_mac_address[3] = (u8)(mac_address >> 16); + cc->efuse_mac_address[2] = (u8)(mac_address >> 24); + cc->efuse_mac_address[1] = (u8)(mac_address >> 32); + cc->efuse_mac_address[0] = (u8)(mac_address >> 40); + + return 0; +} + +int cc33xx_init_fw(struct cc33xx *cc) +{ + int ret; + + cc->max_cmd_size = CC33XX_CMD_MAX_SIZE; + + ret = fw_download_alloc(cc); + if (ret < 0) + return ret; + + reinit_completion(&cc->fw_download->wait_on_irq); + + ret = cc33xx_chip_wakeup(cc); + if (ret < 0) + goto power_off; + + cc33xx_enable_interrupts(cc); + + ret = wait_for_boot_irq(cc, HINT_ROM_LOADER_INIT_COMPLETE, + CC33XX_BOOT_TIMEOUT); + if (ret < 0) + goto disable_irq; + + ret = get_device_info(cc); + if (ret < 0) + goto disable_irq; + + ret = container_download_and_wait(cc, SECOND_LOADER_NAME, + HINT_SECOND_LOADER_INIT_COMPLETE); + if (ret < 0) + goto disable_irq; + + ret = container_download_and_wait(cc, FW_NAME, + HINT_FW_WAKEUP_COMPLETE); + if (ret < 0) + goto disable_irq; + + ret = cc33xx_download_ini_params_and_wait(cc); + + if (ret < 0) + goto disable_irq; + + ret = wait_for_boot_irq(cc, HINT_FW_INIT_COMPLETE, CC33XX_BOOT_TIMEOUT); + + if (ret < 0) + goto disable_irq; + + ret = cc33xx_hw_init(cc); + if (ret < 0) + goto disable_irq; + + /* Now 
we know if 11a is supported (info from the INI File), so disable + * 11a channels if not supported + */ + cc->enable_11a = cc->conf.core.enable_5ghz; + + cc->state = CC33XX_STATE_ON; + ret = 0; + goto out; + +disable_irq: + cc33xx_disable_interrupts_nosync(cc); + +power_off: + cc33xx_power_off(cc); + +out: + fw_download_free(cc); + return ret; +} diff --git a/drivers/net/wireless/ti/cc33xx/boot.h b/drivers/net/wireless/ti/cc33xx/boot.h new file mode 100644 index 000000000000..d5b7763dcd0f --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/boot.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __BOOT_H__ +#define __BOOT_H__ + +#include "cc33xx.h" + +int cc33xx_init_fw(struct cc33xx *cc); + +void cc33xx_handle_boot_irqs(struct cc33xx *cc, u32 pending_interrupts); + +#define SECOND_LOADER_NAME "ti-connectivity/cc33xx_2nd_loader.bin" +#define FW_NAME "ti-connectivity/cc33xx_fw.bin" + +struct cc33xx_fw_download { + atomic_t pending_irqs; + struct completion wait_on_irq; + size_t max_transfer_size; +}; + +#endif /* __BOOT_H__ */ From patchwork Thu Nov 7 12:52:01 2024 X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866426 X-Patchwork-Delegate: kuba@kernel.org
From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 09/17] wifi: cc33xx: Add main.c Date: Thu, 7 Nov 2024 14:52:01 +0200 Message-ID: <20241107125209.1736277-10-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> X-Mailing-List: netdev@vger.kernel.org General code and structures. Notably: cc33xx_irq - Handles IRQs received from the device. process_core_status - Core status is a new concept in CC33xx. It's a structure that is appended to each transfer from the device and contains its most up-to-date status report (IRQs, buffers, etc.). See struct core_status for details. process_event_and_cmd_result - Responses to driver commands and FW events both arrive asynchronously. Therefore, the driver cannot know what it has read from HW until inspecting the payload. This code reads and dispatches the data accordingly. cc33xx_recovery_work - Driver supports basic recovery on FW crash and other illegal conditions.
This implements the recovery flow (Remove all vifs, turn device off and on, download FW, let ieee80211_restart_hw do the rest). irq_deferred_work - Does IRQ-related work that requires holding the cc->mutex. This is mostly in response to HW's Tx/Rx IRQs. cc33xx_nvs_cb - Callback for the NVS FW request API. Similar to wlcore, this is where the init of the HW is performed. cc33xx_load_ini_bin_file - Loads a configuration file from user-space via the request FW API. The structure is described in a separate patch. cc33xx_op_X - mac80211 operation handlers. Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/main.c | 5687 +++++++++++++++++++++++++ 1 file changed, 5687 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/main.c diff --git a/drivers/net/wireless/ti/cc33xx/main.c b/drivers/net/wireless/ti/cc33xx/main.c new file mode 100644 index 000000000000..983ee08cb717 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/main.c @@ -0,0 +1,5687 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include +#include +#include +#include +#include + +#include "../net/mac80211/ieee80211_i.h" + +#include "acx.h" +#include "boot.h" +#include "io.h" +#include "tx.h" +#include "ps.h" +#include "init.h" +#include "testmode.h" +#include "scan.h" +#include "event.h" + +#define CC33XX_FW_RX_PACKET_RAM (9 * 1024) +static int no_recovery = -1; + +u32 cc33xx_debug_level = DEBUG_NO_DATAPATH; + +/* HT cap appropriate for wide channels in 2Ghz */ +static struct ieee80211_sta_ht_cap cc33xx_siso40_ht_cap_2ghz = { + .cap = IEEE80211_HT_CAP_SGI_20 | IEEE80211_HT_CAP_SGI_40 | + IEEE80211_HT_CAP_SUP_WIDTH_20_40 | IEEE80211_HT_CAP_DSSSCCK40 | + IEEE80211_HT_CAP_GRN_FLD, + .ht_supported = true, + .ampdu_factor = IEEE80211_HT_MAX_AMPDU_8K, + .ampdu_density = IEEE80211_HT_MPDU_DENSITY_16, + .mcs = { + .rx_mask = { 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, + .rx_highest = cpu_to_le16(150), +
.tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, +}; + +/* HT cap appropriate for wide channels in 5Ghz */ +static struct ieee80211_sta_ht_cap cc33xx_siso40_ht_cap_5ghz = { + .cap = IEEE80211_HT_CAP_SGI_20 | IEEE80211_HT_CAP_SGI_40 | + IEEE80211_HT_CAP_SUP_WIDTH_20_40 | + IEEE80211_HT_CAP_GRN_FLD, + .ht_supported = true, + .ampdu_factor = IEEE80211_HT_MAX_AMPDU_8K, + .ampdu_density = IEEE80211_HT_MPDU_DENSITY_16, + .mcs = { + .rx_mask = { 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, + .rx_highest = cpu_to_le16(150), + .tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, +}; + +/* HT cap appropriate for SISO 20 */ +static struct ieee80211_sta_ht_cap cc33xx_siso20_ht_cap = { + .cap = IEEE80211_HT_CAP_SGI_20 | + IEEE80211_HT_CAP_MAX_AMSDU, + .ht_supported = true, + .ampdu_factor = IEEE80211_HT_MAX_AMPDU_8K, + .ampdu_density = IEEE80211_HT_MPDU_DENSITY_16, + .mcs = { + .rx_mask = { 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, + .rx_highest = cpu_to_le16(72), + .tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, +}; + +#ifdef CONFIG_MAC80211_MESH +static const struct ieee80211_iface_limit cc33xx_iface_limits[] = { + { + .max = 2, + .types = BIT(NL80211_IFTYPE_STATION) + | BIT(NL80211_IFTYPE_P2P_CLIENT), + }, + { + .max = 1, + .types = BIT(NL80211_IFTYPE_AP) | BIT(NL80211_IFTYPE_P2P_GO) + | BIT(NL80211_IFTYPE_MESH_POINT) + }, + { + .max = 1, + .types = BIT(NL80211_IFTYPE_P2P_DEVICE), + }, +}; + +static inline u16 cc33xx_wiphy_interface_modes(void) +{ + return BIT(NL80211_IFTYPE_STATION) | BIT(NL80211_IFTYPE_P2P_GO) | + BIT(NL80211_IFTYPE_MESH_POINT) | BIT(NL80211_IFTYPE_AP) | + BIT(NL80211_IFTYPE_P2P_CLIENT) | BIT(NL80211_IFTYPE_P2P_DEVICE); +} +#else +static const struct ieee80211_iface_limit cc33xx_iface_limits[] = { + { + .max = 2, + .types = BIT(NL80211_IFTYPE_STATION) + | BIT(NL80211_IFTYPE_P2P_CLIENT), + }, + { + .max = 1, + .types = BIT(NL80211_IFTYPE_AP) | BIT(NL80211_IFTYPE_P2P_GO) + }, + { + .max = 1, + .types = BIT(NL80211_IFTYPE_P2P_DEVICE), + }, +}; + +static inline u16 
cc33xx_wiphy_interface_modes(void) +{ + return BIT(NL80211_IFTYPE_STATION) | BIT(NL80211_IFTYPE_P2P_GO) | + BIT(NL80211_IFTYPE_P2P_CLIENT) | BIT(NL80211_IFTYPE_AP) | + BIT(NL80211_IFTYPE_P2P_DEVICE); +} +#endif /* CONFIG_MAC80211_MESH */ + +static const struct ieee80211_iface_combination +cc33xx_iface_combinations[] = { + { + .max_interfaces = 3, + .limits = cc33xx_iface_limits, + .n_limits = ARRAY_SIZE(cc33xx_iface_limits), + .num_different_channels = 2, + } +}; + +static const u8 cc33xx_rate_to_idx_2ghz[] = { + CONF_HW_RXTX_RATE_UNSUPPORTED, + 0, /* RATE_INDEX_1MBPS */ + 1, /* RATE_INDEX_2MBPS */ + 2, /* RATE_INDEX_5_5MBPS */ + 3, /* RATE_INDEX_11MBPS */ + 4, /* RATE_INDEX_6MBPS */ + 5, /* RATE_INDEX_9MBPS */ + 6, /* RATE_INDEX_12MBPS */ + 7, /* RATE_INDEX_18MBPS */ + 8, /* RATE_INDEX_24MBPS */ + 9, /* RATE_INDEX_36MBPS */ + 10, /* RATE_INDEX_48MBPS */ + 11, /* RATE_INDEX_54MBPS */ + 0, /* RATE_INDEX_MCS0 */ + 1, /* RATE_INDEX_MCS1 */ + 2, /* RATE_INDEX_MCS2 */ + 3, /* RATE_INDEX_MCS3 */ + 4, /* RATE_INDEX_MCS4 */ + 5, /* RATE_INDEX_MCS5 */ + 6, /* RATE_INDEX_MCS6 */ + 7 /* RATE_INDEX_MCS7 */ +}; + +static const u8 cc33xx_rate_to_idx_5ghz[] = { + CONF_HW_RXTX_RATE_UNSUPPORTED, + CONF_HW_RXTX_RATE_UNSUPPORTED, /* RATE_INDEX_1MBPS */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* RATE_INDEX_2MBPS */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* RATE_INDEX_5_5MBPS */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* RATE_INDEX_11MBPS */ + 0, /* RATE_INDEX_6MBPS */ + 1, /* RATE_INDEX_9MBPS */ + 2, /* RATE_INDEX_12MBPS */ + 3, /* RATE_INDEX_18MBPS */ + 4, /* RATE_INDEX_24MBPS */ + 5, /* RATE_INDEX_36MBPS */ + 6, /* RATE_INDEX_48MBPS */ + 7, /* RATE_INDEX_54MBPS */ + 0, /* RATE_INDEX_MCS0 */ + 1, /* RATE_INDEX_MCS1 */ + 2, /* RATE_INDEX_MCS2 */ + 3, /* RATE_INDEX_MCS3 */ + 4, /* RATE_INDEX_MCS4 */ + 5, /* RATE_INDEX_MCS5 */ + 6, /* RATE_INDEX_MCS6 */ + 7 /* RATE_INDEX_MCS7 */ +}; + +static const u8 *cc33xx_band_rate_to_idx[] = { + [NL80211_BAND_2GHZ] = cc33xx_rate_to_idx_2ghz, + [NL80211_BAND_5GHZ] = 
cc33xx_rate_to_idx_5ghz +}; + +/* can't be const, mac80211 writes to this */ +static struct ieee80211_rate cc33xx_rates[] = { + { .bitrate = 10, + .hw_value = CONF_HW_BIT_RATE_1MBPS, + .hw_value_short = CONF_HW_BIT_RATE_1MBPS, }, + { .bitrate = 20, + .hw_value = CONF_HW_BIT_RATE_2MBPS, + .hw_value_short = CONF_HW_BIT_RATE_2MBPS, + .flags = IEEE80211_RATE_SHORT_PREAMBLE }, + { .bitrate = 55, + .hw_value = CONF_HW_BIT_RATE_5_5MBPS, + .hw_value_short = CONF_HW_BIT_RATE_5_5MBPS, + .flags = IEEE80211_RATE_SHORT_PREAMBLE }, + { .bitrate = 110, + .hw_value = CONF_HW_BIT_RATE_11MBPS, + .hw_value_short = CONF_HW_BIT_RATE_11MBPS, + .flags = IEEE80211_RATE_SHORT_PREAMBLE }, + { .bitrate = 60, + .hw_value = CONF_HW_BIT_RATE_6MBPS, + .hw_value_short = CONF_HW_BIT_RATE_6MBPS, }, + { .bitrate = 90, + .hw_value = CONF_HW_BIT_RATE_9MBPS, + .hw_value_short = CONF_HW_BIT_RATE_9MBPS, }, + { .bitrate = 120, + .hw_value = CONF_HW_BIT_RATE_12MBPS, + .hw_value_short = CONF_HW_BIT_RATE_12MBPS, }, + { .bitrate = 180, + .hw_value = CONF_HW_BIT_RATE_18MBPS, + .hw_value_short = CONF_HW_BIT_RATE_18MBPS, }, + { .bitrate = 240, + .hw_value = CONF_HW_BIT_RATE_24MBPS, + .hw_value_short = CONF_HW_BIT_RATE_24MBPS, }, + { .bitrate = 360, + .hw_value = CONF_HW_BIT_RATE_36MBPS, + .hw_value_short = CONF_HW_BIT_RATE_36MBPS, }, + { .bitrate = 480, + .hw_value = CONF_HW_BIT_RATE_48MBPS, + .hw_value_short = CONF_HW_BIT_RATE_48MBPS, }, + { .bitrate = 540, + .hw_value = CONF_HW_BIT_RATE_54MBPS, + .hw_value_short = CONF_HW_BIT_RATE_54MBPS, }, +}; + +/* can't be const, mac80211 writes to this */ +static struct ieee80211_channel cc33xx_channels[] = { + { .hw_value = 1, .center_freq = 2412, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 2, .center_freq = 2417, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 3, .center_freq = 2422, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 4, .center_freq = 2427, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 5, .center_freq = 2432, .max_power = CC33XX_MAX_TXPWR }, + { 
.hw_value = 6, .center_freq = 2437, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 7, .center_freq = 2442, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 8, .center_freq = 2447, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 9, .center_freq = 2452, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 10, .center_freq = 2457, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 11, .center_freq = 2462, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 12, .center_freq = 2467, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 13, .center_freq = 2472, .max_power = CC33XX_MAX_TXPWR }, +}; + +static const struct ieee80211_sband_iftype_data iftype_data_2ghz[] = {{ + .types_mask = BIT(NL80211_IFTYPE_STATION), + .he_cap = { + .has_he = true, + .he_cap_elem = { + .mac_cap_info[0] = + IEEE80211_HE_MAC_CAP0_HTC_HE | + IEEE80211_HE_MAC_CAP0_TWT_REQ, + .mac_cap_info[1] = + IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US | + IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8, + .mac_cap_info[2] = + IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP | + IEEE80211_HE_MAC_CAP2_ALL_ACK | + IEEE80211_HE_MAC_CAP2_TRS | + IEEE80211_HE_MAC_CAP2_BSR | + IEEE80211_HE_MAC_CAP2_ACK_EN, + .mac_cap_info[3] = + IEEE80211_HE_MAC_CAP3_OMI_CONTROL | + IEEE80211_HE_MAC_CAP3_RX_CTRL_FRAME_TO_MULTIBSS, + .mac_cap_info[4] = + IEEE80211_HE_MAC_CAP4_AMSDU_IN_AMPDU | + IEEE80211_HE_MAC_CAP4_NDP_FB_REP | + IEEE80211_HE_MAC_CAP4_MULTI_TID_AGG_TX_QOS_B39, + .mac_cap_info[5] = + IEEE80211_HE_MAC_CAP5_HT_VHT_TRIG_FRAME_RX, + .phy_cap_info[0] = 0, + .phy_cap_info[1] = + IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A | + IEEE80211_HE_PHY_CAP1_HE_LTF_AND_GI_FOR_HE_PPDUS_0_8US, + .phy_cap_info[2] = + IEEE80211_HE_PHY_CAP2_NDP_4x_LTF_AND_3_2US, + .phy_cap_info[3] = + IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_NO_DCM | + IEEE80211_HE_PHY_CAP3_DCM_MAX_TX_NSS_1 | + IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_16_QAM | + IEEE80211_HE_PHY_CAP3_DCM_MAX_RX_NSS_1, + .phy_cap_info[4] = + IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE | + 
IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_4, + .phy_cap_info[5] = + IEEE80211_HE_PHY_CAP5_NG16_SU_FEEDBACK | + IEEE80211_HE_PHY_CAP5_NG16_MU_FEEDBACK, + .phy_cap_info[6] = + IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_42_SU | + IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_75_MU | + IEEE80211_HE_PHY_CAP6_TRIG_SU_BEAMFORMING_FB | + IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMING_PARTIAL_BW_FB | + IEEE80211_HE_PHY_CAP6_TRIG_CQI_FB | + IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE, + .phy_cap_info[7] = + IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI, + .phy_cap_info[8] = + IEEE80211_HE_PHY_CAP8_HE_ER_SU_PPDU_4XLTF_AND_08_US_GI | + IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G | + IEEE80211_HE_PHY_CAP8_HE_ER_SU_1XLTF_AND_08_US_GI, + .phy_cap_info[9] = + IEEE80211_HE_PHY_CAP9_NON_TRIGGERED_CQI_FEEDBACK | + IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB | + IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB | + IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US, + }, + /* Set default Tx/Rx HE MCS NSS Support field. 
+ * Indicate support for up to 2 spatial streams and all + * MCS, without any special cases + */ + .he_mcs_nss_supp = { + .rx_mcs_80 = cpu_to_le16(0xfffc), + .tx_mcs_80 = cpu_to_le16(0xfffc), + .rx_mcs_160 = cpu_to_le16(0xffff), + .tx_mcs_160 = cpu_to_le16(0xffff), + .rx_mcs_80p80 = cpu_to_le16(0xffff), + .tx_mcs_80p80 = cpu_to_le16(0xffff), + }, + /* Set default PPE thresholds, with PPET16 set to 0, + * PPET8 set to 7 + */ + .ppe_thres = {0xff, 0xff, 0xff, 0xff}, + }, +}}; + +/* can't be const, mac80211 writes to this */ +static struct ieee80211_supported_band cc33xx_band_2ghz = { + .channels = cc33xx_channels, + .n_channels = ARRAY_SIZE(cc33xx_channels), + .bitrates = cc33xx_rates, + .n_bitrates = ARRAY_SIZE(cc33xx_rates), +}; + +/* 5 GHz data rates for cc33xx */ +static struct ieee80211_rate cc33xx_rates_5ghz[] = { + { .bitrate = 60, + .hw_value = CONF_HW_BIT_RATE_6MBPS, + .hw_value_short = CONF_HW_BIT_RATE_6MBPS, }, + { .bitrate = 90, + .hw_value = CONF_HW_BIT_RATE_9MBPS, + .hw_value_short = CONF_HW_BIT_RATE_9MBPS, }, + { .bitrate = 120, + .hw_value = CONF_HW_BIT_RATE_12MBPS, + .hw_value_short = CONF_HW_BIT_RATE_12MBPS, }, + { .bitrate = 180, + .hw_value = CONF_HW_BIT_RATE_18MBPS, + .hw_value_short = CONF_HW_BIT_RATE_18MBPS, }, + { .bitrate = 240, + .hw_value = CONF_HW_BIT_RATE_24MBPS, + .hw_value_short = CONF_HW_BIT_RATE_24MBPS, }, + { .bitrate = 360, + .hw_value = CONF_HW_BIT_RATE_36MBPS, + .hw_value_short = CONF_HW_BIT_RATE_36MBPS, }, + { .bitrate = 480, + .hw_value = CONF_HW_BIT_RATE_48MBPS, + .hw_value_short = CONF_HW_BIT_RATE_48MBPS, }, + { .bitrate = 540, + .hw_value = CONF_HW_BIT_RATE_54MBPS, + .hw_value_short = CONF_HW_BIT_RATE_54MBPS, }, +}; + +/* 5 GHz band channels for cc33xx */ +static struct ieee80211_channel cc33xx_channels_5ghz[] = { + { .hw_value = 36, .center_freq = 5180, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 40, .center_freq = 5200, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 44, .center_freq = 5220, .max_power = 
CC33XX_MAX_TXPWR }, + { .hw_value = 48, .center_freq = 5240, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 52, .center_freq = 5260, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 56, .center_freq = 5280, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 60, .center_freq = 5300, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 64, .center_freq = 5320, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 100, .center_freq = 5500, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 104, .center_freq = 5520, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 108, .center_freq = 5540, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 112, .center_freq = 5560, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 116, .center_freq = 5580, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 120, .center_freq = 5600, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 124, .center_freq = 5620, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 128, .center_freq = 5640, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 132, .center_freq = 5660, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 136, .center_freq = 5680, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 140, .center_freq = 5700, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 149, .center_freq = 5745, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 153, .center_freq = 5765, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 157, .center_freq = 5785, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 161, .center_freq = 5805, .max_power = CC33XX_MAX_TXPWR }, + { .hw_value = 165, .center_freq = 5825, .max_power = CC33XX_MAX_TXPWR }, +}; + +static const struct ieee80211_sband_iftype_data iftype_data_5ghz[] = {{ + .types_mask = BIT(NL80211_IFTYPE_STATION), + .he_cap = { + .has_he = true, + .he_cap_elem = { + .mac_cap_info[0] = + IEEE80211_HE_MAC_CAP0_HTC_HE | + IEEE80211_HE_MAC_CAP0_TWT_REQ, + .mac_cap_info[1] = + IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US | + IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8, + .mac_cap_info[2] = + 
IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP | + IEEE80211_HE_MAC_CAP2_ALL_ACK | + IEEE80211_HE_MAC_CAP2_TRS | + IEEE80211_HE_MAC_CAP2_BSR | + IEEE80211_HE_MAC_CAP2_ACK_EN, + .mac_cap_info[3] = + IEEE80211_HE_MAC_CAP3_OMI_CONTROL | + IEEE80211_HE_MAC_CAP3_RX_CTRL_FRAME_TO_MULTIBSS, + .mac_cap_info[4] = + IEEE80211_HE_MAC_CAP4_AMSDU_IN_AMPDU | + IEEE80211_HE_MAC_CAP4_NDP_FB_REP | + IEEE80211_HE_MAC_CAP4_MULTI_TID_AGG_TX_QOS_B39, + .mac_cap_info[5] = + IEEE80211_HE_MAC_CAP5_HT_VHT_TRIG_FRAME_RX, + .phy_cap_info[0] = 0, + .phy_cap_info[1] = + IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A | + IEEE80211_HE_PHY_CAP1_HE_LTF_AND_GI_FOR_HE_PPDUS_0_8US, + .phy_cap_info[2] = + IEEE80211_HE_PHY_CAP2_NDP_4x_LTF_AND_3_2US, + .phy_cap_info[3] = + IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_NO_DCM | + IEEE80211_HE_PHY_CAP3_DCM_MAX_TX_NSS_1 | + IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_16_QAM | + IEEE80211_HE_PHY_CAP3_DCM_MAX_RX_NSS_1, + .phy_cap_info[4] = + IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE | + IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_4, + .phy_cap_info[5] = + IEEE80211_HE_PHY_CAP5_NG16_SU_FEEDBACK | + IEEE80211_HE_PHY_CAP5_NG16_MU_FEEDBACK, + .phy_cap_info[6] = + IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_42_SU | + IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_75_MU | + IEEE80211_HE_PHY_CAP6_TRIG_SU_BEAMFORMING_FB | + IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMING_PARTIAL_BW_FB | + IEEE80211_HE_PHY_CAP6_TRIG_CQI_FB | + IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE, + .phy_cap_info[7] = + IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI, + .phy_cap_info[8] = + IEEE80211_HE_PHY_CAP8_HE_ER_SU_PPDU_4XLTF_AND_08_US_GI | + IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G | + IEEE80211_HE_PHY_CAP8_HE_ER_SU_1XLTF_AND_08_US_GI, + .phy_cap_info[9] = + IEEE80211_HE_PHY_CAP9_NON_TRIGGERED_CQI_FEEDBACK | + IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB | + IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB | + IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US, + }, + /* Set default Tx/Rx HE MCS 
NSS Support field. + * Indicate support for up to 2 spatial streams and all + * MCS, without any special cases + */ + .he_mcs_nss_supp = { + .rx_mcs_80 = cpu_to_le16(0xfffc), + .tx_mcs_80 = cpu_to_le16(0xfffc), + .rx_mcs_160 = cpu_to_le16(0xffff), + .tx_mcs_160 = cpu_to_le16(0xffff), + .rx_mcs_80p80 = cpu_to_le16(0xffff), + .tx_mcs_80p80 = cpu_to_le16(0xffff), + }, + /* Set default PPE thresholds, with PPET16 set to 0, + * PPET8 set to 7 + */ + .ppe_thres = {0xff, 0xff, 0xff, 0xff}, + }, +}}; + +static struct ieee80211_supported_band cc33xx_band_5ghz = { + .channels = cc33xx_channels_5ghz, + .n_channels = ARRAY_SIZE(cc33xx_channels_5ghz), + .bitrates = cc33xx_rates_5ghz, + .n_bitrates = ARRAY_SIZE(cc33xx_rates_5ghz), + .vht_cap = { + .vht_supported = true, + .cap = (IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 | (1 << + IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT)), + .vht_mcs = { + .rx_mcs_map = cpu_to_le16(0xfffc), + .rx_highest = cpu_to_le16(7), + .tx_mcs_map = cpu_to_le16(0xfffc), + .tx_highest = cpu_to_le16(7), + }, + }, +}; + +static void __cc33xx_op_remove_interface(struct cc33xx *cc, + struct ieee80211_vif *vif, + bool reset_tx_queues); +static void cc33xx_turn_off(struct cc33xx *cc); +static void cc33xx_free_ap_keys(struct cc33xx *cc, struct cc33xx_vif *wlvif); +static int process_core_status(struct cc33xx *cc, + struct core_status *core_status); +static int cc33xx_setup(struct cc33xx *cc); + +static int cc33xx_set_authorized(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int ret; + + if (WARN_ON(wlvif->bss_type != BSS_TYPE_STA_BSS)) + return -EINVAL; + + if (!test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) + return 0; + + if (test_and_set_bit(WLVIF_FLAG_STA_STATE_SENT, &wlvif->flags)) + return 0; + + ret = cc33xx_cmd_set_peer_state(cc, wlvif, wlvif->sta.hlid); + if (ret < 0) + return ret; + + cc33xx_info("Association complete."); + return 0; +} + +/* cc->mutex must be taken */ +void cc33xx_rearm_tx_watchdog_locked(struct cc33xx *cc) +{ + /* if the 
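For reference, the `0xfffc`/`0xffff` constants in the MCS maps above follow the standard 2-bits-per-spatial-stream encoding used by both the HE and VHT MCS map fields. A minimal stand-alone sketch (`build_mcs_map` is an illustrative helper, not a driver function):

```c
#include <assert.h>
#include <stdint.h>

/* HE/VHT MCS maps pack one 2-bit support value per spatial stream:
 * 0 = MCS 0-7, 1 = MCS 0-9, 2 = MCS 0-11, 3 = stream not supported.
 * build_mcs_map() is a hypothetical helper for illustration only. */
static uint16_t build_mcs_map(const uint8_t per_nss[8])
{
	uint16_t map = 0;
	int nss;

	for (nss = 0; nss < 8; nss++)
		map |= (uint16_t)((per_nss[nss] & 0x3) << (2 * nss));
	return map;
}
```

An all-ones map (`0xffff`) marks every stream unsupported, which is why it appears in the 160 MHz and 80+80 MHz fields here.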
watchdog is not armed, don't do anything */ + if (cc->tx_allocated_blocks == 0) + return; + + cancel_delayed_work(&cc->tx_watchdog_work); + ieee80211_queue_delayed_work(cc->hw, &cc->tx_watchdog_work, + msecs_to_jiffies(cc->conf.host_conf.tx.tx_watchdog_timeout)); +} + +static void cc33xx_sta_rc_update(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + /* sanity */ + if (WARN_ON(wlvif->bss_type != BSS_TYPE_STA_BSS)) + return; + + /* ignore the change before association */ + if (!test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) + return; + + /* If we started out as wide, we can change the operation mode. If we + * thought this was a 20 MHz AP, we have to reconnect + */ + if (wlvif->sta.role_chan_type != NL80211_CHAN_HT40MINUS && + wlvif->sta.role_chan_type != NL80211_CHAN_HT40PLUS) + ieee80211_connection_loss(cc33xx_wlvif_to_vif(wlvif)); +} + +static void cc33xx_rc_update_work(struct work_struct *work) +{ + struct cc33xx_vif *wlvif = container_of(work, struct cc33xx_vif, + rc_update_work); + struct cc33xx *cc = wlvif->cc; + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + if (!ieee80211_vif_is_mesh(vif)) + cc33xx_sta_rc_update(cc, wlvif); + +out: + mutex_unlock(&cc->mutex); +} + +static inline void cc33xx_tx_watchdog_work(struct work_struct *work) +{ + container_of(to_delayed_work(work), struct cc33xx, tx_watchdog_work); +} + +static void cc33xx_adjust_conf(struct cc33xx *cc) +{ + if (no_recovery != -1) + cc->conf.core.no_recovery = (u8)no_recovery; +} + +void cc33xx_flush_deferred_work(struct cc33xx *cc) +{ + struct sk_buff *skb; + + /* Pass all received frames to the network stack */ + while ((skb = skb_dequeue(&cc->deferred_rx_queue))) { + cc33xx_debug(DEBUG_RX, "%s: rx skb 0x%p", __func__, skb); + ieee80211_rx_ni(cc->hw, skb); + } + + /* Return sent skbs to the network stack */ + while ((skb = skb_dequeue(&cc->deferred_tx_queue))) + ieee80211_tx_status_ni(cc->hw, 
skb); +} + +static void cc33xx_netstack_work(struct work_struct *work) +{ + struct cc33xx *cc = container_of(work, struct cc33xx, netstack_work); + + do { + cc33xx_flush_deferred_work(cc); + } while (skb_queue_len(&cc->deferred_rx_queue)); +} + +static int cc33xx_irq_locked(struct cc33xx *cc) +{ + int ret = 0; + struct core_status *core_status_ptr; + u8 *rx_buf_ptr; + u16 rx_buf_len; + size_t read_data_len; + const size_t maximum_rx_packet_size = CC33XX_FW_RX_PACKET_RAM; + size_t rx_byte_count; + struct NAB_rx_header *NAB_rx_header; + + process_deferred_events(cc); + + claim_core_status_lock(cc); + + rx_byte_count = (le32_to_cpu(cc->core_status->rx_status) & RX_BYTE_COUNT_MASK); + if (rx_byte_count != 0) { + const int read_headers_len = sizeof(struct core_status) + + sizeof(struct NAB_rx_header); + + /* Read aggressively as more data might be coming in */ + rx_byte_count *= 2; + + read_data_len = rx_byte_count + read_headers_len; + + if (cc->max_transaction_len) { /* Used in SPI interface */ + const int spi_alignment = sizeof(u32) - 1; + + read_data_len = __ALIGN_MASK(read_data_len, + spi_alignment); + read_data_len = min(read_data_len, + cc->max_transaction_len); + } else { /* SDIO */ + const int sdio_alignment = CC33XX_BUS_BLOCK_SIZE - 1; + + read_data_len = __ALIGN_MASK(read_data_len, + sdio_alignment); + read_data_len = min(read_data_len, + maximum_rx_packet_size); + } + + ret = cc33xx_raw_read(cc, NAB_DATA_ADDR, cc->aggr_buf, + read_data_len, true); + if (ret < 0) { + release_core_status_lock(cc); + return ret; + } + + core_status_ptr = (struct core_status *)((u8 *)cc->aggr_buf + + read_data_len - sizeof(struct core_status)); + + memcpy(cc->core_status, + core_status_ptr, sizeof(struct core_status)); + + process_core_status(cc, cc->core_status); + + release_core_status_lock(cc); + + NAB_rx_header = (struct NAB_rx_header *)cc->aggr_buf; + rx_buf_len = le16_to_cpu(NAB_rx_header->len) - 8; + if (rx_buf_len != 0) { + rx_buf_ptr = (u8 *)cc->aggr_buf + sizeof(struct 
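The read-length shaping in cc33xx_irq_locked() above can be sketched outside the kernel: round the transfer up to the bus alignment (4 bytes for SPI, the SDIO block size otherwise), then clamp it to the largest transfer the bus allows. `ALIGN_MASK` mirrors the kernel's `__ALIGN_MASK()`; the constants below are illustrative only.

```c
#include <assert.h>
#include <stddef.h>

/* Round x up to the next multiple of (mask + 1), like __ALIGN_MASK(). */
#define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(size_t)(mask))

/* Shape a raw read length for the bus: align up, then clamp to the
 * largest transaction the interface supports. */
static size_t shape_read_len(size_t len, size_t alignment, size_t max_len)
{
	size_t aligned = ALIGN_MASK(len, alignment - 1);

	return aligned < max_len ? aligned : max_len;
}
```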
NAB_rx_header); + cc33xx_rx(cc, rx_buf_ptr, rx_buf_len); + } else { + cc33xx_error("Rx buffer length is 0"); + cc33xx_queue_recovery_work(cc); + } + } else { + release_core_status_lock(cc); + } + + cc33xx_tx_immediate_complete(cc); + + return ret; +} + +static int read_core_status(struct cc33xx *cc, struct core_status *core_status) +{ + return cc33xx_raw_read(cc, NAB_STATUS_ADDR, core_status, + sizeof(*core_status), false); +} + +#define CTRL_TYPE_BITS 4 +static int get_type(struct control_info_descriptor *control_info_descriptor) +{ + u16 type_mask = GENMASK(CTRL_TYPE_BITS - 1, 0); + + return le16_to_cpu(control_info_descriptor->type_and_length) & type_mask; +} + +static unsigned int get_length(struct control_info_descriptor *control_info_descriptor) +{ + return le16_to_cpu(control_info_descriptor->type_and_length) >> CTRL_TYPE_BITS; +} + +static int parse_control_message(struct cc33xx *cc, + const u8 *buffer, size_t buffer_length) +{ + u8 *const end_of_payload = (u8 *const)buffer + buffer_length; + u8 *const start_of_payload = (u8 *const)buffer; + struct control_info_descriptor *control_info_descriptor; + const u8 *event_data, *cmd_result_data; + unsigned int ctrl_info_type, ctrl_info_length; + + while (buffer < end_of_payload) { + control_info_descriptor = + (struct control_info_descriptor *)buffer; + + ctrl_info_type = get_type(control_info_descriptor); + ctrl_info_length = get_length(control_info_descriptor); + + cc33xx_debug(DEBUG_CMD, "Processing message type %d, len %d", + ctrl_info_type, ctrl_info_length); + + switch (ctrl_info_type) { + case CTRL_MSG_EVENT: + event_data = buffer + sizeof(*control_info_descriptor); + + deffer_event(cc, event_data, ctrl_info_length); + break; + + case CTRL_MSG_COMMND_COMPLETE: + cmd_result_data = buffer; + cmd_result_data += sizeof(*control_info_descriptor); + + if (ctrl_info_length > sizeof(cc->command_result)) { + print_hex_dump(KERN_DEBUG, "message dump:", + DUMP_PREFIX_OFFSET, 16, 1, + cmd_result_data, + 
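The descriptor layout assumed by get_type() and get_length() above is a 16-bit word with a 4-bit message type in the low bits and a 12-bit payload length above it. A stand-alone sketch of that packing (helper names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define CTRL_TYPE_BITS 4

/* Extract the 4-bit type from the low bits of the packed word. */
static unsigned int msg_type(uint16_t type_and_length)
{
	return type_and_length & ((1u << CTRL_TYPE_BITS) - 1);
}

/* The remaining 12 bits hold the payload length. */
static unsigned int msg_length(uint16_t type_and_length)
{
	return type_and_length >> CTRL_TYPE_BITS;
}
```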
ctrl_info_length, false); + + WARN(1, "Error device response exceeds result buffer size"); + + goto message_parse_error; + } + + memcpy(cc->command_result, + cmd_result_data, ctrl_info_length); + + cc->result_length = ctrl_info_length; + + complete(&cc->command_complete); + break; + + default: + print_hex_dump(KERN_DEBUG, "message dump:", + DUMP_PREFIX_OFFSET, 16, 1, + start_of_payload, buffer_length, false); + + WARN(1, "Error processing device message @ offset %zu", + (size_t)(buffer - start_of_payload)); + + goto message_parse_error; + } + + buffer += sizeof(*control_info_descriptor); + buffer += ctrl_info_length; + } + + return 0; + +message_parse_error: + return -EIO; +} + +static int read_control_message(struct cc33xx *cc, u8 *read_buffer, + size_t buffer_size) +{ + int ret; + size_t device_message_size; + struct NAB_header *nab_header; + + ret = cc33xx_raw_read(cc, NAB_CONTROL_ADDR, read_buffer, + buffer_size, false); + + if (ret < 0) { + cc33xx_debug(DEBUG_CMD, + "control read Error response 0x%x", ret); + return ret; + } + + nab_header = (struct NAB_header *)read_buffer; + + if (le32_to_cpu(nab_header->sync_pattern) != DEVICE_SYNC_PATTERN) { + cc33xx_error("Wrong device sync pattern: 0x%x", + nab_header->sync_pattern); + return -EIO; + } + + device_message_size = sizeof(*nab_header) + NAB_EXTRA_BYTES + + le16_to_cpu(nab_header->len); + + if (device_message_size > buffer_size) { + cc33xx_error("Invalid NAB length field: %x", nab_header->len); + return -EIO; + } + + return le16_to_cpu(nab_header->len); +} + +static int process_event_and_cmd_result(struct cc33xx *cc, + struct core_status *core_status) +{ + int ret; + u8 *read_buffer, *message; + const size_t buffer_size = CC33XX_CMD_MAX_SIZE; + size_t message_length; + struct core_status *new_core_status; + __le32 previous_hint; + + read_buffer = kmalloc(buffer_size, GFP_KERNEL); + if (!read_buffer) + return -ENOMEM; + + ret = read_control_message(cc, read_buffer, buffer_size); + if (ret < 0) + goto out; + + 
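The read_control_message() sanity checks boil down to: accept a transfer only when the sync word matches and the advertised payload fits the buffer that was actually read. A hypothetical stand-alone sketch (`SYNC_WORD`, `EXTRA_BYTES` and `struct nab_hdr` are stand-ins, not the driver's real constants or layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SYNC_WORD 0x5a5a5a5au	/* assumed value for illustration */
#define EXTRA_BYTES 4

struct nab_hdr {
	uint32_t sync_pattern;
	uint16_t len;
};

/* Return the payload length, or -1 if the header fails validation. */
static int check_nab(const struct nab_hdr *h, size_t buffer_size)
{
	if (h->sync_pattern != SYNC_WORD)
		return -1;	/* host and device are out of sync */
	if (sizeof(*h) + EXTRA_BYTES + h->len > buffer_size)
		return -1;	/* length field is corrupt */
	return h->len;
}
```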
message_length = ret - NAB_EXTRA_BYTES; + message = read_buffer + sizeof(struct NAB_header) + NAB_EXTRA_BYTES; + ret = parse_control_message(cc, message, message_length); + if (ret < 0) + goto out; + + /* Each read transaction always carries an updated core status */ + previous_hint = core_status->host_interrupt_status; + new_core_status = (struct core_status *) + (read_buffer + buffer_size - sizeof(struct core_status)); + memcpy(core_status, new_core_status, sizeof(*core_status)); + /* Host interrupt field is clear-on-read and we do not want + * to overwrite previously unhandled bits. + */ + core_status->host_interrupt_status |= previous_hint; + +out: + kfree(read_buffer); + return ret; +} + +static int verify_padding(struct core_status *core_status) +{ + unsigned int i; + const u32 valid_padding = 0x55555555; + + for (i = 0; i < ARRAY_SIZE(core_status->block_pad); i++) { + if (le32_to_cpu(core_status->block_pad[i]) != valid_padding) { + cc33xx_error("Error in core status padding:"); + print_hex_dump(KERN_DEBUG, "", DUMP_PREFIX_OFFSET, 16, + 1, core_status, sizeof(*core_status), + false); + return -1; + } + } + + return 0; +} + +static int process_core_status(struct cc33xx *cc, + struct core_status *core_status) +{ + bool core_status_idle; + u32 shadow_host_interrupt_status; + int ret; + + do { + core_status_idle = true; + + shadow_host_interrupt_status = + le32_to_cpu(core_status->host_interrupt_status); + + /* Interrupts are aggregated (ORed) in this field with each + * read operation from the device. 
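The clear-on-read handling above is a small but important pattern: every transaction that carries a fresh core status wipes the interrupt word on the device side, so bits read earlier but not yet handled must be OR-ed into the new snapshot, and the handler then takes the whole set at once. A single-threaded sketch (plain integers stand in for the driver's status fields):

```c
#include <assert.h>
#include <stdint.h>

/* Fold a fresh clear-on-read snapshot into the still-unhandled bits. */
static uint32_t merge_status(uint32_t unhandled, uint32_t fresh)
{
	return unhandled | fresh;
}

/* Hand the accumulated bits to the handler and reset the shadow copy. */
static uint32_t take_status(uint32_t *pending)
{
	uint32_t out = *pending;

	*pending = 0;	/* handled: nothing may be reported twice */
	return out;
}
```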
+ */ + core_status->host_interrupt_status = 0; + + cc33xx_debug(DEBUG_IRQ, + "HINT_STATUS: 0x%x, TSF: 0x%x, rx status: 0x%x", + shadow_host_interrupt_status, core_status->tsf, + core_status->rx_status); + + if (shadow_host_interrupt_status & HINT_COMMAND_COMPLETE) { + ret = process_event_and_cmd_result(cc, core_status); + if (ret < 0) { + memset(core_status, 0, sizeof(*core_status)); + return ret; + } + core_status_idle = false; + } + + if ((le32_to_cpu(core_status->rx_status) & RX_BYTE_COUNT_MASK) != 0) + queue_work(cc->freezable_wq, &cc->irq_deferred_work); + + if (core_status->fw_info.tx_result_queue_index != cc->last_fw_rls_idx) + queue_work(cc->freezable_wq, &cc->irq_deferred_work); + + if (shadow_host_interrupt_status & HINT_NEW_TX_RESULT) + queue_work(cc->freezable_wq, &cc->irq_deferred_work); + + if (shadow_host_interrupt_status & BOOT_TIME_INTERRUPTS) + cc33xx_handle_boot_irqs(cc, shadow_host_interrupt_status); + + if (shadow_host_interrupt_status & HINT_GENERAL_ERROR) { + cc33xx_error("FW is stuck, triggering recovery"); + cc33xx_queue_recovery_work(cc); + } + } while (!core_status_idle); + + return 0; +} + +void cc33xx_irq(void *cookie) +{ + struct cc33xx *cc = cookie; + unsigned long flags; + int ret; + + claim_core_status_lock(cc); + + if (test_bit(CC33XX_FLAG_SUSPENDED, &cc->flags)) { + /* don't enqueue a work right now. 
mark it as pending */ + set_bit(CC33XX_FLAG_PENDING_WORK, &cc->flags); + spin_lock_irqsave(&cc->cc_lock, flags); + cc33xx_disable_interrupts_nosync(cc); + pm_wakeup_hard_event(cc->dev); + spin_unlock_irqrestore(&cc->cc_lock, flags); + goto out; + } + + ret = read_core_status(cc, cc->core_status); + if (unlikely(ret < 0)) { + cc33xx_error("IO error during core status read"); + cc33xx_queue_recovery_work(cc); + goto out; + } + + ret = verify_padding(cc->core_status); + if (unlikely(ret < 0)) { + cc33xx_queue_recovery_work(cc); + goto out; + } + + process_core_status(cc, cc->core_status); + +out: + release_core_status_lock(cc); +} + +struct vif_counter_data { + u8 counter; + + struct ieee80211_vif *cur_vif; + bool cur_vif_running; +}; + +static void cc33xx_vif_count_iter(void *data, u8 *mac, + struct ieee80211_vif *vif) +{ + struct vif_counter_data *counter = data; + + counter->counter++; + if (counter->cur_vif == vif) + counter->cur_vif_running = true; +} + +/* caller must not hold cc->mutex, as it might deadlock */ +static void cc33xx_get_vif_count(struct ieee80211_hw *hw, + struct ieee80211_vif *cur_vif, + struct vif_counter_data *data) +{ + memset(data, 0, sizeof(*data)); + data->cur_vif = cur_vif; + + ieee80211_iterate_active_interfaces(hw, IEEE80211_IFACE_ITER_RESUME_ALL, + cc33xx_vif_count_iter, data); +} + +void cc33xx_queue_recovery_work(struct cc33xx *cc) +{ + /* Avoid a recursive recovery */ + if (cc->state == CC33XX_STATE_ON) { + cc->state = CC33XX_STATE_RESTARTING; + set_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags); + ieee80211_queue_work(cc->hw, &cc->recovery_work); + } +} + +static void cc33xx_save_freed_pkts(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 hlid, struct ieee80211_sta *sta) +{ + struct cc33xx_station *wl_sta; + u32 sqn_recovery_padding = CC33XX_TX_SQN_POST_RECOVERY_PADDING; + + wl_sta = (void *)sta->drv_priv; + wl_sta->total_freed_pkts = cc->links[hlid].total_freed_pkts; + + /* increment the initial seq number on recovery to 
account for + * transmitted packets that we haven't yet got in the FW status + */ + if (wlvif->encryption_type == KEY_GEM) + sqn_recovery_padding = CC33XX_TX_SQN_POST_RECOVERY_PADDING_GEM; + + if (test_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags)) + wl_sta->total_freed_pkts += sqn_recovery_padding; +} + +static void cc33xx_save_freed_pkts_addr(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + u8 hlid, const u8 *addr) +{ + struct ieee80211_sta *sta; + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + + if (WARN_ON(hlid == CC33XX_INVALID_LINK_ID || + is_zero_ether_addr(addr))) + return; + + rcu_read_lock(); + sta = ieee80211_find_sta(vif, addr); + + if (sta) + cc33xx_save_freed_pkts(cc, wlvif, hlid, sta); + + rcu_read_unlock(); +} + +static void cc33xx_recovery_work(struct work_struct *work) +{ + struct cc33xx *cc = container_of(work, struct cc33xx, recovery_work); + struct cc33xx_vif *wlvif; + struct ieee80211_vif *vif; + + cc33xx_notice("CC33xx driver attempting recovery"); + + if (cc->conf.core.no_recovery) { + cc33xx_info("Recovery disabled by configuration, driver will not restart."); + return; + } + + if (test_bit(CC33XX_FLAG_DRIVER_REMOVED, &cc->flags)) { + cc33xx_info("Driver being removed, recovery disabled"); + return; + } + + cc->state = CC33XX_STATE_RESTARTING; + set_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags); + + mutex_lock(&cc->mutex); + while (!list_empty(&cc->wlvif_list)) { + wlvif = list_first_entry(&cc->wlvif_list, + struct cc33xx_vif, list); + vif = cc33xx_wlvif_to_vif(wlvif); + + if (test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) + ieee80211_connection_loss(vif); + + __cc33xx_op_remove_interface(cc, vif, false); + } + mutex_unlock(&cc->mutex); + + cc33xx_turn_off(cc); + msleep(500); + + mutex_lock(&cc->mutex); + cc33xx_init_fw(cc); + mutex_unlock(&cc->mutex); + + ieee80211_restart_hw(cc->hw); + + mutex_lock(&cc->mutex); + clear_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags); + mutex_unlock(&cc->mutex); +} + +static void 
irq_deferred_work(struct work_struct *work) +{ + int ret; + unsigned long flags; + struct cc33xx *cc = + container_of(work, struct cc33xx, irq_deferred_work); + + mutex_lock(&cc->mutex); + + ret = cc33xx_irq_locked(cc); + if (ret) + cc33xx_queue_recovery_work(cc); + + spin_lock_irqsave(&cc->cc_lock, flags); + /* In case TX was not handled here, queue TX work */ + clear_bit(CC33XX_FLAG_TX_PENDING, &cc->flags); + if (!test_bit(CC33XX_FLAG_FW_TX_BUSY, &cc->flags) && + cc33xx_tx_total_queue_count(cc) > 0) + ieee80211_queue_work(cc->hw, &cc->tx_work); + spin_unlock_irqrestore(&cc->cc_lock, flags); + + mutex_unlock(&cc->mutex); +} + +static void irq_wrapper(struct platform_device *pdev) +{ + struct cc33xx *cc = platform_get_drvdata(pdev); + + cc33xx_irq(cc); +} + +static int cc33xx_plt_init(struct cc33xx *cc) +{ + /* PLT init: Role enable + Role start + plt Init */ + int ret = 0; + + /* Role enable */ + u8 returned_role_id = CC33XX_INVALID_ROLE_ID; + u8 bcast_addr[ETH_ALEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; + + ret = cc33xx_cmd_role_enable(cc, bcast_addr, + ROLE_TRANSCEIVER, &returned_role_id); + if (ret < 0) { + cc33xx_info("PLT init role enable failed, PLT role ID: %u", + returned_role_id); + goto out; + } + + ret = cc33xx_cmd_role_start_transceiver(cc, returned_role_id); + if (ret < 0) { + cc33xx_info("PLT init role start failed, PLT role ID: %u", + returned_role_id); + cc33xx_cmd_role_disable(cc, &returned_role_id); + goto out; + } + + cc->plt_role_id = returned_role_id; + ret = cc33xx_cmd_plt_enable(cc, returned_role_id); + + if (ret >= 0) { + cc33xx_info("PLT init role start succeeded, PLT role ID: %u", + returned_role_id); + } else { + cc33xx_info("PLT init role start failed
, PLT roleID is: %u ", + returned_role_id); + } + +out: + return ret; +} + +int cc33xx_plt_start(struct cc33xx *cc, const enum plt_mode plt_mode) +{ + int ret = 0; + + mutex_lock(&cc->mutex); + + if (plt_mode == PLT_ON && cc->plt_mode == PLT_ON) { + cc33xx_error("PLT already on"); + ret = 0; + goto out; + } + + cc33xx_notice("PLT start"); + + if (plt_mode != PLT_CHIP_AWAKE) { + ret = cc33xx_plt_init(cc); + if (ret < 0) { + cc33xx_error("PLT start failed"); + goto out; + } + } + + /* Indicate to lower levels that we are now in PLT mode */ + cc->plt = true; + cc->plt_mode = plt_mode; + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +int cc33xx_plt_stop(struct cc33xx *cc) +{ + int ret = 0; + + cc33xx_notice("PLT stop"); + + ret = cc33xx_cmd_role_stop_transceiver(cc); + if (ret < 0) + goto out; + + ret = cc33xx_cmd_role_disable(cc, &cc->plt_role_id); + if (ret < 0) + goto out; + else + cc33xx_cmd_plt_disable(cc); + + cc33xx_flush_deferred_work(cc); + + flush_deferred_event_list(cc); + + mutex_lock(&cc->mutex); + cc->plt = false; + cc->plt_mode = PLT_OFF; + cc->rx_counter = 0; + mutex_unlock(&cc->mutex); + +out: + return ret; +} + +static void cc33xx_op_tx(struct ieee80211_hw *hw, + struct ieee80211_tx_control *control, + struct sk_buff *skb) +{ + struct cc33xx *cc = hw->priv; + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + struct ieee80211_vif *vif = info->control.vif; + struct cc33xx_vif *wlvif = NULL; + enum cc33xx_queue_stop_reason stop_reason = CC33XX_QUEUE_STOP_REASON_WATERMARK; + unsigned long flags; + int q, mapping; + u8 hlid; + + if (!vif) { + ieee80211_free_txskb(hw, skb); + return; + } + + wlvif = cc33xx_vif_to_data(vif); + mapping = skb_get_queue_mapping(skb); + q = cc33xx_tx_get_queue(mapping); + + hlid = cc33xx_tx_get_hlid(cc, wlvif, skb, control->sta); + + spin_lock_irqsave(&cc->cc_lock, flags); + + /* drop the packet if the link is invalid or the queue is stopped + * for any reason but watermark. 
Watermark is a "soft"-stop so we + * allow these packets through. + */ + + if (hlid == CC33XX_INVALID_LINK_ID || + (!test_bit(hlid, wlvif->links_map)) || + (cc33xx_is_queue_stopped_locked(cc, wlvif, q) && + !cc33xx_is_queue_stopped_by_reason_locked(cc, wlvif, q, + stop_reason))) { + cc33xx_debug(DEBUG_TX, "DROP skb hlid %d q %d ", hlid, q); + ieee80211_free_txskb(hw, skb); + goto out; + } + + cc33xx_debug(DEBUG_TX, "queue skb hlid %d q %d len %d %p", + hlid, q, skb->len, skb); + skb_queue_tail(&cc->links[hlid].tx_queue[q], skb); + + cc->tx_queue_count[q]++; + wlvif->tx_queue_count[q]++; + + /* The workqueue is slow to process the tx_queue and we need stop + * the queue here, otherwise the queue will get too long. + */ + if (wlvif->tx_queue_count[q] >= CC33XX_TX_QUEUE_HIGH_WATERMARK && + !cc33xx_is_queue_stopped_by_reason_locked(cc, wlvif, q, + stop_reason)) { + cc33xx_debug(DEBUG_TX, "op_tx: stopping queues for q %d", q); + cc33xx_stop_queue_locked(cc, wlvif, q, stop_reason); + } + + /* The chip specific setup must run before the first TX packet - + * before that, the tx_work will not be initialized! + */ + if (!test_bit(CC33XX_FLAG_FW_TX_BUSY, &cc->flags) && + !test_bit(CC33XX_FLAG_TX_PENDING, &cc->flags)) { + cc33xx_debug(DEBUG_TX, "Triggering tx thread"); + ieee80211_queue_work(cc->hw, &cc->tx_work); + } else { + cc33xx_debug(DEBUG_TX, "Not triggering tx thread cc->flags 0x%lx", + cc->flags); + } + +out: + spin_unlock_irqrestore(&cc->cc_lock, flags); +} + +/* The size of the dummy packet should be at least 1400 bytes. 
However, in + * order to minimize the number of bus transactions, aligning it to 512-byte + * boundaries could be beneficial performance-wise + */ +#define TOTAL_TX_DUMMY_PACKET_SIZE (ALIGN(1400, 512)) + +static struct sk_buff *cc33xx_alloc_dummy_packet(struct cc33xx *cc) +{ + struct sk_buff *skb; + struct ieee80211_hdr_3addr *hdr; + unsigned int dummy_packet_size; + + dummy_packet_size = TOTAL_TX_DUMMY_PACKET_SIZE - + sizeof(struct cc33xx_tx_hw_descr) - sizeof(*hdr); + + skb = dev_alloc_skb(TOTAL_TX_DUMMY_PACKET_SIZE); + if (!skb) + return NULL; + + skb_reserve(skb, sizeof(struct cc33xx_tx_hw_descr)); + + hdr = skb_put_zero(skb, sizeof(*hdr)); + hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_DATA | + IEEE80211_STYPE_NULLFUNC | + IEEE80211_FCTL_TODS); + + skb_put_zero(skb, dummy_packet_size); + + /* Dummy packets require the TID to be management */ + skb->priority = CC33XX_TID_MGMT; + + /* Initialize all fields that might be used */ + skb_set_queue_mapping(skb, 0); + memset(IEEE80211_SKB_CB(skb), 0, sizeof(struct ieee80211_tx_info)); + + return skb; +} + +static int cc33xx_validate_wowlan_pattern(struct cfg80211_pkt_pattern *p) +{ + int num_fields = 0, in_field = 0, fields_size = 0; + int i, pattern_len = 0; + + if (!p->mask) { + cc33xx_warning("No mask in WoWLAN pattern"); + return -EINVAL; + } + + /* The pattern is broken up into segments of bytes at different offsets + * that need to be checked by the FW filter. Each segment is called + * a field in the FW API. We verify that the total number of fields + * required for this pattern won't exceed FW limits (8) + * and that the total fields buffer won't exceed the FW limit. + * Note that if there's a pattern which crosses the Ethernet/IP header + * boundary, a new field is required. 
+ */ + for (i = 0; i < p->pattern_len; i++) { + if (test_bit(i, (unsigned long *)p->mask)) { + if (!in_field) { + in_field = 1; + pattern_len = 1; + } else if (i == CC33XX_RX_FILTER_ETH_HEADER_SIZE) { + num_fields++; + fields_size += pattern_len + + RX_FILTER_FIELD_OVERHEAD; + pattern_len = 1; + } else { + pattern_len++; + } + } else if (in_field) { + in_field = 0; + fields_size += pattern_len + RX_FILTER_FIELD_OVERHEAD; + num_fields++; + } + } + + if (in_field) { + fields_size += pattern_len + RX_FILTER_FIELD_OVERHEAD; + num_fields++; + } + + if (num_fields > CC33XX_RX_FILTER_MAX_FIELDS) { + cc33xx_warning("RX Filter too complex. Too many segments"); + return -EINVAL; + } + + if (fields_size > CC33XX_RX_FILTER_MAX_FIELDS_SIZE) { + cc33xx_warning("RX filter pattern is too big"); + return -E2BIG; + } + + return 0; +} + +static void cc33xx_rx_filter_free(struct cc33xx_rx_filter *filter) +{ + int i; + + if (!filter) + return; + + for (i = 0; i < filter->num_fields; i++) + kfree(filter->fields[i].pattern); + + kfree(filter); +} + +static int cc33xx_rx_filter_alloc_field(struct cc33xx_rx_filter *filter, + u16 offset, u8 flags, + const u8 *pattern, u8 len) +{ + struct cc33xx_rx_filter_field *field; + + if (filter->num_fields == CC33XX_RX_FILTER_MAX_FIELDS) { + cc33xx_warning("Max fields per RX filter. 
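The field counting done by cc33xx_validate_wowlan_pattern() can be exercised stand-alone: each run of set bits in the mask becomes one filter field, and a run that crosses the Ethernet header boundary is split in two. `ETH_HDR_SIZE` below is an assumed value for illustration and the helper is a sketch, not the driver's code.

```c
#include <assert.h>
#include <stdint.h>

#define ETH_HDR_SIZE 14	/* assumed Ethernet header boundary */

/* Count filter fields implied by a bitmask over pattern bytes. */
static int count_fields(const uint8_t *mask, int pattern_len)
{
	int i, num_fields = 0, in_field = 0;

	for (i = 0; i < pattern_len; i++) {
		if ((mask[i / 8] >> (i % 8)) & 1) {
			if (!in_field)
				in_field = 1;	/* a new run begins */
			else if (i == ETH_HDR_SIZE)
				num_fields++;	/* forced split at the boundary */
		} else if (in_field) {
			in_field = 0;
			num_fields++;		/* run ended */
		}
	}
	if (in_field)
		num_fields++;			/* run reaches the end */
	return num_fields;
}
```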
can't alloc another"); + return -EINVAL; + } + + field = &filter->fields[filter->num_fields]; + + field->pattern = kzalloc(len, GFP_KERNEL); + if (!field->pattern) + return -ENOMEM; + + filter->num_fields++; + + field->offset = cpu_to_le16(offset); + field->flags = flags; + field->len = len; + memcpy(field->pattern, pattern, len); + + return 0; +} + +/* Allocates an RX filter returned through f + * which needs to be freed using rx_filter_free() + */ +static int +cc33xx_convert_wowlan_pattern_to_rx_filter(struct cfg80211_pkt_pattern *p, + struct cc33xx_rx_filter **f) +{ + int i, j, ret = 0; + struct cc33xx_rx_filter *filter; + u16 offset; + u8 flags, len; + + filter = kzalloc(sizeof(*filter), GFP_KERNEL); + if (!filter) { + ret = -ENOMEM; + goto err; + } + + i = 0; + while (i < p->pattern_len) { + if (!test_bit(i, (unsigned long *)p->mask)) { + i++; + continue; + } + + for (j = i; j < p->pattern_len; j++) { + if (!test_bit(j, (unsigned long *)p->mask)) + break; + + if (i < CC33XX_RX_FILTER_ETH_HEADER_SIZE && + j >= CC33XX_RX_FILTER_ETH_HEADER_SIZE) + break; + } + + if (i < CC33XX_RX_FILTER_ETH_HEADER_SIZE) { + offset = i; + flags = CC33XX_RX_FILTER_FLAG_ETHERNET_HEADER; + } else { + offset = i - CC33XX_RX_FILTER_ETH_HEADER_SIZE; + flags = CC33XX_RX_FILTER_FLAG_IP_HEADER; + } + + len = j - i; + + ret = cc33xx_rx_filter_alloc_field(filter, offset, flags, + &p->pattern[i], len); + if (ret) + goto err; + + i = j; + } + + filter->action = FILTER_SIGNAL; + + *f = filter; + ret = 0; + goto out; + +err: + cc33xx_rx_filter_free(filter); + *f = NULL; +out: + return ret; +} + +static int cc33xx_configure_wowlan(struct cc33xx *cc, + struct cfg80211_wowlan *wow) +{ + int i, ret; + + if (!wow || (!wow->any && !wow->n_patterns)) { + if (wow) + cc33xx_warning("invalid wow configuration - set to pattern trigger without setting pattern"); + + ret = cc33xx_acx_default_rx_filter_enable(cc, 0, + FILTER_SIGNAL); + if (ret) + goto out; + + ret = cc33xx_rx_filter_clear_all(cc); + if (ret) 
+ goto out; + + return 0; + } + + if (wow->any) { + ret = cc33xx_acx_default_rx_filter_enable(cc, 1, + FILTER_SIGNAL); + if (ret) + goto out; + + ret = cc33xx_rx_filter_clear_all(cc); + if (ret) + goto out; + + return 0; + } + + if (WARN_ON(wow->n_patterns > CC33XX_MAX_RX_FILTERS)) + return -EINVAL; + + /* Validate all incoming patterns before clearing current FW state */ + for (i = 0; i < wow->n_patterns; i++) { + ret = cc33xx_validate_wowlan_pattern(&wow->patterns[i]); + if (ret) { + cc33xx_warning("Bad wowlan pattern %d", i); + return ret; + } + } + + ret = cc33xx_acx_default_rx_filter_enable(cc, 0, FILTER_SIGNAL); + if (ret) + goto out; + + ret = cc33xx_rx_filter_clear_all(cc); + if (ret) + goto out; + + /* Translate WoWLAN patterns into filters */ + for (i = 0; i < wow->n_patterns; i++) { + struct cfg80211_pkt_pattern *p; + struct cc33xx_rx_filter *filter = NULL; + + p = &wow->patterns[i]; + + ret = cc33xx_convert_wowlan_pattern_to_rx_filter(p, &filter); + if (ret) { + cc33xx_warning("Failed to create an RX filter from wowlan pattern %d", + i); + goto out; + } + + ret = cc33xx_rx_filter_enable(cc, i, 1, filter); + + cc33xx_rx_filter_free(filter); + if (ret) + goto out; + } + + ret = cc33xx_acx_default_rx_filter_enable(cc, 1, FILTER_DROP); + +out: + return ret; +} + +static int cc33xx_configure_suspend_sta(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + struct cfg80211_wowlan *wow) +{ + struct cc33xx_core_conf *core_conf = &cc->conf.core; + int ret = 0; + + if (!test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) + goto out; + + ret = cc33xx_configure_wowlan(cc, wow); + if (ret < 0) + goto out; + + if (core_conf->suspend_wake_up_event == core_conf->wake_up_event && + core_conf->suspend_listen_interval == core_conf->listen_interval) + goto out; + + ret = cc33xx_acx_wake_up_conditions(cc, wlvif, + core_conf->suspend_wake_up_event, + core_conf->suspend_listen_interval); + + if (ret < 0) + cc33xx_error("suspend: set wake up conditions failed: %d", ret); +out: + 
return ret; +} + +static int cc33xx_configure_suspend_ap(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + struct cfg80211_wowlan *wow) +{ + int ret = 0; + + if (!test_bit(WLVIF_FLAG_AP_STARTED, &wlvif->flags)) + goto out; + + ret = cc33xx_acx_beacon_filter_opt(cc, wlvif, true); + if (ret < 0) + goto out; + + ret = cc33xx_configure_wowlan(cc, wow); + if (ret < 0) + goto out; + +out: + return ret; +} + +static int cc33xx_configure_suspend(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct cfg80211_wowlan *wow) +{ + if (wlvif->bss_type == BSS_TYPE_STA_BSS) + return cc33xx_configure_suspend_sta(cc, wlvif, wow); + + if (wlvif->bss_type == BSS_TYPE_AP_BSS) + return cc33xx_configure_suspend_ap(cc, wlvif, wow); + + return 0; +} + +static void cc33xx_configure_resume(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int ret = 0; + bool is_ap = wlvif->bss_type == BSS_TYPE_AP_BSS; + bool is_sta = wlvif->bss_type == BSS_TYPE_STA_BSS; + struct cc33xx_core_conf *core_conf = &cc->conf.core; + + if (!is_ap && !is_sta) + return; + + if ((is_sta && !test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) || + (is_ap && !test_bit(WLVIF_FLAG_AP_STARTED, &wlvif->flags))) + return; + + cc33xx_configure_wowlan(cc, NULL); + + if (is_sta) { + if (core_conf->suspend_wake_up_event == core_conf->wake_up_event && + core_conf->suspend_listen_interval == core_conf->listen_interval) + return; + + ret = cc33xx_acx_wake_up_conditions(cc, wlvif, + core_conf->wake_up_event, + core_conf->listen_interval); + + if (ret < 0) + cc33xx_error("resume: wake up conditions failed: %d", + ret); + + } else if (is_ap) { + ret = cc33xx_acx_beacon_filter_opt(cc, wlvif, false); + } +} + +static int __maybe_unused cc33xx_op_suspend(struct ieee80211_hw *hw, + struct cfg80211_wowlan *wow) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif; + unsigned long flags; + int ret = 0; + + WARN_ON(!wow); + + /* we want to perform the recovery before suspending */ + if (test_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, 
&cc->flags)) { + cc33xx_warning("postponing suspend to perform recovery"); + return -EBUSY; + } + + cc33xx_tx_flush(cc); + + mutex_lock(&cc->mutex); + + cc->keep_device_power = true; + cc33xx_for_each_wlvif(cc, wlvif) { + if (cc33xx_is_p2p_mgmt(wlvif)) + continue; + + ret = cc33xx_configure_suspend(cc, wlvif, wow); + if (ret < 0) { + mutex_unlock(&cc->mutex); + cc33xx_warning("couldn't prepare device to suspend"); + return ret; + } + } + + mutex_unlock(&cc->mutex); + + if (ret < 0) { + cc33xx_warning("couldn't prepare device to suspend"); + return ret; + } + + flush_work(&cc->tx_work); + + /* Cancel the watchdog even if above tx_flush failed. We will detect + * it on resume anyway. + */ + cancel_delayed_work(&cc->tx_watchdog_work); + + /* set suspended flag to avoid triggering a new threaded_irq + * work. + */ + spin_lock_irqsave(&cc->cc_lock, flags); + set_bit(CC33XX_FLAG_SUSPENDED, &cc->flags); + spin_unlock_irqrestore(&cc->cc_lock, flags); + + return 0; +} + +static int __maybe_unused cc33xx_op_resume(struct ieee80211_hw *hw) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif; + unsigned long flags; + bool run_irq_work = false, pending_recovery; + int ret = 0; + + WARN_ON(!cc->keep_device_power); + + /* re-enable irq_work enqueuing, and call irq_work directly if + * there is a pending work. 
+ */ + spin_lock_irqsave(&cc->cc_lock, flags); + clear_bit(CC33XX_FLAG_SUSPENDED, &cc->flags); + run_irq_work = test_and_clear_bit(CC33XX_FLAG_PENDING_WORK, &cc->flags); + spin_unlock_irqrestore(&cc->cc_lock, flags); + + mutex_lock(&cc->mutex); + + /* test the recovery flag before calling any SDIO functions */ + pending_recovery = test_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, + &cc->flags); + + if (run_irq_work) { + cc33xx_debug(DEBUG_MAC80211, "Running postponed irq_work directly"); + + /* don't talk to the HW if recovery is pending */ + if (!pending_recovery) { + ret = cc33xx_irq_locked(cc); + if (ret) + cc33xx_queue_recovery_work(cc); + } + + cc33xx_enable_interrupts(cc); + } + + if (pending_recovery) { + cc33xx_warning("queuing forgotten recovery on resume"); + ieee80211_queue_work(cc->hw, &cc->recovery_work); + goto out; + } + + cc33xx_for_each_wlvif(cc, wlvif) { + if (cc33xx_is_p2p_mgmt(wlvif)) + continue; + + cc33xx_configure_resume(cc, wlvif); + } + +out: + cc->keep_device_power = false; + + /* Set a flag to re-init the watchdog on the first Tx after resume. + * That way we avoid possible conditions where Tx-complete interrupts + * fail to arrive and we perform a spurious recovery. + */ + set_bit(CC33XX_FLAG_REINIT_TX_WDOG, &cc->flags); + mutex_unlock(&cc->mutex); + + return ret; +} + +static int cc33xx_op_start(struct ieee80211_hw *hw) +{ + /* We have to delay the booting of the hardware because + * we need to know the local MAC address before downloading and + * initializing the firmware. The MAC address cannot be changed + * after boot, and without the proper MAC address, the firmware + * will not function properly. + * + * The MAC address is first known when the corresponding interface + * is added. That is where we will initialize the hardware. 
+ */ + + return 0; +} + +static void cc33xx_turn_off(struct cc33xx *cc) +{ + int i; + + if (cc->state == CC33XX_STATE_OFF) { + if (test_and_clear_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, + &cc->flags)) + cc33xx_enable_interrupts(cc); + + return; + } + + mutex_lock(&cc->mutex); + + /* this must be before the cancel_work calls below, so that the work + * functions don't perform further work. + */ + cc->state = CC33XX_STATE_OFF; + + /* Use the nosync variant to disable interrupts, so the mutex could be + * held while doing so without deadlocking. + */ + cc33xx_disable_interrupts_nosync(cc); + + mutex_unlock(&cc->mutex); + + if (!test_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags)) + cancel_work_sync(&cc->recovery_work); + cc33xx_flush_deferred_work(cc); + cancel_delayed_work_sync(&cc->scan_complete_work); + cancel_work_sync(&cc->netstack_work); + cancel_work_sync(&cc->tx_work); + cancel_work_sync(&cc->irq_deferred_work); + cancel_delayed_work_sync(&cc->tx_watchdog_work); + + /* let's notify MAC80211 about the remaining pending TX frames */ + mutex_lock(&cc->mutex); + cc33xx_tx_reset(cc); + + cc33xx_power_off(cc); + + cc->band = NL80211_BAND_2GHZ; + + cc->rx_counter = 0; + cc->power_level = CC33XX_MAX_TXPWR; + cc->tx_blocks_available = 0; + cc->tx_allocated_blocks = 0; + + cc->ap_fw_ps_map = 0; + cc->ap_ps_map = 0; + cc->sleep_auth = CC33XX_PSM_ILLEGAL; + memset(cc->roles_map, 0, sizeof(cc->roles_map)); + memset(cc->links_map, 0, sizeof(cc->links_map)); + memset(cc->roc_map, 0, sizeof(cc->roc_map)); + memset(cc->session_ids, 0, sizeof(cc->session_ids)); + memset(cc->rx_filter_enabled, 0, sizeof(cc->rx_filter_enabled)); + cc->active_sta_count = 0; + cc->active_link_count = 0; + cc->ble_enable = 0; + + /* The system link is always allocated */ + cc->links[CC33XX_SYSTEM_HLID].allocated_pkts = 0; + cc->links[CC33XX_SYSTEM_HLID].prev_freed_pkts = 0; + __set_bit(CC33XX_SYSTEM_HLID, cc->links_map); + + /* this is performed after the cancel_work calls and the associated + * 
mutex_lock, so that cc33xx_op_add_interface does not accidentally + * get executed before all these vars have been reset. + */ + cc->flags = 0; + + for (i = 0; i < NUM_TX_QUEUES; i++) + cc->tx_allocated_pkts[i] = 0; + + kfree(cc->target_mem_map); + cc->target_mem_map = NULL; + + /* FW channels must be re-calibrated after recovery, + * save current Reg-Domain channel configuration and clear it. + */ + memcpy(cc->reg_ch_conf_pending, cc->reg_ch_conf_last, + sizeof(cc->reg_ch_conf_pending)); + memset(cc->reg_ch_conf_last, 0, sizeof(cc->reg_ch_conf_last)); + + mutex_unlock(&cc->mutex); +} + +static inline void cc33xx_op_stop(struct ieee80211_hw *hw, bool suspend) {} + +static void cc33xx_channel_switch_work(struct work_struct *work) +{ + struct delayed_work *dwork; + struct cc33xx *cc; + struct ieee80211_vif *vif; + struct cc33xx_vif *wlvif; + + dwork = to_delayed_work(work); + wlvif = container_of(dwork, struct cc33xx_vif, channel_switch_work); + cc = wlvif->cc; + + cc33xx_info("channel switch failed (role_id: %d).", wlvif->role_id); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + /* check the channel switch is still ongoing */ + if (!test_and_clear_bit(WLVIF_FLAG_CS_PROGRESS, &wlvif->flags)) + goto out; + + vif = cc33xx_wlvif_to_vif(wlvif); + ieee80211_chswitch_done(vif, false, 0); + + cc33xx_cmd_stop_channel_switch(cc, wlvif); + +out: + mutex_unlock(&cc->mutex); +} + +static void cc33xx_connection_loss_work(struct work_struct *work) +{ + struct delayed_work *dwork; + struct cc33xx *cc; + struct ieee80211_vif *vif; + struct cc33xx_vif *wlvif; + + dwork = to_delayed_work(work); + wlvif = container_of(dwork, struct cc33xx_vif, connection_loss_work); + cc = wlvif->cc; + + cc33xx_info("Connection loss work (role_id: %d).", wlvif->role_id); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + /* Call mac80211 connection loss */ + if (!test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) + 
goto out; + + vif = cc33xx_wlvif_to_vif(wlvif); + ieee80211_connection_loss(vif); + +out: + mutex_unlock(&cc->mutex); +} + +static void cc33xx_pending_auth_complete_work(struct work_struct *work) +{ + struct delayed_work *dwork; + struct cc33xx *cc; + struct cc33xx_vif *wlvif; + unsigned long time_spare; + + dwork = to_delayed_work(work); + wlvif = container_of(dwork, struct cc33xx_vif, + pending_auth_complete_work); + cc = wlvif->cc; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + /* Make sure a second really passed since the last auth reply. Maybe + * a second auth reply arrived while we were stuck on the mutex. + * Check for a little less than the timeout to protect from scheduler + * irregularities. + */ + time_spare = msecs_to_jiffies(CC33XX_PEND_AUTH_ROC_TIMEOUT - 50); + time_spare += jiffies; + if (!time_after(time_spare, wlvif->pending_auth_reply_time)) + goto out; + + /* cancel the ROC if active */ + cc33xx_debug(DEBUG_CMD, + "pending_auth t/o expired - cancel ROC if active"); + + cc33xx_update_inconn_sta(cc, wlvif, NULL, false); + +out: + mutex_unlock(&cc->mutex); +} + +static void cc33xx_roc_timeout_work(struct work_struct *work) +{ + struct delayed_work *dwork; + struct cc33xx *cc; + struct cc33xx_vif *wlvif; + unsigned long time_spare; + + dwork = to_delayed_work(work); + wlvif = container_of(dwork, struct cc33xx_vif, roc_timeout_work); + cc = wlvif->cc; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + /* Make sure that requested timeout really passed. Maybe an association + * completed and croc arrived while we were stuck on the mutex. + * Check for a little less than the timeout to protect from scheduler + * irregularities. 
+ */ + time_spare = msecs_to_jiffies(CC33xx_PEND_ROC_COMPLETE_TIMEOUT - 50); + time_spare += jiffies; + if (!time_after(time_spare, wlvif->pending_auth_reply_time)) + goto out; + + if (test_bit(wlvif->role_id, cc->roc_map)) + cc33xx_croc(cc, wlvif->role_id); + +out: + mutex_unlock(&cc->mutex); +} + +static int cc33xx_allocate_rate_policy(struct cc33xx *cc, u8 *idx) +{ + u8 policy = find_first_zero_bit(cc->rate_policies_map, + CC33XX_MAX_RATE_POLICIES); + if (policy >= CC33XX_MAX_RATE_POLICIES) + return -EBUSY; + + __set_bit(policy, cc->rate_policies_map); + *idx = policy; + return 0; +} + +static void cc33xx_free_rate_policy(struct cc33xx *cc, u8 *idx) +{ + if (WARN_ON(*idx >= CC33XX_MAX_RATE_POLICIES)) + return; + + __clear_bit(*idx, cc->rate_policies_map); + *idx = CC33XX_MAX_RATE_POLICIES; +} + +static u8 cc33xx_get_role_type(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + + switch (wlvif->bss_type) { + case BSS_TYPE_AP_BSS: + if (wlvif->p2p) + return CC33XX_ROLE_P2P_GO; + else if (ieee80211_vif_is_mesh(vif)) + return CC33XX_ROLE_MESH_POINT; + else + return CC33XX_ROLE_AP; + + case BSS_TYPE_STA_BSS: + if (wlvif->p2p) + return CC33XX_ROLE_P2P_CL; + else + return CC33XX_ROLE_STA; + + case BSS_TYPE_IBSS: + return CC33XX_ROLE_IBSS; + + default: + cc33xx_error("invalid bss_type: %d", wlvif->bss_type); + } + return CC33XX_INVALID_ROLE_TYPE; +} + +static int cc33xx_init_vif_data(struct cc33xx *cc, struct ieee80211_vif *vif) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + struct conf_tx_settings *tx_settings = &cc->conf.host_conf.tx; + int i; + + /* clear everything but the persistent data */ + memset(wlvif, 0, offsetof(struct cc33xx_vif, persistent)); + + switch (ieee80211_vif_type_p2p(vif)) { + case NL80211_IFTYPE_P2P_CLIENT: + wlvif->p2p = 1; + fallthrough; + case NL80211_IFTYPE_STATION: + case NL80211_IFTYPE_P2P_DEVICE: + wlvif->bss_type = BSS_TYPE_STA_BSS; + break; + case NL80211_IFTYPE_ADHOC: + 
wlvif->bss_type = BSS_TYPE_IBSS; + break; + case NL80211_IFTYPE_P2P_GO: + wlvif->p2p = 1; + fallthrough; + case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_MESH_POINT: + wlvif->bss_type = BSS_TYPE_AP_BSS; + break; + default: + wlvif->bss_type = MAX_BSS_TYPE; + return -EOPNOTSUPP; + } + + wlvif->role_id = CC33XX_INVALID_ROLE_ID; + wlvif->dev_role_id = CC33XX_INVALID_ROLE_ID; + wlvif->dev_hlid = CC33XX_INVALID_LINK_ID; + + if (wlvif->bss_type == BSS_TYPE_STA_BSS || + wlvif->bss_type == BSS_TYPE_IBSS) { + /* init sta/ibss data */ + wlvif->sta.hlid = CC33XX_INVALID_LINK_ID; + cc33xx_allocate_rate_policy(cc, &wlvif->sta.basic_rate_idx); + cc33xx_allocate_rate_policy(cc, &wlvif->sta.ap_rate_idx); + cc33xx_allocate_rate_policy(cc, &wlvif->sta.p2p_rate_idx); + wlvif->basic_rate_set = CONF_TX_RATE_MASK_BASIC; + wlvif->basic_rate = CONF_TX_RATE_MASK_BASIC; + wlvif->rate_set = CONF_TX_RATE_MASK_BASIC; + } else { + /* init ap data */ + wlvif->ap.bcast_hlid = CC33XX_INVALID_LINK_ID; + wlvif->ap.global_hlid = CC33XX_INVALID_LINK_ID; + cc33xx_allocate_rate_policy(cc, &wlvif->ap.mgmt_rate_idx); + cc33xx_allocate_rate_policy(cc, &wlvif->ap.bcast_rate_idx); + for (i = 0; i < CONF_TX_MAX_AC_COUNT; i++) + cc33xx_allocate_rate_policy(cc, + &wlvif->ap.ucast_rate_idx[i]); + wlvif->basic_rate_set = CONF_TX_ENABLED_RATES; + /* TODO: check if basic_rate shouldn't be + * cc33xx_tx_min_rate_get(cc, wlvif->basic_rate_set); + * instead (the same thing for STA above). + */ + wlvif->basic_rate = CONF_TX_ENABLED_RATES; + /* TODO: this seems to be used only for STA, check it */ + wlvif->rate_set = CONF_TX_ENABLED_RATES; + } + + wlvif->bitrate_masks[NL80211_BAND_2GHZ] = tx_settings->basic_rate; + wlvif->bitrate_masks[NL80211_BAND_5GHZ] = tx_settings->basic_rate_5; + wlvif->beacon_int = CC33XX_DEFAULT_BEACON_INT; + + /* mac80211 configures some values globally, while we treat them + * per-interface. 
thus, on init, we have to copy them from cc + */ + wlvif->band = cc->band; + wlvif->power_level = cc->power_level; + + INIT_WORK(&wlvif->rc_update_work, cc33xx_rc_update_work); + INIT_DELAYED_WORK(&wlvif->channel_switch_work, + cc33xx_channel_switch_work); + INIT_DELAYED_WORK(&wlvif->connection_loss_work, + cc33xx_connection_loss_work); + INIT_DELAYED_WORK(&wlvif->pending_auth_complete_work, + cc33xx_pending_auth_complete_work); + INIT_DELAYED_WORK(&wlvif->roc_timeout_work, + cc33xx_roc_timeout_work); + INIT_LIST_HEAD(&wlvif->list); + + return 0; +} + +struct cc33xx_hw_queue_iter_data { + unsigned long hw_queue_map[BITS_TO_LONGS(CC33XX_NUM_MAC_ADDRESSES)]; + + /* current vif */ + struct ieee80211_vif *vif; + + /* is the current vif among those iterated */ + bool cur_running; +}; + +static void cc33xx_hw_queue_iter(void *data, u8 *mac, struct ieee80211_vif *vif) +{ + struct cc33xx_hw_queue_iter_data *iter_data = data; + + if (vif->type == NL80211_IFTYPE_P2P_DEVICE || + WARN_ON_ONCE(vif->hw_queue[0] == IEEE80211_INVAL_HW_QUEUE)) + return; + + if (iter_data->cur_running || vif == iter_data->vif) { + iter_data->cur_running = true; + return; + } + + __set_bit(vif->hw_queue[0] / NUM_TX_QUEUES, iter_data->hw_queue_map); +} + +static int cc33xx_allocate_hw_queue_base(struct cc33xx *cc, + struct cc33xx_vif *wlvif) +{ + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + struct cc33xx_hw_queue_iter_data iter_data = {}; + int i, q_base; + + if (vif->type == NL80211_IFTYPE_P2P_DEVICE) { + vif->cab_queue = IEEE80211_INVAL_HW_QUEUE; + return 0; + } + + iter_data.vif = vif; + + /* mark all bits taken by active interfaces */ + ieee80211_iterate_active_interfaces_atomic(cc->hw, + IEEE80211_IFACE_ITER_RESUME_ALL, + cc33xx_hw_queue_iter, &iter_data); + + /* the current vif is already running in mac80211 (resume/recovery) */ + if (iter_data.cur_running) { + wlvif->hw_queue_base = vif->hw_queue[0]; + cc33xx_debug(DEBUG_MAC80211, + "using pre-allocated hw queue base %d", + 
wlvif->hw_queue_base); + + /* the interface type might have changed */ + goto adjust_cab_queue; + } + + q_base = find_first_zero_bit(iter_data.hw_queue_map, + CC33XX_NUM_MAC_ADDRESSES); + if (q_base >= CC33XX_NUM_MAC_ADDRESSES) + return -EBUSY; + + wlvif->hw_queue_base = q_base * NUM_TX_QUEUES; + cc33xx_debug(DEBUG_MAC80211, "allocating hw queue base: %d", + wlvif->hw_queue_base); + + for (i = 0; i < NUM_TX_QUEUES; i++) { + cc->queue_stop_reasons[wlvif->hw_queue_base + i] = 0; + /* register hw queues in mac80211 */ + vif->hw_queue[i] = wlvif->hw_queue_base + i; + } + +adjust_cab_queue: + /* the last places are reserved for cab queues per interface */ + if (wlvif->bss_type == BSS_TYPE_AP_BSS) { + vif->cab_queue = NUM_TX_QUEUES * CC33XX_NUM_MAC_ADDRESSES + + wlvif->hw_queue_base / NUM_TX_QUEUES; + } else { + vif->cab_queue = IEEE80211_INVAL_HW_QUEUE; + } + + return 0; +} + +static int cc33xx_op_add_interface(struct ieee80211_hw *hw, + struct ieee80211_vif *vif) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + struct vif_counter_data vif_count; + int ret = 0; + u8 role_type; + + if (cc->plt) { + cc33xx_error("Adding Interface not allowed while in PLT mode"); + return -EBUSY; + } + + vif->driver_flags |= IEEE80211_VIF_BEACON_FILTER | + IEEE80211_VIF_SUPPORTS_UAPSD | + IEEE80211_VIF_SUPPORTS_CQM_RSSI; + + cc33xx_get_vif_count(hw, vif, &vif_count); + + mutex_lock(&cc->mutex); + + /* in some corner-case HW recovery scenarios it's possible to + * get here before __cc33xx_op_remove_interface is complete, so + * opt out if that is the case. 
+ */ + if (test_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags) || + test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags)) { + ret = -EBUSY; + goto out; + } + + ret = cc33xx_init_vif_data(cc, vif); + if (ret < 0) + goto out; + + wlvif->cc = cc; + role_type = cc33xx_get_role_type(cc, wlvif); + if (role_type == CC33XX_INVALID_ROLE_TYPE) { + ret = -EINVAL; + goto out; + } + + ret = cc33xx_allocate_hw_queue_base(cc, wlvif); + if (ret < 0) + goto out; + + if (!cc33xx_is_p2p_mgmt(wlvif)) { + ret = cc33xx_cmd_role_enable(cc, vif->addr, + role_type, &wlvif->role_id); + if (ret < 0) + goto out; + + ret = cc33xx_init_vif_specific(cc, vif); + if (ret < 0) + goto out; + } else { + ret = cc33xx_cmd_role_enable(cc, vif->addr, CC33XX_ROLE_DEVICE, + &wlvif->dev_role_id); + if (ret < 0) + goto out; + + /* needed mainly for configuring rate policies */ + ret = cc33xx_acx_config_ps(cc, wlvif); + if (ret < 0) + goto out; + } + + list_add(&wlvif->list, &cc->wlvif_list); + set_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags); + + if (wlvif->bss_type == BSS_TYPE_AP_BSS) + cc->ap_count++; + else + cc->sta_count++; + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +static void __cc33xx_op_remove_interface(struct cc33xx *cc, + struct ieee80211_vif *vif, + bool reset_tx_queues) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int i, ret; + bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); + + if (!test_and_clear_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags)) + return; + + /* because of hardware recovery, we may get here twice */ + if (cc->state == CC33XX_STATE_OFF) + return; + + if (cc->scan.state != CC33XX_SCAN_STATE_IDLE && + cc->scan_wlvif == wlvif) { + struct cfg80211_scan_info info = { + .aborted = true, + }; + + /* Rearm the tx watchdog just before idling scan. 
This + * prevents just-finished scans from triggering the watchdog + */ + cc33xx_rearm_tx_watchdog_locked(cc); + + cc->scan.state = CC33XX_SCAN_STATE_IDLE; + memset(cc->scan.scanned_ch, 0, sizeof(cc->scan.scanned_ch)); + cc->scan_wlvif = NULL; + cc->scan.req = NULL; + ieee80211_scan_completed(cc->hw, &info); + } + + if (cc->sched_vif == wlvif) + cc->sched_vif = NULL; + + if (cc->roc_vif == vif) { + cc->roc_vif = NULL; + ieee80211_remain_on_channel_expired(cc->hw); + } + + if (!test_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags)) { + /* disable active roles */ + + if (wlvif->bss_type == BSS_TYPE_STA_BSS || + wlvif->bss_type == BSS_TYPE_IBSS) { + if (wlvif->dev_hlid != CC33XX_INVALID_LINK_ID) + cc33xx_stop_dev(cc, wlvif); + } + + if (!cc33xx_is_p2p_mgmt(wlvif)) { + ret = cc33xx_cmd_role_disable(cc, &wlvif->role_id); + if (ret < 0) + goto deinit; + } else { + ret = cc33xx_cmd_role_disable(cc, &wlvif->dev_role_id); + if (ret < 0) + goto deinit; + } + } +deinit: + cc33xx_tx_reset_wlvif(cc, wlvif); + + /* clear all hlids (except system_hlid) */ + wlvif->dev_hlid = CC33XX_INVALID_LINK_ID; + + if (wlvif->bss_type == BSS_TYPE_STA_BSS || + wlvif->bss_type == BSS_TYPE_IBSS) { + wlvif->sta.hlid = CC33XX_INVALID_LINK_ID; + cc33xx_free_rate_policy(cc, &wlvif->sta.basic_rate_idx); + cc33xx_free_rate_policy(cc, &wlvif->sta.ap_rate_idx); + cc33xx_free_rate_policy(cc, &wlvif->sta.p2p_rate_idx); + } else { + wlvif->ap.bcast_hlid = CC33XX_INVALID_LINK_ID; + wlvif->ap.global_hlid = CC33XX_INVALID_LINK_ID; + cc33xx_free_rate_policy(cc, &wlvif->ap.mgmt_rate_idx); + cc33xx_free_rate_policy(cc, &wlvif->ap.bcast_rate_idx); + for (i = 0; i < CONF_TX_MAX_AC_COUNT; i++) + cc33xx_free_rate_policy(cc, + &wlvif->ap.ucast_rate_idx[i]); + cc33xx_free_ap_keys(cc, wlvif); + } + + dev_kfree_skb(wlvif->probereq); + wlvif->probereq = NULL; + if (cc->last_wlvif == wlvif) + cc->last_wlvif = NULL; + list_del(&wlvif->list); + memset(wlvif->ap.sta_hlid_map, 0, sizeof(wlvif->ap.sta_hlid_map)); + 
wlvif->role_id = CC33XX_INVALID_ROLE_ID; + wlvif->dev_role_id = CC33XX_INVALID_ROLE_ID; + + if (is_ap) + cc->ap_count--; + else + cc->sta_count--; + + /* Last AP, have more stations. Configure sleep auth according to STA. + * Don't do this on unintended recovery. + */ + if (test_bit(CC33XX_FLAG_RECOVERY_IN_PROGRESS, &cc->flags)) + goto unlock; + + /* mask ap events */ + if (cc->ap_count == 0 && is_ap) + cc->event_mask &= ~cc->ap_event_mask; + + if (cc->ap_count == 0 && is_ap && cc->sta_count) { + u8 sta_auth = cc->conf.host_conf.conn.sta_sleep_auth; + /* Configure for power according to debugfs */ + if (sta_auth != CC33XX_PSM_ILLEGAL) + cc33xx_acx_sleep_auth(cc, sta_auth); + /* Configure for ELP power saving */ + else + cc33xx_acx_sleep_auth(cc, CC33XX_PSM_ELP); + } + +unlock: + mutex_unlock(&cc->mutex); + + cancel_work_sync(&wlvif->rc_update_work); + cancel_delayed_work_sync(&wlvif->connection_loss_work); + cancel_delayed_work_sync(&wlvif->channel_switch_work); + cancel_delayed_work_sync(&wlvif->pending_auth_complete_work); + cancel_delayed_work_sync(&wlvif->roc_timeout_work); + + mutex_lock(&cc->mutex); +} + +static void cc33xx_op_remove_interface(struct ieee80211_hw *hw, + struct ieee80211_vif *vif) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + struct cc33xx_vif *iter; + struct vif_counter_data vif_count; + + cc33xx_get_vif_count(hw, vif, &vif_count); + mutex_lock(&cc->mutex); + + if (cc->state == CC33XX_STATE_OFF || + !test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags)) + goto out; + + /* cc->vif can be null here if someone shuts down the interface + * just when hardware recovery has been started. 
+ */ + cc33xx_for_each_wlvif(cc, iter) { + if (iter != wlvif) + continue; + + __cc33xx_op_remove_interface(cc, vif, true); + break; + } + WARN_ON(iter != wlvif); + +out: + mutex_unlock(&cc->mutex); +} + +static int cc33xx_op_change_interface(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + enum nl80211_iftype new_type, bool p2p) +{ + struct cc33xx *cc = hw->priv; + int ret; + + set_bit(CC33XX_FLAG_VIF_CHANGE_IN_PROGRESS, &cc->flags); + cc33xx_op_remove_interface(hw, vif); + + vif->type = new_type; + vif->p2p = p2p; + ret = cc33xx_op_add_interface(hw, vif); + + clear_bit(CC33XX_FLAG_VIF_CHANGE_IN_PROGRESS, &cc->flags); + return ret; +} + +static int cc33xx_join(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int ret; + bool is_ibss = (wlvif->bss_type == BSS_TYPE_IBSS); + + /* One of the side effects of the JOIN command is that it clears + * WPA/WPA2 keys from the chipset. Performing a JOIN while associated + * to a WPA/WPA2 access point will therefore kill the data-path. + * Currently the only valid scenario for JOIN during association + * is on roaming, in which case we will also be given new keys. + * Keep the below message for now, unless it starts bothering + * users who really like to roam a lot :) + */ + if (test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) + cc33xx_info("JOIN while associated."); + + /* clear encryption type */ + wlvif->encryption_type = KEY_NONE; + + if (is_ibss) { + ret = cc33xx_cmd_role_start_ibss(cc, wlvif); + } else { + if (cc->quirks & CC33XX_QUIRK_START_STA_FAILS) { + /* TODO: this is an ugly workaround for wl12xx fw + * bug - we are not able to tx/rx after the first + * start_sta, so make dummy start+stop calls, + * and then call start_sta again. + * this should be fixed in the fw. 
+ */ + cc33xx_cmd_role_start_sta(cc, wlvif); + cc33xx_cmd_role_stop_sta(cc, wlvif); + } + + ret = cc33xx_cmd_role_start_sta(cc, wlvif); + } + + return ret; +} + +static int cc33xx_ssid_set(struct cc33xx_vif *wlvif, + struct sk_buff *skb, int offset) +{ + u8 ssid_len; + const u8 *ptr = cfg80211_find_ie(WLAN_EID_SSID, skb->data + offset, + skb->len - offset); + + if (!ptr) { + cc33xx_error("No SSID in IEs!"); + return -ENOENT; + } + + ssid_len = ptr[1]; + if (ssid_len > IEEE80211_MAX_SSID_LEN) { + cc33xx_error("SSID is too long!"); + return -EINVAL; + } + + wlvif->ssid_len = ssid_len; + memcpy(wlvif->ssid, ptr + 2, ssid_len); + return 0; +} + +static int cc33xx_set_ssid(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + struct sk_buff *skb; + int ieoffset; + + /* we currently only support setting the ssid from the ap probe req */ + if (wlvif->bss_type != BSS_TYPE_STA_BSS) + return -EINVAL; + + skb = ieee80211_ap_probereq_get(cc->hw, vif); + if (!skb) + return -EINVAL; + + ieoffset = offsetof(struct ieee80211_mgmt, u.probe_req.variable); + cc33xx_ssid_set(wlvif, skb, ieoffset); + dev_kfree_skb(skb); + + return 0; +} + +static int cc33xx_set_assoc(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_bss_conf *bss_conf, + struct ieee80211_sta *sta, + struct ieee80211_vif *vif, u32 sta_rate_set) +{ + int ret; + + wlvif->aid = vif->cfg.aid; + wlvif->channel_type = cfg80211_get_chandef_type(&bss_conf->chanreq.oper); + wlvif->beacon_int = bss_conf->beacon_int; + wlvif->wmm_enabled = bss_conf->qos; + + wlvif->nontransmitted = bss_conf->nontransmitted; + cc33xx_debug(DEBUG_MAC80211, "set_assoc mbssid params: nonTxbssid: %d, idx: %d, max_ind: %d, trans_bssid: %pM, ema_ap: %d", + bss_conf->nontransmitted, bss_conf->bssid_index, + bss_conf->bssid_indicator, bss_conf->transmitter_bssid, + bss_conf->ema_ap); + wlvif->bssid_index = bss_conf->bssid_index; + wlvif->bssid_indicator = bss_conf->bssid_indicator; + 
memcpy(wlvif->transmitter_bssid, bss_conf->transmitter_bssid, ETH_ALEN); + + set_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags); + + ret = cc33xx_assoc_info_cfg(cc, wlvif, sta, wlvif->aid); + if (ret < 0) + return ret; + + if (sta_rate_set) { + wlvif->rate_set = cc33xx_tx_enabled_rates_get(cc, sta_rate_set, + wlvif->band); + } + + return ret; +} + +static int cc33xx_unset_assoc(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int ret; + bool sta = wlvif->bss_type == BSS_TYPE_STA_BSS; + + /* make sure we are connected (sta) joined */ + if (sta && !test_and_clear_bit(WLVIF_FLAG_STA_ASSOCIATED, + &wlvif->flags)) + return false; + + /* make sure we are joined (ibss) */ + if (!sta && test_and_clear_bit(WLVIF_FLAG_IBSS_JOINED, &wlvif->flags)) + return false; + + if (sta) { + /* use defaults when not associated */ + wlvif->aid = 0; + + /* free probe-request template */ + dev_kfree_skb(wlvif->probereq); + wlvif->probereq = NULL; + + /* disable beacon filtering */ + ret = cc33xx_acx_beacon_filter_opt(cc, wlvif, false); + if (ret < 0) + return ret; + } + + if (test_and_clear_bit(WLVIF_FLAG_CS_PROGRESS, &wlvif->flags)) { + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + + cc33xx_cmd_stop_channel_switch(cc, wlvif); + ieee80211_chswitch_done(vif, false, 0); + cancel_delayed_work(&wlvif->channel_switch_work); + } + + return 0; +} + +static void cc33xx_set_band_rate(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + wlvif->basic_rate_set = wlvif->bitrate_masks[wlvif->band]; + wlvif->rate_set = wlvif->basic_rate_set; +} + +static void cc33xx_sta_handle_idle(struct cc33xx *cc, + struct cc33xx_vif *wlvif, bool idle) +{ + bool cur_idle = !test_bit(WLVIF_FLAG_ACTIVE, &wlvif->flags); + + if (idle == cur_idle) + return; + + if (idle) { + clear_bit(WLVIF_FLAG_ACTIVE, &wlvif->flags); + } else { + /* The current firmware only supports sched_scan in idle */ + if (cc->sched_vif == wlvif) + cc33xx_scan_sched_scan_stop(cc, wlvif); + + set_bit(WLVIF_FLAG_ACTIVE, &wlvif->flags); + } +} 
+ +static int cc33xx_config_vif(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_conf *conf, u64 changed) +{ + int ret; + + if (cc33xx_is_p2p_mgmt(wlvif)) + return 0; + + if (conf->power_level != wlvif->power_level && + (changed & IEEE80211_CONF_CHANGE_POWER)) { + ret = cc33xx_acx_tx_power(cc, wlvif, conf->power_level); + if (ret < 0) + return ret; + } + + return 0; +} + +static int cc33xx_op_config(struct ieee80211_hw *hw, u32 changed) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif; + struct ieee80211_conf *conf = &hw->conf; + int ret = 0; + + cc33xx_debug(DEBUG_MAC80211, + "mac80211 config psm %s power %d %s changed 0x%x", + conf->flags & IEEE80211_CONF_PS ? "on" : "off", + conf->power_level, + conf->flags & IEEE80211_CONF_IDLE ? "idle" : "in use", + changed); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + /* configure each interface */ + cc33xx_for_each_wlvif(cc, wlvif) { + ret = cc33xx_config_vif(cc, wlvif, conf, changed); + if (ret < 0) + goto out; + } + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +struct cc33xx_filter_params { + bool enabled; + int mc_list_length; + u8 mc_list[ACX_MC_ADDRESS_GROUP_MAX][ETH_ALEN]; +}; + +static u64 cc33xx_op_prepare_multicast(struct ieee80211_hw *hw, + struct netdev_hw_addr_list *mc_list) +{ + struct cc33xx_filter_params *fp; + struct netdev_hw_addr *ha; + + fp = kzalloc(sizeof(*fp), GFP_ATOMIC); + if (!fp) { + cc33xx_error("Out of memory setting filters."); + return 0; + } + + /* update multicast filtering parameters */ + fp->mc_list_length = 0; + if (netdev_hw_addr_list_count(mc_list) > ACX_MC_ADDRESS_GROUP_MAX) { + fp->enabled = false; + } else { + fp->enabled = true; + netdev_hw_addr_list_for_each(ha, mc_list) { + memcpy(fp->mc_list[fp->mc_list_length], + ha->addr, ETH_ALEN); + fp->mc_list_length++; + } + } + + return (u64)(unsigned long)fp; +} + +#define CC33XX_SUPPORTED_FILTERS (FIF_ALLMULTI) + +static void 
cc33xx_op_configure_filter(struct ieee80211_hw *hw, + unsigned int changed, + unsigned int *total, u64 multicast) +{ + struct cc33xx_filter_params *fp = (void *)(unsigned long)multicast; + struct cc33xx *cc = hw->priv; + + mutex_lock(&cc->mutex); + + *total &= CC33XX_SUPPORTED_FILTERS; + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + if (!fp) + cc33xx_acx_group_address_tbl(cc, false, NULL, 0); + else if (*total & FIF_ALLMULTI || !fp->enabled) + cc33xx_acx_group_address_tbl(cc, false, NULL, 0); + else + cc33xx_acx_group_address_tbl(cc, true, fp->mc_list, fp->mc_list_length); + +out: + mutex_unlock(&cc->mutex); + kfree(fp); +} + +static int cc33xx_record_ap_key(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 id, u8 key_type, u8 key_size, const u8 *key, + u8 hlid, u32 tx_seq_32, u16 tx_seq_16) +{ + struct cc33xx_ap_key *ap_key; + int i; + + if (key_size > MAX_KEY_SIZE) + return -EINVAL; + + /* Find next free entry in ap_keys. Also check we are not replacing + * an existing key. 
+ */ + for (i = 0; i < MAX_NUM_KEYS; i++) { + if (!wlvif->ap.recorded_keys[i]) + break; + + if (wlvif->ap.recorded_keys[i]->id == id) { + cc33xx_warning("trying to record key replacement"); + return -EINVAL; + } + } + + if (i == MAX_NUM_KEYS) + return -EBUSY; + + ap_key = kzalloc(sizeof(*ap_key), GFP_KERNEL); + if (!ap_key) + return -ENOMEM; + + ap_key->id = id; + ap_key->key_type = key_type; + ap_key->key_size = key_size; + memcpy(ap_key->key, key, key_size); + ap_key->hlid = hlid; + ap_key->tx_seq_32 = tx_seq_32; + ap_key->tx_seq_16 = tx_seq_16; + + wlvif->ap.recorded_keys[i] = ap_key; + return 0; +} + +static void cc33xx_free_ap_keys(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int i; + + for (i = 0; i < MAX_NUM_KEYS; i++) { + kfree(wlvif->ap.recorded_keys[i]); + wlvif->ap.recorded_keys[i] = NULL; + } +} + +static int cc33xx_ap_init_hwenc(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int i, ret = 0; + struct cc33xx_ap_key *key; + bool wep_key_added = false; + + for (i = 0; i < MAX_NUM_KEYS; i++) { + u8 hlid; + + if (!wlvif->ap.recorded_keys[i]) + break; + + key = wlvif->ap.recorded_keys[i]; + hlid = key->hlid; + if (hlid == CC33XX_INVALID_LINK_ID) + hlid = wlvif->ap.bcast_hlid; + + ret = cc33xx_cmd_set_ap_key(cc, wlvif, KEY_ADD_OR_REPLACE, + key->id, key->key_type, + key->key_size, key->key, hlid, + key->tx_seq_32, key->tx_seq_16); + if (ret < 0) + goto out; + + if (key->key_type == KEY_WEP) + wep_key_added = true; + } + + if (wep_key_added) { + ret = cc33xx_cmd_set_default_wep_key(cc, wlvif->default_key, + wlvif->ap.bcast_hlid); + if (ret < 0) + goto out; + } + +out: + cc33xx_free_ap_keys(cc, wlvif); + return ret; +} + +static int cc33xx_config_key(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u16 action, u8 id, u8 key_type, u8 key_size, + const u8 *key, u32 tx_seq_32, u16 tx_seq_16, + struct ieee80211_sta *sta) +{ + int ret; + bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); + + if (is_ap) { + struct cc33xx_station *wl_sta; + u8 hlid; + + if (sta) 
{ + wl_sta = (struct cc33xx_station *)sta->drv_priv; + hlid = wl_sta->hlid; + } else { + hlid = wlvif->ap.bcast_hlid; + } + + if (!test_bit(WLVIF_FLAG_AP_STARTED, &wlvif->flags)) { + /* We do not support removing keys after AP shutdown. + * Pretend we do to make mac80211 happy. + */ + if (action != KEY_ADD_OR_REPLACE) + return 0; + + ret = cc33xx_record_ap_key(cc, wlvif, id, key_type, + key_size, key, hlid, + tx_seq_32, tx_seq_16); + } else { + ret = cc33xx_cmd_set_ap_key(cc, wlvif, action, id, + key_type, key_size, key, + hlid, tx_seq_32, tx_seq_16); + } + + if (ret < 0) + return ret; + } else { + const u8 *addr; + static const u8 bcast_addr[ETH_ALEN] = { + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff + }; + + addr = sta ? sta->addr : bcast_addr; + + if (is_zero_ether_addr(addr)) { + /* We don't support TX-only encryption */ + return -EOPNOTSUPP; + } + + /* The cc33xx does not allow removing unicast keys - they + * will be cleared automatically on next CMD_JOIN. Ignore the + * request silently, as we don't want mac80211 to emit + * an error message. 
+ */ + if (action == KEY_REMOVE && !is_broadcast_ether_addr(addr)) + return 0; + + /* don't remove key if hlid was already deleted */ + if (action == KEY_REMOVE && + wlvif->sta.hlid == CC33XX_INVALID_LINK_ID) + return 0; + + ret = cc33xx_cmd_set_sta_key(cc, wlvif, action, id, key_type, + key_size, key, addr, tx_seq_32, + tx_seq_16); + if (ret < 0) + return ret; + } + + return 0; +} + +static int cc33xx_set_key(struct cc33xx *cc, enum set_key_cmd cmd, + struct ieee80211_vif *vif, struct ieee80211_sta *sta, + struct ieee80211_key_conf *key_conf) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int ret; + u32 tx_seq_32 = 0; + u16 tx_seq_16 = 0; + u8 key_type; + u8 hlid; + + if (wlvif->bss_type == BSS_TYPE_AP_BSS) { + if (sta) { + struct cc33xx_station *wl_sta = (void *)sta->drv_priv; + + hlid = wl_sta->hlid; + } else { + hlid = wlvif->ap.bcast_hlid; + } + } else { + hlid = wlvif->sta.hlid; + } + + if (hlid != CC33XX_INVALID_LINK_ID) { + u64 tx_seq = cc->links[hlid].total_freed_pkts; + + tx_seq_32 = CC33XX_TX_SECURITY_HI32(tx_seq); + tx_seq_16 = CC33XX_TX_SECURITY_LO16(tx_seq); + } + + switch (key_conf->cipher) { + case WLAN_CIPHER_SUITE_WEP40: + case WLAN_CIPHER_SUITE_WEP104: + key_type = KEY_WEP; + key_conf->hw_key_idx = key_conf->keyidx; + break; + case WLAN_CIPHER_SUITE_TKIP: + key_type = KEY_TKIP; + key_conf->hw_key_idx = key_conf->keyidx; + break; + case WLAN_CIPHER_SUITE_CCMP: + key_type = KEY_AES; + key_conf->flags |= IEEE80211_KEY_FLAG_PUT_IV_SPACE; + break; + case WLAN_CIPHER_SUITE_GCMP: + key_type = KEY_GCMP128; + key_conf->flags |= IEEE80211_KEY_FLAG_PUT_IV_SPACE; + break; + case WLAN_CIPHER_SUITE_CCMP_256: + key_type = KEY_CCMP256; + key_conf->flags |= IEEE80211_KEY_FLAG_PUT_IV_SPACE; + break; + case WLAN_CIPHER_SUITE_GCMP_256: + key_type = KEY_GCMP_256; + key_conf->flags |= IEEE80211_KEY_FLAG_PUT_IV_SPACE; + break; + case WLAN_CIPHER_SUITE_AES_CMAC: + key_type = KEY_IGTK; + break; + case WLAN_CIPHER_SUITE_BIP_CMAC_256: + key_type = KEY_CMAC_256; 
+ break; + case WLAN_CIPHER_SUITE_BIP_GMAC_128: + key_type = KEY_GMAC_128; + break; + case WLAN_CIPHER_SUITE_BIP_GMAC_256: + key_type = KEY_GMAC_256; + break; + case CC33XX_CIPHER_SUITE_GEM: + key_type = KEY_GEM; + break; + default: + cc33xx_error("Unknown key algo 0x%x", key_conf->cipher); + + return -EOPNOTSUPP; + } + + switch (cmd) { + case SET_KEY: + ret = cc33xx_config_key(cc, wlvif, KEY_ADD_OR_REPLACE, + key_conf->keyidx, key_type, key_conf->keylen, + key_conf->key, tx_seq_32, tx_seq_16, sta); + if (ret < 0) { + cc33xx_error("Could not add or replace key"); + return ret; + } + + /* reconfiguring arp response if the unicast (or common) + * encryption key type was changed + */ + if (wlvif->bss_type == BSS_TYPE_STA_BSS && + (sta || key_type == KEY_WEP) && + wlvif->encryption_type != key_type) { + wlvif->encryption_type = key_type; + if (ret < 0) { + cc33xx_warning("build arp rsp failed: %d", ret); + return ret; + } + } + break; + + case DISABLE_KEY: + ret = cc33xx_config_key(cc, wlvif, KEY_REMOVE, key_conf->keyidx, + key_type, key_conf->keylen, + key_conf->key, 0, 0, sta); + if (ret < 0) { + cc33xx_error("Could not remove key"); + return ret; + } + break; + + default: + cc33xx_error("Unsupported key cmd 0x%x", cmd); + return -EOPNOTSUPP; + } + + return ret; +} + +static int cc33xx_hw_set_key(struct cc33xx *cc, enum set_key_cmd cmd, + struct ieee80211_vif *vif, struct ieee80211_sta *sta, + struct ieee80211_key_conf *key_conf) +{ + bool special_enc; + int ret; + + special_enc = key_conf->cipher == CC33XX_CIPHER_SUITE_GEM || + key_conf->cipher == WLAN_CIPHER_SUITE_TKIP; + + ret = cc33xx_set_key(cc, cmd, vif, sta, key_conf); + if (ret < 0) + goto out; + + /* when adding the first or removing the last GEM/TKIP key, + * we have to adjust the number of spare blocks. 
+ */ + if (special_enc) { + if (cmd == SET_KEY) { + /* first key */ + cc->extra_spare_key_count++; + } else if (cmd == DISABLE_KEY) { + /* last key */ + cc->extra_spare_key_count--; + } + } + +out: + return ret; +} + +static int cc33xx_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key_conf) +{ + struct cc33xx *cc = hw->priv; + int ret; + bool might_change_spare = key_conf->cipher == CC33XX_CIPHER_SUITE_GEM || + key_conf->cipher == WLAN_CIPHER_SUITE_TKIP; + + if (might_change_spare) { + /* stop the queues and flush to ensure the next packets are + * in sync with FW spare block accounting + */ + cc33xx_stop_queues(cc, CC33XX_QUEUE_STOP_REASON_SPARE_BLK); + cc33xx_tx_flush(cc); + } + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) { + ret = -EAGAIN; + goto out_wake_queues; + } + + ret = cc33xx_hw_set_key(cc, cmd, vif, sta, key_conf); + +out_wake_queues: + if (might_change_spare) + cc33xx_wake_queues(cc, CC33XX_QUEUE_STOP_REASON_SPARE_BLK); + + mutex_unlock(&cc->mutex); + + return ret; +} + +static void cc33xx_op_set_default_key_idx(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + int key_idx) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + + cc33xx_debug(DEBUG_MAC80211, + "mac80211 set default key idx %d", key_idx); + + /* we don't handle unsetting of default key */ + if (key_idx == -1) + return; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out_unlock; + + wlvif->default_key = key_idx; + + /* the default WEP key needs to be configured at least once */ + if (wlvif->encryption_type == KEY_WEP) + cc33xx_cmd_set_default_wep_key(cc, key_idx, wlvif->sta.hlid); + +out_unlock: + mutex_unlock(&cc->mutex); +} + +static int cc33xx_op_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + struct ieee80211_scan_request *hw_req) +{ + struct cfg80211_scan_request 
*req = &hw_req->req; + struct cc33xx *cc = hw->priv; + int ret; + u8 *ssid = NULL; + size_t len = 0; + + if (req->n_ssids) { + ssid = req->ssids[0].ssid; + len = req->ssids[0].ssid_len; + } + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) { + /* We cannot return -EBUSY here because cfg80211 will expect + * a call to ieee80211_scan_completed if we do - in this case + * there won't be any call. + */ + ret = -EAGAIN; + goto out; + } + + /* fail if there is any role in ROC */ + if (find_first_bit(cc->roc_map, CC33XX_MAX_ROLES) < CC33XX_MAX_ROLES) { + /* don't allow scanning right now */ + ret = -EBUSY; + goto out; + } + + ret = cc33xx_scan(hw->priv, vif, ssid, len, req); + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +static void cc33xx_op_cancel_hw_scan(struct ieee80211_hw *hw, + struct ieee80211_vif *vif) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + struct cfg80211_scan_info info = { + .aborted = true, + }; + int ret; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + if (cc->scan.state == CC33XX_SCAN_STATE_IDLE) + goto out; + + if (cc->scan.state != CC33XX_SCAN_STATE_DONE) { + ret = cc33xx_scan_stop(cc, wlvif); + if (ret < 0) + goto out; + } + + /* Rearm the tx watchdog just before idling scan. 
This + * prevents just-finished scans from triggering the watchdog + */ + cc33xx_rearm_tx_watchdog_locked(cc); + + cc->scan.state = CC33XX_SCAN_STATE_IDLE; + memset(cc->scan.scanned_ch, 0, sizeof(cc->scan.scanned_ch)); + cc->scan_wlvif = NULL; + cc->scan.req = NULL; + ieee80211_scan_completed(cc->hw, &info); + +out: + mutex_unlock(&cc->mutex); + + cancel_delayed_work_sync(&cc->scan_complete_work); +} + +static int cc33xx_op_sched_scan_start(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct cfg80211_sched_scan_request *req, + struct ieee80211_scan_ies *ies) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int ret; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) { + ret = -EAGAIN; + goto out; + } + + ret = cc33xx_sched_scan_start(cc, wlvif, req, ies); + if (ret < 0) + goto out; + + cc->sched_vif = wlvif; + +out: + mutex_unlock(&cc->mutex); + return ret; +} + +static int cc33xx_op_sched_scan_stop(struct ieee80211_hw *hw, + struct ieee80211_vif *vif) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + /* command to stop periodic scan was sent from mac80211 + * mark than stop command is from mac80211 and release sched_vif + */ + cc->mac80211_scan_stopped = true; + cc->sched_vif = NULL; + cc33xx_scan_sched_scan_stop(cc, wlvif); + +out: + mutex_unlock(&cc->mutex); + + return 0; +} + +static int cc33xx_op_set_frag_threshold(struct ieee80211_hw *hw, u32 value) +{ + return 0; +} + +static int cc33xx_op_set_rts_threshold(struct ieee80211_hw *hw, u32 value) +{ + return 0; +} + +static int cc33xx_bss_erp_info_changed(struct cc33xx *cc, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *bss_conf, + u64 changed) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int ret = 0; + + if (changed & BSS_CHANGED_ERP_SLOT) { + if (bss_conf->use_short_slot) + ret = 
cc33xx_acx_slot(cc, wlvif, SLOT_TIME_SHORT); + else + ret = cc33xx_acx_slot(cc, wlvif, SLOT_TIME_LONG); + if (ret < 0) { + cc33xx_warning("Set slot time failed %d", ret); + goto out; + } + } + + if (changed & BSS_CHANGED_ERP_PREAMBLE) { + if (bss_conf->use_short_preamble) + cc33xx_acx_set_preamble(cc, wlvif, ACX_PREAMBLE_SHORT); + else + cc33xx_acx_set_preamble(cc, wlvif, ACX_PREAMBLE_LONG); + } + + if (changed & BSS_CHANGED_ERP_CTS_PROT) { + if (bss_conf->use_cts_prot) { + ret = cc33xx_acx_cts_protect(cc, wlvif, + CTSPROTECT_ENABLE); + } else { + ret = cc33xx_acx_cts_protect(cc, wlvif, + CTSPROTECT_DISABLE); + } + + if (ret < 0) { + cc33xx_warning("Set ctsprotect failed %d", ret); + goto out; + } + } + +out: + return ret; +} + +static int cc33xx_set_beacon_template(struct cc33xx *cc, + struct ieee80211_vif *vif, bool is_ap) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int ret; + int ieoffset = offsetof(struct ieee80211_mgmt, u.beacon.variable); + struct sk_buff *beacon = ieee80211_beacon_get(cc->hw, vif, 0); + + struct cc33xx_cmd_set_beacon_info *cmd; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + if (!beacon) { + ret = -EINVAL; + goto end_bcn; + } + + ret = cc33xx_ssid_set(wlvif, beacon, ieoffset); + if (ret < 0) + goto end_bcn; + + cmd->role_id = wlvif->role_id; + cmd->beacon_len = cpu_to_le16(beacon->len); + + memcpy(cmd->beacon, beacon->data, beacon->len); + + ret = cc33xx_cmd_send(cc, CMD_AP_SET_BEACON_INFO, cmd, sizeof(*cmd), 0); + if (ret < 0) + goto end_bcn; + +end_bcn: + dev_kfree_skb(beacon); + kfree(cmd); +out: + return ret; +} + +static int cc33xx_bss_beacon_info_changed(struct cc33xx *cc, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *bss_conf, + u32 changed) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); + int ret = 0; + + if (changed & BSS_CHANGED_BEACON_INT) { + cc33xx_debug(DEBUG_MASTER, "beacon interval updated: 
%d", + bss_conf->beacon_int); + + wlvif->beacon_int = bss_conf->beacon_int; + } + + if (changed & BSS_CHANGED_BEACON) { + ret = cc33xx_set_beacon_template(cc, vif, is_ap); + if (ret < 0) + goto out; + + if (test_and_clear_bit(WLVIF_FLAG_BEACON_DISABLED, + &wlvif->flags)) { + ret = cmd_dfs_master_restart(cc, wlvif); + if (ret < 0) + goto out; + } + } +out: + if (ret != 0) + cc33xx_error("beacon info change failed: %d", ret); + + return ret; +} + +/* AP mode changes */ +static void cc33xx_bss_info_changed_ap(struct cc33xx *cc, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *bss_conf, + u64 changed) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int ret = 0; + + if (changed & BSS_CHANGED_BASIC_RATES) { + u32 rates = bss_conf->basic_rates; + u32 supported_rates = 0; + + wlvif->basic_rate_set = cc33xx_tx_enabled_rates_get(cc, rates, + wlvif->band); + wlvif->basic_rate = cc33xx_tx_min_rate_get(cc, + wlvif->basic_rate_set); + + supported_rates = CONF_TX_ENABLED_RATES | CONF_TX_MCS_RATES; + ret = cc33xx_update_ap_rates(cc, wlvif->role_id, + wlvif->basic_rate_set, + supported_rates); + + ret = cc33xx_set_beacon_template(cc, vif, true); + if (ret < 0) + goto out; + } + + ret = cc33xx_bss_beacon_info_changed(cc, vif, bss_conf, changed); + if (ret < 0) + goto out; + + if (changed & BSS_CHANGED_BEACON_ENABLED) { + if (bss_conf->enable_beacon) { + if (!test_bit(WLVIF_FLAG_AP_STARTED, &wlvif->flags)) { + ret = cc33xx_cmd_role_start_ap(cc, wlvif); + if (ret < 0) + goto out; + + ret = cc33xx_ap_init_hwenc(cc, wlvif); + if (ret < 0) + goto out; + + set_bit(WLVIF_FLAG_AP_STARTED, &wlvif->flags); + } + } else { + if (test_bit(WLVIF_FLAG_AP_STARTED, &wlvif->flags)) { + /* AP might be in ROC in case we have just + * sent auth reply. handle it. 
+ */ + if (test_bit(wlvif->role_id, cc->roc_map)) + cc33xx_croc(cc, wlvif->role_id); + + ret = cc33xx_cmd_role_stop_ap(cc, wlvif); + if (ret < 0) + goto out; + + clear_bit(WLVIF_FLAG_AP_STARTED, &wlvif->flags); + clear_bit(WLVIF_FLAG_AP_PROBE_RESP_SET, + &wlvif->flags); + } + } + } + + ret = cc33xx_bss_erp_info_changed(cc, vif, bss_conf, changed); + if (ret < 0) + goto out; + +out: + return; +} + +static int cc33xx_set_bssid(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_bss_conf *bss_conf, + struct ieee80211_vif *vif, + u32 sta_rate_set) +{ + u32 rates; + + cc33xx_debug(DEBUG_MAC80211, "changed_bssid: %pM, aid: %d, bcn_int: %d, brates: 0x%x sta_rate_set: 0x%x, nontx: %d", + bss_conf->bssid, vif->cfg.aid, bss_conf->beacon_int, + bss_conf->basic_rates, sta_rate_set, + bss_conf->nontransmitted); + + wlvif->beacon_int = bss_conf->beacon_int; + rates = bss_conf->basic_rates; + wlvif->basic_rate_set = cc33xx_tx_enabled_rates_get(cc, rates, + wlvif->band); + wlvif->basic_rate = cc33xx_tx_min_rate_get(cc, wlvif->basic_rate_set); + + if (sta_rate_set) { + wlvif->rate_set = cc33xx_tx_enabled_rates_get(cc, sta_rate_set, + wlvif->band); + } + + wlvif->nontransmitted = bss_conf->nontransmitted; + cc33xx_debug(DEBUG_MAC80211, "changed_mbssid: nonTxbssid: %d, idx: %d, max_ind: %d, trans_bssid: %pM, ema_ap: %d", + bss_conf->nontransmitted, bss_conf->bssid_index, + bss_conf->bssid_indicator, bss_conf->transmitter_bssid, + bss_conf->ema_ap); + + if (bss_conf->nontransmitted) { + wlvif->bssid_index = bss_conf->bssid_index; + wlvif->bssid_indicator = bss_conf->bssid_indicator; + memcpy(wlvif->transmitter_bssid, + bss_conf->transmitter_bssid, + ETH_ALEN); + } + + /* we only support sched_scan while not connected */ + if (cc->sched_vif == wlvif) + cc33xx_scan_sched_scan_stop(cc, wlvif); + + cc33xx_set_ssid(cc, wlvif); + + set_bit(WLVIF_FLAG_IN_USE, &wlvif->flags); + + return 0; +} + +static int cc33xx_clear_bssid(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int 
ret; + + /* revert back to minimum rates for the current band */ + cc33xx_set_band_rate(cc, wlvif); + wlvif->basic_rate = cc33xx_tx_min_rate_get(cc, wlvif->basic_rate_set); + + if (wlvif->bss_type == BSS_TYPE_STA_BSS && + test_bit(WLVIF_FLAG_IN_USE, &wlvif->flags)) { + ret = cc33xx_cmd_role_stop_sta(cc, wlvif); + if (ret < 0) + return ret; + } + + clear_bit(WLVIF_FLAG_IN_USE, &wlvif->flags); + return 0; +} + +static void cc33xx_sta_set_he(struct cc33xx *cc, struct cc33xx_vif *wlvif, bool has_he) +{ + struct cc33xx_vif *wlvif_itr; + u8 he_count = 0; + + wlvif->sta_has_he = has_he; + + if (has_he) + cc33xx_info("HE Enabled"); + else + cc33xx_info("HE Disabled"); + + cc33xx_for_each_wlvif_sta(cc, wlvif_itr) { + /* check for all valid link id's */ + if (wlvif_itr->role_id != 0xFF && wlvif_itr->sta_has_he) + he_count++; + } + + /* There can't be two stations connected with HE supported links */ + if (he_count > 1) + cc33xx_error("Both station interfaces has HE enabled!"); +} + +/* STA/IBSS mode changes */ +static void cc33xx_bss_info_changed_sta(struct cc33xx *cc, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *bss_conf, + u64 changed) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + bool do_join = false; + bool is_ibss = (wlvif->bss_type == BSS_TYPE_IBSS); + bool ibss_joined = false; + u32 sta_rate_set = 0; + int ret; + struct ieee80211_sta *sta = NULL; + bool sta_exists = false; + struct ieee80211_sta_ht_cap sta_ht_cap; + struct ieee80211_sta_he_cap sta_he_cap; + + if (is_ibss) { + ret = cc33xx_bss_beacon_info_changed(cc, vif, + bss_conf, changed); + if (ret < 0) + goto out; + } + + if (changed & BSS_CHANGED_IBSS) { + if (vif->cfg.ibss_joined) { + set_bit(WLVIF_FLAG_IBSS_JOINED, &wlvif->flags); + ibss_joined = true; + } else { + cc33xx_unset_assoc(cc, wlvif); + cc33xx_cmd_role_stop_sta(cc, wlvif); + } + } + + if ((changed & BSS_CHANGED_BEACON_INT) && ibss_joined) + do_join = true; + + /* Need to update the SSID (for filtering etc) */ + if ((changed 
& BSS_CHANGED_BEACON) && ibss_joined) + do_join = true; + + if ((changed & BSS_CHANGED_BEACON_ENABLED) && ibss_joined) { + cc33xx_debug(DEBUG_ADHOC, "ad-hoc beaconing: %s", + bss_conf->enable_beacon ? "enabled" : "disabled"); + + do_join = true; + } + + if (changed & BSS_CHANGED_IDLE && !is_ibss) + cc33xx_sta_handle_idle(cc, wlvif, vif->cfg.idle); + + if (changed & BSS_CHANGED_CQM) + wlvif->rssi_thold = bss_conf->cqm_rssi_thold; + + if (changed & (BSS_CHANGED_BSSID | BSS_CHANGED_HT | BSS_CHANGED_ASSOC)) { + rcu_read_lock(); + sta = ieee80211_find_sta(vif, bss_conf->bssid); + if (sta) { + u8 *rx_mask = sta->deflink.ht_cap.mcs.rx_mask; + + /* save the supp_rates of the ap */ + sta_rate_set = sta->deflink.supp_rates[wlvif->band]; + if (sta->deflink.ht_cap.ht_supported) { + sta_rate_set |= + (rx_mask[0] << HW_HT_RATES_OFFSET) | + (rx_mask[1] << HW_MIMO_RATES_OFFSET); + } + sta_ht_cap = sta->deflink.ht_cap; + sta_he_cap = sta->deflink.he_cap; + sta_exists = true; + } + + rcu_read_unlock(); + } + + if (changed & BSS_CHANGED_BSSID) { + if (!is_zero_ether_addr(bss_conf->bssid)) { + ret = cc33xx_set_bssid(cc, wlvif, + bss_conf, vif, sta_rate_set); + if (ret < 0) + goto out; + + /* Need to update the BSSID (for filtering etc) */ + do_join = true; + } else { + ret = cc33xx_clear_bssid(cc, wlvif); + if (ret < 0) + goto out; + } + } + + if (changed & BSS_CHANGED_IBSS) { + cc33xx_debug(DEBUG_ADHOC, "ibss_joined: %d", + vif->cfg.ibss_joined); + + if (vif->cfg.ibss_joined) { + u32 rates = bss_conf->basic_rates; + + wlvif->basic_rate_set = + cc33xx_tx_enabled_rates_get(cc, rates, + wlvif->band); + wlvif->basic_rate = + cc33xx_tx_min_rate_get(cc, + wlvif->basic_rate_set); + + /* by default, use 11b + OFDM rates */ + wlvif->rate_set = CONF_TX_IBSS_DEFAULT_RATES; + } + } + + if ((changed & BSS_CHANGED_BEACON_INFO) && bss_conf->dtim_period) { + /* enable beacon filtering */ + ret = cc33xx_acx_beacon_filter_opt(cc, wlvif, true); + if (ret < 0) + goto out; + } + + ret = 
cc33xx_bss_erp_info_changed(cc, vif, bss_conf, changed); + if (ret < 0) + goto out; + + if (do_join) { + ret = cc33xx_join(cc, wlvif); + if (ret < 0) { + cc33xx_warning("cmd join failed %d", ret); + goto out; + } + } + + if (changed & BSS_CHANGED_ASSOC) { + if (vif->cfg.assoc) { + ret = cc33xx_set_assoc(cc, wlvif, bss_conf, sta, vif, + sta_rate_set); + if (ret < 0) + goto out; + + if (test_bit(WLVIF_FLAG_STA_AUTHORIZED, &wlvif->flags)) + cc33xx_set_authorized(cc, wlvif); + + if (sta) + cc33xx_sta_set_he(cc, wlvif, sta->deflink.he_cap.has_he); + + } else { + cc33xx_unset_assoc(cc, wlvif); + } + } + + if (changed & BSS_CHANGED_PS) { + if (vif->cfg.ps && + test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags) && + !test_bit(WLVIF_FLAG_IN_PS, &wlvif->flags)) { + int ps_mode; + char *ps_mode_str; + + if (cc->conf.host_conf.conn.forced_ps) { + ps_mode = STATION_POWER_SAVE_MODE; + ps_mode_str = "forced"; + } else { + ps_mode = STATION_AUTO_PS_MODE; + ps_mode_str = "auto"; + } + + cc33xx_debug(DEBUG_PSM, "%s ps enabled", ps_mode_str); + + ret = cc33xx_ps_set_mode(cc, wlvif, ps_mode); + if (ret < 0) + cc33xx_warning("enter %s ps failed %d", + ps_mode_str, ret); + } else if (!vif->cfg.ps && test_bit(WLVIF_FLAG_IN_PS, + &wlvif->flags)) { + cc33xx_debug(DEBUG_PSM, "auto ps disabled"); + + ret = cc33xx_ps_set_mode(cc, wlvif, + STATION_ACTIVE_MODE); + if (ret < 0) + cc33xx_warning("exit auto ps failed %d", ret); + } + } + + /* Handle new association with HT. Do this after join. 
*/ + if (sta_exists) { + bool enabled = bss_conf->chanreq.oper.width != + NL80211_CHAN_WIDTH_20_NOHT; + cc33xx_debug(DEBUG_CMD, "cc33xx_hw_set_peer_cap %x", + wlvif->rate_set); + ret = cc33xx_acx_set_peer_cap(cc, &sta_ht_cap, &sta_he_cap, + wlvif, enabled, wlvif->rate_set, + wlvif->sta.hlid); + if (ret < 0) { + cc33xx_warning("Set ht cap failed %d", ret); + goto out; + } + + if (enabled) { + ret = cc33xx_acx_set_ht_information(cc, wlvif, + bss_conf->ht_operation_mode, + bss_conf->he_oper.params, + bss_conf->he_oper.nss_set); + if (ret < 0) { + cc33xx_warning("Set ht information failed %d", + ret); + goto out; + } + } + } + + /* Handle arp filtering. Done after join. */ + if ((changed & BSS_CHANGED_ARP_FILTER) || + (!is_ibss && (changed & BSS_CHANGED_QOS))) { + __be32 addr = vif->cfg.arp_addr_list[0]; + + wlvif->sta.qos = bss_conf->qos; + WARN_ON(wlvif->bss_type != BSS_TYPE_STA_BSS); + + if (vif->cfg.arp_addr_cnt == 1 && vif->cfg.assoc) { + wlvif->ip_addr = addr; + /* The template should have been configured only upon + * association. however, it seems that the correct ip + * isn't being set (when sending), so we have to + * reconfigure the template upon every ip change. 
+ */ + if (ret < 0) { + cc33xx_warning("build arp rsp failed: %d", ret); + goto out; + } + + } else { + wlvif->ip_addr = 0; + } + + if (ret < 0) + goto out; + } + +out: + return; +} + +static void cc33xx_op_bss_info_changed(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *bss_conf, + u64 changed) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); + int ret, set_power; + + /* make sure to cancel pending disconnections if our association + * state changed + */ + if (!is_ap && (changed & BSS_CHANGED_ASSOC)) + cancel_delayed_work_sync(&wlvif->connection_loss_work); + + if (is_ap && (changed & BSS_CHANGED_BEACON_ENABLED) && + !bss_conf->enable_beacon) + cc33xx_tx_flush(cc); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + if (unlikely(!test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags))) + goto out; + + if ((changed & BSS_CHANGED_TXPOWER) && bss_conf->txpower != wlvif->power_level) { + /* bss_conf->txpower is initialized with a default value, + * meaning the power has not been set and should be ignored, use + * max value instead + */ + set_power = (bss_conf->txpower == INT_MIN) ? 
+ CC33XX_MAX_TXPWR : bss_conf->txpower; + ret = cc33xx_acx_tx_power(cc, wlvif, set_power); + + if (ret < 0) + goto out; + } + + if (is_ap) + cc33xx_bss_info_changed_ap(cc, vif, bss_conf, changed); + else + cc33xx_bss_info_changed_sta(cc, vif, bss_conf, changed); + +out: + mutex_unlock(&cc->mutex); +} + +static int cc33xx_op_add_chanctx(struct ieee80211_hw *hw, + struct ieee80211_chanctx_conf *ctx) +{ + cc33xx_debug(DEBUG_MAC80211, "mac80211 add chanctx %d (type %d)", + ieee80211_frequency_to_channel(ctx->def.chan->center_freq), + cfg80211_get_chandef_type(&ctx->def)); + return 0; +} + +static void cc33xx_op_remove_chanctx(struct ieee80211_hw *hw, + struct ieee80211_chanctx_conf *ctx) +{ + cc33xx_debug(DEBUG_MAC80211, "mac80211 remove chanctx %d (type %d)", + ieee80211_frequency_to_channel(ctx->def.chan->center_freq), + cfg80211_get_chandef_type(&ctx->def)); +} + +static void cc33xx_op_change_chanctx(struct ieee80211_hw *hw, + struct ieee80211_chanctx_conf *ctx, + u32 changed) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif; + int channel = ieee80211_frequency_to_channel(ctx->def.chan->center_freq); + + cc33xx_debug(DEBUG_MAC80211, + "mac80211 change chanctx %d (type %d) changed 0x%x", + channel, cfg80211_get_chandef_type(&ctx->def), changed); + + mutex_lock(&cc->mutex); + + cc33xx_for_each_wlvif(cc, wlvif) { + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + + rcu_read_lock(); + if (rcu_access_pointer(vif->bss_conf.chanctx_conf) != ctx) { + rcu_read_unlock(); + continue; + } + rcu_read_unlock(); + + /* start radar if needed */ + if (changed & IEEE80211_CHANCTX_CHANGE_RADAR && + wlvif->bss_type == BSS_TYPE_AP_BSS && + ctx->radar_enabled && !wlvif->radar_enabled && + ctx->def.chan->dfs_state == NL80211_DFS_USABLE) { + cmd_set_cac(cc, wlvif, true); + wlvif->radar_enabled = true; + } + } + + mutex_unlock(&cc->mutex); +} + +static int cc33xx_op_assign_vif_chanctx(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf 
*link_conf, + struct ieee80211_chanctx_conf *ctx) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int channel = ieee80211_frequency_to_channel(ctx->def.chan->center_freq); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + if (unlikely(!test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags))) + goto out; + + wlvif->band = ctx->def.chan->band; + wlvif->channel = channel; + wlvif->channel_type = cfg80211_get_chandef_type(&ctx->def); + + /* update default rates according to the band */ + cc33xx_set_band_rate(cc, wlvif); + + if (ctx->radar_enabled && ctx->def.chan->dfs_state == NL80211_DFS_USABLE) { + cc33xx_debug(DEBUG_MAC80211, "Start radar detection"); + cmd_set_cac(cc, wlvif, true); + wlvif->radar_enabled = true; + } + +out: + mutex_unlock(&cc->mutex); + + return 0; +} + +static void cc33xx_op_unassign_vif_chanctx(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *link_conf, + struct ieee80211_chanctx_conf *ctx) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + + cc33xx_tx_flush(cc); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + if (unlikely(!test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags))) + goto out; + + if (wlvif->radar_enabled) { + cc33xx_debug(DEBUG_MAC80211, "Stop radar detection"); + cmd_set_cac(cc, wlvif, false); + wlvif->radar_enabled = false; + } + +out: + mutex_unlock(&cc->mutex); +} + +static int cc33xx_switch_vif_chan(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct ieee80211_chanctx_conf *new_ctx) +{ + int channel = ieee80211_frequency_to_channel(new_ctx->def.chan->center_freq); + + wlvif->band = new_ctx->def.chan->band; + wlvif->channel = channel; + wlvif->channel_type = cfg80211_get_chandef_type(&new_ctx->def); + + if (wlvif->bss_type != BSS_TYPE_AP_BSS) + return 0; + + WARN_ON(!test_bit(WLVIF_FLAG_BEACON_DISABLED, &wlvif->flags)); + + if 
(wlvif->radar_enabled) { + cmd_set_cac(cc, wlvif, false); + wlvif->radar_enabled = false; + } + + /* start radar if needed */ + if (new_ctx->radar_enabled) { + cmd_set_cac(cc, wlvif, true); + wlvif->radar_enabled = true; + } + + return 0; +} + +static int cc33xx_op_switch_vif_chanctx(struct ieee80211_hw *hw, + struct ieee80211_vif_chanctx_switch *vifs, + int n_vifs, + enum ieee80211_chanctx_switch_mode mode) +{ + struct cc33xx *cc = hw->priv; + int i, ret; + + cc33xx_debug(DEBUG_MAC80211, + "mac80211 switch chanctx n_vifs %d mode %d", n_vifs, mode); + + mutex_lock(&cc->mutex); + + for (i = 0; i < n_vifs; i++) { + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vifs[i].vif); + + ret = cc33xx_switch_vif_chan(cc, wlvif, vifs[i].new_ctx); + if (ret) + goto out; + } + +out: + mutex_unlock(&cc->mutex); + + return 0; +} + +static int cc33xx_op_conf_tx(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + unsigned int link_id, u16 queue, + const struct ieee80211_tx_queue_params *params) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + u8 ps_scheme; + int ret = 0; + + if (cc33xx_is_p2p_mgmt(wlvif)) + return 0; + + mutex_lock(&cc->mutex); + + cc33xx_debug(DEBUG_MAC80211, "mac80211 conf tx %d", queue); + + if (params->uapsd) + ps_scheme = CONF_PS_SCHEME_UPSD_TRIGGER; + else + ps_scheme = CONF_PS_SCHEME_LEGACY; + + if (!test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags)) + goto out; + + ret = cc33xx_tx_param_cfg(cc, wlvif, cc33xx_tx_get_queue(queue), + params->cw_min, params->cw_max, params->aifs, + params->txop << 5, params->acm, ps_scheme, + params->mu_edca, params->mu_edca_param_rec.aifsn, + params->mu_edca_param_rec.ecw_min_max, + params->mu_edca_param_rec.mu_edca_timer); + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +static u64 cc33xx_op_get_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + u64 mactime = ULLONG_MAX; + + 
cc33xx_debug(DEBUG_MAC80211, "mac80211 get tsf"); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + cc33xx_acx_tsf_info(cc, wlvif, &mactime); + +out: + mutex_unlock(&cc->mutex); + + return mactime; +} + +static int cc33xx_op_get_survey(struct ieee80211_hw *hw, int idx, + struct survey_info *survey) +{ + struct ieee80211_conf *conf = &hw->conf; + + if (idx != 0) + return -ENOENT; + + survey->channel = conf->chandef.chan; + survey->filled = 0; + return 0; +} + +static int cc33xx_allocate_sta(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + struct ieee80211_sta *sta) +{ + struct cc33xx_station *wl_sta; + int ret; + + if (cc->active_sta_count >= CC33XX_MAX_AP_STATIONS) { + cc33xx_warning("could not allocate HLID - too much stations"); + return -EBUSY; + } + + wl_sta = (struct cc33xx_station *)sta->drv_priv; + + ret = cc33xx_set_link(cc, wlvif, wl_sta->hlid); + + if (ret < 0) { + cc33xx_warning("could not allocate HLID - too many links"); + return -EBUSY; + } + + /* use the previous security seq, if this is a recovery/resume */ + cc->links[wl_sta->hlid].total_freed_pkts = wl_sta->total_freed_pkts; + + set_bit(wl_sta->hlid, wlvif->ap.sta_hlid_map); + memcpy(cc->links[wl_sta->hlid].addr, sta->addr, ETH_ALEN); + cc->active_sta_count++; + return 0; +} + +void cc33xx_free_sta(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 hlid) +{ + if (!test_bit(hlid, wlvif->ap.sta_hlid_map)) + return; + + clear_bit(hlid, wlvif->ap.sta_hlid_map); + __clear_bit(hlid, &cc->ap_ps_map); + __clear_bit(hlid, &cc->ap_fw_ps_map); + + /* save the last used PN in the private part of iee80211_sta, + * in case of recovery/suspend + */ + cc33xx_save_freed_pkts_addr(cc, wlvif, hlid, cc->links[hlid].addr); + + cc33xx_clear_link(cc, wlvif, &hlid); + cc->active_sta_count--; + + /* rearm the tx watchdog when the last STA is freed - give the FW a + * chance to return STA-buffered packets before complaining. 
+ */ + if (cc->active_sta_count == 0) + cc33xx_rearm_tx_watchdog_locked(cc); +} + +static int cc33xx_sta_add(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + struct ieee80211_sta *sta) +{ + struct cc33xx_station *wl_sta; + int ret = 0; + u8 hlid; + + cc33xx_debug(DEBUG_MAC80211, "mac80211 add sta %d", (int)sta->aid); + + wl_sta = (struct cc33xx_station *)sta->drv_priv; + ret = cc33xx_cmd_add_peer(cc, wlvif, sta, &hlid, 0); + if (ret < 0) + return ret; + + wl_sta->hlid = hlid; + ret = cc33xx_allocate_sta(cc, wlvif, sta); + + return ret; +} + +static int cc33xx_sta_remove(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + struct ieee80211_sta *sta) +{ + struct cc33xx_station *wl_sta; + int ret = 0, id; + + cc33xx_debug(DEBUG_MAC80211, "mac80211 remove sta %d", (int)sta->aid); + + wl_sta = (struct cc33xx_station *)sta->drv_priv; + id = wl_sta->hlid; + if (WARN_ON(!test_bit(id, wlvif->ap.sta_hlid_map))) + return -EINVAL; + + ret = cc33xx_cmd_remove_peer(cc, wlvif, wl_sta->hlid); + if (ret < 0) + return ret; + + cc33xx_free_sta(cc, wlvif, wl_sta->hlid); + return ret; +} + +static void cc33xx_roc_if_possible(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + if (find_first_bit(cc->roc_map, CC33XX_MAX_ROLES) < CC33XX_MAX_ROLES) + return; + + if (WARN_ON(wlvif->role_id == CC33XX_INVALID_ROLE_ID)) + return; + + cc33xx_roc(cc, wlvif, wlvif->role_id, wlvif->band, wlvif->channel); +} + +/* when wl_sta is NULL, we treat this call as if coming from a + * pending auth reply. + * cc->mutex must be taken and the FW must be awake when the call + * takes place. 
+ */ +void cc33xx_update_inconn_sta(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct cc33xx_station *wl_sta, bool in_conn) +{ + cc33xx_debug(DEBUG_CMD, "update_inconn_sta: in_conn=%d count=%d, pending_auth=%d", + in_conn, + wlvif->inconn_count, wlvif->ap_pending_auth_reply); + + if (in_conn) { + if (WARN_ON(wl_sta && wl_sta->in_connection)) + return; + + if (!wlvif->ap_pending_auth_reply && !wlvif->inconn_count) { + cc33xx_roc_if_possible(cc, wlvif); + if (test_bit(wlvif->role_id, cc->roc_map)) { + unsigned long roc_cmplt_jiffies = + msecs_to_jiffies(CC33xx_PEND_ROC_COMPLETE_TIMEOUT); + + /* set timer on croc timeout */ + wlvif->pending_auth_reply_time = jiffies; + cancel_delayed_work(&wlvif->roc_timeout_work); + + cc33xx_debug(DEBUG_AP, "delay queue roc_timeout_work"); + + ieee80211_queue_delayed_work(cc->hw, + &wlvif->roc_timeout_work, + roc_cmplt_jiffies); + } + } + + if (wl_sta) { + wl_sta->in_connection = true; + wlvif->inconn_count++; + } else { + wlvif->ap_pending_auth_reply = true; + } + } else { + if (wl_sta && !wl_sta->in_connection) + return; + + if (WARN_ON(!wl_sta && !wlvif->ap_pending_auth_reply)) + return; + + if (WARN_ON(wl_sta && !wlvif->inconn_count)) + return; + + if (wl_sta) { + wl_sta->in_connection = false; + wlvif->inconn_count--; + } else { + wlvif->ap_pending_auth_reply = false; + } + + if (!wlvif->inconn_count && !wlvif->ap_pending_auth_reply && + test_bit(wlvif->role_id, cc->roc_map)) { + cc33xx_croc(cc, wlvif->role_id); + /* remove timer for croc t/o */ + cc33xx_debug(DEBUG_AP, "Cancel pending_roc timeout"); + cancel_delayed_work(&wlvif->roc_timeout_work); + } + } +} + +static int cc33xx_update_sta_state(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + struct ieee80211_sta *sta, + enum ieee80211_sta_state old_state, + enum ieee80211_sta_state new_state) +{ + struct cc33xx_station *wl_sta; + bool is_ap = wlvif->bss_type == BSS_TYPE_AP_BSS; + bool is_sta = wlvif->bss_type == BSS_TYPE_STA_BSS; + int ret; + + wl_sta = (struct 
cc33xx_station *)sta->drv_priv; + + /* Add station (AP mode) */ + if (is_ap && old_state == IEEE80211_STA_NOTEXIST && new_state == IEEE80211_STA_NONE) { + ret = cc33xx_sta_add(cc, wlvif, sta); + if (ret) + return ret; + + cc33xx_update_inconn_sta(cc, wlvif, wl_sta, true); + } + + /* Remove station (AP mode) */ + if (is_ap && old_state == IEEE80211_STA_NONE && new_state == IEEE80211_STA_NOTEXIST) { + /* must not fail */ + cc33xx_sta_remove(cc, wlvif, sta); + + cc33xx_update_inconn_sta(cc, wlvif, wl_sta, false); + } + + /* Authorize station (AP mode) */ + if (is_ap && new_state == IEEE80211_STA_AUTHORIZED) { + /* reconfigure peer */ + ret = cc33xx_cmd_add_peer(cc, wlvif, sta, NULL, true); + if (ret < 0) + return ret; + + cc33xx_update_inconn_sta(cc, wlvif, wl_sta, false); + } + + /* Authorize station */ + if (is_sta && new_state == IEEE80211_STA_AUTHORIZED) { + set_bit(WLVIF_FLAG_STA_AUTHORIZED, &wlvif->flags); + ret = cc33xx_set_authorized(cc, wlvif); + if (ret) + return ret; + } + + if (is_sta && old_state == IEEE80211_STA_AUTHORIZED && new_state == IEEE80211_STA_ASSOC) { + clear_bit(WLVIF_FLAG_STA_AUTHORIZED, &wlvif->flags); + clear_bit(WLVIF_FLAG_STA_STATE_SENT, &wlvif->flags); + } + + /* save seq number on disassoc (suspend) */ + if (is_sta && old_state == IEEE80211_STA_ASSOC && new_state == IEEE80211_STA_AUTH) { + cc33xx_save_freed_pkts(cc, wlvif, wlvif->sta.hlid, sta); + wlvif->total_freed_pkts = 0; + } + + /* restore seq number on assoc (resume) */ + if (is_sta && old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_ASSOC) + wlvif->total_freed_pkts = wl_sta->total_freed_pkts; + + /* clear ROCs on failure or authorization */ + if (is_sta && + (new_state == IEEE80211_STA_AUTHORIZED || + new_state == IEEE80211_STA_NOTEXIST)) { + if (test_bit(wlvif->role_id, cc->roc_map)) + cc33xx_croc(cc, wlvif->role_id); + } + + if (is_sta && (old_state == IEEE80211_STA_NOTEXIST && + new_state == IEEE80211_STA_NONE)) { + if (find_first_bit(cc->roc_map, + 
CC33XX_MAX_ROLES) >= CC33XX_MAX_ROLES) { + WARN_ON(wlvif->role_id == CC33XX_INVALID_ROLE_ID); + cc33xx_roc(cc, wlvif, wlvif->role_id, + wlvif->band, wlvif->channel); + } + } + + return 0; +} + +static int cc33xx_op_sta_state(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + enum ieee80211_sta_state old_state, + enum ieee80211_sta_state new_state) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int ret; + + cc33xx_debug(DEBUG_MAC80211, "mac80211 sta %d state=%d->%d", + sta->aid, old_state, new_state); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) { + ret = -EBUSY; + goto out; + } + + ret = cc33xx_update_sta_state(cc, wlvif, sta, old_state, new_state); + +out: + mutex_unlock(&cc->mutex); + if (new_state < old_state) + return 0; + return ret; +} + +static int cc33xx_op_ampdu_action(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_ampdu_params *params) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int ret; + u8 hlid, *ba_bitmap; + struct ieee80211_sta *sta = params->sta; + enum ieee80211_ampdu_mlme_action action = params->action; + u16 tid = params->tid; + u16 *ssn = ¶ms->ssn; + + /* sanity check - the fields in FW are only 8bits wide */ + if (WARN_ON(tid > 0xFF)) + return -EOPNOTSUPP; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) { + ret = -EAGAIN; + goto out; + } + + if (wlvif->bss_type == BSS_TYPE_STA_BSS) { + hlid = wlvif->sta.hlid; + } else if (wlvif->bss_type == BSS_TYPE_AP_BSS) { + struct cc33xx_station *wl_sta; + + wl_sta = (struct cc33xx_station *)sta->drv_priv; + hlid = wl_sta->hlid; + } else { + ret = -EINVAL; + goto out; + } + + if (hlid == CC33XX_INVALID_LINK_ID) { + ret = 0; + goto out; + } + + if (WARN_ON(hlid >= CC33XX_MAX_LINKS)) { + ret = -EINVAL; + goto out; + } + + ba_bitmap = &cc->links[hlid].ba_bitmap; + + switch (action) { + case 
IEEE80211_AMPDU_RX_START: + if (!wlvif->ba_support || !wlvif->ba_allowed) { + ret = -EOPNOTSUPP; + break; + } + + if (cc->ba_rx_session_count >= CC33XX_RX_BA_MAX_SESSIONS) { + ret = -EBUSY; + cc33xx_error("exceeded max RX BA sessions"); + break; + } + + if (*ba_bitmap & BIT(tid)) { + ret = -EINVAL; + cc33xx_error("cannot enable RX BA session on active tid: %d", + tid); + break; + } + + ret = cc33xx_acx_set_ba_receiver_session(cc, tid, *ssn, + true, hlid, + params->buf_size); + + if (!ret) { + *ba_bitmap |= BIT(tid); + cc->ba_rx_session_count++; + } + break; + + case IEEE80211_AMPDU_RX_STOP: + if (!(*ba_bitmap & BIT(tid))) { + /* this happens on reconfig - so only output a debug + * message for now, and don't fail the function. + */ + cc33xx_debug(DEBUG_MAC80211, + "no active RX BA session on tid: %d", tid); + ret = 0; + break; + } + + ret = cc33xx_acx_set_ba_receiver_session(cc, tid, 0, + false, hlid, 0); + if (!ret) { + *ba_bitmap &= ~BIT(tid); + cc->ba_rx_session_count--; + } + break; + + /* BA initiator sessions are managed by the FW independently, + * so fall through here on purpose for all TX AMPDU commands.
+ */ + case IEEE80211_AMPDU_TX_START: + case IEEE80211_AMPDU_TX_STOP_CONT: + case IEEE80211_AMPDU_TX_STOP_FLUSH: + case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT: + case IEEE80211_AMPDU_TX_OPERATIONAL: + ret = -EINVAL; + break; + + default: + cc33xx_error("Incorrect ampdu action id=%x\n", action); + ret = -EINVAL; + } + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +static int cc33xx_set_bitrate_mask(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + const struct cfg80211_bitrate_mask *mask) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + struct cc33xx *cc = hw->priv; + int ret = 0; + + cc33xx_debug(DEBUG_MAC80211, "mac80211 set_bitrate_mask 0x%x 0x%x", + mask->control[NL80211_BAND_2GHZ].legacy, + mask->control[NL80211_BAND_5GHZ].legacy); + + mutex_lock(&cc->mutex); + + wlvif->bitrate_masks[0] = cc33xx_tx_enabled_rates_get(cc, + mask->control[0].legacy, 0); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + if (wlvif->bss_type == BSS_TYPE_STA_BSS && + !test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) { + cc33xx_set_band_rate(cc, wlvif); + wlvif->basic_rate = cc33xx_tx_min_rate_get(cc, + wlvif->basic_rate_set); + } +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +static void cc33xx_op_channel_switch(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_channel_switch *ch_switch) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + int ret; + + cc33xx_tx_flush(cc); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state == CC33XX_STATE_OFF)) { + if (test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) + ieee80211_chswitch_done(vif, false, 0); + goto out; + } else if (unlikely(cc->state != CC33XX_STATE_ON)) { + goto out; + } + + /* TODO: change mac80211 to pass vif as param */ + + if (test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) { + unsigned long delay_usec; + + ret = cmd_channel_switch(cc, wlvif, ch_switch); + if (ret) + goto out; + + set_bit(WLVIF_FLAG_CS_PROGRESS, 
&wlvif->flags); + + /* indicate failure 5 seconds after channel switch time */ + delay_usec = ieee80211_tu_to_usec(wlvif->beacon_int) * + ch_switch->count; + ieee80211_queue_delayed_work(hw, &wlvif->channel_switch_work, + usecs_to_jiffies(delay_usec) + + msecs_to_jiffies(5000)); + } + +out: + mutex_unlock(&cc->mutex); +} + +static inline void cc33xx_op_channel_switch_beacon(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct cfg80211_chan_def *chandef) +{ + cc33xx_error("AP channel switch is not supported"); +} + +static inline void cc33xx_op_flush(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + u32 queues, bool drop) +{ + cc33xx_tx_flush(hw->priv); +} + +static int cc33xx_op_remain_on_channel(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_channel *chan, + int duration, + enum ieee80211_roc_type type) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + struct cc33xx *cc = hw->priv; + int channel, active_roc, ret = 0; + + channel = ieee80211_frequency_to_channel(chan->center_freq); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + /* return EBUSY if we can't ROC right now */ + active_roc = find_first_bit(cc->roc_map, CC33XX_MAX_ROLES); + if (cc->roc_vif || active_roc < CC33XX_MAX_ROLES) { + cc33xx_warning("active roc on role %d", active_roc); + ret = -EBUSY; + goto out; + } + + cc33xx_debug(DEBUG_MAC80211, + "call cc33xx_start_dev, band = %d, channel = %d", + chan->band, channel); + ret = cc33xx_start_dev(cc, wlvif, chan->band, channel); + if (ret < 0) + goto out; + + cc->roc_vif = vif; + ieee80211_queue_delayed_work(hw, &cc->roc_complete_work, + msecs_to_jiffies(duration)); + +out: + mutex_unlock(&cc->mutex); + return ret; +} + +static int __cc33xx_roc_completed(struct cc33xx *cc) +{ + struct cc33xx_vif *wlvif; + int ret; + + /* already completed */ + if (unlikely(!cc->roc_vif)) + return 0; + + wlvif = cc33xx_vif_to_data(cc->roc_vif); + + if 
(!test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags)) + return -EBUSY; + + ret = cc33xx_stop_dev(cc, wlvif); + if (ret < 0) + return ret; + + cc->roc_vif = NULL; + + return 0; +} + +static int cc33xx_roc_completed(struct cc33xx *cc) +{ + int ret; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) { + ret = -EBUSY; + goto out; + } + + ret = __cc33xx_roc_completed(cc); + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +static void cc33xx_roc_complete_work(struct work_struct *work) +{ + struct delayed_work *dwork; + struct cc33xx *cc; + int ret; + + dwork = to_delayed_work(work); + cc = container_of(dwork, struct cc33xx, roc_complete_work); + + ret = cc33xx_roc_completed(cc); + if (!ret) + ieee80211_remain_on_channel_expired(cc->hw); +} + +static int cc33xx_op_cancel_remain_on_channel(struct ieee80211_hw *hw, + struct ieee80211_vif *vif) +{ + struct cc33xx *cc = hw->priv; + + cc33xx_tx_flush(cc); + + /* we can't just flush_work here, because it might deadlock + * (as we might get called from the same workqueue) + */ + cancel_delayed_work_sync(&cc->roc_complete_work); + cc33xx_roc_completed(cc); + + return 0; +} + +static void cc33xx_op_sta_rc_update(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_link_sta *link_sta, + u32 changed) +{ + struct ieee80211_sta *sta = link_sta->sta; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + + if (!(changed & IEEE80211_RC_BW_CHANGED)) + return; + + /* this callback is atomic, so schedule a new work */ + wlvif->rc_update_bw = sta->deflink.bandwidth; + memcpy(&wlvif->rc_ht_cap, &sta->deflink.ht_cap, sizeof(sta->deflink.ht_cap)); + ieee80211_queue_work(hw, &wlvif->rc_update_work); +} + +static void cc33xx_op_sta_statistics(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct station_info *sinfo) +{ + struct cc33xx *cc = hw->priv; + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + s8 rssi_dbm; + int ret; + + mutex_lock(&cc->mutex); + 
+ if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + ret = cc33xx_acx_average_rssi(cc, wlvif, &rssi_dbm); + if (ret < 0) + goto out; + + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL); + sinfo->signal = rssi_dbm; + + ret = cc33xx_acx_get_tx_rate(cc, wlvif, sinfo); + if (ret < 0) + goto out; + +out: + mutex_unlock(&cc->mutex); +} + +static u32 cc33xx_op_get_expected_throughput(struct ieee80211_hw *hw, + struct ieee80211_sta *sta) +{ + struct cc33xx_station *wl_sta = (struct cc33xx_station *)sta->drv_priv; + struct cc33xx *cc = hw->priv; + u8 hlid = wl_sta->hlid; + + /* return in units of Kbps */ + return (cc->links[hlid].fw_rate_mbps * 1000); +} + +static bool cc33xx_tx_frames_pending(struct ieee80211_hw *hw) +{ + struct cc33xx *cc = hw->priv; + bool ret = false; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + /* packets are considered pending if in the TX queue or the FW */ + ret = (cc33xx_tx_total_queue_count(cc) > 0) || (cc->tx_frames_cnt > 0); +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +#ifdef CONFIG_PM + +static const struct wiphy_wowlan_support cc33xx_wowlan_support = { + .flags = WIPHY_WOWLAN_ANY, + .n_patterns = CC33XX_MAX_RX_FILTERS, + .pattern_min_len = 1, + .pattern_max_len = CC33XX_RX_FILTER_MAX_PATTERN_SIZE, +}; + +static void setup_wake_irq(struct cc33xx *cc) +{ + struct platform_device *pdev = cc->pdev; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + + struct resource *res; + int ret; + + device_init_wakeup(cc->dev, true); + + if (pdev_data->pwr_in_suspend) + cc->hw->wiphy->wowlan = &cc33xx_wowlan_support; + + res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); + if (res) { + cc->wakeirq = res->start; + ret = dev_pm_set_dedicated_wake_irq(cc->dev, cc->wakeirq); + if (ret) + cc->wakeirq = -ENODEV; + } else { + cc->wakeirq = -ENODEV; + } + + cc->keep_device_power = true; +} +#else + +static inline void setup_wake_irq(struct cc33xx *cc) +{ + 
cc->keep_device_power = true; +} + +#endif /* CONFIG_PM */ + +static const struct ieee80211_ops cc33xx_ops = { + .start = cc33xx_op_start, + .stop = cc33xx_op_stop, + .add_interface = cc33xx_op_add_interface, + .remove_interface = cc33xx_op_remove_interface, + .change_interface = cc33xx_op_change_interface, +#ifdef CONFIG_PM + .suspend = cc33xx_op_suspend, + .resume = cc33xx_op_resume, +#endif + .config = cc33xx_op_config, + .prepare_multicast = cc33xx_op_prepare_multicast, + .configure_filter = cc33xx_op_configure_filter, + .tx = cc33xx_op_tx, + .wake_tx_queue = ieee80211_handle_wake_tx_queue, + .set_key = cc33xx_op_set_key, + .hw_scan = cc33xx_op_hw_scan, + .cancel_hw_scan = cc33xx_op_cancel_hw_scan, + .sched_scan_start = cc33xx_op_sched_scan_start, + .sched_scan_stop = cc33xx_op_sched_scan_stop, + .bss_info_changed = cc33xx_op_bss_info_changed, + .set_frag_threshold = cc33xx_op_set_frag_threshold, + .set_rts_threshold = cc33xx_op_set_rts_threshold, + .conf_tx = cc33xx_op_conf_tx, + .get_tsf = cc33xx_op_get_tsf, + .get_survey = cc33xx_op_get_survey, + .sta_state = cc33xx_op_sta_state, + .ampdu_action = cc33xx_op_ampdu_action, + .tx_frames_pending = cc33xx_tx_frames_pending, + .set_bitrate_mask = cc33xx_set_bitrate_mask, + .set_default_unicast_key = cc33xx_op_set_default_key_idx, + .channel_switch = cc33xx_op_channel_switch, + .channel_switch_beacon = cc33xx_op_channel_switch_beacon, + .flush = cc33xx_op_flush, + .remain_on_channel = cc33xx_op_remain_on_channel, + .cancel_remain_on_channel = cc33xx_op_cancel_remain_on_channel, + .add_chanctx = cc33xx_op_add_chanctx, + .remove_chanctx = cc33xx_op_remove_chanctx, + .change_chanctx = cc33xx_op_change_chanctx, + .assign_vif_chanctx = cc33xx_op_assign_vif_chanctx, + .unassign_vif_chanctx = cc33xx_op_unassign_vif_chanctx, + .switch_vif_chanctx = cc33xx_op_switch_vif_chanctx, + .link_sta_rc_update = cc33xx_op_sta_rc_update, + .sta_statistics = cc33xx_op_sta_statistics, + .get_expected_throughput = 
cc33xx_op_get_expected_throughput, + CFG80211_TESTMODE_CMD(cc33xx_tm_cmd) +}; + +u8 cc33xx_rate_to_idx(struct cc33xx *cc, u8 rate, enum nl80211_band band) +{ + u8 idx; + + if (WARN_ON(band >= 2)) + return 0; + + if (unlikely(rate > CONF_HW_RATE_INDEX_MAX)) { + cc33xx_error("Illegal RX rate from HW: %d", rate); + return 0; + } + + idx = cc33xx_band_rate_to_idx[band][rate]; + if (unlikely(idx == CONF_HW_RXTX_RATE_UNSUPPORTED)) { + cc33xx_error("Unsupported RX rate from HW: %d", rate); + return 0; + } + + return idx; +} + +static void cc33xx_derive_mac_addresses(struct cc33xx *cc) +{ + const u8 zero_mac[ETH_ALEN] = {0}; + u8 base_addr[ETH_ALEN]; + u8 bd_addr[ETH_ALEN]; + bool use_nvs = false; + bool use_efuse = false; + bool use_random = false; + + if (cc->nvs_mac_addr_len != ETH_ALEN) { + if (unlikely(cc->nvs_mac_addr_len > 0)) + cc33xx_warning("NVS MAC address present but has a wrong size, ignoring."); + + if (!ether_addr_equal(zero_mac, cc->efuse_mac_address)) { + use_efuse = true; + ether_addr_copy(base_addr, cc->efuse_mac_address); + cc33xx_debug(DEBUG_BOOT, + "MAC address derived from EFUSE"); + } else { + use_random = true; + eth_random_addr(base_addr); + cc33xx_warning("No EFUSE / NVS data, using random locally administered address."); + } + } else { + u8 *nvs_addr = cc->nvs_mac_addr; + const u8 efuse_magic_addr[ETH_ALEN] = { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; + const u8 random_magic_addr[ETH_ALEN] = { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}; + + /* In NVS, addresses 00-00-00-00-00-00 and 00-00-00-00-00-01 + * have special meaning: + */ + + if (ether_addr_equal(nvs_addr, efuse_magic_addr)) { + use_efuse = true; + ether_addr_copy(base_addr, cc->efuse_mac_address); + cc33xx_debug(DEBUG_BOOT, + "NVS file selects address from EFUSE"); + } else if (ether_addr_equal(nvs_addr, random_magic_addr)) { + use_random = true; + eth_random_addr(base_addr); + cc33xx_debug(DEBUG_BOOT, + "NVS file sets random MAC address"); + } else { + use_nvs = true; + 
ether_addr_copy(base_addr, nvs_addr); + cc33xx_debug(DEBUG_BOOT, + "NVS file sets explicit MAC address"); + } + } + + if (use_nvs || use_efuse) { + u8 oui_laa_bit = BIT(1); + u8 oui_multicast_bit = BIT(0); + + base_addr[0] &= ~oui_multicast_bit; + + ether_addr_copy(cc->addresses[0].addr, base_addr); + ether_addr_copy(cc->addresses[1].addr, base_addr); + ether_addr_copy(cc->addresses[2].addr, base_addr); + ether_addr_copy(bd_addr, base_addr); + + cc->addresses[1].addr[0] |= oui_laa_bit; + cc->addresses[2].addr[0] |= oui_laa_bit; + + eth_addr_inc(cc->addresses[2].addr); + eth_addr_inc(bd_addr); + } else if (use_random) { + ether_addr_copy(cc->addresses[0].addr, base_addr); + ether_addr_copy(cc->addresses[1].addr, base_addr); + ether_addr_copy(cc->addresses[2].addr, base_addr); + ether_addr_copy(bd_addr, base_addr); + + eth_addr_inc(bd_addr); + eth_addr_inc(cc->addresses[1].addr); + eth_addr_inc(cc->addresses[1].addr); + eth_addr_inc(cc->addresses[2].addr); + eth_addr_inc(cc->addresses[2].addr); + eth_addr_inc(cc->addresses[2].addr); + } else { + WARN_ON(1); + } + + cc->hw->wiphy->n_addresses = CC33XX_NUM_MAC_ADDRESSES; + cc->hw->wiphy->addresses = cc->addresses; + + cmd_set_bd_addr(cc, bd_addr); +} + +static int cc33xx_register_hw(struct cc33xx *cc) +{ + int ret; + + if (cc->mac80211_registered) + return 0; + + cc33xx_derive_mac_addresses(cc); + + ret = ieee80211_register_hw(cc->hw); + if (ret < 0) { + cc33xx_error("unable to register mac80211 hw: %d", ret); + goto out; + } + + cc->mac80211_registered = true; + +out: + return ret; +} + +static void cc33xx_unregister_hw(struct cc33xx *cc) +{ + if (cc->plt) + cc33xx_plt_stop(cc); + + ieee80211_unregister_hw(cc->hw); + cc->mac80211_registered = false; +} + +static int cc33xx_init_ieee80211(struct cc33xx *cc) +{ + unsigned int i; + + if (cc->conf.core.mixed_mode_support) { + static const u32 cipher_suites[] = { + WLAN_CIPHER_SUITE_CCMP, + WLAN_CIPHER_SUITE_AES_CMAC, + WLAN_CIPHER_SUITE_TKIP, + WLAN_CIPHER_SUITE_GCMP, + 
WLAN_CIPHER_SUITE_GCMP_256, + WLAN_CIPHER_SUITE_BIP_GMAC_128, + WLAN_CIPHER_SUITE_BIP_GMAC_256, + }; + cc->hw->wiphy->cipher_suites = cipher_suites; + cc->hw->wiphy->n_cipher_suites = ARRAY_SIZE(cipher_suites); + + } else { + static const u32 cipher_suites[] = { + WLAN_CIPHER_SUITE_CCMP, + WLAN_CIPHER_SUITE_AES_CMAC, + WLAN_CIPHER_SUITE_GCMP, + WLAN_CIPHER_SUITE_GCMP_256, + WLAN_CIPHER_SUITE_BIP_GMAC_128, + WLAN_CIPHER_SUITE_BIP_GMAC_256, + }; + cc->hw->wiphy->cipher_suites = cipher_suites; + cc->hw->wiphy->n_cipher_suites = ARRAY_SIZE(cipher_suites); + } + + /* The tx descriptor buffer */ + cc->hw->extra_tx_headroom = CC33XX_TX_EXTRA_HEADROOM; + + if (cc->quirks & CC33XX_QUIRK_TKIP_HEADER_SPACE) + cc->hw->extra_tx_headroom += CC33XX_EXTRA_SPACE_TKIP; + + /* unit us */ + /* FIXME: find a proper value */ + cc->hw->max_listen_interval = + cc->conf.host_conf.conn.max_listen_interval; + + ieee80211_hw_set(cc->hw, SUPPORT_FAST_XMIT); + ieee80211_hw_set(cc->hw, CHANCTX_STA_CSA); + ieee80211_hw_set(cc->hw, QUEUE_CONTROL); + ieee80211_hw_set(cc->hw, TX_AMPDU_SETUP_IN_HW); + ieee80211_hw_set(cc->hw, AMPDU_AGGREGATION); + ieee80211_hw_set(cc->hw, AP_LINK_PS); + ieee80211_hw_set(cc->hw, SPECTRUM_MGMT); + ieee80211_hw_set(cc->hw, REPORTS_TX_ACK_STATUS); + ieee80211_hw_set(cc->hw, CONNECTION_MONITOR); + ieee80211_hw_set(cc->hw, HAS_RATE_CONTROL); + ieee80211_hw_set(cc->hw, SUPPORTS_DYNAMIC_PS); + ieee80211_hw_set(cc->hw, SIGNAL_DBM); + ieee80211_hw_set(cc->hw, SUPPORTS_PS); + ieee80211_hw_set(cc->hw, SUPPORTS_TX_FRAG); + ieee80211_hw_set(cc->hw, SUPPORTS_MULTI_BSSID); + ieee80211_hw_set(cc->hw, SUPPORTS_AMSDU_IN_AMPDU); + + cc->hw->wiphy->interface_modes = cc33xx_wiphy_interface_modes(); + + cc->hw->wiphy->max_scan_ssids = 1; + cc->hw->wiphy->max_sched_scan_ssids = 16; + cc->hw->wiphy->max_match_sets = 16; + /* Maximum length of elements in scanning probe request templates + * should be the maximum length possible for a template, without + * the IEEE80211 header of the template 
+ */ + cc->hw->wiphy->max_scan_ie_len = CC33XX_CMD_TEMPL_MAX_SIZE - + sizeof(struct ieee80211_header); + + cc->hw->wiphy->max_sched_scan_reqs = 1; + cc->hw->wiphy->max_sched_scan_ie_len = CC33XX_CMD_TEMPL_MAX_SIZE - + sizeof(struct ieee80211_header); + + cc->hw->wiphy->max_remain_on_channel_duration = 30000; + + cc->hw->wiphy->features |= NL80211_FEATURE_AP_SCAN; + + /* clear channel flags from the previous usage + * and restore max_power & max_antenna_gain values. + */ + for (i = 0; i < ARRAY_SIZE(cc33xx_channels); i++) { + cc33xx_band_2ghz.channels[i].flags = 0; + cc33xx_band_2ghz.channels[i].max_power = CC33XX_MAX_TXPWR; + cc33xx_band_2ghz.channels[i].max_antenna_gain = 0; + } + + for (i = 0; i < ARRAY_SIZE(cc33xx_channels_5ghz); i++) { + cc33xx_band_5ghz.channels[i].flags = 0; + cc33xx_band_5ghz.channels[i].max_power = CC33XX_MAX_TXPWR; + cc33xx_band_5ghz.channels[i].max_antenna_gain = 0; + } + + /* Enable/disable HE based on conf file params */ + if (!cc->conf.mac.he_enable) { + cc33xx_band_2ghz.iftype_data = NULL; + cc33xx_band_2ghz.n_iftype_data = 0; + + cc33xx_band_5ghz.iftype_data = NULL; + cc33xx_band_5ghz.n_iftype_data = 0; + } + + /* We keep local copies of the band structs because we need to + * modify them on a per-device basis.
+ */ + memcpy(&cc->bands[NL80211_BAND_2GHZ], &cc33xx_band_2ghz, + sizeof(cc33xx_band_2ghz)); + memcpy(&cc->bands[NL80211_BAND_2GHZ].ht_cap, + &cc->ht_cap[NL80211_BAND_2GHZ], + sizeof(*cc->ht_cap)); + + memcpy(&cc->bands[NL80211_BAND_5GHZ], &cc33xx_band_5ghz, + sizeof(cc33xx_band_5ghz)); + memcpy(&cc->bands[NL80211_BAND_5GHZ].ht_cap, + &cc->ht_cap[NL80211_BAND_5GHZ], + sizeof(*cc->ht_cap)); + + ieee80211_set_sband_iftype_data(&cc->bands[NL80211_BAND_2GHZ], iftype_data_2ghz); + ieee80211_set_sband_iftype_data(&cc->bands[NL80211_BAND_5GHZ], iftype_data_5ghz); + + cc->hw->wiphy->bands[NL80211_BAND_2GHZ] = + &cc->bands[NL80211_BAND_2GHZ]; + + if (!cc->disable_5g && cc->conf.core.enable_5ghz) + cc->hw->wiphy->bands[NL80211_BAND_5GHZ] = + &cc->bands[NL80211_BAND_5GHZ]; + + /* allow 4 queues per mac address we support + + * 1 cab queue per mac + one global offchannel Tx queue + */ + cc->hw->queues = (NUM_TX_QUEUES + 1) * CC33XX_NUM_MAC_ADDRESSES + 1; + + /* the last queue is the offchannel queue */ + cc->hw->offchannel_tx_hw_queue = cc->hw->queues - 1; + cc->hw->max_rates = 1; + + /* allowed interface combinations */ + cc->hw->wiphy->iface_combinations = cc33xx_iface_combinations; + cc->hw->wiphy->n_iface_combinations = ARRAY_SIZE(cc33xx_iface_combinations); + + SET_IEEE80211_DEV(cc->hw, cc->dev); + + cc->hw->sta_data_size = sizeof(struct cc33xx_station); + cc->hw->vif_data_size = sizeof(struct cc33xx_vif); + + cc->hw->max_rx_aggregation_subframes = cc->conf.host_conf.ht.rx_ba_win_size; + + /* Don't use UAPSD for any PS scheme except the UAPSD trigger scheme. + * These are the currently supported PS schemes; fall back to the + * default legacy PS otherwise. + */ + if (cc->conf.mac.ps_scheme == PS_SCHEME_UPSD_TRIGGER) { + cc->hw->uapsd_queues = IEEE80211_WMM_IE_STA_QOSINFO_AC_MASK; + } else if ((cc->conf.mac.ps_scheme != PS_SCHEME_LEGACY) && + (cc->conf.mac.ps_scheme != PS_SCHEME_NOPSPOLL)) { + cc->hw->uapsd_queues = 0; + cc->conf.mac.ps_scheme = PS_SCHEME_LEGACY; + } else { + 
cc->hw->uapsd_queues = 0; + } + + return 0; +} + +#define create_high_prio_freezable_workqueue(name) \ + alloc_workqueue("%s", __WQ_LEGACY | WQ_FREEZABLE | WQ_UNBOUND | \ + WQ_MEM_RECLAIM | WQ_HIGHPRI, 1, (name)) + +static struct ieee80211_hw *cc33xx_alloc_hw(u32 aggr_buf_size) +{ + struct ieee80211_hw *hw; + struct cc33xx *cc; + int i, j; + unsigned int order; + + hw = ieee80211_alloc_hw(sizeof(*cc), &cc33xx_ops); + if (!hw) { + cc33xx_error("could not alloc ieee80211_hw"); + goto err_hw_alloc; + } + + cc = hw->priv; + memset(cc, 0, sizeof(*cc)); + + INIT_LIST_HEAD(&cc->wlvif_list); + + cc->hw = hw; + + /* cc->num_links is not configured yet, so just use CC33XX_MAX_LINKS. + * we don't allocate any additional resource here, so that's fine. + */ + for (i = 0; i < NUM_TX_QUEUES; i++) + for (j = 0; j < CC33XX_MAX_LINKS; j++) + skb_queue_head_init(&cc->links[j].tx_queue[i]); + + skb_queue_head_init(&cc->deferred_rx_queue); + skb_queue_head_init(&cc->deferred_tx_queue); + + init_llist_head(&cc->event_list); + + INIT_WORK(&cc->netstack_work, cc33xx_netstack_work); + INIT_WORK(&cc->tx_work, cc33xx_tx_work); + INIT_WORK(&cc->recovery_work, cc33xx_recovery_work); + INIT_WORK(&cc->irq_deferred_work, irq_deferred_work); + INIT_DELAYED_WORK(&cc->scan_complete_work, cc33xx_scan_complete_work); + INIT_DELAYED_WORK(&cc->roc_complete_work, cc33xx_roc_complete_work); + INIT_DELAYED_WORK(&cc->tx_watchdog_work, cc33xx_tx_watchdog_work); + + cc->freezable_netstack_wq = create_freezable_workqueue("cc33xx_netstack_wq"); + if (!cc->freezable_netstack_wq) + goto err_hw_alloc; + + cc->freezable_wq = create_high_prio_freezable_workqueue("cc33xx_wq"); + if (!cc->freezable_wq) + goto err_ns_wq; + + cc->rx_counter = 0; + cc->power_level = CC33XX_MAX_TXPWR; + cc->band = NL80211_BAND_2GHZ; + cc->flags = 0; + cc->sleep_auth = CC33XX_PSM_ILLEGAL; + + cc->ap_ps_map = 0; + cc->ap_fw_ps_map = 0; + cc->quirks = 0; + cc->active_sta_count = 0; + cc->active_link_count = 0; + cc->fwlog_size = 0; + + /* 
The system link is always allocated */ + __set_bit(CC33XX_SYSTEM_HLID, cc->links_map); + + memset(cc->tx_frames_map, 0, sizeof(cc->tx_frames_map)); + for (i = 0; i < CC33XX_NUM_TX_DESCRIPTORS; i++) + cc->tx_frames[i] = NULL; + + spin_lock_init(&cc->cc_lock); + + cc->state = CC33XX_STATE_OFF; + mutex_init(&cc->mutex); + mutex_init(&cc->flush_mutex); + init_completion(&cc->nvs_loading_complete); + + order = get_order(aggr_buf_size); + cc->aggr_buf = (u8 *)__get_free_pages(GFP_KERNEL, order); + if (!cc->aggr_buf) + goto err_all_wq; + + cc->aggr_buf_size = aggr_buf_size; + + cc->dummy_packet = cc33xx_alloc_dummy_packet(cc); + if (!cc->dummy_packet) + goto err_aggr; + + /* Allocate one page for the FW log */ + cc->fwlog = (u8 *)get_zeroed_page(GFP_KERNEL); + if (!cc->fwlog) + goto err_dummy_packet; + + cc->buffer_32 = kmalloc(sizeof(*cc->buffer_32), GFP_KERNEL); + if (!cc->buffer_32) + goto err_fwlog; + + cc->core_status = kzalloc(sizeof(*cc->core_status), GFP_KERNEL); + if (!cc->core_status) + goto err_buf32; + + return hw; + +err_buf32: + kfree(cc->buffer_32); + +err_fwlog: + free_page((unsigned long)cc->fwlog); + +err_dummy_packet: + dev_kfree_skb(cc->dummy_packet); + +err_aggr: + free_pages((unsigned long)cc->aggr_buf, order); + +err_all_wq: + destroy_workqueue(cc->freezable_wq); + +err_ns_wq: + destroy_workqueue(cc->freezable_netstack_wq); + +err_hw_alloc: + return NULL; +} + +static int cc33xx_free_hw(struct cc33xx *cc) +{ + /* Unblock any fwlog readers */ + mutex_lock(&cc->mutex); + cc->fwlog_size = -1; + mutex_unlock(&cc->mutex); + + kfree(cc->buffer_32); + kfree(cc->core_status); + free_page((unsigned long)cc->fwlog); + dev_kfree_skb(cc->dummy_packet); + free_pages((unsigned long)cc->aggr_buf, get_order(cc->aggr_buf_size)); + + kfree(cc->nvs_mac_addr); + cc->nvs_mac_addr = NULL; + + destroy_workqueue(cc->freezable_wq); + destroy_workqueue(cc->freezable_netstack_wq); + flush_deferred_event_list(cc); + + ieee80211_free_hw(cc->hw); + + return 0; +} + +static int 
cc33xx_identify_chip(struct cc33xx *cc) +{ + int ret = 0; + + cc->quirks |= CC33XX_QUIRK_RX_BLOCKSIZE_ALIGN | + CC33XX_QUIRK_TX_BLOCKSIZE_ALIGN | + CC33XX_QUIRK_NO_SCHED_SCAN_WHILE_CONN | + CC33XX_QUIRK_TX_PAD_LAST_FRAME | + CC33XX_QUIRK_DUAL_PROBE_TMPL; + + if (cc->if_ops->get_max_transaction_len) + cc->max_transaction_len = + cc->if_ops->get_max_transaction_len(cc->dev); + else + cc->max_transaction_len = 0; + + return ret; +} + +static int read_version_info(struct cc33xx *cc) +{ + int ret; + + ret = cc33xx_acx_init_get_fw_versions(cc); + if (ret < 0) { + cc33xx_error("Get FW version FAILED!"); + return ret; + } + + cc33xx_debug(DEBUG_BOOT, "Wireless firmware version %u.%u.%u.%u", + cc->fw_ver->major_version, + cc->fw_ver->minor_version, + cc->fw_ver->api_version, + cc->fw_ver->build_version); + + cc33xx_debug(DEBUG_BOOT, "Wireless PHY version %u.%u.%u.%u.%u.%u", + cc->fw_ver->phy_version[5], + cc->fw_ver->phy_version[4], + cc->fw_ver->phy_version[3], + cc->fw_ver->phy_version[2], + cc->fw_ver->phy_version[1], + cc->fw_ver->phy_version[0]); + + return 0; +} + +static void cc33xx_nvs_cb(const struct firmware *fw, void *context) +{ + struct cc33xx *cc = context; + struct platform_device *pdev = cc->pdev; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + + int ret; + + if (fw) { + cc->nvs_mac_addr = kmemdup(fw->data, fw->size, GFP_KERNEL); + if (!cc->nvs_mac_addr) { + cc33xx_error("Could not allocate nvs data"); + goto out; + } + cc->nvs_mac_addr_len = fw->size; + } else if (pdev_data->family->nvs_name) { + cc33xx_debug(DEBUG_BOOT, "Could not get nvs file %s", + pdev_data->family->nvs_name); + cc->nvs_mac_addr = NULL; + cc->nvs_mac_addr_len = 0; + } else { + cc->nvs_mac_addr = NULL; + cc->nvs_mac_addr_len = 0; + } + + ret = cc33xx_setup(cc); + if (ret < 0) + goto out_free_nvs; + + BUILD_BUG_ON(CC33XX_NUM_TX_DESCRIPTORS > CC33XX_MAX_TX_DESCRIPTORS); + + /* adjust some runtime configuration parameters */ + cc33xx_adjust_conf(cc); + + cc->if_ops 
= pdev_data->if_ops; + cc->if_ops->set_irq_handler(cc->dev, irq_wrapper); + + cc33xx_power_off(cc); + + setup_wake_irq(cc); + + ret = cc33xx_init_fw(cc); + if (ret < 0) { + cc33xx_error("FW download failed"); + cc33xx_power_off(cc); + goto out_irq; + } + + ret = cc33xx_identify_chip(cc); + if (ret < 0) + goto out_irq; + + ret = read_version_info(cc); + if (ret < 0) + goto out_irq; + + ret = cc33xx_init_ieee80211(cc); + if (ret) + goto out_irq; + + ret = cc33xx_register_hw(cc); + if (ret) + goto out_irq; + + cc->initialized = true; + goto out; + +out_irq: + if (cc->wakeirq >= 0) + dev_pm_clear_wake_irq(cc->dev); + device_init_wakeup(cc->dev, false); + +out_free_nvs: + kfree(cc->nvs_mac_addr); + +out: + release_firmware(fw); + complete_all(&cc->nvs_loading_complete); + cc33xx_debug(DEBUG_CC33xx, "%s complete", __func__); +} + +static int cc33xx_load_ini_bin_file(struct device *dev, + struct cc33xx_conf_file *conf, + const char *file) +{ + struct cc33xx_conf_file *conf_file; + const struct firmware *fw; + int ret; + + ret = request_firmware(&fw, file, dev); + if (ret < 0) { + cc33xx_error("could not get configuration binary %s: %d", + file, ret); + return ret; + } + + if (fw->size != CC33X_CONF_SIZE) { + cc33xx_error("%s configuration binary size is wrong, expected %zu got %zu", + file, CC33X_CONF_SIZE, + fw->size); + ret = -EINVAL; + goto out_release; + } + + conf_file = (struct cc33xx_conf_file *)fw->data; + + if (conf_file->header.magic != cpu_to_le32(CC33XX_CONF_MAGIC)) { + cc33xx_error("conf file magic number mismatch, expected 0x%0x got 0x%0x", + CC33XX_CONF_MAGIC, conf_file->header.magic); + ret = -EINVAL; + goto out_release; + } + + memcpy(conf, conf_file, sizeof(*conf)); + +out_release: + release_firmware(fw); + return ret; +} + +static int cc33xx_ini_bin_init(struct cc33xx *cc, struct device *dev) +{ + struct platform_device *pdev = cc->pdev; + struct cc33xx_platdev_data *pdata = dev_get_platdata(&pdev->dev); + + if (cc33xx_load_ini_bin_file(dev, &cc->conf, 
+ pdata->family->cfg_name) < 0) + cc33xx_warning("falling back to default config"); + + return 0; +} + +static inline void cc33xx_set_ht_cap(struct cc33xx *cc, enum nl80211_band band, + struct ieee80211_sta_ht_cap *ht_cap) +{ + memcpy(&cc->ht_cap[band], ht_cap, sizeof(*ht_cap)); +} + +static int cc33xx_setup(struct cc33xx *cc) +{ + int ret; + + BUILD_BUG_ON(CC33XX_MAX_AP_STATIONS > CC33XX_MAX_LINKS); + + ret = cc33xx_ini_bin_init(cc, cc->dev); + if (ret < 0) + return ret; + + if (cc->conf.host_conf.ht.mode == HT_MODE_DEFAULT) { + cc33xx_set_ht_cap(cc, NL80211_BAND_2GHZ, + &cc33xx_siso40_ht_cap_2ghz); + + /* 5Ghz is always wide */ + cc33xx_set_ht_cap(cc, NL80211_BAND_5GHZ, + &cc33xx_siso40_ht_cap_5ghz); + } else if (cc->conf.host_conf.ht.mode == HT_MODE_WIDE) { + cc33xx_set_ht_cap(cc, NL80211_BAND_2GHZ, + &cc33xx_siso40_ht_cap_2ghz); + cc33xx_set_ht_cap(cc, NL80211_BAND_5GHZ, + &cc33xx_siso40_ht_cap_5ghz); + } else if (cc->conf.host_conf.ht.mode == HT_MODE_SISO20) { + cc33xx_set_ht_cap(cc, NL80211_BAND_2GHZ, &cc33xx_siso20_ht_cap); + cc33xx_set_ht_cap(cc, NL80211_BAND_5GHZ, &cc33xx_siso20_ht_cap); + } + + cc->event_mask = BSS_LOSS_EVENT_ID | SCAN_COMPLETE_EVENT_ID | + RADAR_DETECTED_EVENT_ID | RSSI_SNR_TRIGGER_0_EVENT_ID | + PERIODIC_SCAN_COMPLETE_EVENT_ID | + PERIODIC_SCAN_REPORT_EVENT_ID | DUMMY_PACKET_EVENT_ID | + PEER_REMOVE_COMPLETE_EVENT_ID | + BA_SESSION_RX_CONSTRAINT_EVENT_ID | + REMAIN_ON_CHANNEL_COMPLETE_EVENT_ID | + CHANNEL_SWITCH_COMPLETE_EVENT_ID | + DFS_CHANNELS_CONFIG_COMPLETE_EVENT | + SMART_CONFIG_SYNC_EVENT_ID | INACTIVE_STA_EVENT_ID | + SMART_CONFIG_DECODE_EVENT_ID | TIME_SYNC_EVENT_ID | + FW_LOGGER_INDICATION | RX_BA_WIN_SIZE_CHANGE_EVENT_ID; + + cc->ap_event_mask = MAX_TX_FAILURE_EVENT_ID; + + return 0; +} + +static int cc33xx_probe(struct platform_device *pdev) +{ + struct cc33xx *cc; + struct ieee80211_hw *hw; + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + const char *nvs_name; + int ret; + + if (!pdev_data) { + 
cc33xx_error("can't access platform data"); + return -EINVAL; + } + + hw = cc33xx_alloc_hw(CC33XX_AGGR_BUFFER_SIZE); + if (!hw) { + ret = -ENOMEM; + goto out; + } + cc = hw->priv; + cc->dev = &pdev->dev; + cc->pdev = pdev; + platform_set_drvdata(pdev, cc); + + if (pdev_data->family && pdev_data->family->nvs_name) { + nvs_name = pdev_data->family->nvs_name; + ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_UEVENT, + nvs_name, &pdev->dev, GFP_KERNEL, + cc, cc33xx_nvs_cb); + if (ret < 0) { + cc33xx_error("request_firmware_nowait failed for %s: %d", + nvs_name, ret); + complete_all(&cc->nvs_loading_complete); + } + } else { + cc33xx_nvs_cb(NULL, cc); + cc33xx_error("Invalid platform data entry"); + ret = -EINVAL; + } + +out: + return ret; +} + +static void cc33xx_remove(struct platform_device *pdev) +{ + struct cc33xx_platdev_data *pdev_data = dev_get_platdata(&pdev->dev); + struct cc33xx *cc = platform_get_drvdata(pdev); + + set_bit(CC33XX_FLAG_DRIVER_REMOVED, &cc->flags); + + cc->dev->driver->pm = NULL; + + if (pdev_data->family && pdev_data->family->nvs_name) + wait_for_completion(&cc->nvs_loading_complete); + + if (!cc->initialized) + goto out; + + if (cc->wakeirq >= 0) { + dev_pm_clear_wake_irq(cc->dev); + cc->wakeirq = -ENODEV; + } + + device_init_wakeup(cc->dev, false); + cc33xx_unregister_hw(cc); + cc33xx_turn_off(cc); + +out: + cc33xx_free_hw(cc); +} + +static const struct platform_device_id cc33xx_id_table[] = { + { "cc33xx", 0 }, + { } +}; +MODULE_DEVICE_TABLE(platform, cc33xx_id_table); + +static struct platform_driver cc33xx_driver = { + .probe = cc33xx_probe, + .remove = cc33xx_remove, + .id_table = cc33xx_id_table, + .driver = { + .name = "cc33xx_driver", + } +}; + +module_platform_driver(cc33xx_driver); + +module_param_named(debug_level, cc33xx_debug_level, uint, 0600); +MODULE_PARM_DESC(debug_level, "cc33xx debugging level"); + +module_param(no_recovery, int, 0600); +MODULE_PARM_DESC(no_recovery, "Prevent HW recovery. 
FW will remain stuck."); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Texas Instruments CC33xx WLAN driver"); +MODULE_AUTHOR("Michael Nemanov "); +MODULE_AUTHOR("Sabeeh Khan "); + +MODULE_FIRMWARE(SECOND_LOADER_NAME); +MODULE_FIRMWARE(FW_NAME);

From patchwork Thu Nov 7 12:52:02 2024 X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866427 X-Patchwork-Delegate: kuba@kernel.org From: Michael Nemanov To: Kalle Valo , "David S .
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 10/17] wifi: cc33xx: Add rx.c, rx.h Date: Thu, 7 Nov 2024 14:52:02 +0200 Message-ID: <20241107125209.1736277-11-michael.nemanov@ti.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> X-Mailing-List: netdev@vger.kernel.org

Add the code that parses the raw Rx data buffer read from HW, splits it into SKBs, and hands them to MAC80211. Rx handling starts at cc33xx_rx. Completed SKBs are stored in cc->deferred_rx_queue, from which they are handed to MAC80211 by cc->netstack_work (cc33xx_netstack_work @ main.c). This allows calling ieee80211_rx_ni while new data is still being read from HW.
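The carry-over logic described above (a frame may be cut at the end of one buffer read and completed by the next) can be sketched in plain userspace C. This is an illustrative model, not driver code: the 2-byte length prefix stands in for struct cc33xx_rx_descriptor, and struct partial / feed() are hypothetical analogues of partial_rx and cc33xx_rx_handle_packet.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Each packet is a 2-byte little-endian length followed by payload.
 * A packet may be split across two consecutive buffer reads. */
struct partial {
	uint8_t  hdr[2];     /* accumulated length header */
	uint16_t have;       /* header bytes gathered so far */
	uint16_t body_left;  /* payload bytes still missing */
	int      in_body;    /* 0: gathering header, 1: gathering payload */
};

/* Consume one raw buffer; return how many complete packets it closed. */
static int feed(struct partial *p, const uint8_t *buf, uint16_t len)
{
	int completed = 0;

	while (len) {
		if (!p->in_body) {
			/* gather the 2-byte header, possibly 1 byte at a time */
			uint16_t take = (uint16_t)(2 - p->have) < len ?
					(uint16_t)(2 - p->have) : len;
			memcpy(p->hdr + p->have, buf, take);
			p->have += take; buf += take; len -= take;
			if (p->have == 2) {
				p->body_left = p->hdr[0] | (p->hdr[1] << 8);
				p->in_body = 1;
				p->have = 0;
			}
			continue;
		}
		/* consume payload; a short buffer leaves body_left > 0
		 * to be finished by the next read */
		uint16_t take = p->body_left < len ? p->body_left : len;
		p->body_left -= take; buf += take; len -= take;
		if (!p->body_left) {
			p->in_body = 0;
			completed++; /* driver: skb -> deferred_rx_queue */
		}
	}
	return completed;
}
```

The real code additionally distinguishes CURR_RX_DESC / CURR_RX_DATA / CURR_RX_DROP so a corrupted frame can be skipped across buffer boundaries without allocating an skb.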
Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/rx.c | 388 ++++++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/rx.h | 86 ++++++ 2 files changed, 474 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/rx.c create mode 100644 drivers/net/wireless/ti/cc33xx/rx.h diff --git a/drivers/net/wireless/ti/cc33xx/rx.c b/drivers/net/wireless/ti/cc33xx/rx.c new file mode 100644 index 000000000000..b6ee293fbb0b --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/rx.c @@ -0,0 +1,388 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include "acx.h" +#include "rx.h" +#include "tx.h" +#include "io.h" + +#define RSSI_LEVEL_BITMASK 0x7F +#define ANT_DIVERSITY_BITMASK BIT(7) +#define ANT_DIVERSITY_SHIFT 7 + +/* Construct the rx status structure for upper layers */ +static void cc33xx_rx_status(struct cc33xx *cc, + struct cc33xx_rx_descriptor *desc, + struct ieee80211_rx_status *status, + u8 beacon, u8 probe_rsp) +{ + memset(status, 0, sizeof(struct ieee80211_rx_status)); + + if ((desc->flags & CC33XX_RX_DESC_BAND_MASK) == CC33XX_RX_DESC_BAND_BG) + status->band = NL80211_BAND_2GHZ; + else if ((desc->flags & CC33XX_RX_DESC_BAND_MASK) == CC33XX_RX_DESC_BAND_J) + status->band = NL80211_BAND_2GHZ; + else if ((desc->flags & CC33XX_RX_DESC_BAND_MASK) == CC33XX_RX_DESC_BAND_A) + status->band = NL80211_BAND_5GHZ; + else + status->band = NL80211_BAND_5GHZ; /* todo -Should be 6GHZ when added */ + + status->rate_idx = cc33xx_rate_to_idx(cc, desc->rate, status->band); + + if (desc->frame_format == CC33xx_VHT) + status->encoding = RX_ENC_VHT; + else if ((desc->frame_format == CC33xx_HT_MF) || + (desc->frame_format == CC33xx_HT_GF)) + status->encoding = RX_ENC_HT; + else if ((desc->frame_format == CC33xx_B_SHORT) || + (desc->frame_format == CC33xx_B_LONG) || + (desc->frame_format == CC33xx_LEGACY_OFDM)) + status->encoding = RX_ENC_LEGACY; + else + status->encoding = 
RX_ENC_HE; + + /* Read the signal level and antenna diversity indication. + * The msb in the signal level is always set as it is a + * negative number. + * The antenna indication is the msb of the rssi. + */ + status->signal = ((desc->rssi & RSSI_LEVEL_BITMASK) | BIT(7)); + status->antenna = ((desc->rssi & ANT_DIVERSITY_BITMASK) >> ANT_DIVERSITY_SHIFT); + status->freq = ieee80211_channel_to_frequency(desc->channel, + status->band); + + if (desc->flags & CC33XX_RX_DESC_ENCRYPT_MASK) { + u8 desc_err_code = desc->status & CC33XX_RX_DESC_STATUS_MASK; + + /* Frame is sent to driver with the IV (for PN replay check) + * but without the MIC + */ + status->flag |= RX_FLAG_MMIC_STRIPPED | + RX_FLAG_DECRYPTED | RX_FLAG_MIC_STRIPPED; + + if (unlikely(desc_err_code & CC33XX_RX_DESC_MIC_FAIL)) { + status->flag |= RX_FLAG_MMIC_ERROR; + cc33xx_warning("Michael MIC error. Desc: 0x%x", + desc_err_code); + } + } + + if (beacon || probe_rsp) + status->boottime_ns = ktime_get_boottime_ns(); + + if (beacon) + cc33xx_set_pending_regdomain_ch(cc, (u16)desc->channel, + status->band); + status->nss = 1; +} + +/* Copy part\ all of the descriptor. 
Allocate skb, or drop corrupted packet + */ +static int cc33xx_rx_get_packet_descriptor(struct cc33xx *cc, u8 *raw_buffer_ptr, + u16 *raw_buffer_len) +{ + u16 missing_desc_bytes; + u16 available_desc_bytes; + u16 pkt_data_len; + struct sk_buff *skb; + u16 prev_buffer_len = *raw_buffer_len; + + missing_desc_bytes = sizeof(struct cc33xx_rx_descriptor); + missing_desc_bytes -= cc->partial_rx.handled_bytes; + available_desc_bytes = min(*raw_buffer_len, missing_desc_bytes); + memcpy(((u8 *)(&cc->partial_rx.desc)) + cc->partial_rx.handled_bytes, + raw_buffer_ptr, available_desc_bytes); + + /* If descriptor was not completed */ + if (available_desc_bytes != missing_desc_bytes) { + cc->partial_rx.handled_bytes += *raw_buffer_len; + cc->partial_rx.status = CURR_RX_DESC; + *raw_buffer_len = 0; + goto out; + } else { + cc->partial_rx.handled_bytes += available_desc_bytes; + *raw_buffer_len -= available_desc_bytes; + } + + /* Descriptor was fully copied */ + pkt_data_len = cc->partial_rx.original_bytes; + pkt_data_len -= sizeof(struct cc33xx_rx_descriptor); + + if (unlikely(cc->partial_rx.desc.status & CC33XX_RX_DESC_DECRYPT_FAIL)) { + cc33xx_warning("corrupted packet in RX: status: 0x%x len: %d", + cc->partial_rx.desc.status & CC33XX_RX_DESC_STATUS_MASK, + pkt_data_len); + + /* If frame can be fully dropped */ + if (pkt_data_len <= *raw_buffer_len) { + *raw_buffer_len -= pkt_data_len; + cc->partial_rx.status = CURR_RX_START; + } else { + cc->partial_rx.handled_bytes += *raw_buffer_len; + cc->partial_rx.status = CURR_RX_DROP; + *raw_buffer_len = 0; + } + goto out; + } + + skb = __dev_alloc_skb(pkt_data_len, GFP_KERNEL); + if (!skb) { + cc33xx_error("Couldn't allocate RX frame"); + /* If frame can be fully dropped */ + if (pkt_data_len <= *raw_buffer_len) { + *raw_buffer_len -= pkt_data_len; + cc->partial_rx.status = CURR_RX_START; + } else { + /* Dropped partial frame */ + cc->partial_rx.handled_bytes += *raw_buffer_len; + cc->partial_rx.status = CURR_RX_DROP; + 
*raw_buffer_len = 0; + } + goto out; + } + + cc->partial_rx.skb = skb; + cc->partial_rx.status = CURR_RX_DATA; + +out: + /* Function return the amount of consumed bytes */ + return (prev_buffer_len - *raw_buffer_len); +} + +/* Copy part or all of the packet's data. push skb to queue if possible */ +static int cc33xx_rx_get_packet_data(struct cc33xx *cc, u8 *raw_buffer_ptr, + u16 *raw_buffer_len) +{ + u16 missing_data_bytes; + u16 available_data_bytes; + u32 defer_count; + enum cc33xx_rx_buf_align rx_align; + u16 extra_bytes; + struct ieee80211_hdr *hdr; + u8 beacon = 0; + u8 is_probe_resp = 0; + u16 seq_num; + u16 prev_buffer_len = *raw_buffer_len; + + missing_data_bytes = cc->partial_rx.original_bytes; + missing_data_bytes -= cc->partial_rx.handled_bytes; + available_data_bytes = min(missing_data_bytes, *raw_buffer_len); + + skb_put_data(cc->partial_rx.skb, raw_buffer_ptr, available_data_bytes); + + /* Check if we didn't manage to copy the entire packet - got out, + * continue next time + */ + if (available_data_bytes != missing_data_bytes) { + cc->partial_rx.handled_bytes += *raw_buffer_len; + cc->partial_rx.status = CURR_RX_DATA; + *raw_buffer_len = 0; + goto out; + } else { + *raw_buffer_len -= available_data_bytes; + } + + /* Data fully copied */ + + rx_align = cc->partial_rx.desc.header_alignment; + if (rx_align == CC33XX_RX_BUF_PADDED) + skb_pull(cc->partial_rx.skb, RX_BUF_ALIGN); + + extra_bytes = cc->partial_rx.desc.pad_len; + if (extra_bytes != 0) + skb_trim(cc->partial_rx.skb, + cc->partial_rx.skb->len - extra_bytes); + + hdr = (struct ieee80211_hdr *)cc->partial_rx.skb->data; + + if (ieee80211_is_beacon(hdr->frame_control)) + beacon = 1; + if (ieee80211_is_probe_resp(hdr->frame_control)) + is_probe_resp = 1; + + cc33xx_rx_status(cc, &cc->partial_rx.desc, + IEEE80211_SKB_RXCB(cc->partial_rx.skb), + beacon, is_probe_resp); + + seq_num = (le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4; + cc33xx_debug(DEBUG_RX, "rx skb 0x%p: %d B %s seq %d link id 
%d", + cc->partial_rx.skb, + cc->partial_rx.skb->len - cc->partial_rx.desc.pad_len, + beacon ? "beacon" : "", seq_num, cc->partial_rx.desc.hlid); + + cc33xx_debug(DEBUG_RX, "rx frame. frame type 0x%x, frame length 0x%x, frame address 0x%lx", + hdr->frame_control, cc->partial_rx.skb->len, + (unsigned long)cc->partial_rx.skb->data); + + /* Adding frame to queue */ + skb_queue_tail(&cc->deferred_rx_queue, cc->partial_rx.skb); + cc->rx_counter++; + cc->partial_rx.status = CURR_RX_START; + + /* Make sure the deferred queues don't get too long */ + defer_count = skb_queue_len(&cc->deferred_tx_queue); + defer_count += skb_queue_len(&cc->deferred_rx_queue); + if (defer_count >= CC33XX_RX_QUEUE_MAX_LEN) + cc33xx_flush_deferred_work(cc); + else + queue_work(cc->freezable_netstack_wq, &cc->netstack_work); + +out: + return (prev_buffer_len - *raw_buffer_len); +} + +static int cc33xx_rx_drop_packet_data(struct cc33xx *cc, u8 *raw_buffer_ptr, + u16 *raw_buffer_len) +{ + u16 prev_buffer_len = *raw_buffer_len; + + /* Can we drop the entire frame ? */ + if (*raw_buffer_len >= + (cc->partial_rx.original_bytes - cc->partial_rx.handled_bytes)) { + *raw_buffer_len -= cc->partial_rx.original_bytes - + cc->partial_rx.handled_bytes; + cc->partial_rx.handled_bytes = 0; + cc->partial_rx.status = CURR_RX_START; + } else { + cc->partial_rx.handled_bytes += *raw_buffer_len; + *raw_buffer_len = 0; + } + + return (prev_buffer_len - *raw_buffer_len); +} + +/* Handle single packet from the RX buffer. 
We don't have to be aligned to + * packet boundary (buffer may start \ end in the middle of packet) + */ +static void cc33xx_rx_handle_packet(struct cc33xx *cc, u8 *raw_buffer_ptr, + u16 *raw_buffer_len) +{ + struct cc33xx_rx_descriptor *desc; + u16 consumed_bytes; + + if (cc->partial_rx.status == CURR_RX_START) { + WARN_ON(*raw_buffer_len < 2); + desc = (struct cc33xx_rx_descriptor *)raw_buffer_ptr; + cc->partial_rx.original_bytes = le16_to_cpu(desc->length); + cc->partial_rx.handled_bytes = 0; + cc->partial_rx.status = CURR_RX_DESC; + + cc33xx_debug(DEBUG_RX, "rx frame. desc length 0x%x, alignment 0x%x, padding 0x%x", + desc->length, desc->header_alignment, desc->pad_len); + } + + /* start \ continue copy descriptor */ + if (cc->partial_rx.status == CURR_RX_DESC) { + consumed_bytes = cc33xx_rx_get_packet_descriptor(cc, + raw_buffer_ptr, + raw_buffer_len); + raw_buffer_ptr += consumed_bytes; + } + + /* Check if we are in the middle of dropped packet */ + if (unlikely(cc->partial_rx.status == CURR_RX_DROP)) { + consumed_bytes = cc33xx_rx_drop_packet_data(cc, raw_buffer_ptr, + raw_buffer_len); + raw_buffer_ptr += consumed_bytes; + } + + /* start \ continue copy descriptor */ + if (cc->partial_rx.status == CURR_RX_DATA) { + consumed_bytes = cc33xx_rx_get_packet_data(cc, raw_buffer_ptr, + raw_buffer_len); + raw_buffer_ptr += consumed_bytes; + } +} + +/* It is assumed that SDIO buffer was read prior to this function (data buffer + * is read along with the status). The RX function gets pointer to the RX data + * and its length. This buffer may contain unknown number of packets, separated + * by hif descriptor and 0-3 bytes padding if required. + * The last packet may be truncated in the middle, and should be saved for next + * iteration. 
+ */ +int cc33xx_rx(struct cc33xx *cc, u8 *rx_buf_ptr, u16 rx_buf_len) +{ + u16 local_rx_buffer_len = rx_buf_len; + u16 pkt_offset = 0; + u16 consumed_bytes; + u16 prev_rx_buf_len; + + /* Split data into separate packets */ + while (local_rx_buffer_len > 0) { + cc33xx_debug(DEBUG_RX, "start loop. buffer length %d", + local_rx_buffer_len); + + /* the handle data call can only fail in memory-outage + * conditions, in that case the received frame will just + * be dropped. + */ + prev_rx_buf_len = local_rx_buffer_len; + cc33xx_rx_handle_packet(cc, rx_buf_ptr + pkt_offset, + &local_rx_buffer_len); + consumed_bytes = prev_rx_buf_len - local_rx_buffer_len; + + pkt_offset += consumed_bytes; + + cc33xx_debug(DEBUG_RX, "end rx loop. buffer length %d, packet counter %d, current packet status %d", + local_rx_buffer_len, cc->rx_counter, + cc->partial_rx.status); + } + + return 0; +} + +#ifdef CONFIG_PM +int cc33xx_rx_filter_enable(struct cc33xx *cc, int index, bool enable, + struct cc33xx_rx_filter *filter) +{ + int ret; + + if (!!test_bit(index, cc->rx_filter_enabled) == enable) { + cc33xx_warning("Request to enable an already enabled rx filter %d", + index); + return 0; + } + + ret = cc33xx_acx_set_rx_filter(cc, index, enable, filter); + + if (ret) { + cc33xx_error("Failed to %s rx data filter %d (err=%d)", + enable ? 
"enable" : "disable", index, ret); + return ret; + } + + if (enable) + __set_bit(index, cc->rx_filter_enabled); + else + __clear_bit(index, cc->rx_filter_enabled); + + return 0; +} + +int cc33xx_rx_filter_clear_all(struct cc33xx *cc) +{ + int i, ret = 0; + + for (i = 0; i < CC33XX_MAX_RX_FILTERS; i++) { + if (!test_bit(i, cc->rx_filter_enabled)) + continue; + ret = cc33xx_rx_filter_enable(cc, i, 0, NULL); + if (ret) + goto out; + } + +out: + return ret; +} +#else +int cc33xx_rx_filter_enable(struct cc33xx *cc, int index, bool enable, + struct cc33xx_rx_filter *filter) +{ + return 0; +} + +int cc33xx_rx_filter_clear_all(struct cc33xx *cc) { return 0; } +#endif /* CONFIG_PM */ diff --git a/drivers/net/wireless/ti/cc33xx/rx.h b/drivers/net/wireless/ti/cc33xx/rx.h new file mode 100644 index 000000000000..46ff6867749f --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/rx.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __RX_H__ +#define __RX_H__ + +/* RX Descriptor flags: + * + * Bits 0-1 - band + * Bit 2 - STBC + * Bit 3 - A-MPDU + * Bit 4 - HT + * Bits 5-7 - encryption + */ +#define CC33XX_RX_DESC_BAND_MASK 0x03 +#define CC33XX_RX_DESC_ENCRYPT_MASK 0xE0 + +#define CC33XX_RX_DESC_BAND_BG 0x00 +#define CC33XX_RX_DESC_BAND_J 0x01 +#define CC33XX_RX_DESC_BAND_A 0x02 + +/* RX Descriptor status + * + * Bits 0-2 - error code + * Bits 3-5 - process_id tag (AP mode FW) + * Bits 6-7 - reserved + */ +enum { + CC33XX_RX_DESC_SUCCESS = 0x00, + CC33XX_RX_DESC_DECRYPT_FAIL = 0x01, + CC33XX_RX_DESC_MIC_FAIL = 0x02, + CC33XX_RX_DESC_STATUS_MASK = 0x07 +}; + +/* Account for the padding inserted by the FW in case of RX_ALIGNMENT + * or for fixing alignment in case the packet wasn't aligned. 
+ */ +#define RX_BUF_ALIGN 2 + +/* Describes the alignment state of a Rx buffer */ +enum cc33xx_rx_buf_align { + CC33XX_RX_BUF_ALIGNED, + CC33XX_RX_BUF_UNALIGNED, + CC33XX_RX_BUF_PADDED, +}; + +enum cc33xx_rx_curr_status { + CURR_RX_START, + CURR_RX_DROP, + CURR_RX_DESC, + CURR_RX_DATA +}; + +struct cc33xx_rx_descriptor { + __le16 length; + u8 header_alignment; + u8 status; + __le32 timestamp; + + u8 flags; + u8 rate; + u8 channel; + s8 rssi; + u8 snr; + + u8 hlid; + u8 pad_len; + u8 frame_format; +} __packed; + +struct partial_rx_frame { + struct sk_buff *skb; + struct cc33xx_rx_descriptor desc; + u16 handled_bytes; + u16 original_bytes; /* including descriptor */ + enum cc33xx_rx_curr_status status; +}; + +int cc33xx_rx(struct cc33xx *cc, u8 *rx_buf_ptr, u16 rx_buf_len); +int cc33xx_rx_filter_enable(struct cc33xx *cc, int index, bool enable, + struct cc33xx_rx_filter *filter); +int cc33xx_rx_filter_clear_all(struct cc33xx *cc); + +#endif /* __RX_H__ */

From patchwork Thu Nov 7 12:52:03 2024 X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866422 X-Patchwork-Delegate: kuba@kernel.org From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 11/17] wifi: cc33xx: Add tx.c, tx.h Date: Thu, 7 Nov 2024 14:52:03 +0200 Message-ID: <20241107125209.1736277-12-michael.nemanov@ti.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> X-Mailing-List: netdev@vger.kernel.org

Tx of data frames starts either from the MAC80211 tx op (cc33xx_op_tx) or from the deferred IRQ context (irq_deferred_work). Both queue cc->tx_work (cc33xx_tx_work @ tx.c), which calls cc33xx_tx_work_locked, where data from MAC80211 SKBs is packed and transferred to HW. The HW then raises an interrupt indicating that a given frame was transmitted or has expired, so that its SKB can be freed (cc33xx_tx_immediate_complete).
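The descriptor-ID bookkeeping in this patch (cc33xx_alloc_tx_id / cc33xx_free_tx_id) is a find-first-zero-bit allocation over a fixed pool. A minimal userspace model, where NUM_DESC and the uint32_t map are illustrative stand-ins for CC33XX_NUM_TX_DESCRIPTORS and cc->tx_frames_map, might look like:

```c
#include <assert.h>
#include <stdint.h>

#define NUM_DESC 32	/* stand-in for CC33XX_NUM_TX_DESCRIPTORS */

static uint32_t tx_map;	/* bit i set => descriptor i holds an in-flight skb */

/* Analogue of cc33xx_alloc_tx_id(): claim the first free slot, or fail */
static int alloc_tx_id(void)
{
	for (int i = 0; i < NUM_DESC; i++) {
		if (!(tx_map & (1u << i))) {	/* find_first_zero_bit() */
			tx_map |= 1u << i;
			return i;
		}
	}
	return -1;	/* the driver returns -EBUSY here */
}

/* Analogue of cc33xx_free_tx_id(): release the slot on tx completion */
static void free_tx_id(int id)
{
	assert(tx_map & (1u << id));	/* double-free would be a bug */
	tx_map &= ~(1u << id);
}
```

The lowest free bit is always reused first, which keeps IDs dense; the driver additionally tracks tx_frames[id] and tx_frames_cnt alongside the bitmap.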
Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/tx.c | 1409 +++++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/tx.h | 160 +++ 2 files changed, 1569 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/tx.c create mode 100644 drivers/net/wireless/ti/cc33xx/tx.h diff --git a/drivers/net/wireless/ti/cc33xx/tx.c b/drivers/net/wireless/ti/cc33xx/tx.c new file mode 100644 index 000000000000..37f8b63c99e0 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/tx.c @@ -0,0 +1,1409 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include "acx.h" +#include "debug.h" +#include "io.h" +#include "ps.h" +#include "tx.h" +#include "cc33xx.h" + +static int cc33xx_set_default_wep_key(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u8 id) +{ + int ret; + bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); + + if (is_ap) + ret = cc33xx_cmd_set_default_wep_key(cc, id, + wlvif->ap.bcast_hlid); + else + ret = cc33xx_cmd_set_default_wep_key(cc, id, wlvif->sta.hlid); + + if (ret < 0) + return ret; + + cc33xx_debug(DEBUG_CRYPT, "default wep key idx: %d", (int)id); + return 0; +} + +static int cc33xx_alloc_tx_id(struct cc33xx *cc, struct sk_buff *skb) +{ + int id; + + id = find_first_zero_bit(cc->tx_frames_map, CC33XX_NUM_TX_DESCRIPTORS); + if (id >= CC33XX_NUM_TX_DESCRIPTORS) + return -EBUSY; + + __set_bit(id, cc->tx_frames_map); + cc->tx_frames[id] = skb; + cc->tx_frames_cnt++; + cc33xx_debug(DEBUG_TX, "alloc desc ID. id - %d, frames count %d", + id, cc->tx_frames_cnt); + return id; +} + +void cc33xx_free_tx_id(struct cc33xx *cc, int id) +{ + if (__test_and_clear_bit(id, cc->tx_frames_map)) { + if (unlikely(cc->tx_frames_cnt == CC33XX_NUM_TX_DESCRIPTORS)) + clear_bit(CC33XX_FLAG_FW_TX_BUSY, &cc->flags); + + cc->tx_frames[id] = NULL; + cc->tx_frames_cnt--; + } + cc33xx_debug(DEBUG_TX, "free desc ID. 
id - %d, frames count %d", + id, cc->tx_frames_cnt); +} +EXPORT_SYMBOL(cc33xx_free_tx_id); + +static void cc33xx_tx_ap_update_inconnection_sta(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + struct sk_buff *skb) +{ + struct ieee80211_hdr *hdr; + + hdr = (struct ieee80211_hdr *)(skb->data + + sizeof(struct cc33xx_tx_hw_descr)); + if (!ieee80211_is_auth(hdr->frame_control)) + return; + + /* ROC for 1 second on the AP channel for completing the connection. + * Note the ROC will be continued by the update_sta_state callbacks + * once the station reaches the associated state. + */ + cc33xx_update_inconn_sta(cc, wlvif, NULL, true); + wlvif->pending_auth_reply_time = jiffies; + cancel_delayed_work(&wlvif->pending_auth_complete_work); + ieee80211_queue_delayed_work(cc->hw, + &wlvif->pending_auth_complete_work, + msecs_to_jiffies(CC33XX_PEND_AUTH_ROC_TIMEOUT)); +} + +static void cc33xx_tx_regulate_link(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + u8 hlid) +{ + bool fw_ps; + u8 tx_pkts; + + if (WARN_ON(!test_bit(hlid, wlvif->links_map))) + return; + + fw_ps = test_bit(hlid, &cc->ap_fw_ps_map); + tx_pkts = cc->links[hlid].allocated_pkts; + + /* if in FW PS and there is enough data in FW we can put the link + * into high-level PS and clean out its TX queues. + * Make an exception if this is the only connected link. In this + * case FW-memory congestion is less of a problem. + * Note that a single connected STA means 2*ap_count + 1 active links, + * since we must account for the global and broadcast AP links + * for each AP. The "fw_ps" check assures us the other link is a STA + * connected to the AP. Otherwise the FW would not set the PSM bit. 
+ */ + if (cc->active_link_count > (cc->ap_count * 2 + 1) && fw_ps && + tx_pkts >= CC33XX_PS_STA_MAX_PACKETS) + cc33xx_ps_link_start(cc, wlvif, hlid, true); +} + +inline bool cc33xx_is_dummy_packet(struct cc33xx *cc, struct sk_buff *skb) +{ + return cc->dummy_packet == skb; +} +EXPORT_SYMBOL(cc33xx_is_dummy_packet); + +static u8 cc33xx_tx_get_hlid_ap(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct sk_buff *skb, struct ieee80211_sta *sta) +{ + struct ieee80211_hdr *hdr; + + if (sta) { + struct cc33xx_station *wl_sta; + + wl_sta = (struct cc33xx_station *)sta->drv_priv; + return wl_sta->hlid; + } + + if (!test_bit(WLVIF_FLAG_AP_STARTED, &wlvif->flags)) + return CC33XX_SYSTEM_HLID; + + hdr = (struct ieee80211_hdr *)skb->data; + if (is_multicast_ether_addr(ieee80211_get_DA(hdr))) + return wlvif->ap.bcast_hlid; + else + return wlvif->ap.global_hlid; +} + +u8 cc33xx_tx_get_hlid(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct sk_buff *skb, struct ieee80211_sta *sta) +{ + struct ieee80211_tx_info *control; + + if (wlvif->bss_type == BSS_TYPE_AP_BSS) + return cc33xx_tx_get_hlid_ap(cc, wlvif, skb, sta); + + control = IEEE80211_SKB_CB(skb); + if (control->flags & IEEE80211_TX_CTL_TX_OFFCHAN) + return wlvif->dev_hlid; + + return wlvif->sta.hlid; +} + +unsigned int cc33xx_calc_packet_alignment(struct cc33xx *cc, + unsigned int packet_length) +{ + if ((cc->quirks & CC33XX_QUIRK_TX_PAD_LAST_FRAME) || + !(cc->quirks & CC33XX_QUIRK_TX_BLOCKSIZE_ALIGN)) + return ALIGN(packet_length, CC33XX_TX_ALIGN_TO); + else + return ALIGN(packet_length, CC33XX_BUS_BLOCK_SIZE); +} +EXPORT_SYMBOL(cc33xx_calc_packet_alignment); + +static u32 cc33xx_calc_tx_blocks(struct cc33xx *cc, u32 len, u32 spare_blks) +{ + u32 blk_size = CC33XX_TX_HW_BLOCK_SIZE; + /* In CC33xx the packet will be stored along with its internal descriptor. 
+ * The descriptor is not part of the host transaction, but should be + * considered as part of the allocated memory blocks in the device. + */ + len = len + CC33xx_INTERNAL_DESC_SIZE; + return (len + blk_size - 1) / blk_size + spare_blks; +} + +static inline void cc33xx_set_tx_desc_blocks(struct cc33xx *cc, + struct cc33xx_tx_hw_descr *desc, + u32 blks, u32 spare_blks) +{ + desc->cc33xx_mem.total_mem_blocks = blks; +} + +static void cc33xx_set_tx_desc_data_len(struct cc33xx *cc, + struct cc33xx_tx_hw_descr *desc, + struct sk_buff *skb) +{ + desc->length = cpu_to_le16(skb->len); + + /* if only the last frame is to be padded, we unset this bit on Tx */ + if (cc->quirks & CC33XX_QUIRK_TX_PAD_LAST_FRAME) + desc->cc33xx_mem.ctrl = CC33XX_TX_CTRL_NOT_PADDED; + else + desc->cc33xx_mem.ctrl = 0; + + cc33xx_debug(DEBUG_TX, "tx_fill_hdr: hlid: %d len: %d life: %d mem: %d", + desc->hlid, le16_to_cpu(desc->length), + le16_to_cpu(desc->life_time), + desc->cc33xx_mem.total_mem_blocks); +} + +static int cc33xx_get_spare_blocks(struct cc33xx *cc, bool is_gem) +{ + /* If we have keys requiring extra spare, indulge them */ + if (cc->extra_spare_key_count) + return CC33XX_TX_HW_EXTRA_BLOCK_SPARE; + + return CC33XX_TX_HW_BLOCK_SPARE; +} + +int cc33xx_tx_get_queue(int queue) +{ + switch (queue) { + case 0: + return CONF_TX_AC_VO; + case 1: + return CONF_TX_AC_VI; + case 2: + return CONF_TX_AC_BE; + case 3: + return CONF_TX_AC_BK; + default: + return CONF_TX_AC_BE; + } +} + +static int cc33xx_tx_allocate(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct sk_buff *skb, u32 extra, u32 buf_offset, + u8 hlid, bool is_gem, + struct NAB_tx_header *nab_cmd) +{ + struct cc33xx_tx_hw_descr *desc; + u32 total_blocks; + int id, ret = -EBUSY, ac; + u32 spare_blocks; + u32 total_skb_len = skb->len + extra; + /* Add NAB command required for CC33xx architecture */ + u32 total_len = sizeof(struct NAB_tx_header); + + total_skb_len += sizeof(struct cc33xx_tx_hw_descr); + total_len += total_skb_len; + +
cc33xx_debug(DEBUG_TX, "cc->tx_blocks_available %d", + cc->tx_blocks_available); + + if (buf_offset + total_len > cc->aggr_buf_size) + return -EAGAIN; + + spare_blocks = cc33xx_get_spare_blocks(cc, is_gem); + + /* allocate free identifier for the packet */ + id = cc33xx_alloc_tx_id(cc, skb); + if (id < 0) + return id; + + /* memblocks should not include nab descriptor */ + total_blocks = cc33xx_calc_tx_blocks(cc, total_skb_len, spare_blocks); + cc33xx_debug(DEBUG_TX, "total blocks %d", total_blocks); + + if (total_blocks <= cc->tx_blocks_available) { + /** + * In CC33XX the packet starts with NAB command, + * only then the descriptor. + */ + nab_cmd->sync = cpu_to_le32(HOST_SYNC_PATTERN); + nab_cmd->opcode = cpu_to_le16(NAB_SEND_CMD); + + /** + * length should include the following 4 bytes + * of the NAB command. + */ + nab_cmd->len = cpu_to_le16(total_len - + sizeof(struct NAB_header)); + nab_cmd->desc_length = cpu_to_le16(total_len - + sizeof(struct NAB_tx_header)); + nab_cmd->sd = 0; + nab_cmd->flags = NAB_SEND_FLAGS; + + desc = skb_push(skb, total_skb_len - skb->len); + + cc33xx_set_tx_desc_blocks(cc, desc, total_blocks, spare_blocks); + + desc->id = id; + + cc33xx_debug(DEBUG_TX, + "tx allocate id %u skb 0x%p tx_memblocks %d", + id, skb, desc->cc33xx_mem.total_mem_blocks); + + cc->tx_blocks_available -= total_blocks; + cc->tx_allocated_blocks += total_blocks; + + /* If the FW was empty before, arm the Tx watchdog. Also do + * this on the first Tx after resume, as we always cancel the + * watchdog on suspend. 
+ */ + if (cc->tx_allocated_blocks == total_blocks || + test_and_clear_bit(CC33XX_FLAG_REINIT_TX_WDOG, &cc->flags)) + cc33xx_rearm_tx_watchdog_locked(cc); + + ac = cc33xx_tx_get_queue(skb_get_queue_mapping(skb)); + desc->ac = ac; + cc->tx_allocated_pkts[ac]++; + + if (test_bit(hlid, cc->links_map)) + cc->links[hlid].allocated_pkts++; + + ret = 0; + + cc33xx_debug(DEBUG_TX, + "tx_allocate: size: %d, blocks: %d, id: %d", + total_len, total_blocks, id); + } else { + cc33xx_free_tx_id(cc, id); + } + + return ret; +} + +static void cc33xx_tx_fill_hdr(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct sk_buff *skb, u32 extra, + struct ieee80211_tx_info *control, u8 hlid) +{ + struct cc33xx_tx_hw_descr *desc; + int ac, rate_idx; + u16 tx_attr = 0; + __le16 frame_control; + struct ieee80211_hdr *hdr; + u8 *frame_start; + bool is_dummy; + + desc = (struct cc33xx_tx_hw_descr *)skb->data; + + frame_start = (u8 *)(desc + 1); + hdr = (struct ieee80211_hdr *)(frame_start + extra); + frame_control = hdr->frame_control; + + /* relocate space for security header */ + if (extra) { + int hdrlen = ieee80211_hdrlen(frame_control); + + memmove(frame_start, hdr, hdrlen); + skb_set_network_header(skb, skb_network_offset(skb) + extra); + } + + is_dummy = cc33xx_is_dummy_packet(cc, skb); + if (is_dummy || !wlvif || wlvif->bss_type != BSS_TYPE_AP_BSS) + desc->life_time = cpu_to_le16(TX_HW_MGMT_PKT_LIFETIME_TU); + else + desc->life_time = cpu_to_le16(TX_HW_AP_MODE_PKT_LIFETIME_TU); + + /* queue */ + ac = cc33xx_tx_get_queue(skb_get_queue_mapping(skb)); + desc->tid = skb->priority; + + if (is_dummy) { + /* FW expects the dummy packet to have an invalid session id - + * any session id that is different than the one set in the join + */ + tx_attr = (SESSION_COUNTER_INVALID << + TX_HW_ATTR_OFST_SESSION_COUNTER) & + TX_HW_ATTR_SESSION_COUNTER; + + tx_attr |= TX_HW_ATTR_TX_DUMMY_REQ; + } else if (wlvif) { + u8 session_id = cc->session_ids[hlid]; + + if (cc->quirks & 
CC33XX_QUIRK_AP_ZERO_SESSION_ID && + wlvif->bss_type == BSS_TYPE_AP_BSS) + session_id = 0; + + /* configure the tx attributes */ + tx_attr = session_id << TX_HW_ATTR_OFST_SESSION_COUNTER; + } + + desc->hlid = hlid; + if (is_dummy || !wlvif) { + rate_idx = 0; + } else if (wlvif->bss_type != BSS_TYPE_AP_BSS) { + /* if the packets are data packets + * send them with AP rate policies (EAPOLs are an exception), + * otherwise use default basic rates + */ + if (skb->protocol == cpu_to_be16(ETH_P_PAE)) + rate_idx = wlvif->sta.basic_rate_idx; + else if (control->flags & IEEE80211_TX_CTL_NO_CCK_RATE) + rate_idx = wlvif->sta.p2p_rate_idx; + else if (ieee80211_is_data(frame_control)) + rate_idx = wlvif->sta.ap_rate_idx; + else + rate_idx = wlvif->sta.basic_rate_idx; + } else { + if (hlid == wlvif->ap.global_hlid) + rate_idx = wlvif->ap.mgmt_rate_idx; + else if (hlid == wlvif->ap.bcast_hlid || + skb->protocol == cpu_to_be16(ETH_P_PAE) || + !ieee80211_is_data(frame_control)) + /* send non-data, bcast and EAPOLs using the + * min basic rate + */ + rate_idx = wlvif->ap.bcast_rate_idx; + else + rate_idx = wlvif->ap.ucast_rate_idx[ac]; + } + + tx_attr |= rate_idx << TX_HW_ATTR_OFST_RATE_POLICY; + + /* for WEP shared auth - no fw encryption is needed */ + if (ieee80211_is_auth(frame_control) && + ieee80211_has_protected(frame_control)) + tx_attr |= TX_HW_ATTR_HOST_ENCRYPT; + + /* send EAPOL frames as voice */ + if (control->control.flags & IEEE80211_TX_CTRL_PORT_CTRL_PROTO) + tx_attr |= TX_HW_ATTR_EAPOL_FRAME; + + desc->tx_attr = cpu_to_le16(tx_attr); + + cc33xx_set_tx_desc_data_len(cc, desc, skb); +} + +/* caller must hold cc->mutex */ +static int cc33xx_prepare_tx_frame(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct sk_buff *skb, u32 buf_offset, u8 hlid) +{ + struct ieee80211_tx_info *info; + u32 extra = 0; + int ret = 0; + u32 total_len; + bool is_dummy; + bool is_gem = false; + struct NAB_tx_header nab_cmd; + + if (!skb) { + cc33xx_error("discarding null skb"); + return 
-EINVAL; + } + + if (hlid == CC33XX_INVALID_LINK_ID) { + cc33xx_error("invalid hlid. dropping skb 0x%p", skb); + return -EINVAL; + } + + info = IEEE80211_SKB_CB(skb); + + is_dummy = cc33xx_is_dummy_packet(cc, skb); + + if ((cc->quirks & CC33XX_QUIRK_TKIP_HEADER_SPACE) && + info->control.hw_key && + info->control.hw_key->cipher == WLAN_CIPHER_SUITE_TKIP) + extra = CC33XX_EXTRA_SPACE_TKIP; + + if (info->control.hw_key) { + bool is_wep; + u8 idx = info->control.hw_key->hw_key_idx; + u32 cipher = info->control.hw_key->cipher; + + is_wep = (cipher == WLAN_CIPHER_SUITE_WEP40) || + (cipher == WLAN_CIPHER_SUITE_WEP104); + + if (WARN_ON(is_wep && wlvif && wlvif->default_key != idx)) { + ret = cc33xx_set_default_wep_key(cc, wlvif, idx); + if (ret < 0) + return ret; + wlvif->default_key = idx; + } + + is_gem = (cipher == CC33XX_CIPHER_SUITE_GEM); + } + + /* Add 4 bytes gap, may be filled later on by the PMAC. */ + extra += IEEE80211_HT_CTL_LEN; + ret = cc33xx_tx_allocate(cc, wlvif, skb, extra, buf_offset, hlid, + is_gem, &nab_cmd); + cc33xx_debug(DEBUG_TX, "cc33xx_tx_allocate %d", ret); + + if (ret < 0) + return ret; + + cc33xx_tx_fill_hdr(cc, wlvif, skb, extra, info, hlid); + + if (!is_dummy && wlvif && wlvif->bss_type == BSS_TYPE_AP_BSS) { + cc33xx_tx_ap_update_inconnection_sta(cc, wlvif, skb); + cc33xx_tx_regulate_link(cc, wlvif, hlid); + } + + /* The length of each packet is stored in terms of + * words. Thus, we must pad the skb data to make sure its + * length is aligned. The number of padding bytes is computed + * and set in cc33xx_tx_fill_hdr. + * In special cases, we want to align to a specific block size + * (eg. for wl128x with SDIO we align to 256). 
+ */ + total_len = cc33xx_calc_packet_alignment(cc, skb->len); + + memcpy(cc->aggr_buf + buf_offset, + &nab_cmd, sizeof(struct NAB_tx_header)); + memcpy(cc->aggr_buf + buf_offset + sizeof(struct NAB_tx_header), + skb->data, skb->len); + memset(cc->aggr_buf + buf_offset + sizeof(struct NAB_tx_header) + + skb->len, 0, total_len - skb->len); + + /* Revert side effects in the dummy packet skb, so it can be reused */ + if (is_dummy) + skb_pull(skb, sizeof(struct cc33xx_tx_hw_descr)); + + return (total_len + sizeof(struct NAB_tx_header)); +} + +u32 cc33xx_tx_enabled_rates_get(struct cc33xx *cc, u32 rate_set, + enum nl80211_band rate_band) +{ + struct ieee80211_supported_band *band; + u32 enabled_rates = 0; + int bit; + + band = cc->hw->wiphy->bands[rate_band]; + for (bit = 0; bit < band->n_bitrates; bit++) { + if (rate_set & 0x1) + enabled_rates |= band->bitrates[bit].hw_value; + rate_set >>= 1; + } + + /* MCS rates indication are on bits 16 - 31 */ + rate_set >>= HW_HT_RATES_OFFSET - band->n_bitrates; + + for (bit = 0; bit < 16; bit++) { + if (rate_set & 0x1) + enabled_rates |= (CONF_HW_BIT_RATE_MCS_0 << bit); + rate_set >>= 1; + } + + return enabled_rates; +} + +static inline int cc33xx_tx_get_mac80211_queue(struct cc33xx_vif *wlvif, + int queue) +{ + int mac_queue = wlvif->hw_queue_base; + + switch (queue) { + case CONF_TX_AC_VO: + return mac_queue + 0; + case CONF_TX_AC_VI: + return mac_queue + 1; + case CONF_TX_AC_BE: + return mac_queue + 2; + case CONF_TX_AC_BK: + return mac_queue + 3; + default: + return mac_queue + 2; + } +} + +static void cc33xx_wake_queue(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 queue, enum cc33xx_queue_stop_reason reason) +{ + unsigned long flags; + int hwq = cc33xx_tx_get_mac80211_queue(wlvif, queue); + + spin_lock_irqsave(&cc->cc_lock, flags); + + /* queue should not be clear for this reason */ + WARN_ON_ONCE(!test_and_clear_bit(reason, &cc->queue_stop_reasons[hwq])); + + if (cc->queue_stop_reasons[hwq]) + goto out; + + 
ieee80211_wake_queue(cc->hw, hwq); + +out: + spin_unlock_irqrestore(&cc->cc_lock, flags); +} + +void cc33xx_handle_tx_low_watermark(struct cc33xx *cc) +{ + int i; + struct cc33xx_vif *wlvif; + + cc33xx_for_each_wlvif(cc, wlvif) { + for (i = 0; i < NUM_TX_QUEUES; i++) { + if (cc33xx_is_queue_stopped_by_reason(cc, wlvif, i, + CC33XX_QUEUE_STOP_REASON_WATERMARK) && + wlvif->tx_queue_count[i] <= + CC33XX_TX_QUEUE_LOW_WATERMARK) + /* firmware buffer has space, restart queues */ + cc33xx_wake_queue(cc, wlvif, i, + CC33XX_QUEUE_STOP_REASON_WATERMARK); + } + } +} + +static int cc33xx_select_ac(struct cc33xx *cc) +{ + int i, q = -1, ac; + u32 min_pkts = 0xffffffff; + + /* Find a non-empty ac where: + * 1. There are packets to transmit + * 2. The FW has the least allocated blocks + * + * We prioritize the ACs according to VO>VI>BE>BK + */ + for (i = 0; i < NUM_TX_QUEUES; i++) { + ac = cc33xx_tx_get_queue(i); + if (cc->tx_queue_count[ac] && + cc->tx_allocated_pkts[ac] < min_pkts) { + q = ac; + min_pkts = cc->tx_allocated_pkts[q]; + } + } + + return q; +} + +static struct sk_buff *cc33xx_lnk_dequeue(struct cc33xx *cc, + struct cc33xx_link *lnk, u8 q) +{ + struct sk_buff *skb; + unsigned long flags; + + skb = skb_dequeue(&lnk->tx_queue[q]); + if (skb) { + spin_lock_irqsave(&cc->cc_lock, flags); + WARN_ON_ONCE(cc->tx_queue_count[q] <= 0); + cc->tx_queue_count[q]--; + if (lnk->wlvif) { + WARN_ON_ONCE(lnk->wlvif->tx_queue_count[q] <= 0); + lnk->wlvif->tx_queue_count[q]--; + } + spin_unlock_irqrestore(&cc->cc_lock, flags); + } + + return skb; +} + +static bool cc33xx_lnk_high_prio(struct cc33xx *cc, u8 hlid, + struct cc33xx_link *lnk) +{ + u8 thold; + struct core_fw_status *core_fw_status = &cc->core_status->fw_info; + unsigned long suspend_bitmap, fast_bitmap, ps_bitmap; + + suspend_bitmap = le32_to_cpu(core_fw_status->link_suspend_bitmap); + fast_bitmap = le32_to_cpu(core_fw_status->link_fast_bitmap); + ps_bitmap = le32_to_cpu(core_fw_status->link_ps_bitmap); + + /* suspended 
links are never high priority */ + if (test_bit(hlid, &suspend_bitmap)) + return false; + + /* the priority thresholds are taken from FW */ + if (test_bit(hlid, &fast_bitmap) && !test_bit(hlid, &ps_bitmap)) + thold = core_fw_status->tx_fast_link_prio_threshold; + else + thold = core_fw_status->tx_slow_link_prio_threshold; + + return lnk->allocated_pkts < thold; +} + +static bool cc33xx_lnk_low_prio(struct cc33xx *cc, u8 hlid, + struct cc33xx_link *lnk) +{ + u8 thold; + struct core_fw_status *core_fw_status = &cc->core_status->fw_info; + unsigned long suspend_bitmap, fast_bitmap, ps_bitmap; + + suspend_bitmap = le32_to_cpu(core_fw_status->link_suspend_bitmap); + fast_bitmap = le32_to_cpu(core_fw_status->link_fast_bitmap); + ps_bitmap = le32_to_cpu(core_fw_status->link_ps_bitmap); + + if (test_bit(hlid, &suspend_bitmap)) + thold = core_fw_status->tx_suspend_threshold; + else if (test_bit(hlid, &fast_bitmap) && !test_bit(hlid, &ps_bitmap)) + thold = core_fw_status->tx_fast_stop_threshold; + else + thold = core_fw_status->tx_slow_stop_threshold; + + return lnk->allocated_pkts < thold; +} + +static struct sk_buff *cc33xx_lnk_dequeue_high_prio(struct cc33xx *cc, + u8 hlid, u8 ac, + u8 *low_prio_hlid) +{ + struct cc33xx_link *lnk = &cc->links[hlid]; + + if (!cc33xx_lnk_high_prio(cc, hlid, lnk)) { + if (*low_prio_hlid == CC33XX_INVALID_LINK_ID && + !skb_queue_empty(&lnk->tx_queue[ac]) && + cc33xx_lnk_low_prio(cc, hlid, lnk)) + /* we found the first non-empty low priority queue */ + *low_prio_hlid = hlid; + + return NULL; + } + + return cc33xx_lnk_dequeue(cc, lnk, ac); +} + +static struct sk_buff *cc33xx_vif_dequeue_high_prio(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + u8 ac, u8 *hlid, + u8 *low_prio_hlid) +{ + struct sk_buff *skb = NULL; + int i, h, start_hlid; + + /* start from the link after the last one */ + start_hlid = (wlvif->last_tx_hlid + 1) % CC33XX_MAX_LINKS; + + /* dequeue according to AC, round robin on each link */ + for (i = 0; i < CC33XX_MAX_LINKS; i++) 
{ + h = (start_hlid + i) % CC33XX_MAX_LINKS; + + /* only consider connected stations */ + if (!test_bit(h, wlvif->links_map)) + continue; + + skb = cc33xx_lnk_dequeue_high_prio(cc, h, ac, low_prio_hlid); + if (!skb) + continue; + + wlvif->last_tx_hlid = h; + break; + } + + if (!skb) + wlvif->last_tx_hlid = 0; + + *hlid = wlvif->last_tx_hlid; + return skb; +} + +static struct sk_buff *cc33xx_skb_dequeue(struct cc33xx *cc, u8 *hlid) +{ + unsigned long flags; + struct cc33xx_vif *wlvif = cc->last_wlvif; + struct sk_buff *skb = NULL; + int ac; + u8 low_prio_hlid = CC33XX_INVALID_LINK_ID; + + ac = cc33xx_select_ac(cc); + if (ac < 0) + goto out; + + /* continue from last wlvif (round robin) */ + if (wlvif) { + cc33xx_for_each_wlvif_continue(cc, wlvif) { + if (!wlvif->tx_queue_count[ac]) + continue; + + skb = cc33xx_vif_dequeue_high_prio(cc, wlvif, ac, hlid, + &low_prio_hlid); + if (!skb) + continue; + + cc->last_wlvif = wlvif; + break; + } + } + + /* dequeue from the system HLID before the restarting wlvif list */ + if (!skb) { + skb = cc33xx_lnk_dequeue_high_prio(cc, CC33XX_SYSTEM_HLID, + ac, &low_prio_hlid); + if (skb) { + *hlid = CC33XX_SYSTEM_HLID; + cc->last_wlvif = NULL; + } + } + + /* Do a new pass over the wlvif list. But no need to continue + * after last_wlvif. The previous pass should have found it. + */ + if (!skb) { + cc33xx_for_each_wlvif(cc, wlvif) { + if (!wlvif->tx_queue_count[ac]) + goto next; + + skb = cc33xx_vif_dequeue_high_prio(cc, wlvif, ac, hlid, + &low_prio_hlid); + if (skb) { + cc->last_wlvif = wlvif; + break; + } + +next: + if (wlvif == cc->last_wlvif) + break; + } + } + + /* no high priority skbs found - but maybe a low priority one? 
*/ + if (!skb && low_prio_hlid != CC33XX_INVALID_LINK_ID) { + struct cc33xx_link *lnk = &cc->links[low_prio_hlid]; + + skb = cc33xx_lnk_dequeue(cc, lnk, ac); + + WARN_ON(!skb); /* we checked this before */ + *hlid = low_prio_hlid; + + /* ensure proper round robin in the vif/link levels */ + cc->last_wlvif = lnk->wlvif; + if (lnk->wlvif) + lnk->wlvif->last_tx_hlid = low_prio_hlid; + } + +out: + if (!skb && + test_and_clear_bit(CC33XX_FLAG_DUMMY_PACKET_PENDING, &cc->flags)) { + int q; + + skb = cc->dummy_packet; + *hlid = CC33XX_SYSTEM_HLID; + q = cc33xx_tx_get_queue(skb_get_queue_mapping(skb)); + spin_lock_irqsave(&cc->cc_lock, flags); + WARN_ON_ONCE(cc->tx_queue_count[q] <= 0); + cc->tx_queue_count[q]--; + spin_unlock_irqrestore(&cc->cc_lock, flags); + } + + return skb; +} + +static void cc33xx_skb_queue_head(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct sk_buff *skb, u8 hlid) +{ + unsigned long flags; + int q = cc33xx_tx_get_queue(skb_get_queue_mapping(skb)); + + if (cc33xx_is_dummy_packet(cc, skb)) { + set_bit(CC33XX_FLAG_DUMMY_PACKET_PENDING, &cc->flags); + } else { + skb_queue_head(&cc->links[hlid].tx_queue[q], skb); + + /* make sure we dequeue the same packet next time */ + wlvif->last_tx_hlid = (hlid + CC33XX_MAX_LINKS - 1) % + CC33XX_MAX_LINKS; + } + + spin_lock_irqsave(&cc->cc_lock, flags); + cc->tx_queue_count[q]++; + if (wlvif) + wlvif->tx_queue_count[q]++; + spin_unlock_irqrestore(&cc->cc_lock, flags); +} + +static inline bool cc33xx_tx_is_data_present(struct sk_buff *skb) +{ + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)(skb->data); + + return ieee80211_is_data_present(hdr->frame_control); +} + +/* Returns failure values only in case of failed bus ops within this function. + * cc33xx_prepare_tx_frame retvals won't be returned in order to avoid + * triggering recovery by higher layers when not necessary. + * In case a FW command fails within cc33xx_prepare_tx_frame, a recovery + * will be queued in cc33xx_cmd_send.
-EAGAIN/-EBUSY from prepare_tx_frame + * can occur and are legitimate so don't propagate. -EINVAL will emit a WARNING + * within prepare_tx_frame code but there's nothing we should do about those + * as well. + */ +int cc33xx_tx_work_locked(struct cc33xx *cc) +{ + struct cc33xx_vif *wlvif; + struct sk_buff *skb; + struct cc33xx_tx_hw_descr *desc; + u32 buf_offset = 0, last_len = 0; + u32 transfer_len = 0; + u32 padding_size = 0; + bool sent_packets = false; + unsigned long active_hlids[BITS_TO_LONGS(CC33XX_MAX_LINKS)] = {0}; + int ret = 0; + int bus_ret = 0; + u8 hlid; + + memset(cc->aggr_buf, 0, 0x300); + if (unlikely(cc->state != CC33XX_STATE_ON)) + return 0; + + while ((skb = cc33xx_skb_dequeue(cc, &hlid))) { + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + bool has_data = false; + + cc33xx_debug(DEBUG_TX, "skb dequeue skb: 0x%p data %#lx head %#lx tail %#lx end %#lx", + skb, + (unsigned long)skb->data, (unsigned long)skb->head, + (unsigned long)skb->tail, (unsigned long)skb->end); + wlvif = NULL; + if (!cc33xx_is_dummy_packet(cc, skb)) + wlvif = cc33xx_vif_to_data(info->control.vif); + else + hlid = CC33XX_SYSTEM_HLID; + + has_data = wlvif && cc33xx_tx_is_data_present(skb); + ret = cc33xx_prepare_tx_frame(cc, wlvif, skb, buf_offset, + hlid); + + if (ret == -EAGAIN) { + /* Aggregation buffer is full. + * Flush buffer and try again. + */ + cc33xx_skb_queue_head(cc, wlvif, skb, hlid); + + transfer_len = __ALIGN_MASK(buf_offset, + CC33XX_BUS_BLOCK_SIZE * 2 - 1); + + padding_size = transfer_len - buf_offset; + memset(cc->aggr_buf + buf_offset, 0x33, padding_size); + + cc33xx_debug(DEBUG_TX, "sdio transaction length: %d ", + transfer_len); + + bus_ret = cc33xx_write(cc, NAB_DATA_ADDR, cc->aggr_buf, + transfer_len, true); + if (bus_ret < 0) + goto out; + + sent_packets = true; + buf_offset = 0; + continue; + } else if (ret == -EBUSY) { + /* Firmware buffer is full. + * Queue back last skb, and stop aggregating. 
+ */ + cc33xx_skb_queue_head(cc, wlvif, skb, hlid); + /* No work left, avoid scheduling redundant tx work */ + set_bit(CC33XX_FLAG_FW_TX_BUSY, &cc->flags); + goto out_ack; + } else if (ret < 0) { + if (cc33xx_is_dummy_packet(cc, skb)) + /* fw still expects dummy packet, + * so re-enqueue it + */ + cc33xx_skb_queue_head(cc, wlvif, skb, hlid); + else + ieee80211_free_txskb(cc->hw, skb); + goto out_ack; + } + + last_len = ret; + buf_offset += last_len; + + if (has_data) { + desc = (struct cc33xx_tx_hw_descr *)skb->data; + __set_bit(desc->hlid, active_hlids); + } + } + +out_ack: + if (buf_offset) { + transfer_len = __ALIGN_MASK(buf_offset, + CC33XX_BUS_BLOCK_SIZE * 2 - 1); + + padding_size = transfer_len - buf_offset; + memset(cc->aggr_buf + buf_offset, 0x33, padding_size); + + cc33xx_debug(DEBUG_TX, "sdio transaction (926) length: %d ", + transfer_len); + + bus_ret = cc33xx_write(cc, NAB_DATA_ADDR, cc->aggr_buf, + transfer_len, true); + if (bus_ret < 0) + goto out; + + sent_packets = true; + } + + if (sent_packets) + cc33xx_handle_tx_low_watermark(cc); + +out: + return bus_ret; +} + +void cc33xx_tx_work(struct work_struct *work) +{ + struct cc33xx *cc = container_of(work, struct cc33xx, tx_work); + int ret; + + mutex_lock(&cc->mutex); + + ret = cc33xx_tx_work_locked(cc); + if (ret < 0) { + cc33xx_queue_recovery_work(cc); + goto out; + } + +out: + mutex_unlock(&cc->mutex); +} + +void cc33xx_tx_reset_link_queues(struct cc33xx *cc, u8 hlid) +{ + struct sk_buff *skb; + int i; + unsigned long flags; + struct ieee80211_tx_info *info; + int total[NUM_TX_QUEUES]; + struct cc33xx_link *lnk = &cc->links[hlid]; + + for (i = 0; i < NUM_TX_QUEUES; i++) { + total[i] = 0; + while ((skb = skb_dequeue(&lnk->tx_queue[i]))) { + cc33xx_debug(DEBUG_TX, "link freeing skb 0x%p", skb); + + if (!cc33xx_is_dummy_packet(cc, skb)) { + info = IEEE80211_SKB_CB(skb); + info->status.rates[0].idx = -1; + info->status.rates[0].count = 0; + ieee80211_tx_status_ni(cc->hw, skb); + } + + total[i]++; + } + 
} + + spin_lock_irqsave(&cc->cc_lock, flags); + for (i = 0; i < NUM_TX_QUEUES; i++) { + cc->tx_queue_count[i] -= total[i]; + if (lnk->wlvif) + lnk->wlvif->tx_queue_count[i] -= total[i]; + } + spin_unlock_irqrestore(&cc->cc_lock, flags); + + cc33xx_handle_tx_low_watermark(cc); +} + +/* caller must hold cc->mutex and TX must be stopped */ +void cc33xx_tx_reset_wlvif(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + int i; + + /* TX failure */ + for_each_set_bit(i, wlvif->links_map, CC33XX_MAX_LINKS) { + if (wlvif->bss_type == BSS_TYPE_AP_BSS && + i != wlvif->ap.bcast_hlid && + i != wlvif->ap.global_hlid) { + /* this calls cc33xx_clear_link */ + cc33xx_free_sta(cc, wlvif, i); + } else { + u8 hlid = i; + + cc33xx_clear_link(cc, wlvif, &hlid); + } + } + + wlvif->last_tx_hlid = 0; + + for (i = 0; i < NUM_TX_QUEUES; i++) + wlvif->tx_queue_count[i] = 0; +} + +int cc33xx_tx_total_queue_count(struct cc33xx *cc) +{ + int i, count = 0; + + for (i = 0; i < NUM_TX_QUEUES; i++) + count += cc->tx_queue_count[i]; + + return count; +} + +/* caller must hold cc->mutex and TX must be stopped */ +void cc33xx_tx_reset(struct cc33xx *cc) +{ + int i; + struct sk_buff *skb; + struct ieee80211_tx_info *info; + + /* only reset the queues if something bad happened */ + if (cc33xx_tx_total_queue_count(cc) != 0) { + for (i = 0; i < CC33XX_MAX_LINKS; i++) + cc33xx_tx_reset_link_queues(cc, i); + + for (i = 0; i < NUM_TX_QUEUES; i++) + cc->tx_queue_count[i] = 0; + } + + /* Make sure the driver is at a consistent state, in case this + * function is called from a context other than interface removal. + * This call will always wake the TX queues. 
+ */ + cc33xx_handle_tx_low_watermark(cc); + + for (i = 0; i < CC33XX_NUM_TX_DESCRIPTORS; i++) { + if (!cc->tx_frames[i]) + continue; + + skb = cc->tx_frames[i]; + cc33xx_free_tx_id(cc, i); + cc33xx_debug(DEBUG_TX, "freeing skb 0x%p", skb); + + if (!cc33xx_is_dummy_packet(cc, skb)) { + /* Remove private headers before passing the skb to + * mac80211 + */ + info = IEEE80211_SKB_CB(skb); + skb_pull(skb, sizeof(struct cc33xx_tx_hw_descr)); + if ((cc->quirks & CC33XX_QUIRK_TKIP_HEADER_SPACE) && + info->control.hw_key && + info->control.hw_key->cipher == + WLAN_CIPHER_SUITE_TKIP) { + int hdrlen = ieee80211_get_hdrlen_from_skb(skb); + + memmove(skb->data + CC33XX_EXTRA_SPACE_TKIP, + skb->data, hdrlen); + skb_pull(skb, CC33XX_EXTRA_SPACE_TKIP); + } + + info->status.rates[0].idx = -1; + info->status.rates[0].count = 0; + + ieee80211_tx_status_ni(cc->hw, skb); + } + } +} + +#define CC33XX_TX_FLUSH_TIMEOUT 500000 + +/* caller must *NOT* hold cc->mutex */ +void cc33xx_tx_flush(struct cc33xx *cc) +{ + unsigned long timeout, start_time; + int i; + + start_time = jiffies; + timeout = start_time + usecs_to_jiffies(CC33XX_TX_FLUSH_TIMEOUT); + + /* only one flush should be in progress, for consistent queue state */ + mutex_lock(&cc->flush_mutex); + + mutex_lock(&cc->mutex); + if (cc->tx_frames_cnt == 0 && cc33xx_tx_total_queue_count(cc) == 0) { + mutex_unlock(&cc->mutex); + goto out; + } + + cc33xx_stop_queues(cc, CC33XX_QUEUE_STOP_REASON_FLUSH); + + while (!time_after(jiffies, timeout)) { + cc33xx_debug(DEBUG_MAC80211, "flushing tx buffer: %d %d", + cc->tx_frames_cnt, + cc33xx_tx_total_queue_count(cc)); + + /* force Tx and give the driver some time to flush data */ + mutex_unlock(&cc->mutex); + if (cc33xx_tx_total_queue_count(cc)) + cc33xx_tx_work(&cc->tx_work); + msleep(20); + mutex_lock(&cc->mutex); + + if (cc->tx_frames_cnt == 0 && cc33xx_tx_total_queue_count(cc) == 0) { + cc33xx_debug(DEBUG_MAC80211, "tx flush took %d ms", + jiffies_to_msecs(jiffies - start_time)); + goto 
out_wake; + } + } + + cc33xx_warning("Unable to flush all TX buffers, timed out (timeout %d ms)", + CC33XX_TX_FLUSH_TIMEOUT / 1000); + + /* forcibly flush all Tx buffers on our queues */ + for (i = 0; i < CC33XX_MAX_LINKS; i++) + cc33xx_tx_reset_link_queues(cc, i); + +out_wake: + cc33xx_wake_queues(cc, CC33XX_QUEUE_STOP_REASON_FLUSH); + mutex_unlock(&cc->mutex); +out: + mutex_unlock(&cc->flush_mutex); +} + +u32 cc33xx_tx_min_rate_get(struct cc33xx *cc, u32 rate_set) +{ + if (WARN_ON(!rate_set)) + return 0; + + return BIT(__ffs(rate_set)); +} + +void cc33xx_stop_queue_locked(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 queue, enum cc33xx_queue_stop_reason reason) +{ + int hwq = cc33xx_tx_get_mac80211_queue(wlvif, queue); + bool stopped = !!cc->queue_stop_reasons[hwq]; + + /* queue should not be stopped for this reason */ + WARN_ON_ONCE(test_and_set_bit(reason, &cc->queue_stop_reasons[hwq])); + + if (stopped) + return; + + ieee80211_stop_queue(cc->hw, hwq); +} + +void cc33xx_stop_queues(struct cc33xx *cc, + enum cc33xx_queue_stop_reason reason) +{ + int i; + unsigned long flags; + + spin_lock_irqsave(&cc->cc_lock, flags); + + /* mark all possible queues as stopped */ + for (i = 0; i < CC33XX_NUM_MAC_ADDRESSES * NUM_TX_QUEUES; i++) { + WARN_ON_ONCE(test_and_set_bit(reason, + &cc->queue_stop_reasons[i])); + } + + /* use the global version to make sure all vifs in mac80211 we don't + * know about are stopped. + */ + ieee80211_stop_queues(cc->hw); + + spin_unlock_irqrestore(&cc->cc_lock, flags); +} + +void cc33xx_wake_queues(struct cc33xx *cc, + enum cc33xx_queue_stop_reason reason) +{ + int i; + unsigned long flags; + + spin_lock_irqsave(&cc->cc_lock, flags); + + /* mark all possible queues as awake */ + for (i = 0; i < CC33XX_NUM_MAC_ADDRESSES * NUM_TX_QUEUES; i++) { + WARN_ON_ONCE(!test_and_clear_bit(reason, + &cc->queue_stop_reasons[i])); + } + + /* use the global version to make sure all vifs in mac80211 we don't + * know about are woken up.
+ */ + ieee80211_wake_queues(cc->hw); + + spin_unlock_irqrestore(&cc->cc_lock, flags); +} + +bool cc33xx_is_queue_stopped_by_reason(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u8 queue, + enum cc33xx_queue_stop_reason reason) +{ + unsigned long flags; + bool stopped; + + spin_lock_irqsave(&cc->cc_lock, flags); + stopped = cc33xx_is_queue_stopped_by_reason_locked(cc, wlvif, queue, + reason); + spin_unlock_irqrestore(&cc->cc_lock, flags); + + return stopped; +} + +bool cc33xx_is_queue_stopped_by_reason_locked(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u8 queue, + enum cc33xx_queue_stop_reason reason) +{ + int hwq = cc33xx_tx_get_mac80211_queue(wlvif, queue); + + assert_spin_locked(&cc->cc_lock); + return test_bit(reason, &cc->queue_stop_reasons[hwq]); +} + +bool cc33xx_is_queue_stopped_locked(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 queue) +{ + int hwq = cc33xx_tx_get_mac80211_queue(wlvif, queue); + + assert_spin_locked(&cc->cc_lock); + return !!cc->queue_stop_reasons[hwq]; +} + +static void cc33xx_tx_complete_packet(struct cc33xx *cc, u8 tx_stat_byte, + struct core_fw_status *core_fw_status) +{ + struct ieee80211_tx_info *info; + struct sk_buff *skb; + int id = tx_stat_byte & CC33XX_TX_STATUS_DESC_ID_MASK; + bool tx_success; + struct cc33xx_tx_hw_descr *tx_desc; + u16 desc_session_idx; + + /* check for id legality */ + if (unlikely(id >= CC33XX_NUM_TX_DESCRIPTORS || + !cc->tx_frames[id])) { + cc33xx_warning("illegal id in tx completion: %d", id); + + print_hex_dump(KERN_DEBUG, "fw_info local:", + DUMP_PREFIX_OFFSET, 16, 4, (u8 *)(core_fw_status), + sizeof(struct core_fw_status), false); + + cc33xx_queue_recovery_work(cc); + return; + } + + /* a zero bit indicates Tx success */ + tx_success = !(tx_stat_byte & BIT(CC33XX_TX_STATUS_STAT_BIT_IDX)); + + skb = cc->tx_frames[id]; + info = IEEE80211_SKB_CB(skb); + tx_desc = (struct cc33xx_tx_hw_descr *)skb->data; + + if (cc33xx_is_dummy_packet(cc, skb)) { + cc33xx_free_tx_id(cc, id); + return; + } + + /* 
update the TX status info */
+	if (tx_success && !(info->flags & IEEE80211_TX_CTL_NO_ACK))
+		info->flags |= IEEE80211_TX_STAT_ACK;
+
+	info->status.rates[0].count = 1; /* no data about retries */
+	info->status.ack_signal = -1;
+
+	if (!tx_success)
+		cc->stats.retry_count++;
+
+	/* remove private header from packet */
+	skb_pull(skb, sizeof(struct cc33xx_tx_hw_descr));
+
+	/* remove TKIP header space if present */
+	if ((cc->quirks & CC33XX_QUIRK_TKIP_HEADER_SPACE) &&
+	    info->control.hw_key &&
+	    info->control.hw_key->cipher == WLAN_CIPHER_SUITE_TKIP) {
+		int hdrlen = ieee80211_get_hdrlen_from_skb(skb);
+
+		memmove(skb->data + CC33XX_EXTRA_SPACE_TKIP, skb->data, hdrlen);
+		skb_pull(skb, CC33XX_EXTRA_SPACE_TKIP);
+	}
+
+	cc33xx_debug(DEBUG_TX,
+		     "tx status id %u skb %p success %d, tx_memblocks %d",
+		     id, skb, tx_success, tx_desc->cc33xx_mem.total_mem_blocks);
+
+	/* to update the memory management we need total_blocks, ac and hlid */
+	cc->tx_blocks_available += tx_desc->cc33xx_mem.total_mem_blocks;
+	cc->tx_allocated_blocks -= tx_desc->cc33xx_mem.total_mem_blocks;
+
+	/* per queue */
+	/* prevent wrap-around in freed-packets counter */
+	cc->tx_allocated_pkts[tx_desc->ac]--;
+
+	/* per link */
+	desc_session_idx = (le16_to_cpu(tx_desc->tx_attr) & TX_HW_ATTR_SESSION_COUNTER) >>
+			   TX_HW_ATTR_OFST_SESSION_COUNTER;
+
+	if (cc->session_ids[tx_desc->hlid] == desc_session_idx)
+		cc->links[tx_desc->hlid].allocated_pkts--;
+
+	cc33xx_free_tx_id(cc, id);
+
+	/* new mem blocks are available now */
+	clear_bit(CC33XX_FLAG_FW_TX_BUSY, &cc->flags);
+
+	/* return the packet to the stack */
+	skb_queue_tail(&cc->deferred_tx_queue, skb);
+	queue_work(cc->freezable_wq, &cc->netstack_work);
+}
+
+void cc33xx_tx_immediate_complete(struct cc33xx *cc)
+{
+	u8 tx_result_queue_index;
+	struct core_fw_status core_fw_status;
+	u8 i;
+
+	claim_core_status_lock(cc);
+	memcpy(&core_fw_status, &cc->core_status->fw_info,
+	       sizeof(struct core_fw_status));
+
tx_result_queue_index = cc->core_status->fw_info.tx_result_queue_index;
+	/* The lock guarantees that tx_result_queue_index is not shadowed
+	 * during an active transaction. Subsequent references to fw_info can
+	 * be made without locking as long as we do not pass this index.
+	 */
+	release_core_status_lock(cc);
+
+	cc33xx_debug(DEBUG_TX, "last released desc = %d, current idx = %d",
+		     cc->last_fw_rls_idx, tx_result_queue_index);
+
+	/* nothing to do here */
+	if (cc->last_fw_rls_idx == tx_result_queue_index)
+		return;
+
+	/* freed Tx descriptors */
+
+	if (tx_result_queue_index >= TX_RESULT_QUEUE_SIZE) {
+		cc33xx_error("invalid desc release index %d",
+			     tx_result_queue_index);
+		WARN_ON(1);
+		return;
+	}
+
+	cc33xx_debug(DEBUG_TX, "TX result queue: last fw idx %d, current result index %d",
+		     cc->last_fw_rls_idx, tx_result_queue_index);
+
+	for (i = cc->last_fw_rls_idx; i != tx_result_queue_index;
+	     i = (i + 1) % TX_RESULT_QUEUE_SIZE) {
+		cc33xx_tx_complete_packet(cc, core_fw_status.tx_result_queue[i],
+					  &core_fw_status);
+	}
+
+	cc->last_fw_rls_idx = tx_result_queue_index;
+}

diff --git a/drivers/net/wireless/ti/cc33xx/tx.h b/drivers/net/wireless/ti/cc33xx/tx.h
new file mode 100644
index 000000000000..9062d50c7c8e
--- /dev/null
+++ b/drivers/net/wireless/ti/cc33xx/tx.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#ifndef __TX_H__
+#define __TX_H__
+
+#define CC33XX_TX_HW_BLOCK_SPARE 1
+/* for special cases - namely, TKIP and GEM */
+#define CC33XX_TX_HW_EXTRA_BLOCK_SPARE 2
+#define CC33XX_TX_HW_BLOCK_SIZE 256
+
+#define CC33XX_TX_STATUS_DESC_ID_MASK 0x7F
+#define CC33XX_TX_STATUS_STAT_BIT_IDX 7
+
+/* Indicates this TX HW frame is not padded to SDIO block size */
+#define CC33XX_TX_CTRL_NOT_PADDED BIT(7)
+
+#define TX_HW_MGMT_PKT_LIFETIME_TU 2000
+#define TX_HW_AP_MODE_PKT_LIFETIME_TU 8000
+
+#define TX_HW_ATTR_SESSION_COUNTER (BIT(2) | BIT(3) | BIT(4))
+#define
TX_HW_ATTR_TX_DUMMY_REQ BIT(13) +#define TX_HW_ATTR_HOST_ENCRYPT BIT(14) +#define TX_HW_ATTR_EAPOL_FRAME BIT(15) + +#define TX_HW_ATTR_OFST_SESSION_COUNTER 2 +#define TX_HW_ATTR_OFST_RATE_POLICY 5 + +#define CC33XX_TX_ALIGN_TO 4 +#define CC33XX_EXTRA_SPACE_TKIP 4 +#define CC33XX_EXTRA_SPACE_AES 8 +#define CC33XX_EXTRA_SPACE_MAX 8 + +#define CC33XX_TX_EXTRA_HEADROOM \ + (sizeof(struct cc33xx_tx_hw_descr) + IEEE80211_HT_CTL_LEN) + +/* Used for management frames and dummy packets */ +#define CC33XX_TID_MGMT 7 + +/* stop a ROC for pending authentication reply after this time (ms) */ +#define CC33XX_PEND_AUTH_ROC_TIMEOUT 1000 +#define CC33xx_PEND_ROC_COMPLETE_TIMEOUT 2000 + +struct cc33xx_tx_mem { + /* Total number of memory blocks allocated by the host for + * this packet. + */ + u8 total_mem_blocks; + + /* control bits + */ + u8 ctrl; +} __packed; + +/* On cc33xx based devices, when TX packets are aggregated, each packet + * size must be aligned to the SDIO block size. The maximum block size + * is bounded by the type of the padded bytes field that is sent to the + * FW. The HW maximum block size is 256 bytes. We use 128 to utilize the + * SDIO built-in busy signal when the FIFO is full. + */ +#define CC33XX_BUS_BLOCK_SIZE 128 + +struct cc33xx_tx_hw_descr { + /* Length of packet in words, including descriptor+header+data */ + __le16 length; + + struct cc33xx_tx_mem cc33xx_mem; + + /* Packet identifier used also in the Tx-Result. */ + u8 id; + /* The packet TID value (as User-Priority) */ + u8 tid; + /* host link ID (HLID) */ + u8 hlid; + u8 ac; + /* Max delay in TUs until transmission. The last device time the + * packet can be transmitted is: start_time + (1024 * life_time) + */ + __le16 life_time; + /* Bitwise fields - see TX_ATTR... definitions above. 
*/
+	__le16 tx_attr;
+} __packed;
+
+struct cc33xx_tx_hw_res_descr {
+	/* Packet Identifier - same value used in the Tx descriptor. */
+	u8 id;
+	/* The status of the transmission, indicating success or one of
+	 * several possible reasons for failure.
+	 */
+	u8 status;
+	/* Total air access duration including all retries and overheads. */
+	__le16 medium_usage;
+	/* The time passed from host xfer to Tx-complete. */
+	__le32 fw_handling_time;
+	/* Total media delay
+	 * (from 1st EDCA AIFS counter until TX Complete).
+	 */
+	__le32 medium_delay;
+	/* LS-byte of last TKIP seq-num (saved per AC for recovery). */
+	u8 tx_security_sequence_number_lsb;
+	/* Retry count - number of transmissions without successful ACK. */
+	u8 ack_failures;
+	/* The rate that succeeded getting ACK
+	 * (Valid only if status=SUCCESS).
+	 */
+	u8 rate_class_index;
+	/* for 4-byte alignment. */
+	u8 spare;
+} __packed;
+
+enum cc33xx_queue_stop_reason {
+	CC33XX_QUEUE_STOP_REASON_WATERMARK,
+	CC33XX_QUEUE_STOP_REASON_FW_RESTART,
+	CC33XX_QUEUE_STOP_REASON_FLUSH,
+	CC33XX_QUEUE_STOP_REASON_SPARE_BLK, /* 18xx specific */
+};
+
+int cc33xx_tx_get_queue(int queue);
+int cc33xx_tx_total_queue_count(struct cc33xx *cc);
+void cc33xx_tx_immediate_complete(struct cc33xx *cc);
+void cc33xx_tx_work(struct work_struct *work);
+int cc33xx_tx_work_locked(struct cc33xx *cc);
+void cc33xx_tx_reset_wlvif(struct cc33xx *cc, struct cc33xx_vif *wlvif);
+void cc33xx_tx_reset(struct cc33xx *cc);
+void cc33xx_tx_flush(struct cc33xx *cc);
+u8 cc33xx_rate_to_idx(struct cc33xx *cc, u8 rate, enum nl80211_band band);
+u32 cc33xx_tx_enabled_rates_get(struct cc33xx *cc, u32 rate_set,
+				enum nl80211_band rate_band);
+u32 cc33xx_tx_min_rate_get(struct cc33xx *cc, u32 rate_set);
+u8 cc33xx_tx_get_hlid(struct cc33xx *cc, struct cc33xx_vif *wlvif,
+		      struct sk_buff *skb, struct ieee80211_sta *sta);
+void cc33xx_tx_reset_link_queues(struct cc33xx *cc, u8 hlid);
+void cc33xx_handle_tx_low_watermark(struct cc33xx *cc);
+bool
cc33xx_is_dummy_packet(struct cc33xx *cc, struct sk_buff *skb);
+unsigned int cc33xx_calc_packet_alignment(struct cc33xx *cc,
+					  unsigned int packet_length);
+void cc33xx_free_tx_id(struct cc33xx *cc, int id);
+void cc33xx_stop_queue_locked(struct cc33xx *cc, struct cc33xx_vif *wlvif,
+			      u8 queue, enum cc33xx_queue_stop_reason reason);
+void cc33xx_stop_queues(struct cc33xx *cc,
+			enum cc33xx_queue_stop_reason reason);
+void cc33xx_wake_queues(struct cc33xx *cc,
+			enum cc33xx_queue_stop_reason reason);
+bool cc33xx_is_queue_stopped_by_reason(struct cc33xx *cc,
+				       struct cc33xx_vif *wlvif, u8 queue,
+				       enum cc33xx_queue_stop_reason reason);
+bool cc33xx_is_queue_stopped_by_reason_locked(struct cc33xx *cc,
+					      struct cc33xx_vif *wlvif,
+					      u8 queue,
+					      enum cc33xx_queue_stop_reason reason);
+bool cc33xx_is_queue_stopped_locked(struct cc33xx *cc, struct cc33xx_vif *wlvif,
+				    u8 queue);
+
+/* from main.c */
+void cc33xx_free_sta(struct cc33xx *cc, struct cc33xx_vif *wlvif, u8 hlid);
+void cc33xx_rearm_tx_watchdog_locked(struct cc33xx *cc);
+
+#endif /* __TX_H__ */

From patchwork Thu Nov 7 12:52:04 2024
X-Patchwork-Submitter: "Nemanov, Michael"
X-Patchwork-Id: 13866420
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Nemanov
To: Kalle Valo, David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Rob Herring, Krzysztof Kozlowski, Conor Dooley
CC: Sabeeh Khan, Michael Nemanov
Subject: [PATCH v5 12/17] wifi: cc33xx: Add init.c, init.h
Date: Thu, 7 Nov 2024 14:52:04 +0200
Message-ID: <20241107125209.1736277-13-michael.nemanov@ti.com>
In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com>
References: <20241107125209.1736277-1-michael.nemanov@ti.com>
X-Mailing-List: netdev@vger.kernel.org

High-level init code for new vifs

Signed-off-by: Michael Nemanov
---
 drivers/net/wireless/ti/cc33xx/init.c | 231 ++++++++++++++++++++++++++
 drivers/net/wireless/ti/cc33xx/init.h |  15 ++
 2 files changed, 246 insertions(+)
 create mode 100644 drivers/net/wireless/ti/cc33xx/init.c
 create mode 100644 drivers/net/wireless/ti/cc33xx/init.h

diff --git a/drivers/net/wireless/ti/cc33xx/init.c b/drivers/net/wireless/ti/cc33xx/init.c
new file mode 100644
index 000000000000..ff1fab1e0104
--- /dev/null
+++ b/drivers/net/wireless/ti/cc33xx/init.c
@@ -0,0 +1,231 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#include
+#include "acx.h"
+#include "cmd.h"
+#include "conf.h"
+#include "event.h"
+#include "tx.h"
+#include "init.h"
+
+static int cc33xx_init_phy_vif_config(struct cc33xx *cc,
+				      struct cc33xx_vif *wlvif)
+{
+	int ret;
+
+	ret = cc33xx_acx_slot(cc, wlvif, DEFAULT_SLOT_TIME);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int cc33xx_init_sta_beacon_filter(struct cc33xx *cc,
+					 struct cc33xx_vif *wlvif)
+{
+	int ret;
+
+	ret = cc33xx_acx_beacon_filter_table(cc, wlvif);
+	if (ret < 0)
+		return ret;
+
+	/* disable beacon filtering until we get the first beacon */
+	ret = cc33xx_acx_beacon_filter_opt(cc, wlvif, false);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int cc33xx_ap_init_templates(struct cc33xx *cc,
+				    struct ieee80211_vif *vif)
+{
+	struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif);
+	int ret;
+
+	/* when operating as AP we want to receive external beacons for
+	 * configuring ERP protection.
+	 */
+	ret = cc33xx_acx_beacon_filter_opt(cc, wlvif, false);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static void cc33xx_set_ba_policies(struct cc33xx *cc, struct cc33xx_vif *wlvif)
+{
+	/* Reset the BA RX indicators */
+	wlvif->ba_allowed = true;
+	cc->ba_rx_session_count = 0;
+
+	/* BA is supported in STA/AP modes */
+	wlvif->ba_support = (wlvif->bss_type != BSS_TYPE_AP_BSS &&
+			     wlvif->bss_type != BSS_TYPE_STA_BSS);
+}
+
+/* vif-specific initialization */
+static int cc33xx_init_sta_role(struct cc33xx *cc, struct cc33xx_vif *wlvif)
+{
+	int ret = cc33xx_acx_group_address_tbl(cc, true, NULL, 0);
+
+	if (ret < 0)
+		return ret;
+
+	/* Beacon filtering */
+	ret = cc33xx_init_sta_beacon_filter(cc, wlvif);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+/* vif-specific initialization */
+static int cc33xx_init_ap_role(struct cc33xx *cc, struct cc33xx_vif *wlvif)
+{
+	int ret;
+
+	/* initialize Tx power */
+	ret = cc33xx_acx_tx_power(cc, wlvif, wlvif->power_level);
+	if (ret < 0)
+		return ret;
+
+	if (cc->radar_debug_mode)
+		cc33xx_cmd_generic_cfg(cc, wlvif,
+				       CC33XX_CFG_FEATURE_RADAR_DEBUG,
+				       cc->radar_debug_mode, 0);
+
+	return 0;
+}
+
+int cc33xx_init_vif_specific(struct cc33xx *cc, struct ieee80211_vif *vif)
+{
+	struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif);
+	struct conf_tx_settings *tx_settings =
&cc->conf.host_conf.tx; + struct conf_tx_ac_category *conf_ac = &tx_settings->ac_conf0; + bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); + u8 ps_scheme = cc->conf.mac.ps_scheme; + int ret, i; + + /* consider all existing roles before configuring psm. */ + + if (cc->ap_count == 0 && is_ap) { /* first AP */ + ret = cc33xx_acx_sleep_auth(cc, CC33XX_PSM_ELP); + if (ret < 0) + return ret; + + /* unmask ap events */ + cc->event_mask |= cc->ap_event_mask; + + /* first STA, no APs */ + } else if (cc->sta_count == 0 && cc->ap_count == 0 && !is_ap) { + u8 sta_auth = cc->conf.host_conf.conn.sta_sleep_auth; + /* Configure for power according to debugfs */ + if (sta_auth != CC33XX_PSM_ILLEGAL) + ret = cc33xx_acx_sleep_auth(cc, sta_auth); + /* Configure for ELP power saving */ + else + ret = cc33xx_acx_sleep_auth(cc, CC33XX_PSM_ELP); + + if (ret < 0) + return ret; + } + + /* Mode specific init */ + if (is_ap) { + ret = cc33xx_init_ap_role(cc, wlvif); + if (ret < 0) + return ret; + } else { + ret = cc33xx_init_sta_role(cc, wlvif); + if (ret < 0) + return ret; + } + + cc33xx_init_phy_vif_config(cc, wlvif); + + /* Default TID/AC configuration */ + if (WARN_ON(tx_settings->ac_conf_count != tx_settings->tid_conf_count) || + WARN_ON(tx_settings->ac_conf_count != CONF_TX_MAX_AC_COUNT)) + return -EINVAL; + + for (i = 0; i < tx_settings->tid_conf_count; i++) { + /* If no ps poll is used, send legacy ps scheme in cmd */ + if (ps_scheme == PS_SCHEME_NOPSPOLL) + ps_scheme = PS_SCHEME_LEGACY; + + ret = cc33xx_tx_param_cfg(cc, wlvif, conf_ac->ac, + conf_ac->cw_min, conf_ac->cw_max, + conf_ac->aifsn, conf_ac->tx_op_limit, + false, ps_scheme, conf_ac->is_mu_edca, + conf_ac->mu_edca_aifs, + conf_ac->mu_edca_ecw_min_max, + conf_ac->mu_edca_timer); + + if (ret < 0) + return ret; + + conf_ac++; + } + + /* Mode specific init - post mem init */ + if (is_ap) + ret = cc33xx_ap_init_templates(cc, vif); + + if (ret < 0) + return ret; + + /* Configure initiator BA sessions policies */ + 
cc33xx_set_ba_policies(cc, wlvif);
+
+	return 0;
+}
+
+int cc33xx_hw_init(struct cc33xx *cc)
+{
+	cc33xx_acx_init_mem_config(cc);
+
+	cc->last_fw_rls_idx = 0;
+	cc->partial_rx.status = CURR_RX_START;
+	return 0;
+}
+
+int cc33xx_download_ini_params_and_wait(struct cc33xx *cc)
+{
+	struct cc33xx_cmd_ini_params_download *cmd;
+	size_t command_size = ALIGN((sizeof(*cmd) + sizeof(cc->conf)), 4);
+	int ret;
+
+	cc33xx_set_max_buffer_size(cc, INI_MAX_BUFFER_SIZE);
+
+	cc33xx_debug(DEBUG_ACX,
+		     "Downloading INI configurations to FW, payload length: %zu",
+		     sizeof(cc->conf));
+
+	cmd = kzalloc(command_size, GFP_KERNEL);
+	if (!cmd) {
+		cc33xx_set_max_buffer_size(cc, CMD_MAX_BUFFER_SIZE);
+		return -ENOMEM;
+	}
+
+	cmd->length = cpu_to_le32(sizeof(cc->conf));
+
+	/* copy INI file params payload */
+	memcpy(cmd->payload, &cc->conf, sizeof(cc->conf));
+
+	ret = cc33xx_cmd_send(cc, CMD_DOWNLOAD_INI_PARAMS,
+			      cmd, command_size, 0);
+	if (ret < 0) {
+		cc33xx_warning("failed to send INI params download command: %d",
+			       ret);
+	} else {
+		cc33xx_debug(DEBUG_BOOT, "INI Params downloaded successfully");
+	}
+
+	cc33xx_set_max_buffer_size(cc, CMD_MAX_BUFFER_SIZE);
+	kfree(cmd);
+	return ret;
+}

diff --git a/drivers/net/wireless/ti/cc33xx/init.h b/drivers/net/wireless/ti/cc33xx/init.h
new file mode 100644
index 000000000000..b0bc6a548611
--- /dev/null
+++ b/drivers/net/wireless/ti/cc33xx/init.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#ifndef __INIT_H__
+#define __INIT_H__
+
+#include "cc33xx.h"
+
+int cc33xx_hw_init(struct cc33xx *cc);
+int cc33xx_download_ini_params_and_wait(struct cc33xx *cc);
+int cc33xx_init_vif_specific(struct cc33xx *cc, struct ieee80211_vif *vif);
+
+#endif /* __INIT_H__ */

From patchwork Thu Nov 7 12:52:05 2024
X-Patchwork-Submitter: "Nemanov, Michael"
X-Patchwork-Id: 13866425
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Nemanov
To: Kalle Valo, David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Rob Herring, Krzysztof Kozlowski, Conor Dooley
CC: Sabeeh Khan, Michael Nemanov
Subject: [PATCH v5 13/17] wifi: cc33xx: Add scan.c, scan.h
Date: Thu, 7 Nov 2024 14:52:05 +0200
Message-ID: <20241107125209.1736277-14-michael.nemanov@ti.com>
In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com>
References: <20241107125209.1736277-1-michael.nemanov@ti.com>
X-Mailing-List: netdev@vger.kernel.org

Handles the scan process. Scan starts via cc33xx_op_hw_scan (main.c)
which calls cc33xx_scan.
Scan channels are packed and sent to HW where scanning is managed by FW concurrently to other roles without driver intervention. Scan results are handled like normal management frames and are sent to MAC80211. HW notifies driver of scan completion via dedicated event which triggers a call to cc33xx_scan_completed. Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/scan.c | 900 ++++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/scan.h | 33 + 2 files changed, 933 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/scan.c create mode 100644 drivers/net/wireless/ti/cc33xx/scan.h diff --git a/drivers/net/wireless/ti/cc33xx/scan.c b/drivers/net/wireless/ti/cc33xx/scan.c new file mode 100644 index 000000000000..199fde76cbbe --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/scan.c @@ -0,0 +1,900 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include "cc33xx.h" +#include "debug.h" +#include "cmd.h" +#include "scan.h" +#include "tx.h" +#include "conf.h" + +#define CC33XX_SCAN_TIMEOUT 30000 /* msec */ + +#define MAX_CHANNELS_2GHZ 14 +#define MAX_CHANNELS_4GHZ 4 +#define MAX_CHANNELS_5GHZ 32 +#define CONN_SCAN_MAX_CHANNELS_ALL_BANDS 46 + +#define SCHED_SCAN_MAX_SSIDS 16 + +#define CONN_SCAN_MAX_BAND 2 +#define SCAN_MAX_SCHED_SCAN_PLANS 12 + +#define SCAN_CHANNEL_FLAGS_DFS BIT(0) + +struct conn_scan_ch_params { + __le16 min_duration; + __le16 max_duration; + __le16 passive_duration; + + u8 channel; + u8 tx_power_att; + + /* bit 0: DFS channel; bit 1: DFS enabled */ + u8 flags; + + u8 padding[3]; +} __packed; + +enum { + SCAN_SSID_TYPE_PUBLIC = 0, + SCAN_SSID_TYPE_HIDDEN = 1, +}; + +enum scan_request_type { + SCAN_REQUEST_NONE, + SCAN_REQUEST_CONNECT_PERIODIC_SCAN, + SCAN_REQUEST_ONE_SHOT, + SCAN_REQUEST_SURVEY_SCAN, + SCAN_NUM_OF_REQUEST_TYPE +}; + +enum { + SCAN_SSID_FILTER_ANY = 0, + SCAN_SSID_FILTER_SPECIFIC = 1, + SCAN_SSID_FILTER_LIST = 2, + 
SCAN_SSID_FILTER_DISABLED = 3 +}; + +enum { + SCAN_TYPE_SEARCH = 0, + SCAN_TYPE_PERIODIC = 1, + SCAN_TYPE_TRACKING = 2, +}; + +struct cc33xx_ssid { + u8 type; + u8 len; + u8 ssid[IEEE80211_MAX_SSID_LEN]; + u8 padding[2]; +} __packed; + +struct cc33xx_cmd_ssid_list { + struct cc33xx_cmd_header header; + + u8 role_id; + u8 scan_type; + u8 n_ssids; + struct cc33xx_ssid ssids[SCHED_SCAN_MAX_SSIDS]; + u8 padding; +} __packed; + +struct conn_scan_dwell_info { + __le16 min_duration; + __le16 max_duration; + __le16 passive_duration; +} __packed; + +struct conn_scan_ch_info { + u8 channel; + u8 tx_power_att; + u8 flags; +} __packed; + +struct scan_one_shot_info { + u8 passive[CONN_SCAN_MAX_BAND]; + u8 active[CONN_SCAN_MAX_BAND]; + u8 dfs; + + struct conn_scan_ch_info channel_list[CONN_SCAN_MAX_CHANNELS_ALL_BANDS]; + struct conn_scan_dwell_info dwell_info[CONN_SCAN_MAX_BAND]; + u8 reserved; +}; + +struct sched_scan_plan_cmd { + u32 interval; + u32 iterations; +}; + +struct scan_periodic_info { + struct sched_scan_plan_cmd sched_scan_plans[SCAN_MAX_SCHED_SCAN_PLANS]; + u16 sched_scan_plans_num; + + u8 passive[CONN_SCAN_MAX_BAND]; + u8 active[CONN_SCAN_MAX_BAND]; + u8 dfs; + + struct conn_scan_ch_info channel_list[CONN_SCAN_MAX_CHANNELS_ALL_BANDS]; + struct conn_scan_dwell_info dwell_info[CONN_SCAN_MAX_BAND]; +} __packed; + +struct scan_param { + union { + struct scan_one_shot_info one_shot; + struct scan_periodic_info periodic; + } u; +} __packed; + +struct cc33xx_cmd_scan_params { + struct cc33xx_cmd_header header; + u8 scan_type; + u8 role_id; + + struct scan_param params; + s8 rssi_threshold; /* for filtering (in dBm) */ + s8 snr_threshold; /* for filtering (in dB) */ + + u8 bssid[ETH_ALEN]; + u8 padding[2]; + + u8 ssid_from_list; /* use ssid from configured ssid list */ + u8 filter; /* forward only results with matching ssids */ + + u8 num_of_ssids; +} __packed; + +#define MAX_EXTRA_IES_LEN 512 + +struct cc33xx_cmd_set_ies { + struct cc33xx_cmd_header header; + u8 
scan_type; + u8 role_id; + __le16 len; + u8 data[MAX_EXTRA_IES_LEN]; +} __packed; + +struct cc33xx_cmd_scan_stop { + struct cc33xx_cmd_header header; + + u8 scan_type; + u8 role_id; + u8 is_ET; + u8 padding; +} __packed; + +struct cc33xx_scan_channels { + u8 passive[CONN_SCAN_MAX_BAND]; /* number of passive scan channels */ + u8 active[CONN_SCAN_MAX_BAND]; /* number of active scan channels */ + u8 dfs; /* number of dfs channels in 5ghz */ + u8 passive_active; /* number of passive before active channels 2.4ghz */ + + struct conn_scan_ch_params channels_2[MAX_CHANNELS_2GHZ]; + struct conn_scan_ch_params channels_5[MAX_CHANNELS_5GHZ]; + struct conn_scan_ch_params channels_4[MAX_CHANNELS_4GHZ]; +}; + +static void cc33xx_adjust_channels(struct scan_param *scan_param, + struct cc33xx_scan_channels *cmd_channels, + enum scan_request_type scan_type) +{ + struct conn_scan_ch_info *ch_info_list; + struct conn_scan_dwell_info *dwell_info; + struct conn_scan_ch_params *channel; + struct conn_scan_ch_params *ch_params_list; + + u8 *passive; + u8 *dfs; + u8 *active; + int i, j; + u8 band; + + if (scan_type == SCAN_REQUEST_CONNECT_PERIODIC_SCAN) { + ch_info_list = scan_param->u.periodic.channel_list; + dwell_info = scan_param->u.periodic.dwell_info; + active = (u8 *)&scan_param->u.periodic.active; + passive = (u8 *)&scan_param->u.periodic.passive; + dfs = (u8 *)&scan_param->u.periodic.dfs; + } else { + ch_info_list = scan_param->u.one_shot.channel_list; + dwell_info = scan_param->u.one_shot.dwell_info; + active = (u8 *)&scan_param->u.one_shot.active; + passive = (u8 *)&scan_param->u.one_shot.passive; + dfs = (u8 *)&scan_param->u.one_shot.dfs; + } + + memcpy(passive, cmd_channels->passive, sizeof(cmd_channels->passive)); + memcpy(active, cmd_channels->active, sizeof(cmd_channels->active)); + *dfs = cmd_channels->dfs; + + ch_params_list = cmd_channels->channels_2; + for (i = 0; i < MAX_CHANNELS_2GHZ; ++i) { + ch_info_list[i].channel = ch_params_list[i].channel; + 
ch_info_list[i].flags = ch_params_list[i].flags; + ch_info_list[i].tx_power_att = ch_params_list[i].tx_power_att; + } + + channel = &ch_params_list[0]; + band = NL80211_BAND_2GHZ; + dwell_info[band].min_duration = channel->min_duration; + dwell_info[band].max_duration = channel->max_duration; + dwell_info[band].passive_duration = channel->passive_duration; + + ch_params_list = cmd_channels->channels_5; + for (j = 0; j < MAX_CHANNELS_5GHZ; ++i, ++j) { + ch_info_list[i].channel = ch_params_list[j].channel; + ch_info_list[i].flags = ch_params_list[j].flags; + ch_info_list[i].tx_power_att = ch_params_list[j].tx_power_att; + } + + channel = &ch_params_list[0]; + band = NL80211_BAND_5GHZ; + dwell_info[band].min_duration = channel->min_duration; + dwell_info[band].max_duration = channel->max_duration; + dwell_info[band].passive_duration = channel->passive_duration; +} + +static int cc33xx_cmd_build_probe_req(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u8 role_id, + u8 scan_type, const u8 *ssid, + size_t ssid_len, const u8 *ie0, + size_t ie0_len, const u8 *ie1, + size_t ie1_len, bool sched_scan) +{ + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + struct sk_buff *skb = NULL; + struct cc33xx_cmd_set_ies *cmd; + int ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + skb = ieee80211_probereq_get(cc->hw, vif->addr, ssid, + ssid_len, ie0_len + ie1_len); + if (!skb) { + ret = -ENOMEM; + goto out_free; + } + + if (ie0_len) + skb_put_data(skb, ie0, ie0_len); + + if (ie1_len) + skb_put_data(skb, ie1, ie1_len); + + cmd->scan_type = scan_type; + cmd->role_id = role_id; + + cmd->len = cpu_to_le16(skb->len - sizeof(struct ieee80211_hdr_3addr)); + + if (skb->data) { + memcpy(cmd->data, + skb->data + sizeof(struct ieee80211_hdr_3addr), le16_to_cpu(cmd->len)); + } + + usleep_range(10000, 11000); + ret = cc33xx_cmd_send(cc, CMD_SET_PROBE_IE, cmd, sizeof(*cmd), 0); + + if (ret < 0) { + cc33xx_warning("cmd set_template failed: 
%d", ret); + goto out_free; + } + +out_free: + dev_kfree_skb(skb); + kfree(cmd); +out: + return ret; +} + +static void cc33xx_started_vifs_iter(void *data, u8 *mac, + struct ieee80211_vif *vif) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + bool active = false; + int *count = (int *)data; + + /* count active interfaces according to interface type. + * checking only bss_conf.idle is bad for some cases, e.g. + * we don't want to count sta in p2p_find as active interface. + */ + switch (wlvif->bss_type) { + case BSS_TYPE_STA_BSS: + if (test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) + active = true; + break; + + case BSS_TYPE_AP_BSS: + if (wlvif->cc->active_sta_count > 0) + active = true; + break; + + default: + break; + } + + if (active) + (*count)++; +} + +static int cc33xx_count_started_vifs(struct cc33xx *cc) +{ + int count = 0; + + ieee80211_iterate_active_interfaces_atomic(cc->hw, + IEEE80211_IFACE_ITER_RESUME_ALL, + cc33xx_started_vifs_iter, + &count); + return count; +} + +static int cc33xx_scan_get_channels(struct cc33xx *cc, + struct ieee80211_channel *req_channels[], + u32 n_channels, u32 n_ssids, + struct conn_scan_ch_params *channels, + u32 band, bool radar, bool passive, + unsigned int start, unsigned int max_channels, + u8 *n_pactive_ch, int scan_type) +{ + unsigned int i, j; + u32 flags; + bool force_passive = !n_ssids; + u32 min_dwell_time_active, max_dwell_time_active; + u32 dwell_time_passive, dwell_time_dfs; + struct conn_scan_ch_params *ch; + struct ieee80211_channel *req_ch; + + /* configure dwell times according to scan type */ + if (scan_type == SCAN_TYPE_SEARCH) { + struct conf_scan_settings *c = &cc->conf.host_conf.scan; + bool active_vif_exists = !!cc33xx_count_started_vifs(cc); + + min_dwell_time_active = active_vif_exists ? + c->min_dwell_time_active : + c->min_dwell_time_active_long; + max_dwell_time_active = active_vif_exists ? 
+ c->max_dwell_time_active : + c->max_dwell_time_active_long; + dwell_time_passive = c->dwell_time_passive; + dwell_time_dfs = c->dwell_time_dfs; + } else { + struct conf_sched_scan_settings *c = + &cc->conf.host_conf.sched_scan; + u32 delta_per_probe; + + delta_per_probe = (band == NL80211_BAND_5GHZ) ? + c->dwell_time_delta_per_probe_5 : + c->dwell_time_delta_per_probe; + + min_dwell_time_active = c->base_dwell_time + + n_ssids * c->num_probe_reqs * delta_per_probe; + + max_dwell_time_active = min_dwell_time_active; + max_dwell_time_active += c->max_dwell_time_delta; + dwell_time_passive = c->dwell_time_passive; + dwell_time_dfs = c->dwell_time_dfs; + } + + min_dwell_time_active = DIV_ROUND_UP(min_dwell_time_active, 1000); + max_dwell_time_active = DIV_ROUND_UP(max_dwell_time_active, 1000); + dwell_time_passive = DIV_ROUND_UP(dwell_time_passive, 1000); + dwell_time_dfs = DIV_ROUND_UP(dwell_time_dfs, 1000); + + for (i = 0, j = start; i < n_channels && j < max_channels; i++) { + flags = req_channels[i]->flags; + ch = &channels[j]; + req_ch = req_channels[i]; + + if (force_passive) + flags |= IEEE80211_CHAN_NO_IR; + + if (req_ch->band == band && !(flags & IEEE80211_CHAN_DISABLED) && + (!!(flags & IEEE80211_CHAN_RADAR) == radar) && + /* if radar is set, we ignore the passive flag */ + (radar || !!(flags & IEEE80211_CHAN_NO_IR) == passive)) { + if (flags & IEEE80211_CHAN_RADAR) { + ch->flags |= SCAN_CHANNEL_FLAGS_DFS; + + ch->passive_duration = + cpu_to_le16(dwell_time_dfs); + } else { + ch->passive_duration = + cpu_to_le16(dwell_time_passive); + } + + ch->min_duration = cpu_to_le16(min_dwell_time_active); + ch->max_duration = cpu_to_le16(max_dwell_time_active); + + ch->tx_power_att = req_ch->max_power; + ch->channel = req_ch->hw_value; + + if (n_pactive_ch && band == NL80211_BAND_2GHZ && + ch->channel >= 12 && ch->channel <= 14 && + (flags & IEEE80211_CHAN_NO_IR) && !force_passive) { + /* pactive channels treated as DFS */ + ch->flags = SCAN_CHANNEL_FLAGS_DFS; + + /* 
n_pactive_ch is counted down from the end of + * the passive channel list + */ + (*n_pactive_ch)++; + cc33xx_debug(DEBUG_SCAN, "n_pactive_ch = %d", + *n_pactive_ch); + } + + cc33xx_debug(DEBUG_SCAN, "freq %d, ch. %d, flags 0x%x, power %d, min/max_dwell %d/%d%s%s", + req_ch->center_freq, req_ch->hw_value, + req_ch->flags, req_ch->max_power, + min_dwell_time_active, + max_dwell_time_active, + flags & IEEE80211_CHAN_RADAR ? ", DFS" : "", + flags & IEEE80211_CHAN_NO_IR ? ", NO-IR" : ""); + j++; + } + } + + return j - start; +} + +static bool cc33xx_set_scan_chan_params(struct cc33xx *cc, + struct cc33xx_scan_channels *cfg, + struct ieee80211_channel *channels[], + u32 n_channels, u32 n_ssids, + int scan_type) +{ + u8 n_pactive_ch = 0; + + cfg->passive[0] = cc33xx_scan_get_channels(cc, channels, n_channels, + n_ssids, cfg->channels_2, + NL80211_BAND_2GHZ, false, + true, 0, MAX_CHANNELS_2GHZ, + &n_pactive_ch, scan_type); + + cfg->active[0] = cc33xx_scan_get_channels(cc, channels, n_channels, + n_ssids, cfg->channels_2, + NL80211_BAND_2GHZ, false, + false, cfg->passive[0], + MAX_CHANNELS_2GHZ, + &n_pactive_ch, scan_type); + + cfg->passive[1] = cc33xx_scan_get_channels(cc, channels, n_channels, + n_ssids, cfg->channels_5, + NL80211_BAND_5GHZ, false, + true, 0, MAX_CHANNELS_5GHZ, + &n_pactive_ch, scan_type); + + cfg->dfs = cc33xx_scan_get_channels(cc, channels, n_channels, n_ssids, + cfg->channels_5, NL80211_BAND_5GHZ, + true, true, cfg->passive[1], + MAX_CHANNELS_5GHZ, &n_pactive_ch, + scan_type); + + cfg->active[1] = cc33xx_scan_get_channels(cc, channels, n_channels, + n_ssids, cfg->channels_5, + NL80211_BAND_5GHZ, false, + false, + cfg->passive[1] + cfg->dfs, + MAX_CHANNELS_5GHZ, + &n_pactive_ch, scan_type); + + cfg->passive_active = n_pactive_ch; + + cc33xx_debug(DEBUG_SCAN, "2.4GHz: active %d passive %d", + cfg->active[0], cfg->passive[0]); + cc33xx_debug(DEBUG_SCAN, "5GHz: active %d passive %d", + cfg->active[1], cfg->passive[1]); + cc33xx_debug(DEBUG_SCAN, "DFS: %d", 
cfg->dfs); + + return cfg->passive[0] || cfg->active[0] || cfg->passive[1] || + cfg->active[1] || cfg->dfs; +} + +static int cc33xx_scan_send(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct cfg80211_scan_request *req) +{ + struct cc33xx_cmd_scan_params *cmd; + struct cc33xx_scan_channels *cmd_channels = NULL; + struct cc33xx_ssid *cmd_ssid; + u16 alloc_size; + int ret; + int i; + + alloc_size = sizeof(*cmd) + (sizeof(struct cc33xx_ssid) * req->n_ssids); + cmd = kzalloc(alloc_size, GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + /* scan on the dev role if the regular one is not started */ + if (cc33xx_is_p2p_mgmt(wlvif)) + cmd->role_id = wlvif->dev_role_id; + else + cmd->role_id = wlvif->role_id; + + if (WARN_ON(cmd->role_id == CC33XX_INVALID_ROLE_ID)) { + ret = -EINVAL; + goto out; + } + + cmd->scan_type = SCAN_REQUEST_ONE_SHOT; + cmd->rssi_threshold = -127; + cmd->snr_threshold = 0; + + for (i = 0; i < ETH_ALEN; i++) + cmd->bssid[i] = req->bssid[i]; + cmd->ssid_from_list = 0; + cmd->filter = 0; + WARN_ON(req->n_ssids > 1); + + /* configure channels */ + cmd_channels = kzalloc(sizeof(*cmd_channels), GFP_KERNEL); + if (!cmd_channels) { + ret = -ENOMEM; + goto out; + } + + cc33xx_set_scan_chan_params(cc, cmd_channels, req->channels, + req->n_channels, req->n_ssids, + SCAN_TYPE_SEARCH); + + cc33xx_adjust_channels(&cmd->params, cmd_channels, cmd->scan_type); + if (req->n_ssids > 0) { + cmd->ssid_from_list = 1; + cmd->num_of_ssids = req->n_ssids; + cmd_ssid = (struct cc33xx_ssid *)((u8 *)cmd + sizeof(*cmd)); + + cmd_ssid->len = req->ssids[0].ssid_len; + memcpy(cmd_ssid->ssid, req->ssids[0].ssid, cmd_ssid->len); + cmd_ssid->type = (req->ssids[0].ssid_len) ? + SCAN_SSID_TYPE_HIDDEN : SCAN_SSID_TYPE_PUBLIC; + } + + ret = cc33xx_cmd_build_probe_req(cc, wlvif, cmd->role_id, cmd->scan_type, + req->ssids ? req->ssids[0].ssid : NULL, + req->ssids ? 
req->ssids[0].ssid_len : 0, + req->ie, req->ie_len, NULL, 0, false); + if (ret < 0) { + cc33xx_error("PROBE request template failed"); + goto out; + } + + cc33xx_dump(DEBUG_SCAN, "SCAN: ", cmd, alloc_size); + + ret = cc33xx_cmd_send(cc, CMD_SCAN, cmd, alloc_size, 0); + if (ret < 0) { + cc33xx_error("SCAN failed"); + goto out; + } + +out: + kfree(cmd_channels); + kfree(cmd); + return ret; +} + +static int cc33xx_scan_sched_scan_ssid_list(struct cc33xx *cc, + struct cc33xx_vif *wlvif, + struct cfg80211_sched_scan_request *req, + struct cc33xx_cmd_ssid_list *cmd) +{ + struct cfg80211_match_set *sets = req->match_sets; + struct cfg80211_ssid *ssids = req->ssids; + int ret = 0, i, j, n_match_ssids = 0; + + /* count the match sets that contain SSIDs */ + for (i = 0; i < req->n_match_sets; i++) { + if (sets[i].ssid.ssid_len > 0) + n_match_ssids++; + } + + /* No filter, no ssids or only bcast ssid */ + if (!n_match_ssids && (!req->n_ssids || + (req->n_ssids == 1 && req->ssids[0].ssid_len == 0))) + goto out; + + cmd->role_id = wlvif->role_id; + if (!n_match_ssids) { + /* No filter, with ssids */ + + for (i = 0; i < req->n_ssids; i++) { + cmd->ssids[cmd->n_ssids].type = (ssids[i].ssid_len) ? + SCAN_SSID_TYPE_HIDDEN : SCAN_SSID_TYPE_PUBLIC; + cmd->ssids[cmd->n_ssids].len = ssids[i].ssid_len; + memcpy(cmd->ssids[cmd->n_ssids].ssid, ssids[i].ssid, + ssids[i].ssid_len); + cmd->n_ssids++; + } + } else { + /* Add all SSIDs from the filters */ + for (i = 0; i < req->n_match_sets; i++) { + /* ignore sets without SSIDs */ + if (!sets[i].ssid.ssid_len) + continue; + + cmd->ssids[cmd->n_ssids].type = SCAN_SSID_TYPE_PUBLIC; + cmd->ssids[cmd->n_ssids].len = sets[i].ssid.ssid_len; + memcpy(cmd->ssids[cmd->n_ssids].ssid, + sets[i].ssid.ssid, sets[i].ssid.ssid_len); + cmd->n_ssids++; + } + if (req->n_ssids > 1 || (req->n_ssids == 1 && req->ssids[0].ssid_len > 0)) { + /* Mark all the SSIDs passed in the SSID list as HIDDEN, + * so they're used in probe requests. 
+			 */
+			for (i = 0; i < req->n_ssids; i++) {
+				if (!req->ssids[i].ssid_len)
+					continue;
+
+				for (j = 0; j < cmd->n_ssids; j++) {
+					if (req->ssids[i].ssid_len == cmd->ssids[j].len &&
+					    !memcmp(req->ssids[i].ssid,
+						    cmd->ssids[j].ssid,
+						    req->ssids[i].ssid_len)) {
+						cmd->ssids[j].type =
+							SCAN_SSID_TYPE_HIDDEN;
+						break;
+					}
+				}
+				/* Fail if SSID isn't present in the filters */
+				if (j == cmd->n_ssids) {
+					ret = -EINVAL;
+					goto out;
+				}
+			}
+		}
+	}
+
+	return cmd->n_ssids;
+out:
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+int cc33xx_sched_scan_start(struct cc33xx *cc, struct cc33xx_vif *wlvif,
+			    struct cfg80211_sched_scan_request *req,
+			    struct ieee80211_scan_ies *ies)
+{
+	struct cc33xx_cmd_scan_params *cmd;
+	struct cc33xx_cmd_ssid_list *ssid_list;
+	struct cc33xx_scan_channels *cmd_channels = NULL;
+	struct conf_sched_scan_settings *c = &cc->conf.host_conf.sched_scan;
+	int ret;
+	int n_ssids = 0;
+	int alloc_size = sizeof(*cmd);
+
+	ssid_list = kzalloc(sizeof(*ssid_list), GFP_KERNEL);
+	if (!ssid_list) {
+		ret = -ENOMEM;
+		goto out_ssid_free;
+	}
+
+	n_ssids = cc33xx_scan_sched_scan_ssid_list(cc, wlvif, req, ssid_list);
+	if (n_ssids < 0) {
+		/* don't leak ssid_list on error */
+		ret = n_ssids;
+		goto out_ssid_free;
+	}
+
+	if (n_ssids <= 5) {
+		alloc_size += (n_ssids * sizeof(struct cc33xx_ssid));
+	} else { /* n_ssids > 5 */
+		ssid_list->scan_type = SCAN_REQUEST_CONNECT_PERIODIC_SCAN;
+		ret = cc33xx_cmd_send(cc, CMD_CONNECTION_SCAN_SSID_CFG,
+				      ssid_list, sizeof(*ssid_list), 0);
+		if (ret < 0) {
+			cc33xx_error("cmd sched scan ssid list failed");
+			goto out_ssid_free;
+		}
+	}
+
+	cmd = kzalloc(alloc_size, GFP_KERNEL);
+	if (!cmd) {
+		ret = -ENOMEM;
+		goto out_free;
+	}
+
+	cmd->role_id = wlvif->role_id;
+
+	if (WARN_ON(cmd->role_id == CC33XX_INVALID_ROLE_ID)) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	cmd->scan_type = SCAN_REQUEST_CONNECT_PERIODIC_SCAN;
+	cmd->rssi_threshold = c->rssi_threshold;
+	cmd->snr_threshold = c->snr_threshold;
+
+	cmd->filter = 1;
+	cmd->num_of_ssids = n_ssids;
+
+	cc33xx_debug(DEBUG_CMD, "ssid 
list num of n_ssids %d", n_ssids); + if (n_ssids > 0 && n_ssids <= 5) { + cmd->ssid_from_list = 1; + memcpy((u8 *)cmd + sizeof(*cmd), ssid_list->ssids, + n_ssids * sizeof(struct cc33xx_ssid)); + } + + cmd_channels = kzalloc(sizeof(*cmd_channels), GFP_KERNEL); + if (!cmd_channels) { + ret = -ENOMEM; + goto out_free; + } + + /* configure channels */ + cc33xx_set_scan_chan_params(cc, cmd_channels, req->channels, + req->n_channels, req->n_ssids, + SCAN_TYPE_PERIODIC); + cc33xx_adjust_channels(&cmd->params, cmd_channels, cmd->scan_type); + + memcpy(cmd->params.u.periodic.sched_scan_plans, req->scan_plans, + sizeof(struct sched_scan_plan_cmd) * req->n_scan_plans); + + cmd->params.u.periodic.sched_scan_plans_num = req->n_scan_plans; + + cc33xx_debug(DEBUG_SCAN, + "interval[0]: %d, iterations[0]: %d, num_plans: %d", + cmd->params.u.periodic.sched_scan_plans[0].interval, + cmd->params.u.periodic.sched_scan_plans[0].iterations, + cmd->params.u.periodic.sched_scan_plans_num); + + ret = cc33xx_cmd_build_probe_req(cc, wlvif, cmd->role_id, cmd->scan_type, + req->ssids ? req->ssids[0].ssid : NULL, + req->ssids ? 
req->ssids[0].ssid_len : 0, + ies->ies[NL80211_BAND_2GHZ], + ies->len[NL80211_BAND_2GHZ], + ies->common_ies, + ies->common_ie_len, true); + + if (ret < 0) { + cc33xx_error("PROBE request template failed"); + goto out_free; + } + + cc33xx_dump(DEBUG_SCAN, "SCAN: ", cmd, alloc_size); + + ret = cc33xx_cmd_send(cc, CMD_SCAN, cmd, alloc_size, 0); + if (ret < 0) { + cc33xx_error("SCAN failed"); + goto out_free; + } + +out_free: + kfree(cmd_channels); + kfree(cmd); + +out_ssid_free: + kfree(ssid_list); + + return ret; +} + +static int __cc33xx_scan_stop(struct cc33xx *cc, + struct cc33xx_vif *wlvif, u8 scan_type) +{ + struct cc33xx_cmd_scan_stop *stop; + int ret; + + stop = kzalloc(sizeof(*stop), GFP_KERNEL); + if (!stop) + return -ENOMEM; + + stop->role_id = wlvif->role_id; + stop->scan_type = scan_type; + + ret = cc33xx_cmd_send(cc, CMD_STOP_SCAN, stop, sizeof(*stop), 0); + if (ret < 0) { + cc33xx_error("failed to send sched scan stop command"); + goto out_free; + } + +out_free: + kfree(stop); + return ret; +} + +void cc33xx_scan_sched_scan_stop(struct cc33xx *cc, + struct cc33xx_vif *wlvif) +{ + __cc33xx_scan_stop(cc, wlvif, SCAN_REQUEST_CONNECT_PERIODIC_SCAN); +} + +static int cc33xx_scan_start(struct cc33xx *cc, struct cc33xx_vif *wlvif, + struct cfg80211_scan_request *req) +{ + return cc33xx_scan_send(cc, wlvif, req); +} + +int cc33xx_scan_stop(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + return __cc33xx_scan_stop(cc, wlvif, SCAN_REQUEST_ONE_SHOT); +} + +void cc33xx_scan_complete_work(struct work_struct *work) +{ + struct delayed_work *dwork; + struct cc33xx *cc; + struct cfg80211_scan_info info = { + .aborted = false, + }; + + dwork = to_delayed_work(work); + cc = container_of(dwork, struct cc33xx, scan_complete_work); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) + goto out; + + if (cc->scan.state == CC33XX_SCAN_STATE_IDLE) + goto out; + + /* Rearm the tx watchdog just before idling scan. 
This + * prevents just-finished scans from triggering the watchdog + */ + cc33xx_rearm_tx_watchdog_locked(cc); + + cc->scan.state = CC33XX_SCAN_STATE_IDLE; + memset(cc->scan.scanned_ch, 0, sizeof(cc->scan.scanned_ch)); + cc->scan.req = NULL; + cc->scan_wlvif = NULL; + + if (cc->scan.failed) { + cc33xx_info("Scan completed due to error."); + cc33xx_queue_recovery_work(cc); + } + + cc33xx_cmd_regdomain_config_locked(cc); + + ieee80211_scan_completed(cc->hw, &info); + +out: + mutex_unlock(&cc->mutex); +} + +int cc33xx_scan(struct cc33xx *cc, struct ieee80211_vif *vif, const u8 *ssid, + size_t ssid_len, struct cfg80211_scan_request *req) +{ + struct cc33xx_vif *wlvif = cc33xx_vif_to_data(vif); + + if (cc->scan.state != CC33XX_SCAN_STATE_IDLE) + return -EBUSY; + + cc->scan.state = CC33XX_SCAN_STATE_2GHZ_ACTIVE; + + if (ssid_len && ssid) { + cc->scan.ssid_len = ssid_len; + memcpy(cc->scan.ssid, ssid, ssid_len); + } else { + cc->scan.ssid_len = 0; + } + + cc->scan_wlvif = wlvif; + cc->scan.req = req; + memset(cc->scan.scanned_ch, 0, sizeof(cc->scan.scanned_ch)); + + /* we assume failure so that timeout scenarios are handled correctly */ + cc->scan.failed = true; + ieee80211_queue_delayed_work(cc->hw, &cc->scan_complete_work, + msecs_to_jiffies(CC33XX_SCAN_TIMEOUT)); + + cc33xx_scan_start(cc, wlvif, req); + + return 0; +} + +inline void cc33xx_scan_sched_scan_results(struct cc33xx *cc) +{ + ieee80211_sched_scan_results(cc->hw); +} + +void cc33xx_scan_completed(struct cc33xx *cc, struct cc33xx_vif *wlvif) +{ + cc->scan.failed = false; + cancel_delayed_work(&cc->scan_complete_work); + ieee80211_queue_delayed_work(cc->hw, &cc->scan_complete_work, + msecs_to_jiffies(0)); +} diff --git a/drivers/net/wireless/ti/cc33xx/scan.h b/drivers/net/wireless/ti/cc33xx/scan.h new file mode 100644 index 000000000000..9161df413596 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/scan.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas 
Instruments Incorporated - https://www.ti.com/
+ */
+
+#ifndef __SCAN_H__
+#define __SCAN_H__
+
+#include "cc33xx.h"
+
+enum {
+	CC33XX_SCAN_STATE_IDLE,
+	CC33XX_SCAN_STATE_2GHZ_ACTIVE,
+	CC33XX_SCAN_STATE_2GHZ_PASSIVE,
+	CC33XX_SCAN_STATE_5GHZ_ACTIVE,
+	CC33XX_SCAN_STATE_5GHZ_PASSIVE,
+	CC33XX_SCAN_STATE_DONE
+};
+
+int cc33xx_scan_stop(struct cc33xx *cc, struct cc33xx_vif *wlvif);
+void cc33xx_scan_completed(struct cc33xx *cc, struct cc33xx_vif *wlvif);
+int cc33xx_sched_scan_start(struct cc33xx *cc, struct cc33xx_vif *wlvif,
+			    struct cfg80211_sched_scan_request *req,
+			    struct ieee80211_scan_ies *ies);
+void cc33xx_scan_sched_scan_stop(struct cc33xx *cc, struct cc33xx_vif *wlvif);
+
+int cc33xx_scan(struct cc33xx *cc, struct ieee80211_vif *vif,
+		const u8 *ssid, size_t ssid_len,
+		struct cfg80211_scan_request *req);
+void cc33xx_scan_complete_work(struct work_struct *work);
+void cc33xx_scan_sched_scan_results(struct cc33xx *cc);
+
+#endif /* __SCAN_H__ */

From patchwork Thu Nov 7 12:52:06 2024
From: Michael Nemanov
To: Kalle Valo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley
CC: Sabeeh Khan, Michael Nemanov
Subject: [PATCH v5 14/17] wifi: cc33xx: Add conf.h
Date: Thu, 7 Nov 2024 14:52:06 +0200
Message-ID: <20241107125209.1736277-15-michael.nemanov@ti.com>
In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com>
References: <20241107125209.1736277-1-michael.nemanov@ti.com>

Various HW / FW / driver controls unique to the CC33xx that can be set by OEMs.
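[Reviewer's note, not part of the patch: conf.h below defines a `struct cc33xx_conf_header` carrying `magic`, `version` and `checksum`, with `CC33XX_CONF_MAGIC`, `CC33XX_CONF_VERSION` and `CC33XX_CONF_MASK` constants. A minimal user-space sketch of how such a blob header could be sanity-checked is shown here; the helper name `conf_header_ok` is invented, and the assumption that only the `CC33XX_CONF_MASK`-masked low bits of the version need to match is an illustration, not something this chunk of the patch confirms.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrors struct cc33xx_conf_header from conf.h; host-endian here for
 * simplicity of the sketch (the real struct is __packed / __le32). */
struct conf_header {
	uint32_t magic;
	uint32_t version;
	uint32_t checksum;
};

#define CONF_MAGIC   0x10e100ca /* CC33XX_CONF_MAGIC from the patch */
#define CONF_VERSION 0x01070069 /* CC33XX_CONF_VERSION from the patch */
#define CONF_MASK    0x0000ffff /* CC33XX_CONF_MASK from the patch */

/* Hypothetical validity check: accept a blob whose magic matches and whose
 * masked version bits match the expected version. Whether the driver
 * actually masks the version this way is an assumption. */
static bool conf_header_ok(const struct conf_header *hdr)
{
	if (hdr->magic != CONF_MAGIC)
		return false;
	return (hdr->version & CONF_MASK) == (CONF_VERSION & CONF_MASK);
}
```

Under this reading, a bump of the bits above `CONF_MASK` (e.g. a major revision) would still be accepted, while a change in the masked low bits would reject the blob.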
Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/conf.h | 1246 +++++++++++++++++++++++++ 1 file changed, 1246 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/conf.h diff --git a/drivers/net/wireless/ti/cc33xx/conf.h b/drivers/net/wireless/ti/cc33xx/conf.h new file mode 100644 index 000000000000..bc6eeb7a82c4 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/conf.h @@ -0,0 +1,1246 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __CONF_H__ +#define __CONF_H__ + +struct cc33xx_conf_header { + __le32 magic; + __le32 version; + __le32 checksum; +} __packed; + +#define CC33XX_CONF_MAGIC 0x10e100ca +#define CC33XX_CONF_VERSION 0x01070069 +#define CC33XX_CONF_MASK 0x0000ffff +#define CC33X_CONF_SIZE (sizeof(struct cc33xx_conf_file)) + +enum { + CONF_HW_BIT_RATE_1MBPS = BIT(1), + CONF_HW_BIT_RATE_2MBPS = BIT(2), + CONF_HW_BIT_RATE_5_5MBPS = BIT(3), + CONF_HW_BIT_RATE_11MBPS = BIT(4), + CONF_HW_BIT_RATE_6MBPS = BIT(5), + CONF_HW_BIT_RATE_9MBPS = BIT(6), + CONF_HW_BIT_RATE_12MBPS = BIT(7), + CONF_HW_BIT_RATE_18MBPS = BIT(8), + CONF_HW_BIT_RATE_24MBPS = BIT(9), + CONF_HW_BIT_RATE_36MBPS = BIT(10), + CONF_HW_BIT_RATE_48MBPS = BIT(11), + CONF_HW_BIT_RATE_54MBPS = BIT(12), + CONF_HW_BIT_RATE_MCS_0 = BIT(13), + CONF_HW_BIT_RATE_MCS_1 = BIT(14), + CONF_HW_BIT_RATE_MCS_2 = BIT(15), + CONF_HW_BIT_RATE_MCS_3 = BIT(16), + CONF_HW_BIT_RATE_MCS_4 = BIT(17), + CONF_HW_BIT_RATE_MCS_5 = BIT(18), + CONF_HW_BIT_RATE_MCS_6 = BIT(19), + CONF_HW_BIT_RATE_MCS_7 = BIT(20) +}; + +enum { + CONF_HW_RATE_INDEX_1MBPS = 1, + CONF_HW_RATE_INDEX_2MBPS = 2, + CONF_HW_RATE_INDEX_5_5MBPS = 3, + CONF_HW_RATE_INDEX_11MBPS = 4, + CONF_HW_RATE_INDEX_6MBPS = 5, + CONF_HW_RATE_INDEX_9MBPS = 6, + CONF_HW_RATE_INDEX_12MBPS = 7, + CONF_HW_RATE_INDEX_18MBPS = 8, + CONF_HW_RATE_INDEX_24MBPS = 9, + CONF_HW_RATE_INDEX_36MBPS = 10, + CONF_HW_RATE_INDEX_48MBPS = 11, + CONF_HW_RATE_INDEX_54MBPS 
= 12, + CONF_HW_RATE_INDEX_MCS0 = 13, + CONF_HW_RATE_INDEX_MCS1 = 14, + CONF_HW_RATE_INDEX_MCS2 = 15, + CONF_HW_RATE_INDEX_MCS3 = 16, + CONF_HW_RATE_INDEX_MCS4 = 17, + CONF_HW_RATE_INDEX_MCS5 = 18, + CONF_HW_RATE_INDEX_MCS6 = 19, + CONF_HW_RATE_INDEX_MCS7 = 20, + + CONF_HW_RATE_INDEX_MAX = CONF_HW_RATE_INDEX_MCS7, +}; + +enum { + CONF_PREAMBLE_TYPE_SHORT = 0, + CONF_PREAMBLE_TYPE_LONG = 1, + CONF_PREAMBLE_TYPE_OFDM = 2, + CONF_PREAMBLE_TYPE_N_MIXED_MODE = 3, + CONF_PREAMBLE_TYPE_GREENFIELD = 4, + CONF_PREAMBLE_TYPE_AX_SU = 5, + CONF_PREAMBLE_TYPE_AX_MU = 6, + CONF_PREAMBLE_TYPE_AX_SU_ER = 7, + CONF_PREAMBLE_TYPE_AX_TB = 8, + CONF_PREAMBLE_TYPE_AX_TB_NDP_FB = 9, + CONF_PREAMBLE_TYPE_AC_VHT = 10, + CONF_PREAMBLE_TYPE_BE_EHT_MU = 13, + CONF_PREAMBLE_TYPE_BE_EHT_TB = 14, + CONF_PREAMBLE_TYPE_INVALID = 0xFF +}; + +#define CONF_HW_RXTX_RATE_UNSUPPORTED 0xff + +enum conf_rx_queue_type { + CONF_RX_QUEUE_TYPE_LOW_PRIORITY, /* All except the high priority */ + CONF_RX_QUEUE_TYPE_HIGH_PRIORITY, /* Management and voice packets */ +}; + +struct cc33xx_clk_cfg { + u32 n; + u32 m; + u32 p; + u32 q; + u8 swallow; +}; + +struct conf_rx_settings { + /* The maximum amount of time, in TU, before the + * firmware discards the MSDU. + * + * Range: 0 - 0xFFFFFFFF + */ + u32 rx_msdu_life_time; + + /* Packet detection threshold in the PHY. + * + * FIXME: details unknown. + */ + u32 packet_detection_threshold; + + /* The longest time the STA will wait to receive traffic from the AP + * after a PS-poll has been transmitted. + * + * Range: 0 - 200000 + */ + u16 ps_poll_timeout; + /* The longest time the STA will wait to receive traffic from the AP + * after a frame has been sent from an UPSD enabled queue. + * + * Range: 0 - 200000 + */ + u16 upsd_timeout; + + /* The number of octets in an MPDU, below which an RTS/CTS + * handshake is not performed. + * + * Range: 0 - 4096 + */ + u16 rts_threshold; + + /* The RX Clear Channel Assessment threshold in the PHY + * (the energy threshold). 
+ * + * Range: ENABLE_ENERGY_D == 0x140A + * DISABLE_ENERGY_D == 0xFFEF + */ + u16 rx_cca_threshold; + + /* Occupied Rx mem-blocks number which requires interrupting the host + * (0 = no buffering, 0xffff = disabled). + * + * Range: uint16_t + */ + u16 irq_blk_threshold; + + /* Rx packets number which requires interrupting the host + * (0 = no buffering). + * + * Range: uint16_t + */ + u16 irq_pkt_threshold; + + /* Max time in msec the FW may delay RX-Complete interrupt. + * + * Range: 1 - 100 + */ + u16 irq_timeout; + + /* The RX queue type. + * + * Range: RX_QUEUE_TYPE_RX_LOW_PRIORITY, RX_QUEUE_TYPE_RX_HIGH_PRIORITY, + */ + u8 queue_type; +} __packed; + +#define CONF_TX_MAX_RATE_CLASSES 10 + +#define CONF_TX_RATE_MASK_UNSPECIFIED 0 +#define CONF_TX_RATE_MASK_BASIC (CONF_HW_BIT_RATE_1MBPS | \ + CONF_HW_BIT_RATE_2MBPS) +#define CONF_TX_RATE_RETRY_LIMIT 10 + +/* basic rates for p2p operations (probe req/resp, etc.) */ +#define CONF_TX_RATE_MASK_BASIC_P2P CONF_HW_BIT_RATE_6MBPS + +/* Rates supported for data packets when operating as STA/AP. Note the absence + * of the 22Mbps rate. There is a FW limitation on 12 rates so we must drop + * one. The rate dropped is not mandatory under any operating mode. 
+ */ +#define CONF_TX_ENABLED_RATES (CONF_HW_BIT_RATE_1MBPS | \ + CONF_HW_BIT_RATE_2MBPS | CONF_HW_BIT_RATE_5_5MBPS | \ + CONF_HW_BIT_RATE_6MBPS | CONF_HW_BIT_RATE_9MBPS | \ + CONF_HW_BIT_RATE_11MBPS | CONF_HW_BIT_RATE_12MBPS | \ + CONF_HW_BIT_RATE_18MBPS | CONF_HW_BIT_RATE_24MBPS | \ + CONF_HW_BIT_RATE_36MBPS | CONF_HW_BIT_RATE_48MBPS | \ + CONF_HW_BIT_RATE_54MBPS) + +#define CONF_TX_CCK_RATES (CONF_HW_BIT_RATE_1MBPS | \ + CONF_HW_BIT_RATE_2MBPS | CONF_HW_BIT_RATE_5_5MBPS | \ + CONF_HW_BIT_RATE_11MBPS) + +#define CONF_TX_OFDM_RATES (CONF_HW_BIT_RATE_6MBPS | \ + CONF_HW_BIT_RATE_12MBPS | CONF_HW_BIT_RATE_24MBPS | \ + CONF_HW_BIT_RATE_36MBPS | CONF_HW_BIT_RATE_48MBPS | \ + CONF_HW_BIT_RATE_54MBPS) + +#define CONF_TX_MCS_RATES (CONF_HW_BIT_RATE_MCS_0 | \ + CONF_HW_BIT_RATE_MCS_1 | CONF_HW_BIT_RATE_MCS_2 | \ + CONF_HW_BIT_RATE_MCS_3 | CONF_HW_BIT_RATE_MCS_4 | \ + CONF_HW_BIT_RATE_MCS_5 | CONF_HW_BIT_RATE_MCS_6 | \ + CONF_HW_BIT_RATE_MCS_7) + +/* Default rates for management traffic when operating in AP mode. This + * should be configured according to the basic rate set of the AP + */ +#define CONF_TX_AP_DEFAULT_MGMT_RATES (CONF_HW_BIT_RATE_1MBPS | \ + CONF_HW_BIT_RATE_2MBPS | CONF_HW_BIT_RATE_5_5MBPS) + +/* default rates for working as IBSS (11b and OFDM) */ +#define CONF_TX_IBSS_DEFAULT_RATES (CONF_HW_BIT_RATE_1MBPS | \ + CONF_HW_BIT_RATE_2MBPS | CONF_HW_BIT_RATE_5_5MBPS | \ + CONF_HW_BIT_RATE_11MBPS | CONF_TX_OFDM_RATES) + +struct conf_tx_rate_class { + /* The rates enabled for this rate class. + * + * Range: CONF_HW_BIT_RATE_* bit mask + */ + u32 enabled_rates; + + /* The dot11 short retry limit used for TX retries. + * + * Range: uint8_t + */ + u8 short_retry_limit; + + /* The dot11 long retry limit used for TX retries. + * + * Range: uint8_t + */ + u8 long_retry_limit; + + /* Flags controlling the attributes of TX transmission. 
+ * + * Range: bit 0: Truncate - when set, FW attempts to send a frame stop + * when the total valid per-rate attempts have + * been exhausted; otherwise transmissions + * will continue at the lowest available rate + * until the appropriate one of the + * short_retry_limit, long_retry_limit, + * dot11_max_transmit_msdu_life_time, or + * max_tx_life_time, is exhausted. + * 1: Preamble Override - indicates if the preamble type + * should be used in TX. + * 2: Preamble Type - the type of the preamble to be used by + * the policy (0 - long preamble, 1 - short preamble. + */ + u8 aflags; +} __packed; + +#define CONF_TX_MAX_AC_COUNT 4 + +/* Slot number setting to start transmission at PIFS interval */ +#define CONF_TX_AIFS_PIFS 1 +/* Slot number setting to start transmission at DIFS interval normal + * DCF access + */ +#define CONF_TX_AIFS_DIFS 2 + +enum conf_tx_ac { + CONF_TX_AC_BE = 0, /* best effort / legacy */ + CONF_TX_AC_BK = 1, /* background */ + CONF_TX_AC_VI = 2, /* video */ + CONF_TX_AC_VO = 3, /* voice */ + CONF_TX_AC_CTS2SELF = 4, /* fictitious AC, follows AC_VO */ + CONF_TX_AC_ANY_TID = 0xff +}; + +struct conf_sig_weights { + /* RSSI from beacons average weight. + * + * Range: uint8_t + */ + u8 rssi_bcn_avg_weight; + + /* RSSI from data average weight. + * + * Range: uint8_t + */ + u8 rssi_pkt_avg_weight; + + /* SNR from beacons average weight. + * + * Range: uint8_t + */ + u8 snr_bcn_avg_weight; + + /* SNR from data average weight. + * + * Range: uint8_t + */ + u8 snr_pkt_avg_weight; +} __packed; + +struct conf_tx_ac_category { + /* The AC class identifier. + * + * Range: enum conf_tx_ac + */ + u8 ac; + + /* The contention window minimum size (in slots) for the access + * class. + * + * Range: uint8_t + */ + u8 cw_min; + + /* The contention window maximum size (in slots) for the access + * class. + * + * Range: uint8_t + */ + u16 cw_max; + + /* The AIF value (in slots) for the access class. 
+ * + * Range: uint8_t + */ + u8 aifsn; + + /* The TX Op Limit (in microseconds) for the access class. + * + * Range: uint16_t + */ + u16 tx_op_limit; + + /* Is the MU EDCA configured + * + * Range: uint8_t + */ + u8 is_mu_edca; + + /* The AIFSN value for the corresponding access class + * + * Range: uint8_t + */ + u8 mu_edca_aifs; + + /* The ECWmin and ECWmax value is indicating contention window maximum + * size (in slots) for the access + * + * Range: uint8_t + */ + u8 mu_edca_ecw_min_max; + + /* The MU EDCA timer (in microseconds) obtaining an EDCA TXOP + * for STA using MU EDCA parameters + * + * Range: uint8_t + */ + u8 mu_edca_timer; +} __packed; + +#define CONF_TX_MAX_TID_COUNT 8 + +/* Allow TX BA on all TIDs but 6,7. These are currently reserved in the FW */ +#define CONF_TX_BA_ENABLED_TID_BITMAP 0x3F + +enum { + CONF_CHANNEL_TYPE_DCF = 0, /* DC/LEGACY*/ + CONF_CHANNEL_TYPE_EDCF = 1, /* EDCA*/ + CONF_CHANNEL_TYPE_HCCA = 2, /* HCCA*/ +}; + +enum { + CONF_PS_SCHEME_LEGACY = 0, + CONF_PS_SCHEME_UPSD_TRIGGER = 1, + CONF_PS_SCHEME_LEGACY_PSPOLL = 2, + CONF_PS_SCHEME_SAPSD = 3, +}; + +enum { + CONF_ACK_POLICY_LEGACY = 0, + CONF_ACK_POLICY_NO_ACK = 1, + CONF_ACK_POLICY_BLOCK = 2, +}; + +struct conf_tx_tid { + u8 queue_id; + u8 channel_type; + u8 tsid; + u8 ps_scheme; + u8 ack_policy; + u32 apsd_conf[2]; +} __packed; + +struct conf_tx_settings { + /* The TX ED value for TELEC Enable/Disable. + * + * Range: 0, 1 + */ + u8 tx_energy_detection; + + /* Configuration for rate classes for TX (currently only one + * rate class supported). Used in non-AP mode. + */ + struct conf_tx_rate_class sta_rc_conf; + + /* Configuration for access categories for TX rate control. 
+ */ + u8 ac_conf_count; + /*struct conf_tx_ac_category ac_conf[CONF_TX_MAX_AC_COUNT];*/ + struct conf_tx_ac_category ac_conf0; + struct conf_tx_ac_category ac_conf1; + struct conf_tx_ac_category ac_conf2; + struct conf_tx_ac_category ac_conf3; + + /* AP-mode - allow this number of TX retries to a station before an + * event is triggered from FW. + * In AP-mode the hlids of unreachable stations are given in the + * "sta_tx_retry_exceeded" member in the event mailbox. + */ + u8 max_tx_retries; + + /* AP-mode - after this number of seconds a connected station is + * considered inactive. + */ + u16 ap_aging_period; + + /* Configuration for TID parameters. + */ + u8 tid_conf_count; + /* struct conf_tx_tid tid_conf[]; */ + struct conf_tx_tid tid_conf0; + struct conf_tx_tid tid_conf1; + struct conf_tx_tid tid_conf2; + struct conf_tx_tid tid_conf3; + struct conf_tx_tid tid_conf4; + struct conf_tx_tid tid_conf5; + struct conf_tx_tid tid_conf6; + struct conf_tx_tid tid_conf7; + + /* The TX fragmentation threshold. + * + * Range: uint16_t + */ + u16 frag_threshold; + + /* Max time in msec the FW may delay frame TX-Complete interrupt. + * + * Range: uint16_t + */ + u16 tx_compl_timeout; + + /* Completed TX packet count which requires to issue the TX-Complete + * interrupt. 
+ * + * Range: uint16_t + */ + u16 tx_compl_threshold; + + /* The rate used for control messages and scanning on the 2.4GHz band + * + * Range: CONF_HW_BIT_RATE_* bit mask + */ + u32 basic_rate; + + /* The rate used for control messages and scanning on the 5GHz band + * + * Range: CONF_HW_BIT_RATE_* bit mask + */ + u32 basic_rate_5; + + /* TX retry limits for templates + */ + u8 tmpl_short_retry_limit; + u8 tmpl_long_retry_limit; + + /* Time in ms for Tx watchdog timer to expire */ + u32 tx_watchdog_timeout; + + /* when a slow link has this much packets pending, it becomes a low + * priority link, scheduling-wise + */ + u8 slow_link_thold; + + /* when a fast link has this much packets pending, it becomes a low + * priority link, scheduling-wise + */ + u8 fast_link_thold; +} __packed; + +enum { + CONF_WAKE_UP_EVENT_BEACON = 0x00, /* Wake on every Beacon */ + CONF_WAKE_UP_EVENT_DTIM = 0x01, /* Wake on every DTIM */ + CONF_WAKE_UP_EVENT_N_DTIM = 0x02, /* Wake every Nth DTIM */ + CONF_WAKE_UP_EVENT_LIMIT = CONF_WAKE_UP_EVENT_N_DTIM, + /* Not supported: */ + CONF_WAKE_UP_EVENT_N_BEACONS = 0x03, /* Wake every Nth beacon */ + CONF_WAKE_UP_EVENT_BITS_MASK = 0x0F +}; + +#define CONF_MAX_BCN_FILT_IE_COUNT 32 + +#define CONF_BCN_RULE_PASS_ON_CHANGE BIT(0) +#define CONF_BCN_RULE_PASS_ON_APPEARANCE BIT(1) + +#define CONF_BCN_IE_OUI_LEN 3 +#define CONF_BCN_IE_VER_LEN 2 + +struct conf_bcn_filt_rule { + /* IE number to which to associate a rule. + * + * Range: uint8_t + */ + u8 ie; + + /* Rule to associate with the specific ie. 
+ * + * Range: CONF_BCN_RULE_PASS_ON_* + */ + u8 rule; + + /* OUI for the vendor specific IE (221) + */ + u8 oui[3]; + + /* Type for the vendor specific IE (221) + */ + u8 type; + + /* Version for the vendor specific IE (221) + */ + u8 version[2]; +} __packed; + +enum conf_bcn_filt_mode { + CONF_BCN_FILT_MODE_DISABLED = 0, + CONF_BCN_FILT_MODE_ENABLED = 1 +}; + +enum conf_bet_mode { + CONF_BET_MODE_DISABLE = 0, + CONF_BET_MODE_ENABLE = 1, +}; + +struct conf_conn_settings { + /* Enable or disable the beacon filtering. + * + * Range: CONF_BCN_FILT_MODE_* + */ + u8 bcn_filt_mode; + + /* Configure Beacon filter pass-through rules. + */ + u8 bcn_filt_ie_count; + /*struct conf_bcn_filt_rule bcn_filt_ie[CONF_MAX_BCN_FILT_IE_COUNT];*/ + /* struct conf_bcn_filt_rule bcn_filt_ie[32]; */ + struct conf_bcn_filt_rule bcn_filt_ie0; + struct conf_bcn_filt_rule bcn_filt_ie1; + struct conf_bcn_filt_rule bcn_filt_ie2; + struct conf_bcn_filt_rule bcn_filt_ie3; + struct conf_bcn_filt_rule bcn_filt_ie4; + struct conf_bcn_filt_rule bcn_filt_ie5; + struct conf_bcn_filt_rule bcn_filt_ie6; + struct conf_bcn_filt_rule bcn_filt_ie7; + struct conf_bcn_filt_rule bcn_filt_ie8; + struct conf_bcn_filt_rule bcn_filt_ie9; + struct conf_bcn_filt_rule bcn_filt_ie10; + struct conf_bcn_filt_rule bcn_filt_ie11; + struct conf_bcn_filt_rule bcn_filt_ie12; + struct conf_bcn_filt_rule bcn_filt_ie13; + struct conf_bcn_filt_rule bcn_filt_ie14; + struct conf_bcn_filt_rule bcn_filt_ie15; + struct conf_bcn_filt_rule bcn_filt_ie16; + struct conf_bcn_filt_rule bcn_filt_ie17; + struct conf_bcn_filt_rule bcn_filt_ie18; + struct conf_bcn_filt_rule bcn_filt_ie19; + struct conf_bcn_filt_rule bcn_filt_ie20; + struct conf_bcn_filt_rule bcn_filt_ie21; + struct conf_bcn_filt_rule bcn_filt_ie22; + struct conf_bcn_filt_rule bcn_filt_ie23; + struct conf_bcn_filt_rule bcn_filt_ie24; + struct conf_bcn_filt_rule bcn_filt_ie25; + struct conf_bcn_filt_rule bcn_filt_ie26; + struct conf_bcn_filt_rule bcn_filt_ie27; + struct
conf_bcn_filt_rule bcn_filt_ie28; + struct conf_bcn_filt_rule bcn_filt_ie29; + struct conf_bcn_filt_rule bcn_filt_ie30; + struct conf_bcn_filt_rule bcn_filt_ie31; + + /* The number of consecutive beacons to lose before the firmware + * goes out of sync. + * + * Range: uint32_t + */ + u32 synch_fail_thold; + + /* After going out of sync, the number of TUs to wait without a further + * received beacon (or probe response) before issuing the BSS_EVENT_LOSE + * event. + * + * Range: uint32_t + */ + u32 bss_lose_timeout; + + /* Beacon receive timeout. + * + * Range: uint32_t + */ + u32 beacon_rx_timeout; + + /* Broadcast receive timeout. + * + * Range: uint32_t + */ + u32 broadcast_timeout; + + /* Enable/disable reception of broadcast packets in power save mode + * + * Range: 1 - enable, 0 - disable + */ + u8 rx_broadcast_in_ps; + + /* Consecutive PS Poll failures before sending event to driver + * + * Range: uint8_t + */ + u8 ps_poll_threshold; + + /* Configuration of signal average weights. + */ + struct conf_sig_weights sig_weights; + + /* Specifies whether the beacon early termination procedure is enabled + * or disabled. + * + * Range: CONF_BET_MODE_* + */ + u8 bet_enable; + + /* Specifies the maximum number of consecutive beacons that may be + * early terminated. After this number is reached at least one full + * beacon must be correctly received in FW before beacon ET + * resumes. + * + * Range 0 - 255 + */ + u8 bet_max_consecutive; + + /* Specifies the maximum number of times to try PSM entry if it fails + * (if sending the appropriate null-func message fails.) + * + * Range 0 - 255 + */ + u8 psm_entry_retries; + + /* Specifies the maximum number of times to try PSM exit if it fails + * (if sending the appropriate null-func message fails.)
+ * + * Range 0 - 255 + */ + u8 psm_exit_retries; + + /* Specifies the maximum number of times to try transmitting the PSM + * entry null-func frame for each PSM entry attempt + * + * Range 0 - 255 + */ + u8 psm_entry_nullfunc_retries; + + /* Specifies the dynamic PS timeout in ms that will be used + * by the FW when in AUTO_PS mode + */ + u16 dynamic_ps_timeout; + + /* Specifies whether dynamic PS should be disabled and PSM forced. + * This is required for certain WiFi certification tests. + */ + u8 forced_ps; + + /* Specifies the interval of the connection keep-alive null-func + * frame in ms. + * + * Range: 1000 - 3600000 + */ + u32 keep_alive_interval; + + /* Maximum listen interval supported by the driver in units of beacons. + * + * Range: uint8_t + */ + u8 max_listen_interval; + + /* Default sleep authorization for a new STA interface. This determines + * whether we can go to ELP. + */ + u8 sta_sleep_auth; + + /* Default RX BA Activity filter configuration + */ + u8 suspend_rx_ba_activity; +} __packed; + +struct conf_itrim_settings { + /* enable dco itrim */ + u8 enable; + + /* moderation timeout in microsecs from the last TX */ + u32 timeout; +} __packed; + +enum conf_fast_wakeup { + CONF_FAST_WAKEUP_ENABLE, + CONF_FAST_WAKEUP_DISABLE, +}; + +struct conf_pm_config_settings { + /* Host clock settling time + * + * Range: 0 - 30000 us + */ + u32 host_clk_settling_time; + + /* Host fast wakeup support + * + * Range: enum conf_fast_wakeup + */ + u8 host_fast_wakeup_support; +} __packed; + +struct conf_roam_trigger_settings { + /* The minimum interval between two trigger events.
+ * + * Range: 0 - 60000 ms + */ + u16 trigger_pacing; + + /* The weight for rssi/beacon average calculation + * + * Range: 0 - 255 + */ + u8 avg_weight_rssi_beacon; + + /* The weight for rssi/data frame average calculation + * + * Range: 0 - 255 + */ + u8 avg_weight_rssi_data; + + /* The weight for snr/beacon average calculation + * + * Range: 0 - 255 + */ + u8 avg_weight_snr_beacon; + + /* The weight for snr/data frame average calculation + * + * Range: 0 - 255 + */ + u8 avg_weight_snr_data; +} __packed; + +struct conf_scan_settings { + /* The minimum time to wait on each channel for active scans. + * This value will be used whenever there's a connected interface. + * + * Range: uint32_t TU/1000 + */ + u32 min_dwell_time_active; + + /* The maximum time to wait on each channel for active scans. + * This value will currently be used whenever there's a + * connected interface. It shouldn't exceed 30000 (~30ms) to avoid + * possible interference with VoIP traffic while scanning. + * + * Range: uint32_t TU/1000 + */ + u32 max_dwell_time_active; + + /* The minimum time to wait on each channel for active scans + * when it's possible to have longer scan dwell times. + * Currently this is used whenever we're idle on all interfaces. + * Longer dwell times improve detection of networks within a + * single scan. + * + * Range: uint32_t TU/1000 + */ + u32 min_dwell_time_active_long; + + /* The maximum time to wait on each channel for active scans + * when it's possible to have longer scan dwell times. + * See min_dwell_time_active_long + * + * Range: uint32_t TU/1000 + */ + u32 max_dwell_time_active_long; + + /* time to wait on the channel for passive scans (in TU/1000) */ + u32 dwell_time_passive; + + /* time to wait on the channel for DFS scans (in TU/1000) */ + u32 dwell_time_dfs; + + /* Number of probe requests to transmit on each active scan channel + * + * Range: uint16_t + */ + u16 num_probe_reqs; + + /* Scan trigger (split scan) timeout.
The FW will split the scan + * operation into slices of the given time and allow the FW to schedule + * other tasks in between. + * + * Range: uint32_t microseconds + */ + u32 split_scan_timeout; +} __packed; + +struct conf_sched_scan_settings { + /* The base time to wait on the channel for active scans (in TU/1000). + * The minimum dwell time is calculated according to this: + * min_dwell_time = base + num_of_probes_to_be_sent * delta_per_probe + * The maximum dwell time is calculated according to this: + * max_dwell_time = min_dwell_time + max_dwell_time_delta + */ + u32 base_dwell_time; + + /* The delta between the min dwell time and max dwell time for + * active scans (in TU/1000). The max dwell time is used by the FW once + * traffic is detected on the channel. + */ + u32 max_dwell_time_delta; + + /* Delta added to min dwell time per each probe in 2.4 GHz (TU/1000) */ + u32 dwell_time_delta_per_probe; + + /* Delta added to min dwell time per each probe in 5 GHz (TU/1000) */ + u32 dwell_time_delta_per_probe_5; + + /* time to wait on the channel for passive scans (in TU/1000) */ + u32 dwell_time_passive; + + /* time to wait on the channel for DFS scans (in TU/1000) */ + u32 dwell_time_dfs; + + /* number of probe requests to send on each channel in active scans */ + u8 num_probe_reqs; + + /* RSSI threshold to be used for filtering */ + s8 rssi_threshold; + + /* SNR threshold to be used for filtering */ + s8 snr_threshold; + + /* number of short-interval scheduled scan cycles before + * switching to long intervals + */ + u8 num_short_intervals; + + /* interval between each long scheduled scan cycle (in ms) */ + u16 long_interval; +} __packed; + +struct conf_ht_setting { + u8 rx_ba_win_size; + u8 tx_ba_win_size; + u16 inactivity_timeout; + + /* bitmap of enabled TIDs for TX BA sessions */ + u8 tx_ba_tid_bitmap; + + /* DEFAULT / WIDE / SISO20 */ + u8 mode; +} __packed; + +struct conf_memory_settings { + /* Number of stations supported in IBSS mode */ + u8
num_stations; + + /* Number of ssid profiles used in IBSS mode */ + u8 ssid_profiles; + + /* Number of memory buffers allocated to rx pool */ + u8 rx_block_num; + + /* Minimum number of blocks allocated to tx pool */ + u8 tx_min_block_num; + + /* Disable/Enable dynamic memory */ + u8 dynamic_memory; + + /* Minimum required free tx memory blocks to ensure optimum + * performance + * + * Range: 0-120 + */ + u8 min_req_tx_blocks; + + /* Minimum required free rx memory blocks to ensure optimum + * performance + * + * Range: 0-120 + */ + u8 min_req_rx_blocks; + + /* Minimum number of mem blocks (free+used) guaranteed for TX + * + * Range: 0-120 + */ + u8 tx_min; +} __packed; + +struct conf_rx_streaming_settings { + /* RX Streaming duration (in msec) from last tx/rx + * + * Range: uint32_t + */ + u32 duration; + + /* Bitmap of tids to be polled during RX streaming. + * (Note: it doesn't look like it really matters) + * + * Range: 0x1-0xff + */ + u8 queues; + + /* RX Streaming interval.
+ * (Note: this value is also used as the rx streaming timeout) + * Range: 0 (disabled), 10 - 100 + */ + u8 interval; + + /* enable rx streaming also when there is no coex activity + */ + u8 always; +} __packed; + +struct conf_fwlog { + /* Continuous or on-demand */ + u8 mode; + + /* Number of memory blocks dedicated for the FW logger + * + * Range: 2-16, or 0 to disable the FW logger + */ + u8 mem_blocks; + + /* Minimum log level threshold */ + u8 severity; + + /* Include/exclude timestamps from the log messages */ + u8 timestamp; + + /* See enum cc33xx_fwlogger_output */ + u8 output; + + /* Regulates the frequency of log messages */ + u8 threshold; +} __packed; + +#define ACX_RATE_MGMT_NUM_OF_RATES 13 +struct conf_rate_policy_settings { + u16 rate_retry_score; + u16 per_add; + u16 per_th1; + u16 per_th2; + u16 max_per; + u8 inverse_curiosity_factor; + u8 tx_fail_low_th; + u8 tx_fail_high_th; + u8 per_alpha_shift; + u8 per_add_shift; + u8 per_beta1_shift; + u8 per_beta2_shift; + u8 rate_check_up; + u8 rate_check_down; + u8 rate_retry_policy[13]; +} __packed; + +struct conf_hangover_settings { + u32 recover_time; + u8 hangover_period; + u8 dynamic_mode; + u8 early_termination_mode; + u8 max_period; + u8 min_period; + u8 increase_delta; + u8 decrease_delta; + u8 quiet_time; + u8 increase_time; + u8 window_size; +} __packed; + +enum { + CLOCK_CONFIG_16_2_M = 1, + CLOCK_CONFIG_16_368_M, + CLOCK_CONFIG_16_8_M, + CLOCK_CONFIG_19_2_M, + CLOCK_CONFIG_26_M, + CLOCK_CONFIG_32_736_M, + CLOCK_CONFIG_33_6_M, + CLOCK_CONFIG_38_468_M, + CLOCK_CONFIG_52_M, + + NUM_CLOCK_CONFIGS, +}; + +enum cc33xx_ht_mode { + /* Default - use MIMO, fallback to SISO20 */ + HT_MODE_DEFAULT = 0, + + /* Wide - use SISO40 */ + HT_MODE_WIDE = 1, + + /* Use SISO20 */ + HT_MODE_SISO20 = 2, +}; + +struct conf_ap_sleep_settings { + /* Duty Cycle (20-80% of staying Awake) for IDLE AP + * (0: disable) + */ + u8 idle_duty_cycle; + /* Duty Cycle (20-80% of staying Awake) for Connected AP + * (0: disable) + */ +
u8 connected_duty_cycle; + /* Maximum stations that are allowed to be connected to AP + * (255: no limit) + */ + u8 max_stations_thresh; + /* Timeout until enabling the Sleep Mechanism after data stops + * [unit: 100 msec] + */ + u8 idle_conn_thresh; +} __packed; + +#define CHANNELS_COUNT 39 /* 14 2.4GHz channels, 25 5GHz channels */ +#define PER_CHANNEL_REG_RULE_BYTES 13 +#define REG_RULES_COUNT (CHANNELS_COUNT * PER_CHANNEL_REG_RULE_BYTES) /* 507 */ + +/* TX Power limitation for a channel, used for reg domain */ +struct conf_channel_power_limit { + u32 reg_lim_0; + u32 reg_lim_1; + u32 reg_lim_2; + u8 reg_lim_3; +} __packed; + +struct conf_coex_configuration { + /* Work without Coex HW + * + * Range: 1 - YES, 0 - NO + */ + u8 disable_coex; + /* Choose whether an external SoC entity is connected + * + * Range: 1 - YES, 0 - NO + */ + u8 is_ext_soc_enable; + /* External SoC grant polarity + * + * 0 - Active Low + * + * 1 - Active High (Default) + */ + u8 ext_soc_grant_polarity; + /* External SoC priority polarity + * + * 0 - Active Low (Default) + * + * 1 - Active High + */ + u8 ext_soc_priority_polarity; + /* External SoC request polarity + * + * 0 - Active Low (Default) + * + * 1 - Active High + */ + u8 ext_soc_request_polarity; + u16 ext_soc_min_grant_time; + u16 ext_soc_max_grant_time; + /* Range: 0 - 20 us + */ + u8 ext_soc_t2_time; + + u8 ext_soc_to_wifi_grant_delay; + u8 ext_soc_to_ble_grant_delay; +} __packed; + +struct conf_iomux_configuration { + /* For any iomux pull value: + * 1: Pull up + * 2: Pull down + * 3: Pull disable + * ff: Default value set by HW + * ANY other value is invalid + */ + u8 slow_clock_in_pull_val; + u8 sdio_clk_pull_val; + u8 sdio_cmd_pull_val; + u8 sdio_d0_pull_val; + u8 sdio_d1_pull_val; + u8 sdio_d2_pull_val; + u8 sdio_d3_pull_val; + u8 host_irq_wl_pull_val; + u8 uart1_tx_pull_val; + u8 uart1_rx_pull_val; + u8 uart1_cts_pull_val; + u8 uart1_rts_pull_val; + u8 coex_priority_pull_val; + u8 coex_req_pull_val; + u8 coex_grant_pull_val;
+ u8 host_irq_ble_pull_val; + u8 fast_clk_req_pull_val; + u8 ant_sel_pull_val; +} __packed; + +struct conf_ant_diversity { + /* First beacons after antenna switch. + * In this window we assess our satisfaction with the new antenna. + */ + u8 fast_switching_window; + + /* Deltas above this threshold between the curiosity score and + * the average RSSI will lead to antenna switch. + */ + u8 rssi_delta_for_switching; + + /* Used in the first beacons after antenna switch: + * Deltas above this threshold between the average RSSI and + * the curiosity score will make us switch the antennas back. + */ + u8 rssi_delta_for_fast_switching; + + /* Curiosity punishment in beacon timeout after an antenna switch. + */ + u8 curiosity_punish; + + /* Curiosity raise in beacon timeout not after an antenna switch. + */ + u8 curiosity_raise; + + /* Used for the average RSSI punishment in beacon timeout + * not after antenna switch. + */ + u8 consecutive_missed_beacons_threshold; + + /* Used in the curiosity metric. + */ + u8 compensation_log; + + /* Used in the average RSSI metric. + */ + u8 log_alpha; + + /* Curiosity initialization score. + */ + s8 initial_curiosity; + + /* MR configuration: should the AP follow the STA antenna or use the default antenna. + */ + u8 ap_follows_sta; + + /* MR configuration: should the BLE follow the STA antenna or use the default antenna. + */ + u8 ble_follows_sta; + + /* The antenna to use when the diversity mechanism is not in charge.
+ */ + u8 default_antenna; +} __packed; + +struct cc33xx_core_conf { + u8 enable_5ghz; + u8 enable_ble; + u8 enable_at_test_debug; + u8 disable_beamforming_fftp; + u32 ble_uart_baudrate; + u8 enable_flow_ctrl; + u8 listen_interval; + u8 wake_up_event; + u8 suspend_listen_interval; + u8 suspend_wake_up_event; + u8 per_channel_power_limit[507]; + u32 internal_slowclk_wakeup_earlier; + u32 internal_slowclk_open_window_longer; + u32 external_slowclk_wakeup_earlier; + u32 external_slowclk_open_window_longer; + struct conf_coex_configuration coex_configuration; + /* Prevent HW recovery. FW will remain stuck. */ + u8 no_recovery; + u8 disable_logger; + u8 mixed_mode_support; + u8 sram_ldo_voltage_trimming; + u32 xtal_settling_time_usec; + struct conf_ant_diversity ant_diversity; + struct conf_iomux_configuration iomux_configuration; +} __packed; + +struct cc33xx_mac_conf { + u8 ps_scheme; + u8 he_enable; + u8 ap_max_num_stations; +} __packed; + +struct cc33xx_phy_conf { + u8 insertion_loss_2_4ghz[2]; + u8 insertion_loss_5ghz[2]; + u8 reserved_0[2]; + u8 ant_gain_2_4ghz[2]; + u8 ant_gain_5ghz[2]; + u8 reserved_1[2]; + u8 ble_ch_lim_1m[40]; + u8 ble_ch_lim_2m[40]; + u8 one_time_calibration_only; + u8 is_diplexer_present; + u8 num_of_antennas; + u8 reg_domain; + u16 calib_period; +} __packed; + +struct cc33xx_host_conf { + struct conf_rx_settings rx; + struct conf_tx_settings tx; + struct conf_conn_settings conn; + struct conf_itrim_settings itrim; + struct conf_pm_config_settings pm_config; + struct conf_roam_trigger_settings roam_trigger; + struct conf_scan_settings scan; + struct conf_sched_scan_settings sched_scan; + struct conf_ht_setting ht; + struct conf_memory_settings mem; + struct conf_rx_streaming_settings rx_streaming; + struct conf_fwlog fwlog; + struct conf_rate_policy_settings rate; + struct conf_hangover_settings hangover; + struct conf_ap_sleep_settings ap_sleep; + +} __packed; + +struct cc33xx_conf_file { + struct cc33xx_conf_header header; + struct 
cc33xx_phy_conf phy; + struct cc33xx_mac_conf mac; + struct cc33xx_core_conf core; + struct cc33xx_host_conf host_conf; +} __packed; + +#endif From patchwork Thu Nov 7 12:52:07 2024
From: Michael Nemanov To: Kalle Valo , "David S .
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 15/17] wifi: cc33xx: Add ps.c, ps.h Date: Thu, 7 Nov 2024 14:52:07 +0200 Message-ID: <20241107125209.1736277-16-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> 802.11 power-save modes are handled automatically by HW but can be overridden here. Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/ps.c | 108 ++++++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/ps.h | 16 +++++ 2 files changed, 124 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/ps.c create mode 100644 drivers/net/wireless/ti/cc33xx/ps.h diff --git a/drivers/net/wireless/ti/cc33xx/ps.c b/drivers/net/wireless/ti/cc33xx/ps.c new file mode 100644 index 000000000000..768f1f2fc3f1 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/ps.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include "ps.h" +#include "tx.h" +#include "debug.h" + +int cc33xx_ps_set_mode(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum cc33xx_cmd_ps_mode_e mode) +{ + int ret; + u16 timeout = cc->conf.host_conf.conn.dynamic_ps_timeout; + + switch (mode) { + case STATION_AUTO_PS_MODE: + case STATION_POWER_SAVE_MODE: + ret = cc33xx_cmd_ps_mode(cc, wlvif, mode, timeout); + if (ret < 0) + return ret; + + set_bit(WLVIF_FLAG_IN_PS, &wlvif->flags); + break; + + case STATION_ACTIVE_MODE: + ret = cc33xx_cmd_ps_mode(cc, wlvif, mode, 0); + if (ret < 0) + return ret; + + clear_bit(WLVIF_FLAG_IN_PS,
&wlvif->flags); + break; + + default: + cc33xx_warning("trying to set ps to unsupported mode %d", mode); + ret = -EINVAL; + } + + return ret; +} + +static void cc33xx_ps_filter_frames(struct cc33xx *cc, u8 hlid) +{ + int i; + struct sk_buff *skb; + struct ieee80211_tx_info *info; + unsigned long flags; + int filtered[NUM_TX_QUEUES]; + struct cc33xx_link *lnk = &cc->links[hlid]; + + /* filter all frames currently in the low level queues for this hlid */ + for (i = 0; i < NUM_TX_QUEUES; i++) { + filtered[i] = 0; + while ((skb = skb_dequeue(&lnk->tx_queue[i]))) { + filtered[i]++; + + if (WARN_ON(cc33xx_is_dummy_packet(cc, skb))) + continue; + + info = IEEE80211_SKB_CB(skb); + info->flags |= IEEE80211_TX_STAT_TX_FILTERED; + info->status.rates[0].idx = -1; + ieee80211_tx_status_ni(cc->hw, skb); + } + } + + spin_lock_irqsave(&cc->cc_lock, flags); + for (i = 0; i < NUM_TX_QUEUES; i++) { + cc->tx_queue_count[i] -= filtered[i]; + if (lnk->wlvif) + lnk->wlvif->tx_queue_count[i] -= filtered[i]; + } + + spin_unlock_irqrestore(&cc->cc_lock, flags); + cc33xx_handle_tx_low_watermark(cc); +} + +void cc33xx_ps_link_start(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 hlid, bool clean_queues) +{ + struct ieee80211_sta *sta; + struct ieee80211_vif *vif = cc33xx_wlvif_to_vif(wlvif); + + if (WARN_ON_ONCE(wlvif->bss_type != BSS_TYPE_AP_BSS)) + return; + + if (!test_bit(hlid, wlvif->ap.sta_hlid_map) || + test_bit(hlid, &cc->ap_ps_map)) + return; + + rcu_read_lock(); + sta = ieee80211_find_sta(vif, cc->links[hlid].addr); + if (!sta) { + cc33xx_error("could not find sta %pM for starting ps", + cc->links[hlid].addr); + rcu_read_unlock(); + return; + } + + ieee80211_sta_ps_transition_ni(sta, true); + rcu_read_unlock(); + + /* do we want to filter all frames from this link's queues? 
*/ + if (clean_queues) + cc33xx_ps_filter_frames(cc, hlid); + + __set_bit(hlid, &cc->ap_ps_map); +} diff --git a/drivers/net/wireless/ti/cc33xx/ps.h b/drivers/net/wireless/ti/cc33xx/ps.h new file mode 100644 index 000000000000..47f65b684b52 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/ps.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __PS_H__ +#define __PS_H__ + +#include "acx.h" + +int cc33xx_ps_set_mode(struct cc33xx *cc, struct cc33xx_vif *wlvif, + enum cc33xx_cmd_ps_mode_e mode); +void cc33xx_ps_link_start(struct cc33xx *cc, struct cc33xx_vif *wlvif, + u8 hlid, bool clean_queues); + +#endif /* __PS_H__ */ From patchwork Thu Nov 7 12:52:08 2024
From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 16/17] wifi: cc33xx: Add testmode.c, testmode.h Date: Thu, 7 Nov 2024 14:52:08 +0200 Message-ID: <20241107125209.1736277-17-michael.nemanov@ti.com> In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> Allows user-space tools to access FW APIs via the CFG80211_TESTMODE infrastructure. Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/cc33xx/testmode.c | 349 ++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/testmode.h | 12 + 2 files changed, 361 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/testmode.c create mode 100644 drivers/net/wireless/ti/cc33xx/testmode.h diff --git a/drivers/net/wireless/ti/cc33xx/testmode.c b/drivers/net/wireless/ti/cc33xx/testmode.c new file mode 100644 index 000000000000..b845610c5a30 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/testmode.c @@ -0,0 +1,349 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#include + +#include "cc33xx.h" +#include "acx.h" +#include "io.h" +#include "testmode.h" + +#define CC33XX_TM_MAX_DATA_LENGTH 1024 + +enum cc33xx_tm_commands { + CC33XX_TM_CMD_UNSPEC, + CC33XX_TM_CMD_TEST, + CC33XX_TM_CMD_INTERROGATE, + CC33XX_TM_CMD_CONFIGURE, + CC33XX_TM_CMD_NVS_PUSH, /* Not in use.
Keep to not break ABI */ + CC33XX_TM_CMD_SET_PLT_MODE, + CC33XX_TM_CMD_RECOVER, /* Not in use. Keep to not break ABI */ + CC33XX_TM_CMD_GET_MAC, + + __CC33XX_TM_CMD_AFTER_LAST +}; + +enum cc33xx_tm_attrs { + CC33XX_TM_ATTR_UNSPEC, + CC33XX_TM_ATTR_CMD_ID, + CC33XX_TM_ATTR_ANSWER, + CC33XX_TM_ATTR_DATA, + CC33XX_TM_ATTR_IE_ID, + CC33XX_TM_ATTR_PLT_MODE, + + __CC33XX_TM_ATTR_AFTER_LAST +}; + +#define CC33XX_TM_ATTR_MAX (__CC33XX_TM_ATTR_AFTER_LAST - 1) + +static struct nla_policy cc33xx_tm_policy[CC33XX_TM_ATTR_MAX + 1] = { + [CC33XX_TM_ATTR_CMD_ID] = { .type = NLA_U32 }, + [CC33XX_TM_ATTR_ANSWER] = { .type = NLA_U8 }, + [CC33XX_TM_ATTR_DATA] = { .type = NLA_BINARY, + .len = CC33XX_TM_MAX_DATA_LENGTH }, + [CC33XX_TM_ATTR_IE_ID] = { .type = NLA_U32 }, + [CC33XX_TM_ATTR_PLT_MODE] = { .type = NLA_U32 }, +}; + +static int cc33xx_tm_cmd_test(struct cc33xx *cc, struct nlattr *tb[]) +{ + int ret, len; + u16 buf_len; + struct sk_buff *skb; + void *buf; + u8 answer = 0; + + if (!tb[CC33XX_TM_ATTR_DATA]) + return -EINVAL; + + buf = nla_data(tb[CC33XX_TM_ATTR_DATA]); + buf_len = nla_len(tb[CC33XX_TM_ATTR_DATA]); + + if (tb[CC33XX_TM_ATTR_ANSWER]) + answer = nla_get_u8(tb[CC33XX_TM_ATTR_ANSWER]); + + if (buf_len > sizeof(struct cc33xx_command)) + return -EMSGSIZE; + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) { + ret = -EINVAL; + goto out; + } + + ret = cc33xx_cmd_test(cc, buf, buf_len, answer); + if (ret < 0) { + cc33xx_warning("testmode cmd test failed: %d", ret); + goto out; + } + + if (answer) { + /* If we got bip calibration answer print radio status */ + struct cc33xx_cmd_cal_p2g *params = + (struct cc33xx_cmd_cal_p2g *)buf; + s16 radio_status = (s16)le16_to_cpu(params->radio_status); + + if (params->test.id == TEST_CMD_P2G_CAL && radio_status < 0) + cc33xx_warning("testmode cmd: radio status=%d", + radio_status); + else + cc33xx_info("testmode cmd: radio status=%d", + radio_status); + + len = nla_total_size(buf_len); + skb = 
cfg80211_testmode_alloc_reply_skb(cc->hw->wiphy, len); + if (!skb) { + ret = -ENOMEM; + goto out; + } + + if (nla_put(skb, CC33XX_TM_ATTR_DATA, buf_len, buf)) { + kfree_skb(skb); + ret = -EMSGSIZE; + goto out; + } + + ret = cfg80211_testmode_reply(skb); + } + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +static int cc33xx_tm_cmd_interrogate(struct cc33xx *cc, struct nlattr *tb[]) +{ + int ret; + struct cc33xx_command *cmd; + struct sk_buff *skb; + u8 ie_id; + + if (!tb[CC33XX_TM_ATTR_IE_ID]) + return -EINVAL; + + ie_id = nla_get_u8(tb[CC33XX_TM_ATTR_IE_ID]); + + mutex_lock(&cc->mutex); + + if (unlikely(cc->state != CC33XX_STATE_ON)) { + ret = -EINVAL; + goto out; + } + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) { + ret = -ENOMEM; + goto out; + } + + ret = cc33xx_cmd_debug_inter(cc, ie_id, cmd, + sizeof(struct acx_header), sizeof(*cmd)); + if (ret < 0) { + cc33xx_warning("testmode cmd interrogate failed: %d", ret); + goto out_free; + } + + skb = cfg80211_testmode_alloc_reply_skb(cc->hw->wiphy, sizeof(*cmd)); + if (!skb) { + ret = -ENOMEM; + goto out_free; + } + + if (nla_put(skb, CC33XX_TM_ATTR_DATA, sizeof(*cmd), cmd)) { + kfree_skb(skb); + ret = -EMSGSIZE; + goto out_free; + } + + ret = cfg80211_testmode_reply(skb); + if (ret < 0) + goto out_free; + +out_free: + kfree(cmd); + +out: + mutex_unlock(&cc->mutex); + + return ret; +} + +static int cc33xx_tm_cmd_configure(struct cc33xx *cc, struct nlattr *tb[]) +{ + int ret; + u16 buf_len; + void *buf; + u8 ie_id; + + if (!tb[CC33XX_TM_ATTR_DATA]) + return -EINVAL; + if (!tb[CC33XX_TM_ATTR_IE_ID]) + return -EINVAL; + + ie_id = nla_get_u8(tb[CC33XX_TM_ATTR_IE_ID]); + buf = nla_data(tb[CC33XX_TM_ATTR_DATA]); + buf_len = nla_len(tb[CC33XX_TM_ATTR_DATA]); + + if (buf_len > sizeof(struct cc33xx_command)) + return -EMSGSIZE; + + mutex_lock(&cc->mutex); + ret = cc33xx_cmd_debug(cc, ie_id, buf, buf_len); + mutex_unlock(&cc->mutex); + + if (ret < 0) { + cc33xx_warning("testmode cmd configure failed: %d", 
ret); + return ret; + } + + return 0; +} + +static int cc33xx_tm_detect_fem(struct cc33xx *cc, struct nlattr *tb[]) +{ + /* return FEM type */ + int ret, len; + struct sk_buff *skb; + + ret = cc33xx_plt_start(cc, PLT_FEM_DETECT); + if (ret < 0) + goto out; + + mutex_lock(&cc->mutex); + + len = nla_total_size(sizeof(cc->fem_manuf)); + skb = cfg80211_testmode_alloc_reply_skb(cc->hw->wiphy, len); + if (!skb) { + ret = -ENOMEM; + goto out_mutex; + } + + if (nla_put(skb, CC33XX_TM_ATTR_DATA, sizeof(cc->fem_manuf), + &cc->fem_manuf)) { + kfree_skb(skb); + ret = -EMSGSIZE; + goto out_mutex; + } + + ret = cfg80211_testmode_reply(skb); + +out_mutex: + mutex_unlock(&cc->mutex); + + /* We always stop plt after DETECT mode */ + cc33xx_plt_stop(cc); +out: + return ret; +} + +static int cc33xx_tm_cmd_set_plt_mode(struct cc33xx *cc, struct nlattr *tb[]) +{ + u32 val; + int ret; + + if (!tb[CC33XX_TM_ATTR_PLT_MODE]) + return -EINVAL; + + val = nla_get_u32(tb[CC33XX_TM_ATTR_PLT_MODE]); + + switch (val) { + case PLT_OFF: + ret = cc33xx_plt_stop(cc); + break; + case PLT_ON: + case PLT_CHIP_AWAKE: + ret = cc33xx_plt_start(cc, val); + break; + case PLT_FEM_DETECT: + ret = cc33xx_tm_detect_fem(cc, tb); + break; + default: + ret = -EINVAL; + break; + } + + return ret; +} + +static int cc33xx_tm_cmd_get_mac(struct cc33xx *cc, struct nlattr *tb[]) +{ + struct sk_buff *skb; + u8 zero_mac[ETH_ALEN] = {0}; + int ret = 0; + + mutex_lock(&cc->mutex); + + if (!cc->plt) { + ret = -EINVAL; + goto out; + } + + if (memcmp(zero_mac, cc->efuse_mac_address, ETH_ALEN) == 0) { + ret = -EOPNOTSUPP; + goto out; + } + + skb = cfg80211_testmode_alloc_reply_skb(cc->hw->wiphy, ETH_ALEN); + if (!skb) { + ret = -ENOMEM; + goto out; + } + + if (nla_put(skb, CC33XX_TM_ATTR_DATA, + ETH_ALEN, cc->efuse_mac_address)) { + kfree_skb(skb); + ret = -EMSGSIZE; + goto out; + } + + ret = cfg80211_testmode_reply(skb); + if (ret < 0) + goto out; + +out: + mutex_unlock(&cc->mutex); + return ret; +} + +int cc33xx_tm_cmd(struct 
ieee80211_hw *hw, struct ieee80211_vif *vif, + void *data, int len) +{ + struct cc33xx *cc = hw->priv; + struct nlattr *tb[CC33XX_TM_ATTR_MAX + 1]; + u32 nla_cmd; + int err; + + err = nla_parse_deprecated(tb, CC33XX_TM_ATTR_MAX, data, len, + cc33xx_tm_policy, NULL); + if (err) + return err; + + if (!tb[CC33XX_TM_ATTR_CMD_ID]) + return -EINVAL; + + nla_cmd = nla_get_u32(tb[CC33XX_TM_ATTR_CMD_ID]); + + /* Only SET_PLT_MODE is allowed in case of mode PLT_CHIP_AWAKE */ + if (cc->plt_mode == PLT_CHIP_AWAKE && + nla_cmd != CC33XX_TM_CMD_SET_PLT_MODE) + return -EOPNOTSUPP; + + switch (nla_cmd) { + case CC33XX_TM_CMD_TEST: + return cc33xx_tm_cmd_test(cc, tb); + case CC33XX_TM_CMD_INTERROGATE: + return cc33xx_tm_cmd_interrogate(cc, tb); + case CC33XX_TM_CMD_CONFIGURE: + return cc33xx_tm_cmd_configure(cc, tb); + case CC33XX_TM_CMD_SET_PLT_MODE: + return cc33xx_tm_cmd_set_plt_mode(cc, tb); + case CC33XX_TM_CMD_GET_MAC: + return cc33xx_tm_cmd_get_mac(cc, tb); + default: + return -EOPNOTSUPP; + } +} diff --git a/drivers/net/wireless/ti/cc33xx/testmode.h b/drivers/net/wireless/ti/cc33xx/testmode.h new file mode 100644 index 000000000000..58f336202925 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/testmode.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2022-2024 Texas Instruments Incorporated - https://www.ti.com/ + */ + +#ifndef __TESTMODE_H__ +#define __TESTMODE_H__ + +int cc33xx_tm_cmd(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + void *data, int len); + +#endif /* __TESTMODE_H__ */ From patchwork Thu Nov 7 12:52:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nemanov, Michael" X-Patchwork-Id: 13866424 X-Patchwork-Delegate: kuba@kernel.org Received: from lelv0142.ext.ti.com (lelv0142.ext.ti.com [198.47.23.249]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) 
with ESMTPS id A0BAB215F7D; Thu, 7 Nov 2024 12:53:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.47.23.249 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730983990; cv=none; b=PxdZP5RGaEERW7gTWKQcPmWzGDn4TZCoLSJ5ZCvMXyedgqo6bFb9+awL5YCst/XyV/WsKaerkHfCn0h0Qz0msuxmTRY+nxw4CFE1XlE2P2iP+FE/9rn58j62uewEC+JbIcXEuwBuy77VVCW/JAl4BWHDqJenln/Y2xPy8mhP5Uk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730983990; c=relaxed/simple; bh=O4JIEnkWMQ605+o0syIyHSWT27O1VQEtWWa9lopfQjk=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=Q8gvyfPUkDfYm4DWObWac81VcJIkj5fiRLhKkSxxefoUDlpGqG1B1G3ZMayucJ9NG3F26IK27OgyeSv/VgVpV6X0hpOU6elLnwMA/seViHH1ApkBd8Cfqrbd6MTAdcTj2ZIdF/afN75VGHS9CRm/tTtapVYtGPh8731vQKxetYA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=ti.com; spf=pass smtp.mailfrom=ti.com; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b=Mg9vj6xl; arc=none smtp.client-ip=198.47.23.249 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=ti.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=ti.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="Mg9vj6xl" Received: from fllv0035.itg.ti.com ([10.64.41.0]) by lelv0142.ext.ti.com (8.15.2/8.15.2) with ESMTP id 4A7Cr2kS041981; Thu, 7 Nov 2024 06:53:02 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1730983982; bh=3Bf1eq+WdZi+EdwuJ3Oe2U0adIIvW8vJSFGrZiaYIY4=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=Mg9vj6xlmDOvw+fcrSzarLS0aouuxdOSzzMtKPth3FVE6HDCH/MC+iGDpgQB40eHC pcRZkjItif7Z0BR9QOv5wBOsY+cKenMCFExi1HW8ss0bfu4NC2uPLK/B1eURnIOV+P DD7zvUyQrr7vXyaZX/9ChH/ZbITwjyLhzCKL0x40= Received: from DLEE100.ent.ti.com (dlee100.ent.ti.com 
[157.170.170.30]) by fllv0035.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 4A7Cr2f3116182 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Thu, 7 Nov 2024 06:53:02 -0600 Received: from DLEE105.ent.ti.com (157.170.170.35) by DLEE100.ent.ti.com (157.170.170.30) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2507.23; Thu, 7 Nov 2024 06:53:02 -0600 Received: from lelvsmtp6.itg.ti.com (10.180.75.249) by DLEE105.ent.ti.com (157.170.170.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2507.23 via Frontend Transport; Thu, 7 Nov 2024 06:53:02 -0600 Received: from localhost (udb0389739.dhcp.ti.com [137.167.1.149]) by lelvsmtp6.itg.ti.com (8.15.2/8.15.2) with ESMTP id 4A7Cr1vX038593; Thu, 7 Nov 2024 06:53:01 -0600 From: Michael Nemanov To: Kalle Valo , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , , , , CC: Sabeeh Khan , Michael Nemanov Subject: [PATCH v5 17/17] wifi: cc33xx: Add Kconfig, Makefile Date: Thu, 7 Nov 2024 14:52:09 +0200 Message-ID: <20241107125209.1736277-18-michael.nemanov@ti.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241107125209.1736277-1-michael.nemanov@ti.com> References: <20241107125209.1736277-1-michael.nemanov@ti.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-C2ProcessedOrg: 333ef613-75bf-4e12-a4b1-8e3623f5dcea X-Patchwork-Delegate: kuba@kernel.org Integrate cc33xx into wireless/ti folder Signed-off-by: Michael Nemanov --- drivers/net/wireless/ti/Kconfig | 1 + drivers/net/wireless/ti/Makefile | 1 + drivers/net/wireless/ti/cc33xx/Kconfig | 24 ++++++++++++++++++++++++ drivers/net/wireless/ti/cc33xx/Makefile | 10 ++++++++++ 4 files changed, 36 insertions(+) create mode 100644 drivers/net/wireless/ti/cc33xx/Kconfig create mode 100644 drivers/net/wireless/ti/cc33xx/Makefile diff 
--git a/drivers/net/wireless/ti/Kconfig b/drivers/net/wireless/ti/Kconfig index 3fcd9e395f72..fa7214d6018c 100644 --- a/drivers/net/wireless/ti/Kconfig +++ b/drivers/net/wireless/ti/Kconfig @@ -14,6 +14,7 @@ if WLAN_VENDOR_TI source "drivers/net/wireless/ti/wl1251/Kconfig" source "drivers/net/wireless/ti/wl12xx/Kconfig" source "drivers/net/wireless/ti/wl18xx/Kconfig" +source "drivers/net/wireless/ti/cc33xx/Kconfig" # keep last for automatic dependencies source "drivers/net/wireless/ti/wlcore/Kconfig" diff --git a/drivers/net/wireless/ti/Makefile b/drivers/net/wireless/ti/Makefile index 05ee016594f8..4356f58b4b98 100644 --- a/drivers/net/wireless/ti/Makefile +++ b/drivers/net/wireless/ti/Makefile @@ -3,3 +3,4 @@ obj-$(CONFIG_WLCORE) += wlcore/ obj-$(CONFIG_WL12XX) += wl12xx/ obj-$(CONFIG_WL1251) += wl1251/ obj-$(CONFIG_WL18XX) += wl18xx/ +obj-$(CONFIG_CC33XX) += cc33xx/ diff --git a/drivers/net/wireless/ti/cc33xx/Kconfig b/drivers/net/wireless/ti/cc33xx/Kconfig new file mode 100644 index 000000000000..0c3ff97dacc7 --- /dev/null +++ b/drivers/net/wireless/ti/cc33xx/Kconfig @@ -0,0 +1,24 @@ +# SPDX-License-Identifier: GPL-2.0-only +config CC33XX + tristate "TI CC33XX support" + depends on MAC80211 + select FW_LOADER + help + This module contains the main code for TI CC33XX WLAN chips. It abstracts + hardware-specific differences among different chipset families. + Each chipset family needs to implement its own lower-level module + that will depend on this module for the common code. + + If you choose to build a module, it will be called cc33xx. Say N if + unsure. + +config CC33XX_SDIO + tristate "TI CC33XX SDIO support" + depends on CC33XX && MMC + help + This module adds support for the SDIO interface of adapters using + TI CC33XX WLAN chipsets. Select this if your platform is using + the SDIO bus. + + If you choose to build a module, it'll be called cc33xx_sdio. + Say N if unsure. 
diff --git a/drivers/net/wireless/ti/cc33xx/Makefile b/drivers/net/wireless/ti/cc33xx/Makefile
new file mode 100644
index 000000000000..6156f778edee
--- /dev/null
+++ b/drivers/net/wireless/ti/cc33xx/Makefile
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0
+
+cc33xx-objs = main.o cmd.o io.o event.o tx.o rx.o ps.o acx.o \
+	      boot.o init.o scan.o
+
+cc33xx_sdio-objs = sdio.o
+
+cc33xx-$(CONFIG_NL80211_TESTMODE) += testmode.o
+obj-$(CONFIG_CC33XX) += cc33xx.o
+obj-$(CONFIG_CC33XX_SDIO) += cc33xx_sdio.o