From patchwork Mon Oct 8 08:17:18 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630207 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BD98A184E for ; Mon, 8 Oct 2018 08:17:45 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id AF0E0287EF for ; Mon, 8 Oct 2018 08:17:45 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A334C28BE5; Mon, 8 Oct 2018 08:17:45 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 97DF128B27 for ; Mon, 8 Oct 2018 08:17:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726656AbeJHP2O (ORCPT ); Mon, 8 Oct 2018 11:28:14 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56280 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725983AbeJHP2N (ORCPT ); Mon, 8 Oct 2018 11:28:13 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9QjV-0003w6-9A; Mon, 08 Oct 2018 11:17:41 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Sara Sharon , Luca Coelho Date: Mon, 8 Oct 2018 11:17:18 +0300 Message-Id: <20181008081735.3401-2-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 01/18] iwlwifi: mvm: don't send keys when entering D3 Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Sara Sharon In the past, we needed to program the keys when entering D3. This was since we replaced the image. However, now that there is a single image, this is no longer needed. Note that RSC is sent separately in a new command. This solves issues with newer devices that support PN offload. Since driver re-sent the keys, the PN got zeroed and the receiver dropped the next packets, until PN caught up again. 
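To make the intent concrete, the new behaviour can be summarised by the sketch below; the capability flag and types are taken from the diff that follows, while the helper itself is purely illustrative and not part of the patch:

/*
 * Keys only need to be re-programmed on suspend when the firmware image
 * is swapped for D3.  With a consolidated (unified) D3/D0 image the
 * firmware keeps its key and PN state, so re-sending the keys would
 * reset the PN and make the receiver drop frames until the counters
 * catch up again.
 */
static bool iwl_mvm_wowlan_reprogram_keys(struct iwl_mvm *mvm, bool d0i3)
{
	bool unified = fw_has_capa(&mvm->fw->ucode_capa,
				   IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);

	return !d0i3 && !unified;	/* mirrors .configure_keys below */
}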
Signed-off-by: Sara Sharon Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/d3.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c index 210be26aadaa..fb981270f224 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c @@ -722,8 +722,10 @@ int iwl_mvm_wowlan_config_key_params(struct iwl_mvm *mvm, { struct iwl_wowlan_kek_kck_material_cmd kek_kck_cmd = {}; struct iwl_wowlan_tkip_params_cmd tkip_cmd = {}; + bool unified = fw_has_capa(&mvm->fw->ucode_capa, + IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG); struct wowlan_key_data key_data = { - .configure_keys = !d0i3, + .configure_keys = !d0i3 && !unified, .use_rsc_tsc = false, .tkip = &tkip_cmd, .use_tkip = false, From patchwork Mon Oct 8 08:17:19 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630205 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 592E7933 for ; Mon, 8 Oct 2018 08:17:45 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4A4CD287EF for ; Mon, 8 Oct 2018 08:17:45 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3E59D28BE6; Mon, 8 Oct 2018 08:17:45 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C20F228BE5 for ; Mon, 8 Oct 2018 08:17:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726787AbeJHP2O (ORCPT ); Mon, 8 Oct 2018 11:28:14 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56286 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726113AbeJHP2N (ORCPT ); Mon, 8 Oct 2018 11:28:13 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9QjW-0003w6-2Z; Mon, 08 Oct 2018 11:17:42 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Sara Sharon , Luca Coelho Date: Mon, 8 Oct 2018 11:17:19 +0300 Message-Id: <20181008081735.3401-3-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 02/18] iwlwifi: pcie: don't pad AMSDU packets Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Sara Sharon When we TX AMSDU, we shouldn't pad the packet. In the past, we were building AMSDU only in transport layer, and gen2 functions are built based on this. However, now that op mode may build AMSDUs, we need to take care of padding also in gen2 "non-pcie-amsdu" path. 
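The effect on the first TB length can be shown with a small stand-alone example; the lengths used here are arbitrary illustration values, not driver constants, and ALIGN mirrors the kernel macro used in the diff below:

#include <stdio.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* arbitrary example lengths, only to show the difference */
	unsigned int tx_cmd_len = 36, cmd_hdr_len = 4, hdr_len = 26,
		     first_tb_size = 20;
	unsigned int len = tx_cmd_len + cmd_hdr_len + hdr_len - first_tb_size;

	printf("non-A-MSDU (padded) tb1_len: %u\n", ALIGN(len, 4));	/* 48 */
	printf("A-MSDU (unpadded)   tb1_len: %u\n", len);		/* 46 */
	return 0;
}

At the call site the new flag is simply !amsdu, so only non-A-MSDU frames get the 4-byte round-up.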
Signed-off-by: Sara Sharon Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c index b71cf55480fc..09cdc15f7645 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c @@ -454,7 +454,8 @@ iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_cmd_meta *out_meta, int hdr_len, - int tx_cmd_len) + int tx_cmd_len, + bool pad) { int idx = iwl_pcie_get_cmd_index(txq, txq->write_ptr); struct iwl_tfh_tfd *tfd = iwl_pcie_get_tfd(trans, txq, idx); @@ -478,7 +479,10 @@ iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len - IWL_FIRST_TB_SIZE; - tb1_len = ALIGN(len, 4); + if (pad) + tb1_len = ALIGN(len, 4); + else + tb1_len = len; /* map the data for TB1 */ tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE; @@ -551,7 +555,7 @@ struct iwl_tfh_tfd *iwl_pcie_gen2_build_tfd(struct iwl_trans *trans, out_meta, hdr_len, len); return iwl_pcie_gen2_build_tx(trans, txq, dev_cmd, skb, out_meta, - hdr_len, len); + hdr_len, len, !amsdu); } int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, From patchwork Mon Oct 8 08:17:20 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630209 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 65B4B933 for ; Mon, 8 Oct 2018 08:17:47 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 522B4287EF for ; Mon, 8 Oct 2018 08:17:47 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4707828BE5; Mon, 8 Oct 2018 08:17:47 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A6CC8287EF for ; Mon, 8 Oct 2018 08:17:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726967AbeJHP2P (ORCPT ); Mon, 8 Oct 2018 11:28:15 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56294 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726451AbeJHP2P (ORCPT ); Mon, 8 Oct 2018 11:28:15 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9QjW-0003w6-PZ; Mon, 08 Oct 2018 11:17:43 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Sara Sharon , Luca Coelho Date: Mon, 8 Oct 2018 11:17:20 +0300 Message-Id: <20181008081735.3401-4-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 03/18] iwlwifi: trace: change trace to trace one TB at a time Sender: 
linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Sara Sharon Split TX tracing to be per TB. This is needed now that AMSDUs can be sent and skb can be larger than trace limit. Signed-off-by: Sara Sharon Signed-off-by: Luca Coelho --- .../intel/iwlwifi/iwl-devtrace-data.h | 30 ++++--------------- .../net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 18 ++++++----- drivers/net/wireless/intel/iwlwifi/pcie/tx.c | 29 ++++++++++-------- 3 files changed, 34 insertions(+), 43 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-data.h b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-data.h index 2cc6c019d0e1..420e6d745f77 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-data.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-data.h @@ -30,38 +30,20 @@ #undef TRACE_SYSTEM #define TRACE_SYSTEM iwlwifi_data -TRACE_EVENT(iwlwifi_dev_tx_data, - TP_PROTO(const struct device *dev, - struct sk_buff *skb, u8 hdr_len), - TP_ARGS(dev, skb, hdr_len), +TRACE_EVENT(iwlwifi_dev_tx_tb, + TP_PROTO(const struct device *dev, struct sk_buff *skb, + u8 *data_src, size_t data_len), + TP_ARGS(dev, skb, data_src, data_len), TP_STRUCT__entry( DEV_ENTRY __dynamic_array(u8, data, - iwl_trace_data(skb) ? skb->len - hdr_len : 0) + iwl_trace_data(skb) ? data_len : 0) ), TP_fast_assign( DEV_ASSIGN; if (iwl_trace_data(skb)) - skb_copy_bits(skb, hdr_len, - __get_dynamic_array(data), - skb->len - hdr_len); - ), - TP_printk("[%s] TX frame data", __get_str(dev)) -); - -TRACE_EVENT(iwlwifi_dev_tx_tso_chunk, - TP_PROTO(const struct device *dev, - u8 *data_src, size_t data_len), - TP_ARGS(dev, data_src, data_len), - TP_STRUCT__entry( - DEV_ENTRY - - __dynamic_array(u8, data, data_len) - ), - TP_fast_assign( - DEV_ASSIGN; - memcpy(__get_dynamic_array(data), data_src, data_len); + memcpy(__get_dynamic_array(data), data_src, data_len); ), TP_printk("[%s] TX frame data", __get_str(dev)) ); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c index 09cdc15f7645..e880f69eac26 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c @@ -330,7 +330,7 @@ static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans, goto out_err; } iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb_len); - trace_iwlwifi_dev_tx_tso_chunk(trans->dev, start_hdr, tb_len); + trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr, tb_len); /* add this subframe's headers' length to the tx_cmd */ le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start); @@ -347,8 +347,8 @@ static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans, goto out_err; } iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb_len); - trace_iwlwifi_dev_tx_tso_chunk(trans->dev, tso.data, - tb_len); + trace_iwlwifi_dev_tx_tb(trans->dev, skb, tso.data, + tb_len); data_left -= tb_len; tso_build_data(skb, &tso, tb_len); @@ -438,6 +438,9 @@ static int iwl_pcie_gen2_tx_add_frags(struct iwl_trans *trans, return -ENOMEM; tb_idx = iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, skb_frag_size(frag)); + trace_iwlwifi_dev_tx_tb(trans->dev, skb, + skb_frag_address(frag), + skb_frag_size(frag)); if (tb_idx < 0) return tb_idx; @@ -490,6 +493,8 @@ iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, if (unlikely(dma_mapping_error(trans->dev, tb_phys))) goto out_err; iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb1_len); + trace_iwlwifi_dev_tx(trans->dev, skb, tfd, 
sizeof(*tfd), &dev_cmd->hdr, + IWL_FIRST_TB_SIZE + tb1_len, hdr_len); /* set up TFD's third entry to point to remainder of skb's head */ tb2_len = skb_headlen(skb) - hdr_len; @@ -500,15 +505,14 @@ iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, if (unlikely(dma_mapping_error(trans->dev, tb_phys))) goto out_err; iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb2_len); + trace_iwlwifi_dev_tx_tb(trans->dev, skb, + skb->data + hdr_len, + tb2_len); } if (iwl_pcie_gen2_tx_add_frags(trans, skb, tfd, out_meta)) goto out_err; - trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), &dev_cmd->hdr, - IWL_FIRST_TB_SIZE + tb1_len, hdr_len); - trace_iwlwifi_dev_tx_data(trans->dev, skb, hdr_len); - return tfd; out_err: diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c index f227b91098c9..87b7225fe289 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c @@ -1994,6 +1994,9 @@ static int iwl_fill_data_tbs(struct iwl_trans *trans, struct sk_buff *skb, head_tb_len, DMA_TO_DEVICE); if (unlikely(dma_mapping_error(trans->dev, tb_phys))) return -EINVAL; + trace_iwlwifi_dev_tx_tb(trans->dev, skb, + skb->data + hdr_len, + head_tb_len); iwl_pcie_txq_build_tfd(trans, txq, tb_phys, head_tb_len, false); } @@ -2011,6 +2014,9 @@ static int iwl_fill_data_tbs(struct iwl_trans *trans, struct sk_buff *skb, if (unlikely(dma_mapping_error(trans->dev, tb_phys))) return -EINVAL; + trace_iwlwifi_dev_tx_tb(trans->dev, skb, + skb_frag_address(frag), + skb_frag_size(frag)); tb_idx = iwl_pcie_txq_build_tfd(trans, txq, tb_phys, skb_frag_size(frag), false); if (tb_idx < 0) @@ -2190,8 +2196,8 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, } iwl_pcie_txq_build_tfd(trans, txq, hdr_tb_phys, hdr_tb_len, false); - trace_iwlwifi_dev_tx_tso_chunk(trans->dev, start_hdr, - hdr_tb_len); + trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr, + hdr_tb_len); /* add this subframe's headers' length to the tx_cmd */ le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start); @@ -2216,8 +2222,8 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, iwl_pcie_txq_build_tfd(trans, txq, tb_phys, size, false); - trace_iwlwifi_dev_tx_tso_chunk(trans->dev, tso.data, - size); + trace_iwlwifi_dev_tx_tb(trans->dev, skb, tso.data, + size); data_left -= size; tso_build_data(skb, &tso, size); @@ -2398,6 +2404,13 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, goto out_err; iwl_pcie_txq_build_tfd(trans, txq, tb1_phys, tb1_len, false); + trace_iwlwifi_dev_tx(trans->dev, skb, + iwl_pcie_get_tfd(trans, txq, + txq->write_ptr), + trans_pcie->tfd_size, + &dev_cmd->hdr, IWL_FIRST_TB_SIZE + tb1_len, + hdr_len); + /* * If gso_size wasn't set, don't give the frame "amsdu treatment" * (adding subframes, etc.). 
@@ -2421,14 +2434,6 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, out_meta))) goto out_err; } - - trace_iwlwifi_dev_tx(trans->dev, skb, - iwl_pcie_get_tfd(trans, txq, - txq->write_ptr), - trans_pcie->tfd_size, - &dev_cmd->hdr, IWL_FIRST_TB_SIZE + tb1_len, - hdr_len); - trace_iwlwifi_dev_tx_data(trans->dev, skb, hdr_len); } /* building the A-MSDU might have changed this data, so memcpy it now */ From patchwork Mon Oct 8 08:17:21 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630213 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 867DC15E2 for ; Mon, 8 Oct 2018 08:17:48 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 78673287EF for ; Mon, 8 Oct 2018 08:17:48 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6CAC228BE5; Mon, 8 Oct 2018 08:17:48 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 02C3A287EF for ; Mon, 8 Oct 2018 08:17:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726810AbeJHP2P (ORCPT ); Mon, 8 Oct 2018 11:28:15 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56302 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726113AbeJHP2P (ORCPT ); Mon, 8 Oct 2018 11:28:15 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9QjX-0003w6-KO; Mon, 08 Oct 2018 11:17:44 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Ayala Beker , Luca Coelho Date: Mon, 8 Oct 2018 11:17:21 +0300 Message-Id: <20181008081735.3401-5-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 04/18] iwlwifi: mvm: introduce a new fragmented scan type: fast balance Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Ayala Beker Fast balance scan is similar to SCAN_TYPE_MILD, but this scan is fragmented and has shorter out of operating channel time, and therefore better match low latency scenarios. 
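The change has two visible pieces in the diff below, condensed here for readability: a new entry in the scan timing table, and a helper so that every fragmented-scan check covers both fragmented types:

	/* scan_timing[] gains an entry with shorter out-of-channel time */
	[IWL_SCAN_TYPE_FAST_BALANCE] = {
		.suspend_time = 30,
		.max_out_time = 37,
	},

static bool iwl_mvm_is_scan_fragmented(enum iwl_mvm_scan_type type)
{
	return type == IWL_SCAN_TYPE_FRAGMENTED ||
	       type == IWL_SCAN_TYPE_FAST_BALANCE;
}

Callers that previously compared against IWL_SCAN_TYPE_FRAGMENTED directly (the LMAC and UMAC flag builders and the scan config code) switch to the helper.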
Signed-off-by: Ayala Beker Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 1 + drivers/net/wireless/intel/iwlwifi/mvm/scan.c | 24 +++++++++++++------ 2 files changed, 18 insertions(+), 7 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index 8f71eeed50d9..6f927052aeaf 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -512,6 +512,7 @@ enum iwl_mvm_scan_type { IWL_SCAN_TYPE_WILD, IWL_SCAN_TYPE_MILD, IWL_SCAN_TYPE_FRAGMENTED, + IWL_SCAN_TYPE_FAST_BALANCE, }; enum iwl_mvm_sched_scan_pass_all_states { diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c index ffcd0ca86041..b43bf116c7e8 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c @@ -110,6 +110,10 @@ static struct iwl_mvm_scan_timing_params scan_timing[] = { .suspend_time = 95, .max_out_time = 44, }, + [IWL_SCAN_TYPE_FAST_BALANCE] = { + .suspend_time = 30, + .max_out_time = 37, + }, }; struct iwl_mvm_scan_params { @@ -860,6 +864,12 @@ static inline bool iwl_mvm_is_regular_scan(struct iwl_mvm_scan_params *params) params->scan_plans[0].iterations == 1; } +static bool iwl_mvm_is_scan_fragmented(enum iwl_mvm_scan_type type) +{ + return (type == IWL_SCAN_TYPE_FRAGMENTED || + type == IWL_SCAN_TYPE_FAST_BALANCE); +} + static int iwl_mvm_scan_lmac_flags(struct iwl_mvm *mvm, struct iwl_mvm_scan_params *params, struct ieee80211_vif *vif) @@ -872,7 +882,7 @@ static int iwl_mvm_scan_lmac_flags(struct iwl_mvm *mvm, if (params->n_ssids == 1 && params->ssids[0].ssid_len != 0) flags |= IWL_MVM_LMAC_SCAN_FLAG_PRE_CONNECTION; - if (params->type == IWL_SCAN_TYPE_FRAGMENTED) + if (iwl_mvm_is_scan_fragmented(params->type)) flags |= IWL_MVM_LMAC_SCAN_FLAG_FRAGMENTED; if (iwl_mvm_rrm_scan_needed(mvm) && @@ -895,7 +905,7 @@ static int iwl_mvm_scan_lmac_flags(struct iwl_mvm *mvm, if (iwl_mvm_is_regular_scan(params) && vif->type != NL80211_IFTYPE_P2P_DEVICE && - params->type != IWL_SCAN_TYPE_FRAGMENTED) + !iwl_mvm_is_scan_fragmented(params->type)) flags |= IWL_MVM_LMAC_SCAN_FLAG_EXTENDED_DWELL; return flags; @@ -1162,7 +1172,7 @@ int iwl_mvm_config_scan(struct iwl_mvm *mvm) SCAN_CONFIG_FLAG_SET_MAC_ADDR | SCAN_CONFIG_FLAG_SET_CHANNEL_FLAGS | SCAN_CONFIG_N_CHANNELS(num_channels) | - (type == IWL_SCAN_TYPE_FRAGMENTED ? + (iwl_mvm_is_scan_fragmented(type) ? SCAN_CONFIG_FLAG_SET_FRAGMENTED : SCAN_CONFIG_FLAG_CLEAR_FRAGMENTED); @@ -1177,7 +1187,7 @@ int iwl_mvm_config_scan(struct iwl_mvm *mvm) */ if (iwl_mvm_cdb_scan_api(mvm)) { if (iwl_mvm_is_cdb_supported(mvm)) - flags |= (hb_type == IWL_SCAN_TYPE_FRAGMENTED) ? + flags |= (iwl_mvm_is_scan_fragmented(hb_type)) ? 
SCAN_CONFIG_FLAG_SET_LMAC2_FRAGMENTED : SCAN_CONFIG_FLAG_CLEAR_LMAC2_FRAGMENTED; iwl_mvm_fill_scan_config(mvm, cfg, flags, channel_flags); @@ -1338,11 +1348,11 @@ static u16 iwl_mvm_scan_umac_flags(struct iwl_mvm *mvm, if (params->n_ssids == 1 && params->ssids[0].ssid_len != 0) flags |= IWL_UMAC_SCAN_GEN_FLAGS_PRE_CONNECT; - if (params->type == IWL_SCAN_TYPE_FRAGMENTED) + if (iwl_mvm_is_scan_fragmented(params->type)) flags |= IWL_UMAC_SCAN_GEN_FLAGS_FRAGMENTED; if (iwl_mvm_is_cdb_supported(mvm) && - params->hb_type == IWL_SCAN_TYPE_FRAGMENTED) + iwl_mvm_is_scan_fragmented(params->hb_type)) flags |= IWL_UMAC_SCAN_GEN_FLAGS_LMAC2_FRAGMENTED; if (iwl_mvm_rrm_scan_needed(mvm) && @@ -1380,7 +1390,7 @@ static u16 iwl_mvm_scan_umac_flags(struct iwl_mvm *mvm, */ if (iwl_mvm_is_regular_scan(params) && vif->type != NL80211_IFTYPE_P2P_DEVICE && - params->type != IWL_SCAN_TYPE_FRAGMENTED && + !iwl_mvm_is_scan_fragmented(params->type) && !iwl_mvm_is_adaptive_dwell_supported(mvm) && !iwl_mvm_is_oce_supported(mvm)) flags |= IWL_UMAC_SCAN_GEN_FLAGS_EXTENDED_DWELL; From patchwork Mon Oct 8 08:17:22 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630211 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1A859933 for ; Mon, 8 Oct 2018 08:17:48 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0BBB928BE5 for ; Mon, 8 Oct 2018 08:17:48 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 000C728B27; Mon, 8 Oct 2018 08:17:47 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5BA5528BE6 for ; Mon, 8 Oct 2018 08:17:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726985AbeJHP2Q (ORCPT ); Mon, 8 Oct 2018 11:28:16 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56312 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726793AbeJHP2Q (ORCPT ); Mon, 8 Oct 2018 11:28:16 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9QjY-0003w6-C6; Mon, 08 Oct 2018 11:17:44 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Ayala Beker , Luca Coelho Date: Mon, 8 Oct 2018 11:17:22 +0300 Message-Id: <20181008081735.3401-6-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 05/18] iwlwifi: mvm: use fast balance scan in case of DCM mode with P2P GO Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Ayala Beker Currently in case of DCM with P2P GO where BSS DTIM interval < 220 msec the fw fails to allocate events for the P2P GO dtim due to long 
passive scan events. Fix this by requesting all scans in this scenario to be fragmented with fast balance scan time settings. The only exception is in case fragmented scan was planned to be set due to low latency or high throughput reason, set the scan timing as planned. Signed-off-by: Ayala Beker Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/scan.c | 91 ++++++++++++++----- 1 file changed, 68 insertions(+), 23 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c index b43bf116c7e8..cfb784fea77b 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c @@ -239,8 +239,32 @@ iwl_mvm_get_traffic_load_band(struct iwl_mvm *mvm, enum nl80211_band band) return mvm->tcm.result.band_load[band]; } +struct iwl_is_dcm_with_go_iterator_data { + struct ieee80211_vif *current_vif; + bool is_dcm_with_p2p_go; +}; + +static void iwl_mvm_is_dcm_with_go_iterator(void *_data, u8 *mac, + struct ieee80211_vif *vif) +{ + struct iwl_is_dcm_with_go_iterator_data *data = _data; + struct iwl_mvm_vif *other_mvmvif = iwl_mvm_vif_from_mac80211(vif); + struct iwl_mvm_vif *curr_mvmvif = + iwl_mvm_vif_from_mac80211(data->current_vif); + + /* exclude the given vif */ + if (vif == data->current_vif) + return; + + if (vif->type == NL80211_IFTYPE_AP && vif->p2p && + other_mvmvif->phy_ctxt && curr_mvmvif->phy_ctxt && + other_mvmvif->phy_ctxt->id != curr_mvmvif->phy_ctxt->id) + data->is_dcm_with_p2p_go = true; +} + static enum -iwl_mvm_scan_type _iwl_mvm_get_scan_type(struct iwl_mvm *mvm, bool p2p_device, +iwl_mvm_scan_type _iwl_mvm_get_scan_type(struct iwl_mvm *mvm, + struct ieee80211_vif *vif, enum iwl_mvm_traffic_load load, bool low_latency) { @@ -253,9 +277,30 @@ iwl_mvm_scan_type _iwl_mvm_get_scan_type(struct iwl_mvm *mvm, bool p2p_device, if (!global_cnt) return IWL_SCAN_TYPE_UNASSOC; - if ((load == IWL_MVM_TRAFFIC_HIGH || low_latency) && !p2p_device && - fw_has_api(&mvm->fw->ucode_capa, IWL_UCODE_TLV_API_FRAGMENTED_SCAN)) - return IWL_SCAN_TYPE_FRAGMENTED; + if (fw_has_api(&mvm->fw->ucode_capa, + IWL_UCODE_TLV_API_FRAGMENTED_SCAN)) { + if ((load == IWL_MVM_TRAFFIC_HIGH || low_latency) && + (!vif || vif->type != NL80211_IFTYPE_P2P_DEVICE)) + return IWL_SCAN_TYPE_FRAGMENTED; + + /* in case of DCM with GO where BSS DTIM interval < 220msec + * set all scan requests as fast-balance scan + * */ + if (vif && vif->type == NL80211_IFTYPE_STATION && + vif->bss_conf.dtim_period < 220) { + struct iwl_is_dcm_with_go_iterator_data data = { + .current_vif = vif, + .is_dcm_with_p2p_go = false, + }; + + ieee80211_iterate_active_interfaces_atomic(mvm->hw, + IEEE80211_IFACE_ITER_NORMAL, + iwl_mvm_is_dcm_with_go_iterator, + &data); + if (data.is_dcm_with_p2p_go) + return IWL_SCAN_TYPE_FAST_BALANCE; + } + } if (load >= IWL_MVM_TRAFFIC_MEDIUM || low_latency) return IWL_SCAN_TYPE_MILD; @@ -264,7 +309,8 @@ iwl_mvm_scan_type _iwl_mvm_get_scan_type(struct iwl_mvm *mvm, bool p2p_device, } static enum -iwl_mvm_scan_type iwl_mvm_get_scan_type(struct iwl_mvm *mvm, bool p2p_device) +iwl_mvm_scan_type iwl_mvm_get_scan_type(struct iwl_mvm *mvm, + struct ieee80211_vif *vif) { enum iwl_mvm_traffic_load load; bool low_latency; @@ -272,12 +318,12 @@ iwl_mvm_scan_type iwl_mvm_get_scan_type(struct iwl_mvm *mvm, bool p2p_device) load = iwl_mvm_get_traffic_load(mvm); low_latency = iwl_mvm_low_latency(mvm); - return _iwl_mvm_get_scan_type(mvm, p2p_device, load, low_latency); + return _iwl_mvm_get_scan_type(mvm, vif, load, 
low_latency); } static enum iwl_mvm_scan_type iwl_mvm_get_scan_type_band(struct iwl_mvm *mvm, - bool p2p_device, + struct ieee80211_vif *vif, enum nl80211_band band) { enum iwl_mvm_traffic_load load; @@ -286,7 +332,7 @@ iwl_mvm_scan_type iwl_mvm_get_scan_type_band(struct iwl_mvm *mvm, load = iwl_mvm_get_traffic_load_band(mvm, band); low_latency = iwl_mvm_low_latency_band(mvm, band); - return _iwl_mvm_get_scan_type(mvm, p2p_device, load, low_latency); + return _iwl_mvm_get_scan_type(mvm, vif, load, low_latency); } static int @@ -1054,7 +1100,7 @@ static void iwl_mvm_fill_channels(struct iwl_mvm *mvm, u8 *channels) static void iwl_mvm_fill_scan_config_v1(struct iwl_mvm *mvm, void *config, u32 flags, u8 channel_flags) { - enum iwl_mvm_scan_type type = iwl_mvm_get_scan_type(mvm, false); + enum iwl_mvm_scan_type type = iwl_mvm_get_scan_type(mvm, NULL); struct iwl_scan_config_v1 *cfg = config; cfg->flags = cpu_to_le32(flags); @@ -1087,9 +1133,9 @@ static void iwl_mvm_fill_scan_config(struct iwl_mvm *mvm, void *config, if (iwl_mvm_is_cdb_supported(mvm)) { enum iwl_mvm_scan_type lb_type, hb_type; - lb_type = iwl_mvm_get_scan_type_band(mvm, false, + lb_type = iwl_mvm_get_scan_type_band(mvm, NULL, NL80211_BAND_2GHZ); - hb_type = iwl_mvm_get_scan_type_band(mvm, false, + hb_type = iwl_mvm_get_scan_type_band(mvm, NULL, NL80211_BAND_5GHZ); cfg->out_of_channel_time[SCAN_LB_LMAC_IDX] = @@ -1103,7 +1149,7 @@ static void iwl_mvm_fill_scan_config(struct iwl_mvm *mvm, void *config, cpu_to_le32(scan_timing[hb_type].suspend_time); } else { enum iwl_mvm_scan_type type = - iwl_mvm_get_scan_type(mvm, false); + iwl_mvm_get_scan_type(mvm, NULL); cfg->out_of_channel_time[SCAN_LB_LMAC_IDX] = cpu_to_le32(scan_timing[type].max_out_time); @@ -1140,14 +1186,14 @@ int iwl_mvm_config_scan(struct iwl_mvm *mvm) return -ENOBUFS; if (iwl_mvm_is_cdb_supported(mvm)) { - type = iwl_mvm_get_scan_type_band(mvm, false, + type = iwl_mvm_get_scan_type_band(mvm, NULL, NL80211_BAND_2GHZ); - hb_type = iwl_mvm_get_scan_type_band(mvm, false, + hb_type = iwl_mvm_get_scan_type_band(mvm, NULL, NL80211_BAND_5GHZ); if (type == mvm->scan_type && hb_type == mvm->hb_scan_type) return 0; } else { - type = iwl_mvm_get_scan_type(mvm, false); + type = iwl_mvm_get_scan_type(mvm, NULL); if (type == mvm->scan_type) return 0; } @@ -1599,19 +1645,20 @@ void iwl_mvm_scan_timeout_wk(struct work_struct *work) static void iwl_mvm_fill_scan_type(struct iwl_mvm *mvm, struct iwl_mvm_scan_params *params, - bool p2p) + struct ieee80211_vif *vif) { if (iwl_mvm_is_cdb_supported(mvm)) { params->type = - iwl_mvm_get_scan_type_band(mvm, p2p, + iwl_mvm_get_scan_type_band(mvm, vif, NL80211_BAND_2GHZ); params->hb_type = - iwl_mvm_get_scan_type_band(mvm, p2p, + iwl_mvm_get_scan_type_band(mvm, vif, NL80211_BAND_5GHZ); } else { - params->type = iwl_mvm_get_scan_type(mvm, p2p); + params->type = iwl_mvm_get_scan_type(mvm, vif); } } + int iwl_mvm_reg_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif, struct cfg80211_scan_request *req, struct ieee80211_scan_ies *ies) @@ -1659,8 +1706,7 @@ int iwl_mvm_reg_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif, params.scan_plans = &scan_plan; params.n_scan_plans = 1; - iwl_mvm_fill_scan_type(mvm, ¶ms, - vif->type == NL80211_IFTYPE_P2P_DEVICE); + iwl_mvm_fill_scan_type(mvm, ¶ms, vif); ret = iwl_mvm_get_measurement_dwell(mvm, req, ¶ms); if (ret < 0) @@ -1755,8 +1801,7 @@ int iwl_mvm_sched_scan_start(struct iwl_mvm *mvm, params.n_scan_plans = req->n_scan_plans; params.scan_plans = req->scan_plans; - 
iwl_mvm_fill_scan_type(mvm, ¶ms, - vif->type == NL80211_IFTYPE_P2P_DEVICE); + iwl_mvm_fill_scan_type(mvm, ¶ms, vif); /* In theory, LMAC scans can handle a 32-bit delay, but since * waiting for over 18 hours to start the scan is a bit silly From patchwork Mon Oct 8 08:17:23 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630215 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2370A15E2 for ; Mon, 8 Oct 2018 08:17:50 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 13501287EF for ; Mon, 8 Oct 2018 08:17:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0618528BE5; Mon, 8 Oct 2018 08:17:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 90D21287EF for ; Mon, 8 Oct 2018 08:17:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727029AbeJHP2S (ORCPT ); Mon, 8 Oct 2018 11:28:18 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56330 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726793AbeJHP2S (ORCPT ); Mon, 8 Oct 2018 11:28:18 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9QjZ-0003w6-8w; Mon, 08 Oct 2018 11:17:46 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Shahar S Matityahu , Luca Coelho Date: Mon, 8 Oct 2018 11:17:23 +0300 Message-Id: <20181008081735.3401-7-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 06/18] iwlwifi: dump debug data before stop device Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Shahar S Matityahu Debug data dump is not working in flows that stop the device is used in their error handling. During these flows the op mode mutex is locked until the device stops. Because of that, any assert generated from the firmware can be handled only after the device already stopped. Since dumping cannot occour after stopping the device, split the the dump function to two parts, Part that handles locking, and the part that starts the actual dumping and call the second part in the op mode stop device function. 
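The resulting structure, condensed from the diff below (the recording stop/restart details and error handling are omitted):

void iwl_fw_error_dump_wk(struct work_struct *work)
{
	struct iwl_fw_runtime *fwrt =
		container_of(work, struct iwl_fw_runtime, dump.wk.work);

	if (fwrt->ops && fwrt->ops->dump_start &&
	    fwrt->ops->dump_start(fwrt->ops_ctx))
		return;

	/* the actual collection, now also callable from paths that
	 * already hold the op mode mutex */
	iwl_fw_dbg_collect_sync(fwrt);

	if (fwrt->ops && fwrt->ops->dump_end)
		fwrt->ops->dump_end(fwrt->ops_ctx);
}

iwl_mvm_stop_device() then calls iwl_fw_dbg_collect_sync() directly, without the dump_start()/dump_end() hooks, since at that point the op mode mutex is already held.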
Signed-off-by: Shahar S Matityahu Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 27 +++++++++++++++----- drivers/net/wireless/intel/iwlwifi/fw/dbg.h | 1 + drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 5 ++++ 3 files changed, 26 insertions(+), 7 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c index f44c716b1130..c16757051f16 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c @@ -1154,14 +1154,14 @@ int iwl_fw_start_dbg_conf(struct iwl_fw_runtime *fwrt, u8 conf_id) } IWL_EXPORT_SYMBOL(iwl_fw_start_dbg_conf); -void iwl_fw_error_dump_wk(struct work_struct *work) +/* this function assumes dump_start was called beforehand and dump_end will be + * called afterwards + */ +void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt) { - struct iwl_fw_runtime *fwrt = - container_of(work, struct iwl_fw_runtime, dump.wk.work); struct iwl_fw_dbg_params params = {0}; - if (fwrt->ops && fwrt->ops->dump_start && - fwrt->ops->dump_start(fwrt->ops_ctx)) + if (!test_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status)) return; if (fwrt->ops && fwrt->ops->fw_running && @@ -1169,7 +1169,7 @@ void iwl_fw_error_dump_wk(struct work_struct *work) IWL_ERR(fwrt, "Firmware not running - cannot dump error\n"); iwl_fw_free_dump_desc(fwrt); clear_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status); - goto out; + return; } iwl_fw_dbg_stop_recording(fwrt, ¶ms); @@ -1183,7 +1183,20 @@ void iwl_fw_error_dump_wk(struct work_struct *work) udelay(500); iwl_fw_dbg_restart_recording(fwrt, ¶ms); } -out: +} +IWL_EXPORT_SYMBOL(iwl_fw_dbg_collect_sync); + +void iwl_fw_error_dump_wk(struct work_struct *work) +{ + struct iwl_fw_runtime *fwrt = + container_of(work, struct iwl_fw_runtime, dump.wk.work); + + if (fwrt->ops && fwrt->ops->dump_start && + fwrt->ops->dump_start(fwrt->ops_ctx)) + return; + + iwl_fw_dbg_collect_sync(fwrt); + if (fwrt->ops && fwrt->ops->dump_end) fwrt->ops->dump_end(fwrt->ops_ctx); } diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h index d9578dcec24c..6f8d3256f7b0 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h @@ -367,4 +367,5 @@ static inline void iwl_fw_resume_timestamp(struct iwl_fw_runtime *fwrt) {} #endif /* CONFIG_IWLWIFI_DEBUGFS */ void iwl_fw_alive_error_dump(struct iwl_fw_runtime *fwrt); +void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt); #endif /* __iwl_fw_dbg_h__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index 6f927052aeaf..ff1ba84f3aa6 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -1906,6 +1906,11 @@ static inline u32 iwl_mvm_flushable_queues(struct iwl_mvm *mvm) static inline void iwl_mvm_stop_device(struct iwl_mvm *mvm) { + lockdep_assert_held(&mvm->mutex); + /* calling this function without using dump_start/end since at this + * point we already hold the op mode mutex + */ + iwl_fw_dbg_collect_sync(&mvm->fwrt); iwl_fw_cancel_timestamp(&mvm->fwrt); iwl_free_fw_paging(&mvm->fwrt); clear_bit(IWL_MVM_STATUS_FIRMWARE_RUNNING, &mvm->status); From patchwork Mon Oct 8 08:17:24 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630221 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org 
(pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E0BE015E2 for ; Mon, 8 Oct 2018 08:17:51 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D298328BE6 for ; Mon, 8 Oct 2018 08:17:51 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C766628BF8; Mon, 8 Oct 2018 08:17:51 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 373A328BE6 for ; Mon, 8 Oct 2018 08:17:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727040AbeJHP2T (ORCPT ); Mon, 8 Oct 2018 11:28:19 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56338 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726995AbeJHP2T (ORCPT ); Mon, 8 Oct 2018 11:28:19 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qjb-0003w6-6x; Mon, 08 Oct 2018 11:17:47 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Shahar S Matityahu , Luca Coelho Date: Mon, 8 Oct 2018 11:17:24 +0300 Message-Id: <20181008081735.3401-8-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 07/18] iwlwifi: mvm: move rt status check to the start of the resume flow Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Shahar S Matityahu Move the rt status checking to the start of the resume flow in order to avoid sending D0I3_END_CMD to the FW. Also, collect dump if an assert was encountered. Signed-off-by: Shahar S Matityahu Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/d3.c | 60 ++++++++++++--------- 1 file changed, 35 insertions(+), 25 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c index fb981270f224..843f3b41b72e 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c @@ -1638,32 +1638,10 @@ struct iwl_wowlan_status *iwl_mvm_send_wowlan_get_status(struct iwl_mvm *mvm) } static struct iwl_wowlan_status * -iwl_mvm_get_wakeup_status(struct iwl_mvm *mvm, struct ieee80211_vif *vif) +iwl_mvm_get_wakeup_status(struct iwl_mvm *mvm) { - u32 base = mvm->error_event_table[0]; - struct error_table_start { - /* cf. 
struct iwl_error_event_table */ - u32 valid; - u32 error_id; - } err_info; int ret; - iwl_trans_read_mem_bytes(mvm->trans, base, - &err_info, sizeof(err_info)); - - if (err_info.valid) { - IWL_INFO(mvm, "error table is valid (%d) with error (%d)\n", - err_info.valid, err_info.error_id); - if (err_info.error_id == RF_KILL_INDICATOR_FOR_WOWLAN) { - struct cfg80211_wowlan_wakeup wakeup = { - .rfkill_release = true, - }; - ieee80211_report_wowlan_wakeup(vif, &wakeup, - GFP_KERNEL); - } - return ERR_PTR(-EIO); - } - /* only for tracing for now */ ret = iwl_mvm_send_cmd_pdu(mvm, OFFLOADS_QUERY_CMD, 0, 0, NULL); if (ret) @@ -1682,7 +1660,7 @@ static bool iwl_mvm_query_wakeup_reasons(struct iwl_mvm *mvm, bool keep; struct iwl_mvm_sta *mvm_ap_sta; - fw_status = iwl_mvm_get_wakeup_status(mvm, vif); + fw_status = iwl_mvm_get_wakeup_status(mvm); if (IS_ERR_OR_NULL(fw_status)) goto out_unlock; @@ -1807,7 +1785,7 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm, u32 reasons = 0; int i, j, n_matches, ret; - fw_status = iwl_mvm_get_wakeup_status(mvm, vif); + fw_status = iwl_mvm_get_wakeup_status(mvm); if (!IS_ERR_OR_NULL(fw_status)) { reasons = le32_to_cpu(fw_status->wakeup_reasons); kfree(fw_status); @@ -1920,6 +1898,29 @@ static void iwl_mvm_d3_disconnect_iter(void *data, u8 *mac, ieee80211_resume_disconnect(vif); } +static int iwl_mvm_check_rt_status(struct iwl_mvm *mvm, + struct ieee80211_vif *vif) +{ + u32 base = mvm->error_event_table[0]; + struct error_table_start { + /* cf. struct iwl_error_event_table */ + u32 valid; + u32 error_id; + } err_info; + + iwl_trans_read_mem_bytes(mvm->trans, base, + &err_info, sizeof(err_info)); + + if (err_info.valid && + err_info.error_id == RF_KILL_INDICATOR_FOR_WOWLAN) { + struct cfg80211_wowlan_wakeup wakeup = { + .rfkill_release = true, + }; + ieee80211_report_wowlan_wakeup(vif, &wakeup, GFP_KERNEL); + } + return err_info.valid; +} + static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test) { struct ieee80211_vif *vif = NULL; @@ -1951,6 +1952,15 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test) /* query SRAM first in case we want event logging */ iwl_mvm_read_d3_sram(mvm); + if (iwl_mvm_check_rt_status(mvm, vif)) { + set_bit(STATUS_FW_ERROR, &mvm->trans->status); + iwl_mvm_dump_nic_error_log(mvm); + iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert, + NULL, 0); + ret = 1; + goto err; + } + if (d0i3_first) { ret = iwl_mvm_send_cmd_pdu(mvm, D0I3_END_CMD, 0, 0, NULL); if (ret < 0) { From patchwork Mon Oct 8 08:17:25 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630217 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1270B15E2 for ; Mon, 8 Oct 2018 08:17:51 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 01AF928B27 for ; Mon, 8 Oct 2018 08:17:51 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E876928BE6; Mon, 8 Oct 2018 08:17:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by 
mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8868C28B27 for ; Mon, 8 Oct 2018 08:17:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727044AbeJHP2T (ORCPT ); Mon, 8 Oct 2018 11:28:19 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56346 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726793AbeJHP2T (ORCPT ); Mon, 8 Oct 2018 11:28:19 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qjb-0003w6-Va; Mon, 08 Oct 2018 11:17:48 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:25 +0300 Message-Id: <20181008081735.3401-9-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 08/18] iwlwifi: mvm: give TX queue info struct a name Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg Make this a named struct rather than an anonymous one, we'll want to refer to it by name later. Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 24 +++++++++++--------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index ff1ba84f3aa6..cff58ed35fbe 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -788,6 +788,18 @@ struct iwl_mvm_geo_profile { u8 values[ACPI_GEO_TABLE_SIZE]; }; +struct iwl_mvm_dqa_txq_info { + u8 hw_queue_refcount; + u8 ra_sta_id; /* The RA this queue is mapped to, if exists */ + bool reserved; /* Is this the TXQ reserved for a STA */ + u8 mac80211_ac; /* The mac80211 AC this queue is mapped to */ + u8 txq_tid; /* The TID "owner" of this queue*/ + u16 tid_bitmap; /* Bitmap of the TIDs mapped to this queue */ + /* Timestamp for inactivation per TID of this queue */ + unsigned long last_frame_time[IWL_MAX_TID_COUNT + 1]; + enum iwl_mvm_queue_status status; +}; + struct iwl_mvm { /* for logger access */ struct device *dev; @@ -844,17 +856,7 @@ struct iwl_mvm { u16 hw_queue_to_mac80211[IWL_MAX_TVQM_QUEUES]; - struct { - u8 hw_queue_refcount; - u8 ra_sta_id; /* The RA this queue is mapped to, if exists */ - bool reserved; /* Is this the TXQ reserved for a STA */ - u8 mac80211_ac; /* The mac80211 AC this queue is mapped to */ - u8 txq_tid; /* The TID "owner" of this queue*/ - u16 tid_bitmap; /* Bitmap of the TIDs mapped to this queue */ - /* Timestamp for inactivation per TID of this queue */ - unsigned long last_frame_time[IWL_MAX_TID_COUNT + 1]; - enum iwl_mvm_queue_status status; - } queue_info[IWL_MAX_HW_QUEUES]; + struct iwl_mvm_dqa_txq_info queue_info[IWL_MAX_HW_QUEUES]; spinlock_t queue_info_lock; /* For syncing queue mgmt operations */ struct work_struct add_stream_wk; /* To add streams to queues */ From patchwork Mon Oct 8 08:17:26 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630223 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org 
(pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2EC39933 for ; Mon, 8 Oct 2018 08:17:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1CD01287EF for ; Mon, 8 Oct 2018 08:17:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 10F3A28BE5; Mon, 8 Oct 2018 08:17:55 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 70EB8287EF for ; Mon, 8 Oct 2018 08:17:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727078AbeJHP2W (ORCPT ); Mon, 8 Oct 2018 11:28:22 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56356 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726793AbeJHP2W (ORCPT ); Mon, 8 Oct 2018 11:28:22 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qjc-0003w6-M5; Mon, 08 Oct 2018 11:17:49 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:26 +0300 Message-Id: <20181008081735.3401-10-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 09/18] iwlwifi: mvm: move queue management into sta.c Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg None of these functions really need to be separate, they're all only used in sta.c, move them there and make them static. Fix a small typo in related code while at it. Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 13 - drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 409 +++++++++++++++++ .../net/wireless/intel/iwlwifi/mvm/utils.c | 418 ------------------ 3 files changed, 409 insertions(+), 431 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index cff58ed35fbe..f5a532dc712d 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -1886,17 +1886,6 @@ void iwl_mvm_vif_set_low_latency(struct iwl_mvm_vif *mvmvif, bool set, mvmvif->low_latency &= ~cause; } -/* hw scheduler queue config */ -bool iwl_mvm_enable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue, - u16 ssn, const struct iwl_trans_txq_scd_cfg *cfg, - unsigned int wdg_timeout); -int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm, int mac80211_queue, - u8 sta_id, u8 tid, unsigned int timeout); - -int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue, - u8 tid, u8 flags); -int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 sta_id, u8 minq, u8 maxq); - /* Return a bitmask with all the hw supported queues, except for the * command queue, which can't be flushed. 
*/ @@ -1998,8 +1987,6 @@ void iwl_mvm_reorder_timer_expired(struct timer_list *t); struct ieee80211_vif *iwl_mvm_get_bss_vif(struct iwl_mvm *mvm); bool iwl_mvm_is_vif_assoc(struct iwl_mvm *mvm); -void iwl_mvm_inactivity_check(struct iwl_mvm *mvm); - #define MVM_TCM_PERIOD_MSEC 500 #define MVM_TCM_PERIOD (HZ * MVM_TCM_PERIOD_MSEC / 1000) #define MVM_LL_PERIOD (10 * HZ) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index 8f929c774e70..fd33b8d148b3 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -358,6 +358,112 @@ static int iwl_mvm_invalidate_sta_queue(struct iwl_mvm *mvm, int queue, return ret; } +static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, + int mac80211_queue, u8 tid, u8 flags) +{ + struct iwl_scd_txq_cfg_cmd cmd = { + .scd_queue = queue, + .action = SCD_CFG_DISABLE_QUEUE, + }; + bool remove_mac_queue = mac80211_queue != IEEE80211_INVAL_HW_QUEUE; + int ret; + + if (WARN_ON(remove_mac_queue && mac80211_queue >= IEEE80211_MAX_QUEUES)) + return -EINVAL; + + if (iwl_mvm_has_new_tx_api(mvm)) { + spin_lock_bh(&mvm->queue_info_lock); + + if (remove_mac_queue) + mvm->hw_queue_to_mac80211[queue] &= + ~BIT(mac80211_queue); + + spin_unlock_bh(&mvm->queue_info_lock); + + iwl_trans_txq_free(mvm->trans, queue); + + return 0; + } + + spin_lock_bh(&mvm->queue_info_lock); + + if (WARN_ON(mvm->queue_info[queue].hw_queue_refcount == 0)) { + spin_unlock_bh(&mvm->queue_info_lock); + return 0; + } + + mvm->queue_info[queue].tid_bitmap &= ~BIT(tid); + + /* + * If there is another TID with the same AC - don't remove the MAC queue + * from the mapping + */ + if (tid < IWL_MAX_TID_COUNT) { + unsigned long tid_bitmap = + mvm->queue_info[queue].tid_bitmap; + int ac = tid_to_mac80211_ac[tid]; + int i; + + for_each_set_bit(i, &tid_bitmap, IWL_MAX_TID_COUNT) { + if (tid_to_mac80211_ac[i] == ac) + remove_mac_queue = false; + } + } + + if (remove_mac_queue) + mvm->hw_queue_to_mac80211[queue] &= + ~BIT(mac80211_queue); + mvm->queue_info[queue].hw_queue_refcount--; + + cmd.action = mvm->queue_info[queue].hw_queue_refcount ? 
+ SCD_CFG_ENABLE_QUEUE : SCD_CFG_DISABLE_QUEUE; + if (cmd.action == SCD_CFG_DISABLE_QUEUE) + mvm->queue_info[queue].status = IWL_MVM_QUEUE_FREE; + + IWL_DEBUG_TX_QUEUES(mvm, + "Disabling TXQ #%d refcount=%d (mac80211 map:0x%x)\n", + queue, + mvm->queue_info[queue].hw_queue_refcount, + mvm->hw_queue_to_mac80211[queue]); + + /* If the queue is still enabled - nothing left to do in this func */ + if (cmd.action == SCD_CFG_ENABLE_QUEUE) { + spin_unlock_bh(&mvm->queue_info_lock); + return 0; + } + + cmd.sta_id = mvm->queue_info[queue].ra_sta_id; + cmd.tid = mvm->queue_info[queue].txq_tid; + + /* Make sure queue info is correct even though we overwrite it */ + WARN(mvm->queue_info[queue].hw_queue_refcount || + mvm->queue_info[queue].tid_bitmap || + mvm->hw_queue_to_mac80211[queue], + "TXQ #%d info out-of-sync - refcount=%d, mac map=0x%x, tid=0x%x\n", + queue, mvm->queue_info[queue].hw_queue_refcount, + mvm->hw_queue_to_mac80211[queue], + mvm->queue_info[queue].tid_bitmap); + + /* If we are here - the queue is freed and we can zero out these vals */ + mvm->queue_info[queue].hw_queue_refcount = 0; + mvm->queue_info[queue].tid_bitmap = 0; + mvm->hw_queue_to_mac80211[queue] = 0; + + /* Regardless if this is a reserved TXQ for a STA - mark it as false */ + mvm->queue_info[queue].reserved = false; + + spin_unlock_bh(&mvm->queue_info_lock); + + iwl_trans_txq_disable(mvm->trans, queue, false); + ret = iwl_mvm_send_cmd_pdu(mvm, SCD_QUEUE_CFG, flags, + sizeof(struct iwl_scd_txq_cfg_cmd), &cmd); + + if (ret) + IWL_ERR(mvm, "Failed to disable queue %d (ret=%d)\n", + queue, ret); + return ret; +} + static int iwl_mvm_get_queue_agg_tids(struct iwl_mvm *mvm, int queue) { struct ieee80211_sta *sta; @@ -674,6 +780,57 @@ int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, return ret; } +static int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 sta_id, + u8 minq, u8 maxq) +{ + int i; + + lockdep_assert_held(&mvm->queue_info_lock); + + /* This should not be hit with new TX path */ + if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) + return -ENOSPC; + + /* Start by looking for a free queue */ + for (i = minq; i <= maxq; i++) + if (mvm->queue_info[i].hw_queue_refcount == 0 && + mvm->queue_info[i].status == IWL_MVM_QUEUE_FREE) + return i; + + return -ENOSPC; +} + +static int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm, int mac80211_queue, + u8 sta_id, u8 tid, unsigned int timeout) +{ + int queue, size = IWL_DEFAULT_QUEUE_SIZE; + + if (tid == IWL_MAX_TID_COUNT) { + tid = IWL_MGMT_TID; + size = IWL_MGMT_QUEUE_SIZE; + } + queue = iwl_trans_txq_alloc(mvm->trans, + cpu_to_le16(TX_QUEUE_CFG_ENABLE_QUEUE), + sta_id, tid, SCD_QUEUE_CFG, size, timeout); + + if (queue < 0) { + IWL_DEBUG_TX_QUEUES(mvm, + "Failed allocating TXQ for sta %d tid %d, ret: %d\n", + sta_id, tid, queue); + return queue; + } + + IWL_DEBUG_TX_QUEUES(mvm, "Enabling TXQ #%d for sta %d tid %d\n", + queue, sta_id, tid); + + mvm->hw_queue_to_mac80211[queue] |= BIT(mac80211_queue); + IWL_DEBUG_TX_QUEUES(mvm, + "Enabling TXQ #%d (mac80211 map:0x%x)\n", + queue, mvm->hw_queue_to_mac80211[queue]); + + return queue; +} + static int iwl_mvm_sta_alloc_queue_tvqm(struct iwl_mvm *mvm, struct ieee80211_sta *sta, u8 ac, int tid) @@ -704,6 +861,93 @@ static int iwl_mvm_sta_alloc_queue_tvqm(struct iwl_mvm *mvm, return 0; } +static bool iwl_mvm_update_txq_mapping(struct iwl_mvm *mvm, int queue, + int mac80211_queue, u8 sta_id, u8 tid) +{ + bool enable_queue = true; + + spin_lock_bh(&mvm->queue_info_lock); + + /* Make sure this TID isn't already enabled */ + if 
(mvm->queue_info[queue].tid_bitmap & BIT(tid)) { + spin_unlock_bh(&mvm->queue_info_lock); + IWL_ERR(mvm, "Trying to enable TXQ %d with existing TID %d\n", + queue, tid); + return false; + } + + /* Update mappings and refcounts */ + if (mvm->queue_info[queue].hw_queue_refcount > 0) + enable_queue = false; + + if (mac80211_queue != IEEE80211_INVAL_HW_QUEUE) { + WARN(mac80211_queue >= + BITS_PER_BYTE * sizeof(mvm->hw_queue_to_mac80211[0]), + "cannot track mac80211 queue %d (queue %d, sta %d, tid %d)\n", + mac80211_queue, queue, sta_id, tid); + mvm->hw_queue_to_mac80211[queue] |= BIT(mac80211_queue); + } + + mvm->queue_info[queue].hw_queue_refcount++; + mvm->queue_info[queue].tid_bitmap |= BIT(tid); + mvm->queue_info[queue].ra_sta_id = sta_id; + + if (enable_queue) { + if (tid != IWL_MAX_TID_COUNT) + mvm->queue_info[queue].mac80211_ac = + tid_to_mac80211_ac[tid]; + else + mvm->queue_info[queue].mac80211_ac = IEEE80211_AC_VO; + + mvm->queue_info[queue].txq_tid = tid; + } + + IWL_DEBUG_TX_QUEUES(mvm, + "Enabling TXQ #%d refcount=%d (mac80211 map:0x%x)\n", + queue, mvm->queue_info[queue].hw_queue_refcount, + mvm->hw_queue_to_mac80211[queue]); + + spin_unlock_bh(&mvm->queue_info_lock); + + return enable_queue; +} + +static bool iwl_mvm_enable_txq(struct iwl_mvm *mvm, int queue, + int mac80211_queue, u16 ssn, + const struct iwl_trans_txq_scd_cfg *cfg, + unsigned int wdg_timeout) +{ + struct iwl_scd_txq_cfg_cmd cmd = { + .scd_queue = queue, + .action = SCD_CFG_ENABLE_QUEUE, + .window = cfg->frame_limit, + .sta_id = cfg->sta_id, + .ssn = cpu_to_le16(ssn), + .tx_fifo = cfg->fifo, + .aggregate = cfg->aggregate, + .tid = cfg->tid, + }; + bool inc_ssn; + + if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) + return false; + + /* Send the enabling command if we need to */ + if (!iwl_mvm_update_txq_mapping(mvm, queue, mac80211_queue, + cfg->sta_id, cfg->tid)) + return false; + + inc_ssn = iwl_trans_txq_enable_cfg(mvm->trans, queue, ssn, + NULL, wdg_timeout); + if (inc_ssn) + le16_add_cpu(&cmd.ssn, 1); + + WARN(iwl_mvm_send_cmd_pdu(mvm, SCD_QUEUE_CFG, 0, sizeof(cmd), &cmd), + "Failed to configure queue %d on FIFO %d\n", queue, cfg->fifo); + + return inc_ssn; +} + static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, struct ieee80211_sta *sta, u8 ac, int tid, struct ieee80211_hdr *hdr) @@ -1032,6 +1276,171 @@ static void iwl_mvm_unshare_queue(struct iwl_mvm *mvm, int queue) spin_unlock_bh(&mvm->queue_info_lock); } +/* + * Remove inactive TIDs of a given queue. + * If all queue TIDs are inactive - mark the queue as inactive + * If only some the queue TIDs are inactive - unmap them from the queue + */ +static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, + struct iwl_mvm_sta *mvmsta, int queue, + unsigned long tid_bitmap) +{ + int tid; + + lockdep_assert_held(&mvmsta->lock); + lockdep_assert_held(&mvm->queue_info_lock); + + if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) + return; + + /* Go over all non-active TIDs, incl. IWL_MAX_TID_COUNT (for mgmt) */ + for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) { + /* If some TFDs are still queued - don't mark TID as inactive */ + if (iwl_mvm_tid_queued(mvm, &mvmsta->tid_data[tid])) + tid_bitmap &= ~BIT(tid); + + /* Don't mark as inactive any TID that has an active BA */ + if (mvmsta->tid_data[tid].state != IWL_AGG_OFF) + tid_bitmap &= ~BIT(tid); + } + + /* If all TIDs in the queue are inactive - mark queue as inactive. 
*/ + if (tid_bitmap == mvm->queue_info[queue].tid_bitmap) { + mvm->queue_info[queue].status = IWL_MVM_QUEUE_INACTIVE; + + for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) + mvmsta->tid_data[tid].is_tid_active = false; + + IWL_DEBUG_TX_QUEUES(mvm, "Queue %d marked as inactive\n", + queue); + return; + } + + /* + * If we are here, this is a shared queue and not all TIDs timed-out. + * Remove the ones that did. + */ + for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) { + int mac_queue = mvmsta->vif->hw_queue[tid_to_mac80211_ac[tid]]; + + mvmsta->tid_data[tid].txq_id = IWL_MVM_INVALID_QUEUE; + mvm->hw_queue_to_mac80211[queue] &= ~BIT(mac_queue); + mvm->queue_info[queue].hw_queue_refcount--; + mvm->queue_info[queue].tid_bitmap &= ~BIT(tid); + mvmsta->tid_data[tid].is_tid_active = false; + + IWL_DEBUG_TX_QUEUES(mvm, + "Removing inactive TID %d from shared Q:%d\n", + tid, queue); + } + + IWL_DEBUG_TX_QUEUES(mvm, + "TXQ #%d left with tid bitmap 0x%x\n", queue, + mvm->queue_info[queue].tid_bitmap); + + /* + * There may be different TIDs with the same mac queues, so make + * sure all TIDs have existing corresponding mac queues enabled + */ + tid_bitmap = mvm->queue_info[queue].tid_bitmap; + for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) { + mvm->hw_queue_to_mac80211[queue] |= + BIT(mvmsta->vif->hw_queue[tid_to_mac80211_ac[tid]]); + } + + /* If the queue is marked as shared - "unshare" it */ + if (mvm->queue_info[queue].hw_queue_refcount == 1 && + mvm->queue_info[queue].status == IWL_MVM_QUEUE_SHARED) { + mvm->queue_info[queue].status = IWL_MVM_QUEUE_RECONFIGURING; + IWL_DEBUG_TX_QUEUES(mvm, "Marking Q:%d for reconfig\n", + queue); + } +} + +static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) +{ + unsigned long timeout_queues_map = 0; + unsigned long now = jiffies; + int i; + + if (iwl_mvm_has_new_tx_api(mvm)) + return; + + spin_lock_bh(&mvm->queue_info_lock); + for (i = 0; i < IWL_MAX_HW_QUEUES; i++) + if (mvm->queue_info[i].hw_queue_refcount > 0) + timeout_queues_map |= BIT(i); + spin_unlock_bh(&mvm->queue_info_lock); + + rcu_read_lock(); + + /* + * If a queue times out - mark it as INACTIVE (don't remove right away + * if we don't have to.) This is an optimization in case traffic comes + * later, and we don't HAVE to use a currently-inactive queue + */ + for_each_set_bit(i, &timeout_queues_map, IWL_MAX_HW_QUEUES) { + struct ieee80211_sta *sta; + struct iwl_mvm_sta *mvmsta; + u8 sta_id; + int tid; + unsigned long inactive_tid_bitmap = 0; + unsigned long queue_tid_bitmap; + + spin_lock_bh(&mvm->queue_info_lock); + queue_tid_bitmap = mvm->queue_info[i].tid_bitmap; + + /* If TXQ isn't in active use anyway - nothing to do here... */ + if (mvm->queue_info[i].status != IWL_MVM_QUEUE_READY && + mvm->queue_info[i].status != IWL_MVM_QUEUE_SHARED) { + spin_unlock_bh(&mvm->queue_info_lock); + continue; + } + + /* Check to see if there are inactive TIDs on this queue */ + for_each_set_bit(tid, &queue_tid_bitmap, + IWL_MAX_TID_COUNT + 1) { + if (time_after(mvm->queue_info[i].last_frame_time[tid] + + IWL_MVM_DQA_QUEUE_TIMEOUT, now)) + continue; + + inactive_tid_bitmap |= BIT(tid); + } + spin_unlock_bh(&mvm->queue_info_lock); + + /* If all TIDs are active - finish check on this queue */ + if (!inactive_tid_bitmap) + continue; + + /* + * If we are here - the queue hadn't been served recently and is + * in use + */ + + sta_id = mvm->queue_info[i].ra_sta_id; + sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]); + + /* + * If the STA doesn't exist anymore, it isn't an error. 
It could + * be that it was removed since getting the queues, and in this + * case it should've inactivated its queues anyway. + */ + if (IS_ERR_OR_NULL(sta)) + continue; + + mvmsta = iwl_mvm_sta_from_mac80211(sta); + + spin_lock_bh(&mvmsta->lock); + spin_lock(&mvm->queue_info_lock); + iwl_mvm_remove_inactive_tids(mvm, mvmsta, i, + inactive_tid_bitmap); + spin_unlock(&mvm->queue_info_lock); + spin_unlock_bh(&mvmsta->lock); + } + + rcu_read_unlock(); +} + static inline u8 iwl_mvm_tid_to_ac_queue(int tid) { if (tid == IWL_MAX_TID_COUNT) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c index 6c14d3413bdc..dec097b72300 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c @@ -599,36 +599,6 @@ void iwl_mvm_dump_nic_error_log(struct iwl_mvm *mvm) iwl_mvm_dump_umac_error_log(mvm); } -int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 sta_id, u8 minq, u8 maxq) -{ - int i; - - lockdep_assert_held(&mvm->queue_info_lock); - - /* This should not be hit with new TX path */ - if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) - return -ENOSPC; - - /* Start by looking for a free queue */ - for (i = minq; i <= maxq; i++) - if (mvm->queue_info[i].hw_queue_refcount == 0 && - mvm->queue_info[i].status == IWL_MVM_QUEUE_FREE) - return i; - - /* - * If no free queue found - settle for an inactive one to reconfigure - * Make sure that the inactive queue either already belongs to this STA, - * or that if it belongs to another one - it isn't the reserved queue - */ - for (i = minq; i <= maxq; i++) - if (mvm->queue_info[i].status == IWL_MVM_QUEUE_INACTIVE && - (sta_id == mvm->queue_info[i].ra_sta_id || - !mvm->queue_info[i].reserved)) - return i; - - return -ENOSPC; -} - int iwl_mvm_reconfig_scd(struct iwl_mvm *mvm, int queue, int fifo, int sta_id, int tid, int frame_limit, u16 ssn) { @@ -665,229 +635,6 @@ int iwl_mvm_reconfig_scd(struct iwl_mvm *mvm, int queue, int fifo, int sta_id, return ret; } -static bool iwl_mvm_update_txq_mapping(struct iwl_mvm *mvm, int queue, - int mac80211_queue, u8 sta_id, u8 tid) -{ - bool enable_queue = true; - - spin_lock_bh(&mvm->queue_info_lock); - - /* Make sure this TID isn't already enabled */ - if (mvm->queue_info[queue].tid_bitmap & BIT(tid)) { - spin_unlock_bh(&mvm->queue_info_lock); - IWL_ERR(mvm, "Trying to enable TXQ %d with existing TID %d\n", - queue, tid); - return false; - } - - /* Update mappings and refcounts */ - if (mvm->queue_info[queue].hw_queue_refcount > 0) - enable_queue = false; - - if (mac80211_queue != IEEE80211_INVAL_HW_QUEUE) { - WARN(mac80211_queue >= - BITS_PER_BYTE * sizeof(mvm->hw_queue_to_mac80211[0]), - "cannot track mac80211 queue %d (queue %d, sta %d, tid %d)\n", - mac80211_queue, queue, sta_id, tid); - mvm->hw_queue_to_mac80211[queue] |= BIT(mac80211_queue); - } - - mvm->queue_info[queue].hw_queue_refcount++; - mvm->queue_info[queue].tid_bitmap |= BIT(tid); - mvm->queue_info[queue].ra_sta_id = sta_id; - - if (enable_queue) { - if (tid != IWL_MAX_TID_COUNT) - mvm->queue_info[queue].mac80211_ac = - tid_to_mac80211_ac[tid]; - else - mvm->queue_info[queue].mac80211_ac = IEEE80211_AC_VO; - - mvm->queue_info[queue].txq_tid = tid; - } - - IWL_DEBUG_TX_QUEUES(mvm, - "Enabling TXQ #%d refcount=%d (mac80211 map:0x%x)\n", - queue, mvm->queue_info[queue].hw_queue_refcount, - mvm->hw_queue_to_mac80211[queue]); - - spin_unlock_bh(&mvm->queue_info_lock); - - return enable_queue; -} - -int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm, int mac80211_queue, 
- u8 sta_id, u8 tid, unsigned int timeout) -{ - int queue, size = IWL_DEFAULT_QUEUE_SIZE; - - if (tid == IWL_MAX_TID_COUNT) { - tid = IWL_MGMT_TID; - size = IWL_MGMT_QUEUE_SIZE; - } - queue = iwl_trans_txq_alloc(mvm->trans, - cpu_to_le16(TX_QUEUE_CFG_ENABLE_QUEUE), - sta_id, tid, SCD_QUEUE_CFG, size, timeout); - - if (queue < 0) { - IWL_DEBUG_TX_QUEUES(mvm, - "Failed allocating TXQ for sta %d tid %d, ret: %d\n", - sta_id, tid, queue); - return queue; - } - - IWL_DEBUG_TX_QUEUES(mvm, "Enabling TXQ #%d for sta %d tid %d\n", - queue, sta_id, tid); - - mvm->hw_queue_to_mac80211[queue] |= BIT(mac80211_queue); - IWL_DEBUG_TX_QUEUES(mvm, - "Enabling TXQ #%d (mac80211 map:0x%x)\n", - queue, mvm->hw_queue_to_mac80211[queue]); - - return queue; -} - -bool iwl_mvm_enable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue, - u16 ssn, const struct iwl_trans_txq_scd_cfg *cfg, - unsigned int wdg_timeout) -{ - struct iwl_scd_txq_cfg_cmd cmd = { - .scd_queue = queue, - .action = SCD_CFG_ENABLE_QUEUE, - .window = cfg->frame_limit, - .sta_id = cfg->sta_id, - .ssn = cpu_to_le16(ssn), - .tx_fifo = cfg->fifo, - .aggregate = cfg->aggregate, - .tid = cfg->tid, - }; - bool inc_ssn; - - if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) - return false; - - /* Send the enabling command if we need to */ - if (!iwl_mvm_update_txq_mapping(mvm, queue, mac80211_queue, - cfg->sta_id, cfg->tid)) - return false; - - inc_ssn = iwl_trans_txq_enable_cfg(mvm->trans, queue, ssn, - NULL, wdg_timeout); - if (inc_ssn) - le16_add_cpu(&cmd.ssn, 1); - - WARN(iwl_mvm_send_cmd_pdu(mvm, SCD_QUEUE_CFG, 0, sizeof(cmd), &cmd), - "Failed to configure queue %d on FIFO %d\n", queue, cfg->fifo); - - return inc_ssn; -} - -int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue, - u8 tid, u8 flags) -{ - struct iwl_scd_txq_cfg_cmd cmd = { - .scd_queue = queue, - .action = SCD_CFG_DISABLE_QUEUE, - }; - bool remove_mac_queue = mac80211_queue != IEEE80211_INVAL_HW_QUEUE; - int ret; - - if (WARN_ON(remove_mac_queue && mac80211_queue >= IEEE80211_MAX_QUEUES)) - return -EINVAL; - - if (iwl_mvm_has_new_tx_api(mvm)) { - spin_lock_bh(&mvm->queue_info_lock); - - if (remove_mac_queue) - mvm->hw_queue_to_mac80211[queue] &= - ~BIT(mac80211_queue); - - spin_unlock_bh(&mvm->queue_info_lock); - - iwl_trans_txq_free(mvm->trans, queue); - - return 0; - } - - spin_lock_bh(&mvm->queue_info_lock); - - if (WARN_ON(mvm->queue_info[queue].hw_queue_refcount == 0)) { - spin_unlock_bh(&mvm->queue_info_lock); - return 0; - } - - mvm->queue_info[queue].tid_bitmap &= ~BIT(tid); - - /* - * If there is another TID with the same AC - don't remove the MAC queue - * from the mapping - */ - if (tid < IWL_MAX_TID_COUNT) { - unsigned long tid_bitmap = - mvm->queue_info[queue].tid_bitmap; - int ac = tid_to_mac80211_ac[tid]; - int i; - - for_each_set_bit(i, &tid_bitmap, IWL_MAX_TID_COUNT) { - if (tid_to_mac80211_ac[i] == ac) - remove_mac_queue = false; - } - } - - if (remove_mac_queue) - mvm->hw_queue_to_mac80211[queue] &= - ~BIT(mac80211_queue); - mvm->queue_info[queue].hw_queue_refcount--; - - cmd.action = mvm->queue_info[queue].hw_queue_refcount ? 
- SCD_CFG_ENABLE_QUEUE : SCD_CFG_DISABLE_QUEUE; - if (cmd.action == SCD_CFG_DISABLE_QUEUE) - mvm->queue_info[queue].status = IWL_MVM_QUEUE_FREE; - - IWL_DEBUG_TX_QUEUES(mvm, - "Disabling TXQ #%d refcount=%d (mac80211 map:0x%x)\n", - queue, - mvm->queue_info[queue].hw_queue_refcount, - mvm->hw_queue_to_mac80211[queue]); - - /* If the queue is still enabled - nothing left to do in this func */ - if (cmd.action == SCD_CFG_ENABLE_QUEUE) { - spin_unlock_bh(&mvm->queue_info_lock); - return 0; - } - - cmd.sta_id = mvm->queue_info[queue].ra_sta_id; - cmd.tid = mvm->queue_info[queue].txq_tid; - - /* Make sure queue info is correct even though we overwrite it */ - WARN(mvm->queue_info[queue].hw_queue_refcount || - mvm->queue_info[queue].tid_bitmap || - mvm->hw_queue_to_mac80211[queue], - "TXQ #%d info out-of-sync - refcount=%d, mac map=0x%x, tid=0x%x\n", - queue, mvm->queue_info[queue].hw_queue_refcount, - mvm->hw_queue_to_mac80211[queue], - mvm->queue_info[queue].tid_bitmap); - - /* If we are here - the queue is freed and we can zero out these vals */ - mvm->queue_info[queue].hw_queue_refcount = 0; - mvm->queue_info[queue].tid_bitmap = 0; - mvm->hw_queue_to_mac80211[queue] = 0; - - /* Regardless if this is a reserved TXQ for a STA - mark it as false */ - mvm->queue_info[queue].reserved = false; - - spin_unlock_bh(&mvm->queue_info_lock); - - iwl_trans_txq_disable(mvm->trans, queue, false); - ret = iwl_mvm_send_cmd_pdu(mvm, SCD_QUEUE_CFG, flags, - sizeof(struct iwl_scd_txq_cfg_cmd), &cmd); - - if (ret) - IWL_ERR(mvm, "Failed to disable queue %d (ret=%d)\n", - queue, ret); - return ret; -} - /** * iwl_mvm_send_lq_cmd() - Send link quality command * @sync: This command can be sent synchronously. @@ -1255,171 +1002,6 @@ void iwl_mvm_connection_loss(struct iwl_mvm *mvm, struct ieee80211_vif *vif, ieee80211_connection_loss(vif); } -/* - * Remove inactive TIDs of a given queue. - * If all queue TIDs are inactive - mark the queue as inactive - * If only some the queue TIDs are inactive - unmap them from the queue - */ -static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, - struct iwl_mvm_sta *mvmsta, int queue, - unsigned long tid_bitmap) -{ - int tid; - - lockdep_assert_held(&mvmsta->lock); - lockdep_assert_held(&mvm->queue_info_lock); - - if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) - return; - - /* Go over all non-active TIDs, incl. IWL_MAX_TID_COUNT (for mgmt) */ - for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) { - /* If some TFDs are still queued - don't mark TID as inactive */ - if (iwl_mvm_tid_queued(mvm, &mvmsta->tid_data[tid])) - tid_bitmap &= ~BIT(tid); - - /* Don't mark as inactive any TID that has an active BA */ - if (mvmsta->tid_data[tid].state != IWL_AGG_OFF) - tid_bitmap &= ~BIT(tid); - } - - /* If all TIDs in the queue are inactive - mark queue as inactive. */ - if (tid_bitmap == mvm->queue_info[queue].tid_bitmap) { - mvm->queue_info[queue].status = IWL_MVM_QUEUE_INACTIVE; - - for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) - mvmsta->tid_data[tid].is_tid_active = false; - - IWL_DEBUG_TX_QUEUES(mvm, "Queue %d marked as inactive\n", - queue); - return; - } - - /* - * If we are here, this is a shared queue and not all TIDs timed-out. - * Remove the ones that did. 
- */ - for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) { - int mac_queue = mvmsta->vif->hw_queue[tid_to_mac80211_ac[tid]]; - - mvmsta->tid_data[tid].txq_id = IWL_MVM_INVALID_QUEUE; - mvm->hw_queue_to_mac80211[queue] &= ~BIT(mac_queue); - mvm->queue_info[queue].hw_queue_refcount--; - mvm->queue_info[queue].tid_bitmap &= ~BIT(tid); - mvmsta->tid_data[tid].is_tid_active = false; - - IWL_DEBUG_TX_QUEUES(mvm, - "Removing inactive TID %d from shared Q:%d\n", - tid, queue); - } - - IWL_DEBUG_TX_QUEUES(mvm, - "TXQ #%d left with tid bitmap 0x%x\n", queue, - mvm->queue_info[queue].tid_bitmap); - - /* - * There may be different TIDs with the same mac queues, so make - * sure all TIDs have existing corresponding mac queues enabled - */ - tid_bitmap = mvm->queue_info[queue].tid_bitmap; - for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) { - mvm->hw_queue_to_mac80211[queue] |= - BIT(mvmsta->vif->hw_queue[tid_to_mac80211_ac[tid]]); - } - - /* If the queue is marked as shared - "unshare" it */ - if (mvm->queue_info[queue].hw_queue_refcount == 1 && - mvm->queue_info[queue].status == IWL_MVM_QUEUE_SHARED) { - mvm->queue_info[queue].status = IWL_MVM_QUEUE_RECONFIGURING; - IWL_DEBUG_TX_QUEUES(mvm, "Marking Q:%d for reconfig\n", - queue); - } -} - -void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) -{ - unsigned long timeout_queues_map = 0; - unsigned long now = jiffies; - int i; - - if (iwl_mvm_has_new_tx_api(mvm)) - return; - - spin_lock_bh(&mvm->queue_info_lock); - for (i = 0; i < IWL_MAX_HW_QUEUES; i++) - if (mvm->queue_info[i].hw_queue_refcount > 0) - timeout_queues_map |= BIT(i); - spin_unlock_bh(&mvm->queue_info_lock); - - rcu_read_lock(); - - /* - * If a queue time outs - mark it as INACTIVE (don't remove right away - * if we don't have to.) This is an optimization in case traffic comes - * later, and we don't HAVE to use a currently-inactive queue - */ - for_each_set_bit(i, &timeout_queues_map, IWL_MAX_HW_QUEUES) { - struct ieee80211_sta *sta; - struct iwl_mvm_sta *mvmsta; - u8 sta_id; - int tid; - unsigned long inactive_tid_bitmap = 0; - unsigned long queue_tid_bitmap; - - spin_lock_bh(&mvm->queue_info_lock); - queue_tid_bitmap = mvm->queue_info[i].tid_bitmap; - - /* If TXQ isn't in active use anyway - nothing to do here... */ - if (mvm->queue_info[i].status != IWL_MVM_QUEUE_READY && - mvm->queue_info[i].status != IWL_MVM_QUEUE_SHARED) { - spin_unlock_bh(&mvm->queue_info_lock); - continue; - } - - /* Check to see if there are inactive TIDs on this queue */ - for_each_set_bit(tid, &queue_tid_bitmap, - IWL_MAX_TID_COUNT + 1) { - if (time_after(mvm->queue_info[i].last_frame_time[tid] + - IWL_MVM_DQA_QUEUE_TIMEOUT, now)) - continue; - - inactive_tid_bitmap |= BIT(tid); - } - spin_unlock_bh(&mvm->queue_info_lock); - - /* If all TIDs are active - finish check on this queue */ - if (!inactive_tid_bitmap) - continue; - - /* - * If we are here - the queue hadn't been served recently and is - * in use - */ - - sta_id = mvm->queue_info[i].ra_sta_id; - sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]); - - /* - * If the STA doesn't exist anymore, it isn't an error. It could - * be that it was removed since getting the queues, and in this - * case it should've inactivated its queues anyway. 
- */ - if (IS_ERR_OR_NULL(sta)) - continue; - - mvmsta = iwl_mvm_sta_from_mac80211(sta); - - spin_lock_bh(&mvmsta->lock); - spin_lock(&mvm->queue_info_lock); - iwl_mvm_remove_inactive_tids(mvm, mvmsta, i, - inactive_tid_bitmap); - spin_unlock(&mvm->queue_info_lock); - spin_unlock_bh(&mvmsta->lock); - } - - rcu_read_unlock(); -} - void iwl_mvm_event_frame_timeout_callback(struct iwl_mvm *mvm, struct ieee80211_vif *vif, const struct ieee80211_sta *sta, From patchwork Mon Oct 8 08:17:27 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630225 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AAF8E933 for ; Mon, 8 Oct 2018 08:18:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9B48C287EF for ; Mon, 8 Oct 2018 08:18:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8FD2728BE5; Mon, 8 Oct 2018 08:18:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E1E6A287EF for ; Mon, 8 Oct 2018 08:18:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727081AbeJHP2b (ORCPT ); Mon, 8 Oct 2018 11:28:31 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56390 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP2b (ORCPT ); Mon, 8 Oct 2018 11:28:31 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qje-0003w6-0n; Mon, 08 Oct 2018 11:17:50 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:27 +0300 Message-Id: <20181008081735.3401-11-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 10/18] iwlwifi: mvm: remove per-queue hw refcount Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg There's no need to have a hw refcount if we just mark the command queue with a (fake) TID; at that point, the refcount becomes equivalent to the hweight() of the TID bitmap. 
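The equivalence can be seen in a small standalone C sketch (userspace only, not part of the patch; hweight16() here is a stand-in for the kernel helper and the bit positions are purely illustrative): every user of a queue contributes exactly one TID bit, so the population count of the bitmap recovers the old refcount, and the fake TID bit keeps the command queue from ever looking free.

    #include <stdint.h>
    #include <stdio.h>

    /* Userspace stand-in for the kernel's hweight16(). */
    static unsigned int hweight16(uint16_t w)
    {
            return (unsigned int)__builtin_popcount(w);
    }

    int main(void)
    {
            /* Illustrative bitmaps: two data TIDs mapped to one queue, plus
             * a fake TID bit on the command queue. */
            uint16_t data_queue_tids = (1u << 1) | (1u << 5);
            uint16_t cmd_queue_tids  = 1u << 10;

            /* The old hw_queue_refcount is just the number of set bits... */
            printf("data queue users: %u\n", hweight16(data_queue_tids)); /* 2 */
            /* ...and "free" simply means the bitmap is empty. */
            printf("cmd queue free: %s\n", cmd_queue_tids ? "no" : "yes");
            return 0;
    }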
Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/fw.c | 9 +++- drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 1 - drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 41 +++++++++---------- .../net/wireless/intel/iwlwifi/mvm/utils.c | 2 +- 4 files changed, 28 insertions(+), 25 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c index c5df73231ba3..dade206d5511 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c @@ -364,7 +364,14 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm, */ memset(&mvm->queue_info, 0, sizeof(mvm->queue_info)); - mvm->queue_info[IWL_MVM_DQA_CMD_QUEUE].hw_queue_refcount = 1; + /* + * Set a 'fake' TID for the command queue, since we use the + * hweight() of the tid_bitmap as a refcount now. Not that + * we ever even consider the command queue as one we might + * want to reuse, but be safe nevertheless. + */ + mvm->queue_info[IWL_MVM_DQA_CMD_QUEUE].tid_bitmap = + BIT(IWL_MAX_TID_COUNT + 2); for (i = 0; i < IEEE80211_MAX_QUEUES; i++) atomic_set(&mvm->mac80211_queue_stop_count[i], 0); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index f5a532dc712d..7074fd1f8ce7 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -789,7 +789,6 @@ struct iwl_mvm_geo_profile { }; struct iwl_mvm_dqa_txq_info { - u8 hw_queue_refcount; u8 ra_sta_id; /* The RA this queue is mapped to, if exists */ bool reserved; /* Is this the TXQ reserved for a STA */ u8 mac80211_ac; /* The mac80211 AC this queue is mapped to */ diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index fd33b8d148b3..ce03f9750c3a 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -387,7 +387,7 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, spin_lock_bh(&mvm->queue_info_lock); - if (WARN_ON(mvm->queue_info[queue].hw_queue_refcount == 0)) { + if (WARN_ON(mvm->queue_info[queue].tid_bitmap == 0)) { spin_unlock_bh(&mvm->queue_info_lock); return 0; } @@ -413,17 +413,16 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, if (remove_mac_queue) mvm->hw_queue_to_mac80211[queue] &= ~BIT(mac80211_queue); - mvm->queue_info[queue].hw_queue_refcount--; - cmd.action = mvm->queue_info[queue].hw_queue_refcount ? + cmd.action = mvm->queue_info[queue].tid_bitmap ? 
SCD_CFG_ENABLE_QUEUE : SCD_CFG_DISABLE_QUEUE; if (cmd.action == SCD_CFG_DISABLE_QUEUE) mvm->queue_info[queue].status = IWL_MVM_QUEUE_FREE; IWL_DEBUG_TX_QUEUES(mvm, - "Disabling TXQ #%d refcount=%d (mac80211 map:0x%x)\n", + "Disabling TXQ #%d tids=0x%x (mac80211 map:0x%x)\n", queue, - mvm->queue_info[queue].hw_queue_refcount, + mvm->queue_info[queue].tid_bitmap, mvm->hw_queue_to_mac80211[queue]); /* If the queue is still enabled - nothing left to do in this func */ @@ -436,16 +435,13 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, cmd.tid = mvm->queue_info[queue].txq_tid; /* Make sure queue info is correct even though we overwrite it */ - WARN(mvm->queue_info[queue].hw_queue_refcount || - mvm->queue_info[queue].tid_bitmap || - mvm->hw_queue_to_mac80211[queue], - "TXQ #%d info out-of-sync - refcount=%d, mac map=0x%x, tid=0x%x\n", - queue, mvm->queue_info[queue].hw_queue_refcount, + WARN(mvm->queue_info[queue].tid_bitmap || mvm->hw_queue_to_mac80211[queue], + "TXQ #%d info out-of-sync - mac map=0x%x, tids=0x%x\n", + queue, mvm->hw_queue_to_mac80211[queue], mvm->queue_info[queue].tid_bitmap); /* If we are here - the queue is freed and we can zero out these vals */ - mvm->queue_info[queue].hw_queue_refcount = 0; mvm->queue_info[queue].tid_bitmap = 0; mvm->hw_queue_to_mac80211[queue] = 0; @@ -722,7 +718,7 @@ int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, cmd.tx_fifo = iwl_mvm_ac_to_tx_fifo[mvm->queue_info[queue].mac80211_ac]; cmd.tid = mvm->queue_info[queue].txq_tid; mq = mvm->hw_queue_to_mac80211[queue]; - shared_queue = (mvm->queue_info[queue].hw_queue_refcount > 1); + shared_queue = hweight16(mvm->queue_info[queue].tid_bitmap) > 1; spin_unlock_bh(&mvm->queue_info_lock); IWL_DEBUG_TX_QUEUES(mvm, "Redirecting TXQ #%d to FIFO #%d\n", @@ -793,7 +789,7 @@ static int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 sta_id, /* Start by looking for a free queue */ for (i = minq; i <= maxq; i++) - if (mvm->queue_info[i].hw_queue_refcount == 0 && + if (mvm->queue_info[i].tid_bitmap == 0 && mvm->queue_info[i].status == IWL_MVM_QUEUE_FREE) return i; @@ -877,7 +873,7 @@ static bool iwl_mvm_update_txq_mapping(struct iwl_mvm *mvm, int queue, } /* Update mappings and refcounts */ - if (mvm->queue_info[queue].hw_queue_refcount > 0) + if (mvm->queue_info[queue].tid_bitmap) enable_queue = false; if (mac80211_queue != IEEE80211_INVAL_HW_QUEUE) { @@ -888,7 +884,6 @@ static bool iwl_mvm_update_txq_mapping(struct iwl_mvm *mvm, int queue, mvm->hw_queue_to_mac80211[queue] |= BIT(mac80211_queue); } - mvm->queue_info[queue].hw_queue_refcount++; mvm->queue_info[queue].tid_bitmap |= BIT(tid); mvm->queue_info[queue].ra_sta_id = sta_id; @@ -903,8 +898,8 @@ static bool iwl_mvm_update_txq_mapping(struct iwl_mvm *mvm, int queue, } IWL_DEBUG_TX_QUEUES(mvm, - "Enabling TXQ #%d refcount=%d (mac80211 map:0x%x)\n", - queue, mvm->queue_info[queue].hw_queue_refcount, + "Enabling TXQ #%d tids=0x%x (mac80211 map:0x%x)\n", + queue, mvm->queue_info[queue].tid_bitmap, mvm->hw_queue_to_mac80211[queue]); spin_unlock_bh(&mvm->queue_info_lock); @@ -1325,7 +1320,6 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, mvmsta->tid_data[tid].txq_id = IWL_MVM_INVALID_QUEUE; mvm->hw_queue_to_mac80211[queue] &= ~BIT(mac_queue); - mvm->queue_info[queue].hw_queue_refcount--; mvm->queue_info[queue].tid_bitmap &= ~BIT(tid); mvmsta->tid_data[tid].is_tid_active = false; @@ -1349,7 +1343,7 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, } /* If the queue is marked as shared - "unshare" 
it */ - if (mvm->queue_info[queue].hw_queue_refcount == 1 && + if (hweight16(mvm->queue_info[queue].tid_bitmap) == 1 && mvm->queue_info[queue].status == IWL_MVM_QUEUE_SHARED) { mvm->queue_info[queue].status = IWL_MVM_QUEUE_RECONFIGURING; IWL_DEBUG_TX_QUEUES(mvm, "Marking Q:%d for reconfig\n", @@ -1367,9 +1361,12 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) return; spin_lock_bh(&mvm->queue_info_lock); - for (i = 0; i < IWL_MAX_HW_QUEUES; i++) - if (mvm->queue_info[i].hw_queue_refcount > 0) + /* skip the CMD queue */ + BUILD_BUG_ON(IWL_MVM_DQA_CMD_QUEUE != 0); + for (i = 1; i < IWL_MAX_HW_QUEUES; i++) { + if (mvm->queue_info[i].tid_bitmap) timeout_queues_map |= BIT(i); + } spin_unlock_bh(&mvm->queue_info_lock); rcu_read_lock(); @@ -1592,7 +1589,7 @@ static int iwl_mvm_reserve_sta_stream(struct iwl_mvm *mvm, /* Make sure we have free resources for this STA */ if (vif_type == NL80211_IFTYPE_STATION && !sta->tdls && - !mvm->queue_info[IWL_MVM_DQA_BSS_CLIENT_QUEUE].hw_queue_refcount && + !mvm->queue_info[IWL_MVM_DQA_BSS_CLIENT_QUEUE].tid_bitmap && (mvm->queue_info[IWL_MVM_DQA_BSS_CLIENT_QUEUE].status == IWL_MVM_QUEUE_FREE)) queue = IWL_MVM_DQA_BSS_CLIENT_QUEUE; diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c index dec097b72300..818e1180bbdd 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c @@ -619,7 +619,7 @@ int iwl_mvm_reconfig_scd(struct iwl_mvm *mvm, int queue, int fifo, int sta_id, return -EINVAL; spin_lock_bh(&mvm->queue_info_lock); - if (WARN(mvm->queue_info[queue].hw_queue_refcount == 0, + if (WARN(mvm->queue_info[queue].tid_bitmap == 0, "Trying to reconfig unallocated queue %d\n", queue)) { spin_unlock_bh(&mvm->queue_info_lock); return -ENXIO; From patchwork Mon Oct 8 08:17:28 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630235 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 893E815E2 for ; Mon, 8 Oct 2018 08:18:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7058B28768 for ; Mon, 8 Oct 2018 08:18:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 64A81288D3; Mon, 8 Oct 2018 08:18:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0358128768 for ; Mon, 8 Oct 2018 08:18:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727193AbeJHP2x (ORCPT ); Mon, 8 Oct 2018 11:28:53 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56430 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP2x (ORCPT ); Mon, 8 Oct 2018 11:28:53 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qje-0003w6-RD; Mon, 08 Oct 2018 11:17:51 
+0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:28 +0300 Message-Id: <20181008081735.3401-12-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 11/18] iwlwifi: mvm: clean up iteration in iwl_mvm_inactivity_check() Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg There's no need to build a bitmap first and then iterate, just do the iteration with the right locking directly. Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 32 +++++++++----------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index ce03f9750c3a..5b6ec7de8043 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -1353,7 +1353,6 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) { - unsigned long timeout_queues_map = 0; unsigned long now = jiffies; int i; @@ -1361,22 +1360,18 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) return; spin_lock_bh(&mvm->queue_info_lock); - /* skip the CMD queue */ - BUILD_BUG_ON(IWL_MVM_DQA_CMD_QUEUE != 0); - for (i = 1; i < IWL_MAX_HW_QUEUES; i++) { - if (mvm->queue_info[i].tid_bitmap) - timeout_queues_map |= BIT(i); - } - spin_unlock_bh(&mvm->queue_info_lock); rcu_read_lock(); + /* we skip the CMD queue below by starting at 1 */ + BUILD_BUG_ON(IWL_MVM_DQA_CMD_QUEUE != 0); + /* * If a queue times out - mark it as INACTIVE (don't remove right away * if we don't have to.) This is an optimization in case traffic comes * later, and we don't HAVE to use a currently-inactive queue */ - for_each_set_bit(i, &timeout_queues_map, IWL_MAX_HW_QUEUES) { + for (i = 1; i < IWL_MAX_HW_QUEUES; i++) { struct ieee80211_sta *sta; struct iwl_mvm_sta *mvmsta; u8 sta_id; @@ -1384,15 +1379,14 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) unsigned long inactive_tid_bitmap = 0; unsigned long queue_tid_bitmap; - spin_lock_bh(&mvm->queue_info_lock); queue_tid_bitmap = mvm->queue_info[i].tid_bitmap; + if (!queue_tid_bitmap) + continue; /* If TXQ isn't in active use anyway - nothing to do here... 
*/ if (mvm->queue_info[i].status != IWL_MVM_QUEUE_READY && - mvm->queue_info[i].status != IWL_MVM_QUEUE_SHARED) { - spin_unlock_bh(&mvm->queue_info_lock); + mvm->queue_info[i].status != IWL_MVM_QUEUE_SHARED) continue; - } /* Check to see if there are inactive TIDs on this queue */ for_each_set_bit(tid, &queue_tid_bitmap, @@ -1403,7 +1397,6 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) inactive_tid_bitmap |= BIT(tid); } - spin_unlock_bh(&mvm->queue_info_lock); /* If all TIDs are active - finish check on this queue */ if (!inactive_tid_bitmap) @@ -1427,15 +1420,20 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) mvmsta = iwl_mvm_sta_from_mac80211(sta); - spin_lock_bh(&mvmsta->lock); + /* this isn't so nice, but works OK due to the way we loop */ + spin_unlock(&mvm->queue_info_lock); + + /* and we need this locking order */ + spin_lock(&mvmsta->lock); spin_lock(&mvm->queue_info_lock); iwl_mvm_remove_inactive_tids(mvm, mvmsta, i, inactive_tid_bitmap); - spin_unlock(&mvm->queue_info_lock); - spin_unlock_bh(&mvmsta->lock); + /* only unlock sta lock - we still need the queue info lock */ + spin_unlock(&mvmsta->lock); } rcu_read_unlock(); + spin_unlock_bh(&mvm->queue_info_lock); } static inline u8 iwl_mvm_tid_to_ac_queue(int tid) From patchwork Mon Oct 8 08:17:29 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630233 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 15741933 for ; Mon, 8 Oct 2018 08:18:21 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0695C288D3 for ; Mon, 8 Oct 2018 08:18:21 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id ECAE42890F; Mon, 8 Oct 2018 08:18:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 82E7C28768 for ; Mon, 8 Oct 2018 08:18:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727166AbeJHP2t (ORCPT ); Mon, 8 Oct 2018 11:28:49 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56422 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP2t (ORCPT ); Mon, 8 Oct 2018 11:28:49 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qjf-0003w6-KF; Mon, 08 Oct 2018 11:17:52 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:29 +0300 Message-Id: <20181008081735.3401-13-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 12/18] iwlwifi: mvm: move queue reconfiguration into new function Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: 
linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg If TVQM is used we skip over this, move the code into a new function to get rid of the label. Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 64 ++++++++++---------- 1 file changed, 33 insertions(+), 31 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index 5b6ec7de8043..a36a631cdfa6 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -1351,6 +1351,33 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, } } +static void iwl_mvm_reconfigure_queue(struct iwl_mvm *mvm, int queue) +{ + bool reconfig; + bool change_owner; + + spin_lock_bh(&mvm->queue_info_lock); + reconfig = mvm->queue_info[queue].status == IWL_MVM_QUEUE_RECONFIGURING; + + /* + * We need to take into account a situation in which a TXQ was + * allocated to TID x, and then turned shared by adding TIDs y + * and z. If TID x becomes inactive and is removed from the TXQ, + * ownership must be given to one of the remaining TIDs. + * This is mainly because if TID x continues - a new queue can't + * be allocated for it as long as it is an owner of another TXQ. + */ + change_owner = !(mvm->queue_info[queue].tid_bitmap & + BIT(mvm->queue_info[queue].txq_tid)) && + (mvm->queue_info[queue].status == IWL_MVM_QUEUE_SHARED); + spin_unlock_bh(&mvm->queue_info_lock); + + if (reconfig) + iwl_mvm_unshare_queue(mvm, queue); + else if (change_owner) + iwl_mvm_change_queue_owner(mvm, queue); +} + static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) { unsigned long now = jiffies; @@ -1504,7 +1531,7 @@ void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk) struct ieee80211_sta *sta; struct iwl_mvm_sta *mvmsta; unsigned long deferred_tid_traffic; - int queue, sta_id, tid; + int sta_id, tid; /* Check inactivity of queues */ iwl_mvm_inactivity_check(mvm); @@ -1512,39 +1539,14 @@ void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk) mutex_lock(&mvm->mutex); /* No queue reconfiguration in TVQM mode */ - if (iwl_mvm_has_new_tx_api(mvm)) - goto alloc_queues; - - /* Reconfigure queues requiring reconfiguation */ - for (queue = 0; queue < ARRAY_SIZE(mvm->queue_info); queue++) { - bool reconfig; - bool change_owner; - - spin_lock_bh(&mvm->queue_info_lock); - reconfig = (mvm->queue_info[queue].status == - IWL_MVM_QUEUE_RECONFIGURING); - - /* - * We need to take into account a situation in which a TXQ was - * allocated to TID x, and then turned shared by adding TIDs y - * and z. If TID x becomes inactive and is removed from the TXQ, - * ownership must be given to one of the remaining TIDs. - * This is mainly because if TID x continues - a new queue can't - * be allocated for it as long as it is an owner of another TXQ. 
- */ - change_owner = !(mvm->queue_info[queue].tid_bitmap & - BIT(mvm->queue_info[queue].txq_tid)) && - (mvm->queue_info[queue].status == - IWL_MVM_QUEUE_SHARED); - spin_unlock_bh(&mvm->queue_info_lock); + if (!iwl_mvm_has_new_tx_api(mvm)) { + int queue; - if (reconfig) - iwl_mvm_unshare_queue(mvm, queue); - else if (change_owner) - iwl_mvm_change_queue_owner(mvm, queue); + /* Reconfigure queues requiring reconfiguation */ + for (queue = 0; queue < ARRAY_SIZE(mvm->queue_info); queue++) + iwl_mvm_reconfigure_queue(mvm, queue); } -alloc_queues: /* Go over all stations with deferred traffic */ for_each_set_bit(sta_id, mvm->sta_deferred_frames, IWL_MVM_STATION_COUNT) { From patchwork Mon Oct 8 08:17:30 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630231 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 04A45933 for ; Mon, 8 Oct 2018 08:18:17 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DB3A528768 for ; Mon, 8 Oct 2018 08:18:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CCCF0287EF; Mon, 8 Oct 2018 08:18:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 615A62855E for ; Mon, 8 Oct 2018 08:18:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727148AbeJHP2p (ORCPT ); Mon, 8 Oct 2018 11:28:45 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56414 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP2p (ORCPT ); Mon, 8 Oct 2018 11:28:45 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qjg-0003w6-JF; Mon, 08 Oct 2018 11:17:53 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:30 +0300 Message-Id: <20181008081735.3401-14-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 13/18] iwlwifi: mvm: reconfigure queues during inactivity check Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg We currently reconfigure the queues after the inactivity check, but only in one of the two callers. This might leave queues in a state where the TID owner is wrong, if called when reserving a queue for a new station. Clean this up and do the reconfiguration inside the inactivity check function. This requires changing the locking, but one of the two places already holds the mvm mutex and the other easily can. 
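The shape of the change can be sketched in userspace C (the pthread mutex, the flag variable and the queue numbers below are illustrative stand-ins, not driver code): the caller takes the big lock, the check function asserts that it is held, mirroring the lockdep_assert_held(&mvm->mutex) this patch adds, and the follow-up reconfiguration of flagged queues happens inside that same function, so neither caller can skip it.

    #include <assert.h>
    #include <pthread.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the driver's state. */
    static pthread_mutex_t big_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int big_mutex_held;            /* poor man's lockdep_assert_held() */

    static void inactivity_check(void)
    {
            unsigned long reconfig_map;
            int q;

            assert(big_mutex_held);       /* caller must hold the mutex */

            /* Scan phase: pretend queues 2 and 5 need follow-up work. */
            reconfig_map = (1ul << 2) | (1ul << 5);

            /* Reconfigure phase now lives in the same function, so a queue
             * cannot be left with a stale TID owner by either caller. */
            for (q = 0; q < 32; q++)
                    if (reconfig_map & (1ul << q))
                            printf("reconfigure queue %d\n", q);
    }

    int main(void)
    {
            pthread_mutex_lock(&big_mutex);
            big_mutex_held = 1;
            inactivity_check();
            big_mutex_held = 0;
            pthread_mutex_unlock(&big_mutex);
            return 0;
    }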
Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index a36a631cdfa6..8a0cf736122a 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -1383,6 +1383,8 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) unsigned long now = jiffies; int i; + lockdep_assert_held(&mvm->mutex); + if (iwl_mvm_has_new_tx_api(mvm)) return; @@ -1461,6 +1463,10 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) rcu_read_unlock(); spin_unlock_bh(&mvm->queue_info_lock); + + /* Reconfigure queues requiring reconfiguation */ + for (i = 0; i < ARRAY_SIZE(mvm->queue_info); i++) + iwl_mvm_reconfigure_queue(mvm, i); } static inline u8 iwl_mvm_tid_to_ac_queue(int tid) @@ -1533,19 +1539,9 @@ void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk) unsigned long deferred_tid_traffic; int sta_id, tid; - /* Check inactivity of queues */ - iwl_mvm_inactivity_check(mvm); - mutex_lock(&mvm->mutex); - /* No queue reconfiguration in TVQM mode */ - if (!iwl_mvm_has_new_tx_api(mvm)) { - int queue; - - /* Reconfigure queues requiring reconfiguation */ - for (queue = 0; queue < ARRAY_SIZE(mvm->queue_info); queue++) - iwl_mvm_reconfigure_queue(mvm, queue); - } + iwl_mvm_inactivity_check(mvm); /* Go over all stations with deferred traffic */ for_each_set_bit(sta_id, mvm->sta_deferred_frames, From patchwork Mon Oct 8 08:17:31 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630229 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9BF3A15E2 for ; Mon, 8 Oct 2018 08:18:10 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8C2852855E for ; Mon, 8 Oct 2018 08:18:10 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7EBB5287EF; Mon, 8 Oct 2018 08:18:10 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0120E2855E for ; Mon, 8 Oct 2018 08:18:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727123AbeJHP2j (ORCPT ); Mon, 8 Oct 2018 11:28:39 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56406 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP2j (ORCPT ); Mon, 8 Oct 2018 11:28:39 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qjh-0003w6-Av; Mon, 08 Oct 2018 11:17:53 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:31 +0300 Message-Id: <20181008081735.3401-15-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 
In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 14/18] iwlwifi: mvm: remove RECONFIGURING queue state Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg We set the queue to this state, only to pretty much immediately move it out of it again. However, we can't even hit any of the code that checks if the queue is reconfiguring, because all of this happens under mvm->mutex and we hold the all the way from marking the queue as RECONFIGURING to marking it as READY again. Additionally, the queue that became RECONFIGURING would've been in SHARED state before, and it can safely stay in that state. In case of errors, it previously would have stayed in RECONFIGURING which it could never have left again. Remove the state entirely and just track the queues that need to be reconfigured in a separate, local, bitmap. Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 5 --- drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 36 ++++++++------------ 2 files changed, 15 insertions(+), 26 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index 7074fd1f8ce7..8cbd9468aa8b 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -760,10 +760,6 @@ iwl_mvm_baid_data_from_reorder_buf(struct iwl_mvm_reorder_buffer *buf) * it. In this state, when a new queue is needed to be allocated but no * such free queue exists, an inactive queue might be freed and given to * the new RA/TID. - * @IWL_MVM_QUEUE_RECONFIGURING: queue is being reconfigured - * This is the state of a queue that has had traffic pass through it, but - * needs to be reconfigured for some reason, e.g. the queue needs to - * become unshared and aggregations re-enabled on. */ enum iwl_mvm_queue_status { IWL_MVM_QUEUE_FREE, @@ -771,7 +767,6 @@ enum iwl_mvm_queue_status { IWL_MVM_QUEUE_READY, IWL_MVM_QUEUE_SHARED, IWL_MVM_QUEUE_INACTIVE, - IWL_MVM_QUEUE_RECONFIGURING, }; #define IWL_MVM_DQA_QUEUE_TIMEOUT (5 * HZ) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index 8a0cf736122a..b30980c34fd0 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -606,7 +606,13 @@ static int iwl_mvm_get_shared_queue(struct iwl_mvm *mvm, u8 ac_to_queue[IEEE80211_NUM_ACS]; int i; + /* + * This protects us against grabbing a queue that's being reconfigured + * by the inactivity checker. 
+ */ + lockdep_assert_held(&mvm->mutex); lockdep_assert_held(&mvm->queue_info_lock); + if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return -EINVAL; @@ -619,11 +625,6 @@ static int iwl_mvm_get_shared_queue(struct iwl_mvm *mvm, i != IWL_MVM_DQA_BSS_CLIENT_QUEUE) continue; - /* Don't try and take queues being reconfigured */ - if (mvm->queue_info[queue].status == - IWL_MVM_QUEUE_RECONFIGURING) - continue; - ac_to_queue[mvm->queue_info[i].mac80211_ac] = i; } @@ -664,14 +665,6 @@ static int iwl_mvm_get_shared_queue(struct iwl_mvm *mvm, return -ENOSPC; } - /* Make sure the queue isn't in the middle of being reconfigured */ - if (mvm->queue_info[queue].status == IWL_MVM_QUEUE_RECONFIGURING) { - IWL_ERR(mvm, - "TXQ %d is in the middle of re-config - try again\n", - queue); - return -EBUSY; - } - return queue; } @@ -1278,7 +1271,8 @@ static void iwl_mvm_unshare_queue(struct iwl_mvm *mvm, int queue) */ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta, int queue, - unsigned long tid_bitmap) + unsigned long tid_bitmap, + unsigned long *unshare_queues) { int tid; @@ -1345,19 +1339,17 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, /* If the queue is marked as shared - "unshare" it */ if (hweight16(mvm->queue_info[queue].tid_bitmap) == 1 && mvm->queue_info[queue].status == IWL_MVM_QUEUE_SHARED) { - mvm->queue_info[queue].status = IWL_MVM_QUEUE_RECONFIGURING; IWL_DEBUG_TX_QUEUES(mvm, "Marking Q:%d for reconfig\n", queue); + set_bit(queue, unshare_queues); } } static void iwl_mvm_reconfigure_queue(struct iwl_mvm *mvm, int queue) { - bool reconfig; bool change_owner; spin_lock_bh(&mvm->queue_info_lock); - reconfig = mvm->queue_info[queue].status == IWL_MVM_QUEUE_RECONFIGURING; /* * We need to take into account a situation in which a TXQ was @@ -1372,15 +1364,14 @@ static void iwl_mvm_reconfigure_queue(struct iwl_mvm *mvm, int queue) (mvm->queue_info[queue].status == IWL_MVM_QUEUE_SHARED); spin_unlock_bh(&mvm->queue_info_lock); - if (reconfig) - iwl_mvm_unshare_queue(mvm, queue); - else if (change_owner) + if (change_owner) iwl_mvm_change_queue_owner(mvm, queue); } static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) { unsigned long now = jiffies; + unsigned long unshare_queues = 0; int i; lockdep_assert_held(&mvm->mutex); @@ -1456,7 +1447,8 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) spin_lock(&mvmsta->lock); spin_lock(&mvm->queue_info_lock); iwl_mvm_remove_inactive_tids(mvm, mvmsta, i, - inactive_tid_bitmap); + inactive_tid_bitmap, + &unshare_queues); /* only unlock sta lock - we still need the queue info lock */ spin_unlock(&mvmsta->lock); } @@ -1465,6 +1457,8 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) spin_unlock_bh(&mvm->queue_info_lock); /* Reconfigure queues requiring reconfiguation */ + for_each_set_bit(i, &unshare_queues, IWL_MAX_HW_QUEUES) + iwl_mvm_unshare_queue(mvm, i); for (i = 0; i < ARRAY_SIZE(mvm->queue_info); i++) iwl_mvm_reconfigure_queue(mvm, i); } From patchwork Mon Oct 8 08:17:32 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630227 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 592BD15E2 for ; Mon, 8 Oct 2018 08:18:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org 
(Postfix) with ESMTP id 49817287EF for ; Mon, 8 Oct 2018 08:18:06 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3DEF728BE5; Mon, 8 Oct 2018 08:18:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C0AF3287EF for ; Mon, 8 Oct 2018 08:18:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727094AbeJHP2f (ORCPT ); Mon, 8 Oct 2018 11:28:35 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56398 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP2f (ORCPT ); Mon, 8 Oct 2018 11:28:35 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qji-0003w6-3d; Mon, 08 Oct 2018 11:17:54 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:32 +0300 Message-Id: <20181008081735.3401-16-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 15/18] iwlwifi: mvm: make queue TID change more explicit Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg Instead of iterating all the queues after having potentially changed some queue configurations, rechecking if that was done, mark the ones that do need a TID change explicitly in a bitmap and use that to send the change to the firmware. While at it, also rename iwl_mvm_change_queue_owner() to iwl_mvm_change_queue_tid() since that's more obvious - the "kind" of owner isn't immediately clear right now. 
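A rough userspace illustration of the bitmap approach (plain C; the helper name, queue numbers and conditions are made up for the example): during the scan, each queue that needs follow-up work just gets its bit set, and a cheap second pass afterwards walks only the set bits, which is the same pattern the patch implements with set_bit() and for_each_set_bit() on the local unshare_queues and changetid_queues bitmaps.

    #include <stdio.h>

    /* Userspace stand-in for set_bit(); all values below are illustrative. */
    static void set_bit_ul(int nr, unsigned long *map)
    {
            *map |= 1ul << nr;
    }

    int main(void)
    {
            unsigned long unshare_queues = 0, changetid_queues = 0;
            int q;

            /* Scan phase: flag queues instead of revisiting all of them later. */
            set_bit_ul(7, &unshare_queues);    /* only one TID left on a shared queue */
            set_bit_ul(3, &changetid_queues);  /* owning TID went inactive */

            /* Act phase: touch only the flagged queues. */
            for (q = 0; q < 32; q++) {
                    if (unshare_queues & (1ul << q))
                            printf("unshare queue %d\n", q);
                    if (changetid_queues & (1ul << q))
                            printf("send TID change for queue %d\n", q);
            }
            return 0;
    }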
Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 53 +++++++++----------- 1 file changed, 25 insertions(+), 28 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index b30980c34fd0..ec4925c89295 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -1141,7 +1141,7 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, return ret; } -static void iwl_mvm_change_queue_owner(struct iwl_mvm *mvm, int queue) +static void iwl_mvm_change_queue_tid(struct iwl_mvm *mvm, int queue) { struct iwl_scd_txq_cfg_cmd cmd = { .scd_queue = queue, @@ -1272,7 +1272,8 @@ static void iwl_mvm_unshare_queue(struct iwl_mvm *mvm, int queue) static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta, int queue, unsigned long tid_bitmap, - unsigned long *unshare_queues) + unsigned long *unshare_queues, + unsigned long *changetid_queues) { int tid; @@ -1311,12 +1312,29 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, */ for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) { int mac_queue = mvmsta->vif->hw_queue[tid_to_mac80211_ac[tid]]; + u16 tid_bitmap; mvmsta->tid_data[tid].txq_id = IWL_MVM_INVALID_QUEUE; mvm->hw_queue_to_mac80211[queue] &= ~BIT(mac_queue); mvm->queue_info[queue].tid_bitmap &= ~BIT(tid); mvmsta->tid_data[tid].is_tid_active = false; + tid_bitmap = mvm->queue_info[queue].tid_bitmap; + + /* + * We need to take into account a situation in which a TXQ was + * allocated to TID x, and then turned shared by adding TIDs y + * and z. If TID x becomes inactive and is removed from the TXQ, + * ownership must be given to one of the remaining TIDs. + * This is mainly because if TID x continues - a new queue can't + * be allocated for it as long as it is an owner of another TXQ. + * + * Mark this queue in the right bitmap, we'll send the command + * to the firmware later. + */ + if (!(tid_bitmap & BIT(mvm->queue_info[queue].txq_tid))) + set_bit(queue, changetid_queues); + IWL_DEBUG_TX_QUEUES(mvm, "Removing inactive TID %d from shared Q:%d\n", tid, queue); @@ -1345,33 +1363,11 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, } } -static void iwl_mvm_reconfigure_queue(struct iwl_mvm *mvm, int queue) -{ - bool change_owner; - - spin_lock_bh(&mvm->queue_info_lock); - - /* - * We need to take into account a situation in which a TXQ was - * allocated to TID x, and then turned shared by adding TIDs y - * and z. If TID x becomes inactive and is removed from the TXQ, - * ownership must be given to one of the remaining TIDs. - * This is mainly because if TID x continues - a new queue can't - * be allocated for it as long as it is an owner of another TXQ. 
- */ - change_owner = !(mvm->queue_info[queue].tid_bitmap & - BIT(mvm->queue_info[queue].txq_tid)) && - (mvm->queue_info[queue].status == IWL_MVM_QUEUE_SHARED); - spin_unlock_bh(&mvm->queue_info_lock); - - if (change_owner) - iwl_mvm_change_queue_owner(mvm, queue); -} - static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) { unsigned long now = jiffies; unsigned long unshare_queues = 0; + unsigned long changetid_queues = 0; int i; lockdep_assert_held(&mvm->mutex); @@ -1448,7 +1444,8 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) spin_lock(&mvm->queue_info_lock); iwl_mvm_remove_inactive_tids(mvm, mvmsta, i, inactive_tid_bitmap, - &unshare_queues); + &unshare_queues, + &changetid_queues); /* only unlock sta lock - we still need the queue info lock */ spin_unlock(&mvmsta->lock); } @@ -1459,8 +1456,8 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) /* Reconfigure queues requiring reconfiguation */ for_each_set_bit(i, &unshare_queues, IWL_MAX_HW_QUEUES) iwl_mvm_unshare_queue(mvm, i); - for (i = 0; i < ARRAY_SIZE(mvm->queue_info); i++) - iwl_mvm_reconfigure_queue(mvm, i); + for_each_set_bit(i, &changetid_queues, IWL_MAX_HW_QUEUES) + iwl_mvm_change_queue_tid(mvm, i); } static inline u8 iwl_mvm_tid_to_ac_queue(int tid) From patchwork Mon Oct 8 08:17:33 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630237 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 925AC15E2 for ; Mon, 8 Oct 2018 08:18:27 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 82A3A28768 for ; Mon, 8 Oct 2018 08:18:27 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7700D288D3; Mon, 8 Oct 2018 08:18:27 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 17D0F28768 for ; Mon, 8 Oct 2018 08:18:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726967AbeJHP24 (ORCPT ); Mon, 8 Oct 2018 11:28:56 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56438 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP24 (ORCPT ); Mon, 8 Oct 2018 11:28:56 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qji-0003w6-RG; Mon, 08 Oct 2018 11:17:55 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:33 +0300 Message-Id: <20181008081735.3401-17-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 16/18] iwlwifi: mvm: make iwl_mvm_scd_queue_redirect() static Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org 
X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg This function is only used in the file where it's declared, so just make it static. Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 6 +++--- drivers/net/wireless/intel/iwlwifi/mvm/sta.h | 4 ---- 2 files changed, 3 insertions(+), 7 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index ec4925c89295..24631866a174 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -674,9 +674,9 @@ static int iwl_mvm_get_shared_queue(struct iwl_mvm *mvm, * in such a case, otherwise - if no redirection required - it does nothing, * unless the %force param is true. */ -int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, - int ac, int ssn, unsigned int wdg_timeout, - bool force) +static int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, + int ac, int ssn, unsigned int wdg_timeout, + bool force) { struct iwl_scd_txq_cfg_cmd cmd = { .scd_queue = queue, diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h index 0fc211108149..492cfd37521b 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h @@ -572,8 +572,4 @@ void iwl_mvm_modify_all_sta_disable_tx(struct iwl_mvm *mvm, void iwl_mvm_csa_client_absent(struct iwl_mvm *mvm, struct ieee80211_vif *vif); void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk); -int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, - int ac, int ssn, unsigned int wdg_timeout, - bool force); - #endif /* __sta_h__ */ From patchwork Mon Oct 8 08:17:34 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630241 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 62D6115E2 for ; Mon, 8 Oct 2018 08:18:40 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 73CDC287EF for ; Mon, 8 Oct 2018 08:18:36 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 67D4C289CD; Mon, 8 Oct 2018 08:18:36 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 815A4287EF for ; Mon, 8 Oct 2018 08:18:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727246AbeJHP3F (ORCPT ); Mon, 8 Oct 2018 11:29:05 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56452 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP3E (ORCPT ); Mon, 8 Oct 2018 11:29:04 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qjj-0003w6-Gz; Mon, 08 Oct 2018 11:17:56 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: 
linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:34 +0300 Message-Id: <20181008081735.3401-18-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 17/18] iwlwifi: mvm: move iwl_mvm_sta_alloc_queue() down Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg We want to call iwl_mvm_inactivity_check() from here in the next patch, so need to move the code down to be able to. Fix a minor checkpatch complaint while at it. Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 410 +++++++++---------- 1 file changed, 205 insertions(+), 205 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index 24631866a174..8ae4fbf17f03 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -936,211 +936,6 @@ static bool iwl_mvm_enable_txq(struct iwl_mvm *mvm, int queue, return inc_ssn; } -static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, - struct ieee80211_sta *sta, u8 ac, int tid, - struct ieee80211_hdr *hdr) -{ - struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); - struct iwl_trans_txq_scd_cfg cfg = { - .fifo = iwl_mvm_mac_ac_to_tx_fifo(mvm, ac), - .sta_id = mvmsta->sta_id, - .tid = tid, - .frame_limit = IWL_FRAME_LIMIT, - }; - unsigned int wdg_timeout = - iwl_mvm_get_wd_timeout(mvm, mvmsta->vif, false, false); - u8 mac_queue = mvmsta->vif->hw_queue[ac]; - int queue = -1; - bool using_inactive_queue = false, same_sta = false; - unsigned long disable_agg_tids = 0; - enum iwl_mvm_agg_state queue_state; - bool shared_queue = false, inc_ssn; - int ssn; - unsigned long tfd_queue_mask; - int ret; - - lockdep_assert_held(&mvm->mutex); - - if (iwl_mvm_has_new_tx_api(mvm)) - return iwl_mvm_sta_alloc_queue_tvqm(mvm, sta, ac, tid); - - spin_lock_bh(&mvmsta->lock); - tfd_queue_mask = mvmsta->tfd_queue_msk; - spin_unlock_bh(&mvmsta->lock); - - spin_lock_bh(&mvm->queue_info_lock); - - /* - * Non-QoS, QoS NDP and MGMT frames should go to a MGMT queue, if one - * exists - */ - if (!ieee80211_is_data_qos(hdr->frame_control) || - ieee80211_is_qos_nullfunc(hdr->frame_control)) { - queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id, - IWL_MVM_DQA_MIN_MGMT_QUEUE, - IWL_MVM_DQA_MAX_MGMT_QUEUE); - if (queue >= IWL_MVM_DQA_MIN_MGMT_QUEUE) - IWL_DEBUG_TX_QUEUES(mvm, "Found free MGMT queue #%d\n", - queue); - - /* If no such queue is found, we'll use a DATA queue instead */ - } - - if ((queue < 0 && mvmsta->reserved_queue != IEEE80211_INVAL_HW_QUEUE) && - (mvm->queue_info[mvmsta->reserved_queue].status == - IWL_MVM_QUEUE_RESERVED || - mvm->queue_info[mvmsta->reserved_queue].status == - IWL_MVM_QUEUE_INACTIVE)) { - queue = mvmsta->reserved_queue; - mvm->queue_info[queue].reserved = true; - IWL_DEBUG_TX_QUEUES(mvm, "Using reserved queue #%d\n", queue); - } - - if (queue < 0) - queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id, - IWL_MVM_DQA_MIN_DATA_QUEUE, - IWL_MVM_DQA_MAX_DATA_QUEUE); - - /* - * Check if this queue is already allocated but inactive. 
- * In such a case, we'll need to first free this queue before enabling - * it again, so we'll mark it as reserved to make sure no new traffic - * arrives on it - */ - if (queue > 0 && - mvm->queue_info[queue].status == IWL_MVM_QUEUE_INACTIVE) { - mvm->queue_info[queue].status = IWL_MVM_QUEUE_RESERVED; - using_inactive_queue = true; - same_sta = mvm->queue_info[queue].ra_sta_id == mvmsta->sta_id; - IWL_DEBUG_TX_QUEUES(mvm, - "Re-assigning TXQ %d: sta_id=%d, tid=%d\n", - queue, mvmsta->sta_id, tid); - } - - /* No free queue - we'll have to share */ - if (queue <= 0) { - queue = iwl_mvm_get_shared_queue(mvm, tfd_queue_mask, ac); - if (queue > 0) { - shared_queue = true; - mvm->queue_info[queue].status = IWL_MVM_QUEUE_SHARED; - } - } - - /* - * Mark TXQ as ready, even though it hasn't been fully configured yet, - * to make sure no one else takes it. - * This will allow avoiding re-acquiring the lock at the end of the - * configuration. On error we'll mark it back as free. - */ - if ((queue > 0) && !shared_queue) - mvm->queue_info[queue].status = IWL_MVM_QUEUE_READY; - - spin_unlock_bh(&mvm->queue_info_lock); - - /* This shouldn't happen - out of queues */ - if (WARN_ON(queue <= 0)) { - IWL_ERR(mvm, "No available queues for tid %d on sta_id %d\n", - tid, cfg.sta_id); - return queue; - } - - /* - * Actual en/disablement of aggregations is through the ADD_STA HCMD, - * but for configuring the SCD to send A-MPDUs we need to mark the queue - * as aggregatable. - * Mark all DATA queues as allowing to be aggregated at some point - */ - cfg.aggregate = (queue >= IWL_MVM_DQA_MIN_DATA_QUEUE || - queue == IWL_MVM_DQA_BSS_CLIENT_QUEUE); - - /* - * If this queue was previously inactive (idle) - we need to free it - * first - */ - if (using_inactive_queue) { - ret = iwl_mvm_free_inactive_queue(mvm, queue, same_sta); - if (ret) - return ret; - } - - IWL_DEBUG_TX_QUEUES(mvm, - "Allocating %squeue #%d to sta %d on tid %d\n", - shared_queue ? "shared " : "", queue, - mvmsta->sta_id, tid); - - if (shared_queue) { - /* Disable any open aggs on this queue */ - disable_agg_tids = iwl_mvm_get_queue_agg_tids(mvm, queue); - - if (disable_agg_tids) { - IWL_DEBUG_TX_QUEUES(mvm, "Disabling aggs on queue %d\n", - queue); - iwl_mvm_invalidate_sta_queue(mvm, queue, - disable_agg_tids, false); - } - } - - ssn = IEEE80211_SEQ_TO_SN(le16_to_cpu(hdr->seq_ctrl)); - inc_ssn = iwl_mvm_enable_txq(mvm, queue, mac_queue, - ssn, &cfg, wdg_timeout); - if (inc_ssn) { - ssn = (ssn + 1) & IEEE80211_SCTL_SEQ; - le16_add_cpu(&hdr->seq_ctrl, 0x10); - } - - /* - * Mark queue as shared in transport if shared - * Note this has to be done after queue enablement because enablement - * can also set this value, and there is no indication there to shared - * queues - */ - if (shared_queue) - iwl_trans_txq_set_shared_mode(mvm->trans, queue, true); - - spin_lock_bh(&mvmsta->lock); - /* - * This looks racy, but it is not. We have only one packet for - * this ra/tid in our Tx path since we stop the Qdisc when we - * need to allocate a new TFD queue. 
- */ - if (inc_ssn) - mvmsta->tid_data[tid].seq_number += 0x10; - mvmsta->tid_data[tid].txq_id = queue; - mvmsta->tid_data[tid].is_tid_active = true; - mvmsta->tfd_queue_msk |= BIT(queue); - queue_state = mvmsta->tid_data[tid].state; - - if (mvmsta->reserved_queue == queue) - mvmsta->reserved_queue = IEEE80211_INVAL_HW_QUEUE; - spin_unlock_bh(&mvmsta->lock); - - if (!shared_queue) { - ret = iwl_mvm_sta_send_to_fw(mvm, sta, true, STA_MODIFY_QUEUES); - if (ret) - goto out_err; - - /* If we need to re-enable aggregations... */ - if (queue_state == IWL_AGG_ON) { - ret = iwl_mvm_sta_tx_agg(mvm, sta, tid, queue, true); - if (ret) - goto out_err; - } - } else { - /* Redirect queue, if needed */ - ret = iwl_mvm_scd_queue_redirect(mvm, queue, tid, ac, ssn, - wdg_timeout, false); - if (ret) - goto out_err; - } - - return 0; - -out_err: - iwl_mvm_disable_txq(mvm, queue, mac_queue, tid, 0); - - return ret; -} - static void iwl_mvm_change_queue_tid(struct iwl_mvm *mvm, int queue) { struct iwl_scd_txq_cfg_cmd cmd = { @@ -1460,6 +1255,211 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) iwl_mvm_change_queue_tid(mvm, i); } +static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, + struct ieee80211_sta *sta, u8 ac, int tid, + struct ieee80211_hdr *hdr) +{ + struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); + struct iwl_trans_txq_scd_cfg cfg = { + .fifo = iwl_mvm_mac_ac_to_tx_fifo(mvm, ac), + .sta_id = mvmsta->sta_id, + .tid = tid, + .frame_limit = IWL_FRAME_LIMIT, + }; + unsigned int wdg_timeout = + iwl_mvm_get_wd_timeout(mvm, mvmsta->vif, false, false); + u8 mac_queue = mvmsta->vif->hw_queue[ac]; + int queue = -1; + bool using_inactive_queue = false, same_sta = false; + unsigned long disable_agg_tids = 0; + enum iwl_mvm_agg_state queue_state; + bool shared_queue = false, inc_ssn; + int ssn; + unsigned long tfd_queue_mask; + int ret; + + lockdep_assert_held(&mvm->mutex); + + if (iwl_mvm_has_new_tx_api(mvm)) + return iwl_mvm_sta_alloc_queue_tvqm(mvm, sta, ac, tid); + + spin_lock_bh(&mvmsta->lock); + tfd_queue_mask = mvmsta->tfd_queue_msk; + spin_unlock_bh(&mvmsta->lock); + + spin_lock_bh(&mvm->queue_info_lock); + + /* + * Non-QoS, QoS NDP and MGMT frames should go to a MGMT queue, if one + * exists + */ + if (!ieee80211_is_data_qos(hdr->frame_control) || + ieee80211_is_qos_nullfunc(hdr->frame_control)) { + queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id, + IWL_MVM_DQA_MIN_MGMT_QUEUE, + IWL_MVM_DQA_MAX_MGMT_QUEUE); + if (queue >= IWL_MVM_DQA_MIN_MGMT_QUEUE) + IWL_DEBUG_TX_QUEUES(mvm, "Found free MGMT queue #%d\n", + queue); + + /* If no such queue is found, we'll use a DATA queue instead */ + } + + if ((queue < 0 && mvmsta->reserved_queue != IEEE80211_INVAL_HW_QUEUE) && + (mvm->queue_info[mvmsta->reserved_queue].status == + IWL_MVM_QUEUE_RESERVED || + mvm->queue_info[mvmsta->reserved_queue].status == + IWL_MVM_QUEUE_INACTIVE)) { + queue = mvmsta->reserved_queue; + mvm->queue_info[queue].reserved = true; + IWL_DEBUG_TX_QUEUES(mvm, "Using reserved queue #%d\n", queue); + } + + if (queue < 0) + queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id, + IWL_MVM_DQA_MIN_DATA_QUEUE, + IWL_MVM_DQA_MAX_DATA_QUEUE); + + /* + * Check if this queue is already allocated but inactive. 
+ * In such a case, we'll need to first free this queue before enabling + * it again, so we'll mark it as reserved to make sure no new traffic + * arrives on it + */ + if (queue > 0 && + mvm->queue_info[queue].status == IWL_MVM_QUEUE_INACTIVE) { + mvm->queue_info[queue].status = IWL_MVM_QUEUE_RESERVED; + using_inactive_queue = true; + same_sta = mvm->queue_info[queue].ra_sta_id == mvmsta->sta_id; + IWL_DEBUG_TX_QUEUES(mvm, + "Re-assigning TXQ %d: sta_id=%d, tid=%d\n", + queue, mvmsta->sta_id, tid); + } + + /* No free queue - we'll have to share */ + if (queue <= 0) { + queue = iwl_mvm_get_shared_queue(mvm, tfd_queue_mask, ac); + if (queue > 0) { + shared_queue = true; + mvm->queue_info[queue].status = IWL_MVM_QUEUE_SHARED; + } + } + + /* + * Mark TXQ as ready, even though it hasn't been fully configured yet, + * to make sure no one else takes it. + * This will allow avoiding re-acquiring the lock at the end of the + * configuration. On error we'll mark it back as free. + */ + if (queue > 0 && !shared_queue) + mvm->queue_info[queue].status = IWL_MVM_QUEUE_READY; + + spin_unlock_bh(&mvm->queue_info_lock); + + /* This shouldn't happen - out of queues */ + if (WARN_ON(queue <= 0)) { + IWL_ERR(mvm, "No available queues for tid %d on sta_id %d\n", + tid, cfg.sta_id); + return queue; + } + + /* + * Actual en/disablement of aggregations is through the ADD_STA HCMD, + * but for configuring the SCD to send A-MPDUs we need to mark the queue + * as aggregatable. + * Mark all DATA queues as allowing to be aggregated at some point + */ + cfg.aggregate = (queue >= IWL_MVM_DQA_MIN_DATA_QUEUE || + queue == IWL_MVM_DQA_BSS_CLIENT_QUEUE); + + /* + * If this queue was previously inactive (idle) - we need to free it + * first + */ + if (using_inactive_queue) { + ret = iwl_mvm_free_inactive_queue(mvm, queue, same_sta); + if (ret) + return ret; + } + + IWL_DEBUG_TX_QUEUES(mvm, + "Allocating %squeue #%d to sta %d on tid %d\n", + shared_queue ? "shared " : "", queue, + mvmsta->sta_id, tid); + + if (shared_queue) { + /* Disable any open aggs on this queue */ + disable_agg_tids = iwl_mvm_get_queue_agg_tids(mvm, queue); + + if (disable_agg_tids) { + IWL_DEBUG_TX_QUEUES(mvm, "Disabling aggs on queue %d\n", + queue); + iwl_mvm_invalidate_sta_queue(mvm, queue, + disable_agg_tids, false); + } + } + + ssn = IEEE80211_SEQ_TO_SN(le16_to_cpu(hdr->seq_ctrl)); + inc_ssn = iwl_mvm_enable_txq(mvm, queue, mac_queue, + ssn, &cfg, wdg_timeout); + if (inc_ssn) { + ssn = (ssn + 1) & IEEE80211_SCTL_SEQ; + le16_add_cpu(&hdr->seq_ctrl, 0x10); + } + + /* + * Mark queue as shared in transport if shared + * Note this has to be done after queue enablement because enablement + * can also set this value, and there is no indication there to shared + * queues + */ + if (shared_queue) + iwl_trans_txq_set_shared_mode(mvm->trans, queue, true); + + spin_lock_bh(&mvmsta->lock); + /* + * This looks racy, but it is not. We have only one packet for + * this ra/tid in our Tx path since we stop the Qdisc when we + * need to allocate a new TFD queue. 
+ */ + if (inc_ssn) + mvmsta->tid_data[tid].seq_number += 0x10; + mvmsta->tid_data[tid].txq_id = queue; + mvmsta->tid_data[tid].is_tid_active = true; + mvmsta->tfd_queue_msk |= BIT(queue); + queue_state = mvmsta->tid_data[tid].state; + + if (mvmsta->reserved_queue == queue) + mvmsta->reserved_queue = IEEE80211_INVAL_HW_QUEUE; + spin_unlock_bh(&mvmsta->lock); + + if (!shared_queue) { + ret = iwl_mvm_sta_send_to_fw(mvm, sta, true, STA_MODIFY_QUEUES); + if (ret) + goto out_err; + + /* If we need to re-enable aggregations... */ + if (queue_state == IWL_AGG_ON) { + ret = iwl_mvm_sta_tx_agg(mvm, sta, tid, queue, true); + if (ret) + goto out_err; + } + } else { + /* Redirect queue, if needed */ + ret = iwl_mvm_scd_queue_redirect(mvm, queue, tid, ac, ssn, + wdg_timeout, false); + if (ret) + goto out_err; + } + + return 0; + +out_err: + iwl_mvm_disable_txq(mvm, queue, mac_queue, tid, 0); + + return ret; +} + static inline u8 iwl_mvm_tid_to_ac_queue(int tid) { if (tid == IWL_MAX_TID_COUNT) From patchwork Mon Oct 8 08:17:35 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Coelho X-Patchwork-Id: 10630239 X-Patchwork-Delegate: luca@coelho.fi Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A4BF5933 for ; Mon, 8 Oct 2018 08:18:33 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 937B0287EF for ; Mon, 8 Oct 2018 08:18:33 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 87CC928909; Mon, 8 Oct 2018 08:18:33 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7D564287EF for ; Mon, 8 Oct 2018 08:18:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726982AbeJHP3C (ORCPT ); Mon, 8 Oct 2018 11:29:02 -0400 Received: from paleale.coelho.fi ([176.9.41.70]:56446 "EHLO farmhouse.coelho.fi" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725973AbeJHP3B (ORCPT ); Mon, 8 Oct 2018 11:29:01 -0400 Received: from 91-156-4-241.elisa-laajakaista.fi ([91.156.4.241] helo=redipa.ger.corp.intel.com) by farmhouse.coelho.fi with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.91) (envelope-from ) id 1g9Qjk-0003w6-IP; Mon, 08 Oct 2018 11:17:57 +0300 From: Luca Coelho To: kvalo@codeaurora.org Cc: linux-wireless@vger.kernel.org, Johannes Berg , Luca Coelho Date: Mon, 8 Oct 2018 11:17:35 +0300 Message-Id: <20181008081735.3401-19-luca@coelho.fi> X-Mailer: git-send-email 2.19.0 In-Reply-To: <20181008081735.3401-1-luca@coelho.fi> References: <20181008081735.3401-1-luca@coelho.fi> MIME-Version: 1.0 Subject: [PATCH 18/18] iwlwifi: mvm: kill INACTIVE queue state Sender: linux-wireless-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Johannes Berg We don't really need this state: instead of having an inactive state where we can awaken zombie queues again if needed, just keep them in their normal state unless a new queue is actually needed and there's no other way of getting 
one. We do this here by making the inactivity check not free queues unless instructed that we now really need to allocate one to a specific station, and in that case it'll just free the queue immediately, without doing any inactivity step inbetween. The only downside is a little bit more processing in this case, but the code complexity is lower. Additionally, this fixes a corner case: due to the way the code worked, we could only ever reuse an inactive queue if it was the reserved queue for a station, as iwl_mvm_find_free_queue() would never consider returning an inactive queue. Signed-off-by: Johannes Berg Signed-off-by: Luca Coelho --- drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 7 - drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 136 ++++++++----------- drivers/net/wireless/intel/iwlwifi/mvm/sta.h | 4 - drivers/net/wireless/intel/iwlwifi/mvm/tx.c | 34 ++--- 4 files changed, 66 insertions(+), 115 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index 8cbd9468aa8b..7ba5bc2ed1c4 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -754,19 +754,12 @@ iwl_mvm_baid_data_from_reorder_buf(struct iwl_mvm_reorder_buffer *buf) * This is a state in which a single queue serves more than one TID, all of * which are not aggregated. Note that the queue is only associated to one * RA. - * @IWL_MVM_QUEUE_INACTIVE: queue is allocated but no traffic on it - * This is a state of a queue that has had traffic on it, but during the - * last %IWL_MVM_DQA_QUEUE_TIMEOUT time period there has been no traffic on - * it. In this state, when a new queue is needed to be allocated but no - * such free queue exists, an inactive queue might be freed and given to - * the new RA/TID. 
*/ enum iwl_mvm_queue_status { IWL_MVM_QUEUE_FREE, IWL_MVM_QUEUE_RESERVED, IWL_MVM_QUEUE_READY, IWL_MVM_QUEUE_SHARED, - IWL_MVM_QUEUE_INACTIVE, }; #define IWL_MVM_DQA_QUEUE_TIMEOUT (5 * HZ) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index 8ae4fbf17f03..1887d2b9f185 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -549,11 +549,12 @@ static int iwl_mvm_remove_sta_queue_marking(struct iwl_mvm *mvm, int queue) } static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue, - bool same_sta) + u8 new_sta_id) { struct iwl_mvm_sta *mvmsta; u8 txq_curr_ac, sta_id, tid; unsigned long disable_agg_tids = 0; + bool same_sta; int ret; lockdep_assert_held(&mvm->mutex); @@ -567,6 +568,8 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue, tid = mvm->queue_info[queue].txq_tid; spin_unlock_bh(&mvm->queue_info_lock); + same_sta = sta_id == new_sta_id; + mvmsta = iwl_mvm_sta_from_staid_protected(mvm, sta_id); if (WARN_ON(!mvmsta)) return -EINVAL; @@ -581,10 +584,6 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue, mvmsta->vif->hw_queue[txq_curr_ac], tid, 0); if (ret) { - /* Re-mark the inactive queue as inactive */ - spin_lock_bh(&mvm->queue_info_lock); - mvm->queue_info[queue].status = IWL_MVM_QUEUE_INACTIVE; - spin_unlock_bh(&mvm->queue_info_lock); IWL_ERR(mvm, "Failed to free inactive queue %d (ret=%d)\n", queue, ret); @@ -844,7 +843,6 @@ static int iwl_mvm_sta_alloc_queue_tvqm(struct iwl_mvm *mvm, spin_lock_bh(&mvmsta->lock); mvmsta->tid_data[tid].txq_id = queue; - mvmsta->tid_data[tid].is_tid_active = true; spin_unlock_bh(&mvmsta->lock); return 0; @@ -1063,8 +1061,10 @@ static void iwl_mvm_unshare_queue(struct iwl_mvm *mvm, int queue) * Remove inactive TIDs of a given queue. * If all queue TIDs are inactive - mark the queue as inactive * If only some the queue TIDs are inactive - unmap them from the queue + * + * Returns %true if all TIDs were removed and the queue could be reused. */ -static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, +static bool iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta, int queue, unsigned long tid_bitmap, unsigned long *unshare_queues, @@ -1076,7 +1076,7 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, lockdep_assert_held(&mvm->queue_info_lock); if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) - return; + return false; /* Go over all non-active TIDs, incl. IWL_MAX_TID_COUNT (for mgmt) */ for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) { @@ -1089,16 +1089,10 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, tid_bitmap &= ~BIT(tid); } - /* If all TIDs in the queue are inactive - mark queue as inactive. 
*/ + /* If all TIDs in the queue are inactive - return it can be reused */ if (tid_bitmap == mvm->queue_info[queue].tid_bitmap) { - mvm->queue_info[queue].status = IWL_MVM_QUEUE_INACTIVE; - - for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) - mvmsta->tid_data[tid].is_tid_active = false; - - IWL_DEBUG_TX_QUEUES(mvm, "Queue %d marked as inactive\n", - queue); - return; + IWL_DEBUG_TX_QUEUES(mvm, "Queue %d is inactive\n", queue); + return true; } /* @@ -1112,7 +1106,6 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, mvmsta->tid_data[tid].txq_id = IWL_MVM_INVALID_QUEUE; mvm->hw_queue_to_mac80211[queue] &= ~BIT(mac_queue); mvm->queue_info[queue].tid_bitmap &= ~BIT(tid); - mvmsta->tid_data[tid].is_tid_active = false; tid_bitmap = mvm->queue_info[queue].tid_bitmap; @@ -1156,19 +1149,30 @@ static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, queue); set_bit(queue, unshare_queues); } + + return false; } -static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) +/* + * Check for inactivity - this includes checking if any queue + * can be unshared and finding one (and only one) that can be + * reused. + * This function is also invoked as a sort of clean-up task, + * in which case @alloc_for_sta is IWL_MVM_INVALID_STA. + * + * Returns the queue number, or -ENOSPC. + */ +static int iwl_mvm_inactivity_check(struct iwl_mvm *mvm, u8 alloc_for_sta) { unsigned long now = jiffies; unsigned long unshare_queues = 0; unsigned long changetid_queues = 0; - int i; + int i, ret, free_queue = -ENOSPC; lockdep_assert_held(&mvm->mutex); if (iwl_mvm_has_new_tx_api(mvm)) - return; + return -ENOSPC; spin_lock_bh(&mvm->queue_info_lock); @@ -1177,11 +1181,6 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) /* we skip the CMD queue below by starting at 1 */ BUILD_BUG_ON(IWL_MVM_DQA_CMD_QUEUE != 0); - /* - * If a queue times out - mark it as INACTIVE (don't remove right away - * if we don't have to.) 
This is an optimization in case traffic comes - * later, and we don't HAVE to use a currently-inactive queue - */ for (i = 1; i < IWL_MAX_HW_QUEUES; i++) { struct ieee80211_sta *sta; struct iwl_mvm_sta *mvmsta; @@ -1237,10 +1236,12 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) /* and we need this locking order */ spin_lock(&mvmsta->lock); spin_lock(&mvm->queue_info_lock); - iwl_mvm_remove_inactive_tids(mvm, mvmsta, i, - inactive_tid_bitmap, - &unshare_queues, - &changetid_queues); + ret = iwl_mvm_remove_inactive_tids(mvm, mvmsta, i, + inactive_tid_bitmap, + &unshare_queues, + &changetid_queues); + if (ret >= 0 && free_queue < 0) + free_queue = ret; /* only unlock sta lock - we still need the queue info lock */ spin_unlock(&mvmsta->lock); } @@ -1253,6 +1254,15 @@ static void iwl_mvm_inactivity_check(struct iwl_mvm *mvm) iwl_mvm_unshare_queue(mvm, i); for_each_set_bit(i, &changetid_queues, IWL_MAX_HW_QUEUES) iwl_mvm_change_queue_tid(mvm, i); + + if (free_queue >= 0 && alloc_for_sta != IWL_MVM_INVALID_STA) { + ret = iwl_mvm_free_inactive_queue(mvm, free_queue, + alloc_for_sta); + if (ret) + return ret; + } + + return free_queue; } static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, @@ -1270,7 +1280,6 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, iwl_mvm_get_wd_timeout(mvm, mvmsta->vif, false, false); u8 mac_queue = mvmsta->vif->hw_queue[ac]; int queue = -1; - bool using_inactive_queue = false, same_sta = false; unsigned long disable_agg_tids = 0; enum iwl_mvm_agg_state queue_state; bool shared_queue = false, inc_ssn; @@ -1307,9 +1316,7 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, if ((queue < 0 && mvmsta->reserved_queue != IEEE80211_INVAL_HW_QUEUE) && (mvm->queue_info[mvmsta->reserved_queue].status == - IWL_MVM_QUEUE_RESERVED || - mvm->queue_info[mvmsta->reserved_queue].status == - IWL_MVM_QUEUE_INACTIVE)) { + IWL_MVM_QUEUE_RESERVED)) { queue = mvmsta->reserved_queue; mvm->queue_info[queue].reserved = true; IWL_DEBUG_TX_QUEUES(mvm, "Using reserved queue #%d\n", queue); @@ -1319,21 +1326,13 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id, IWL_MVM_DQA_MIN_DATA_QUEUE, IWL_MVM_DQA_MAX_DATA_QUEUE); + if (queue < 0) { + spin_unlock_bh(&mvm->queue_info_lock); - /* - * Check if this queue is already allocated but inactive. - * In such a case, we'll need to first free this queue before enabling - * it again, so we'll mark it as reserved to make sure no new traffic - * arrives on it - */ - if (queue > 0 && - mvm->queue_info[queue].status == IWL_MVM_QUEUE_INACTIVE) { - mvm->queue_info[queue].status = IWL_MVM_QUEUE_RESERVED; - using_inactive_queue = true; - same_sta = mvm->queue_info[queue].ra_sta_id == mvmsta->sta_id; - IWL_DEBUG_TX_QUEUES(mvm, - "Re-assigning TXQ %d: sta_id=%d, tid=%d\n", - queue, mvmsta->sta_id, tid); + /* try harder - perhaps kill an inactive queue */ + queue = iwl_mvm_inactivity_check(mvm, mvmsta->sta_id); + + spin_lock_bh(&mvm->queue_info_lock); } /* No free queue - we'll have to share */ @@ -1372,16 +1371,6 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, cfg.aggregate = (queue >= IWL_MVM_DQA_MIN_DATA_QUEUE || queue == IWL_MVM_DQA_BSS_CLIENT_QUEUE); - /* - * If this queue was previously inactive (idle) - we need to free it - * first - */ - if (using_inactive_queue) { - ret = iwl_mvm_free_inactive_queue(mvm, queue, same_sta); - if (ret) - return ret; - } - IWL_DEBUG_TX_QUEUES(mvm, "Allocating %squeue #%d to sta %d on tid %d\n", shared_queue ? 
"shared " : "", queue, @@ -1425,7 +1414,6 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, if (inc_ssn) mvmsta->tid_data[tid].seq_number += 0x10; mvmsta->tid_data[tid].txq_id = queue; - mvmsta->tid_data[tid].is_tid_active = true; mvmsta->tfd_queue_msk |= BIT(queue); queue_state = mvmsta->tid_data[tid].state; @@ -1532,7 +1520,7 @@ void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk) mutex_lock(&mvm->mutex); - iwl_mvm_inactivity_check(mvm); + iwl_mvm_inactivity_check(mvm, IWL_MVM_INVALID_STA); /* Go over all stations with deferred traffic */ for_each_set_bit(sta_id, mvm->sta_deferred_frames, @@ -1560,17 +1548,13 @@ static int iwl_mvm_reserve_sta_stream(struct iwl_mvm *mvm, { struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); int queue; - bool using_inactive_queue = false, same_sta = false; /* queue reserving is disabled on new TX path */ if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return 0; - /* - * Check for inactive queues, so we don't reach a situation where we - * can't add a STA due to a shortage in queues that doesn't really exist - */ - iwl_mvm_inactivity_check(mvm); + /* run the general cleanup/unsharing of queues */ + iwl_mvm_inactivity_check(mvm, IWL_MVM_INVALID_STA); spin_lock_bh(&mvm->queue_info_lock); @@ -1586,16 +1570,13 @@ static int iwl_mvm_reserve_sta_stream(struct iwl_mvm *mvm, IWL_MVM_DQA_MAX_DATA_QUEUE); if (queue < 0) { spin_unlock_bh(&mvm->queue_info_lock); - IWL_ERR(mvm, "No available queues for new station\n"); - return -ENOSPC; - } else if (mvm->queue_info[queue].status == IWL_MVM_QUEUE_INACTIVE) { - /* - * If this queue is already allocated but inactive we'll need to - * first free this queue before enabling it again, we'll mark - * it as reserved to make sure no new traffic arrives on it - */ - using_inactive_queue = true; - same_sta = mvm->queue_info[queue].ra_sta_id == mvmsta->sta_id; + /* try again - this time kick out a queue if needed */ + queue = iwl_mvm_inactivity_check(mvm, mvmsta->sta_id); + if (queue < 0) { + IWL_ERR(mvm, "No available queues for new station\n"); + return -ENOSPC; + } + spin_lock_bh(&mvm->queue_info_lock); } mvm->queue_info[queue].status = IWL_MVM_QUEUE_RESERVED; @@ -1603,9 +1584,6 @@ static int iwl_mvm_reserve_sta_stream(struct iwl_mvm *mvm, mvmsta->reserved_queue = queue; - if (using_inactive_queue) - iwl_mvm_free_inactive_queue(mvm, queue, same_sta); - IWL_DEBUG_TX_QUEUES(mvm, "Reserving data queue #%d for sta_id %d\n", queue, mvmsta->sta_id); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h index 492cfd37521b..de1a0a2d8723 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h @@ -312,9 +312,6 @@ enum iwl_mvm_agg_state { * Basically when next_reclaimed reaches ssn, we can tell mac80211 that * we are ready to finish the Tx AGG stop / start flow. * @tx_time: medium time consumed by this A-MPDU - * @is_tid_active: has this TID sent traffic in the last - * %IWL_MVM_DQA_QUEUE_TIMEOUT time period. If %txq_id is invalid, this - * field should be ignored. 
* @tpt_meas_start: time of the throughput measurements start, is reset every HZ * @tx_count_last: number of frames transmitted during the last second * @tx_count: counts the number of frames transmitted since the last reset of @@ -332,7 +329,6 @@ struct iwl_mvm_tid_data { u16 txq_id; u16 ssn; u16 tx_time; - bool is_tid_active; unsigned long tpt_meas_start; u32 tx_count_last; u32 tx_count; diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c index 99c64ea2619b..ec57682efe54 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c @@ -1140,32 +1140,16 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb, WARN_ON_ONCE(info->flags & IEEE80211_TX_CTL_SEND_AFTER_DTIM); /* Check if TXQ needs to be allocated or re-activated */ - if (unlikely(txq_id == IWL_MVM_INVALID_QUEUE || - !mvmsta->tid_data[tid].is_tid_active)) { - /* If TXQ needs to be allocated... */ - if (txq_id == IWL_MVM_INVALID_QUEUE) { - iwl_mvm_tx_add_stream(mvm, mvmsta, tid, skb); + if (unlikely(txq_id == IWL_MVM_INVALID_QUEUE)) { + iwl_mvm_tx_add_stream(mvm, mvmsta, tid, skb); - /* - * The frame is now deferred, and the worker scheduled - * will re-allocate it, so we can free it for now. - */ - iwl_trans_free_tx_cmd(mvm->trans, dev_cmd); - spin_unlock(&mvmsta->lock); - return 0; - } - - /* queue should always be active in new TX path */ - WARN_ON(iwl_mvm_has_new_tx_api(mvm)); - - /* If we are here - TXQ exists and needs to be re-activated */ - spin_lock(&mvm->queue_info_lock); - mvm->queue_info[txq_id].status = IWL_MVM_QUEUE_READY; - mvmsta->tid_data[tid].is_tid_active = true; - spin_unlock(&mvm->queue_info_lock); - - IWL_DEBUG_TX_QUEUES(mvm, "Re-activating queue %d for TX\n", - txq_id); + /* + * The frame is now deferred, and the worker scheduled + * will re-allocate it, so we can free it for now. + */ + iwl_trans_free_tx_cmd(mvm->trans, dev_cmd); + spin_unlock(&mvmsta->lock); + return 0; } if (!iwl_mvm_has_new_tx_api(mvm)) {