
[10/19] wifi: iwlwifi: limit EHT capabilities based on PCIe link speed

Message ID 20230620125813.b77a1574a0a7.Id4120c161fb7df6dedc70d5f3e3829e9117b8cb1@changeid (mailing list archive)
State Accepted
Delegated to: Johannes Berg
Series wifi: iwlwifi: updates intended for v6.5 2023-06-20

Commit Message

Greenman, Gregory June 20, 2023, 10:03 a.m. UTC
From: Johannes Berg <johannes.berg@intel.com>

If a discrete NIC is connected to a PCIe link that isn't at least
Gen3 (8.0 GT/s), then we cannot sustain 320 MHz traffic, so remove
that from EHT capabilities in that case.

While at it, also move setting 320 MHz beamformee to the right
place in the code so it's not advertised when 320 MHz isn't supported.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
---
 .../net/wireless/intel/iwlwifi/iwl-nvm-parse.c    | 15 +++++++++++----
 drivers/net/wireless/intel/iwlwifi/iwl-trans.h    |  4 ++++
 drivers/net/wireless/intel/iwlwifi/pcie/drv.c     |  9 +++++++++
 3 files changed, 24 insertions(+), 4 deletions(-)
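
For context, a minimal self-contained sketch of the gating logic this series
adds, assuming only the existing PCI core helpers pcie_capability_read_word()
and u16_get_bits(); the helper name iwl_link_too_slow_for_320mhz() is
hypothetical and not part of the patch:

#include <linux/bitfield.h>
#include <linux/pci.h>

static bool iwl_link_too_slow_for_320mhz(struct pci_dev *pdev, bool integrated)
{
	u16 link_status;

	/* Integrated NICs don't sit behind a real PCIe link. */
	if (integrated)
		return false;

	/* Read the negotiated link speed from the link status register. */
	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &link_status);

	/* Below Gen3 (8.0 GT/s) the link cannot sustain 320 MHz traffic. */
	return u16_get_bits(link_status, PCI_EXP_LNKSTA_CLS) <
	       PCI_EXP_LNKSTA_CLS_8_0GB;
}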

Comments

Ben Greear June 20, 2023, 1:19 p.m. UTC | #1
On 6/20/23 3:03 AM, gregory.greenman@intel.com wrote:
> From: Johannes Berg <johannes.berg@intel.com>
> 
> If a discrete NIC is connected to a PCIe link that isn't at least
> Gen3 (8.0 GT/s), then we cannot sustain 320 MHz traffic, so remove
> that from EHT capabilities in that case.
> 
> While at it, also move setting 320 MHz beamformee to the right
> place in the code so it's not advertised when 320 MHz isn't supported.

Is there not an advantage to allowing 320 MHz for longer-distance connections
where the signal is relatively weak, so the overall throughput would easily fit
within a slower PCIe bus?  Especially on the 6E band, where the US regdom allows
more overall power when using wider bandwidths?

Thanks,
Ben
Johannes Berg June 21, 2023, 11:57 a.m. UTC | #2
On Tue, 2023-06-20 at 06:19 -0700, Ben Greear wrote:
> On 6/20/23 3:03 AM, gregory.greenman@intel.com wrote:
> > From: Johannes Berg <johannes.berg@intel.com>
> > 
> > If a discrete NIC is connected to a PCIe link that isn't at least
> > Gen3 (8.0 GT/s), then we cannot sustain 320 MHz traffic, so remove
> > that from EHT capabilities in that case.
> > 
> > While at it, also move setting 320 MHz beamformee to the right
> > place in the code so it's not advertised when 320 MHz isn't supported.
> 
> Is there not an advantage to allowing 320 MHz for longer-distance connections
> where the signal is relatively weak, so the overall throughput would easily fit
> within a slower PCIe bus?  Especially on the 6E band, where the US regdom allows
> more overall power when using wider bandwidths?
> 

I actually don't know. This surely isn't ideal, but it's the only way to
really force the AP not to send more than the NIC can pass through, and
the NIC gets unhappy if it can't.

johannes
Ben Greear June 21, 2023, 2:48 p.m. UTC | #3
On 6/21/23 4:57 AM, Johannes Berg wrote:
> On Tue, 2023-06-20 at 06:19 -0700, Ben Greear wrote:
>> On 6/20/23 3:03 AM, gregory.greenman@intel.com wrote:
>>> From: Johannes Berg <johannes.berg@intel.com>
>>>
>>> If a discrete NIC is connected to a PCIe link that isn't at least
>>> Gen3 (8.0 GT/s), then we cannot sustain 320 MHz traffic, so remove
>>> that from EHT capabilities in that case.
>>>
>>> While at it, also move setting 320 MHz beamformee to the right
>>> place in the code so it's not advertised when 320 MHz isn't supported.
>>
>> Is there not an advantage to allowing 320 MHz for longer-distance connections
>> where the signal is relatively weak, so the overall throughput would easily fit
>> within a slower PCIe bus?  Especially on the 6E band, where the US regdom allows
>> more overall power when using wider bandwidths?
>>
> 
> I actually don't know. This surely isn't ideal, but it's the only way to
> really force the AP not to send more than the NIC can pass through, and
> the NIC gets unhappy if it can't.

So this is to work around a hardware/firmware bug in the NIC?  If so, that should
be mentioned.

I have heard in the past that higher bandwidth works better than higher NSS
in a lot of cases, so if the HW/FW can be made to deal with floods in the unlikely
case that the RF is perfect enough to saturate the PCIe bus, then I think you
should allow 320 MHz even on slower PCIe bus configurations.

Thanks,
Ben

> 
> johannes
>
Johannes Berg June 21, 2023, 2:51 p.m. UTC | #4
On Wed, 2023-06-21 at 07:48 -0700, Ben Greear wrote:
> On 6/21/23 4:57 AM, Johannes Berg wrote:
> > On Tue, 2023-06-20 at 06:19 -0700, Ben Greear wrote:
> > > On 6/20/23 3:03 AM, gregory.greenman@intel.com wrote:
> > > > From: Johannes Berg <johannes.berg@intel.com>
> > > > 
> > > > If a discrete NIC is connected to a PCIe link that isn't at least
> > > > Gen3 (8.0 GT/s), then we cannot sustain 320 MHz traffic, so remove
> > > > that from EHT capabilities in that case.
> > > > 
> > > > While at it, also move setting 320 MHz beamformee to the right
> > > > place in the code so it's not advertised when 320 MHz isn't supported.
> > > 
> > > Is there not an advantage to allowing 320 MHz for longer-distance connections
> > > where the signal is relatively weak, so the overall throughput would easily fit
> > > within a slower PCIe bus?  Especially on the 6E band, where the US regdom allows
> > > more overall power when using wider bandwidths?
> > > 
> > 
> > I actually don't know. This surely isn't ideal, but it's the only way to
> > really force the AP not to send more than the NIC can pass through, and
> > the NIC gets unhappy if it can't.
> 
> So this is to work around a hardware/firmware bug in the NIC?  If so, that should
> be mentioned.

I'm not sure that's really even a _bug_, the NIC just doesn't have a lot of
buffer space inside it; as far as I know, given how the HW
architecture works, the FW doesn't have a lot of options.

> I have heard in the past that higher bandwidth works better than higher NSS
> in a lot of cases, so if the HW/FW can be made to deal with floods in the unlikely
> case that the RF is perfect enough to saturate the PCIe bus, then I think you
> should allow 320 MHz even on slower PCIe bus configurations.

Right. I don't think it's likely that the firmware will do that, but hey, I
can let them know :)

johannes
Janusz Dziedzic Oct. 17, 2024, 6:55 p.m. UTC | #5
On Wed, 21 Jun 2023 at 18:12, Johannes Berg <johannes@sipsolutions.net> wrote:
>
> On Wed, 2023-06-21 at 07:48 -0700, Ben Greear wrote:
> > On 6/21/23 4:57 AM, Johannes Berg wrote:
> > > On Tue, 2023-06-20 at 06:19 -0700, Ben Greear wrote:
> > > > On 6/20/23 3:03 AM, gregory.greenman@intel.com wrote:
> > > > > From: Johannes Berg <johannes.berg@intel.com>
> > > > >
> > > > > If a discrete NIC is connected to a PCIe link that isn't at least
> > > > > Gen3 (8.0 GT/s), then we cannot sustain 320 MHz traffic, so remove
> > > > > that from EHT capabilities in that case.
> > > > >
> > > > > While at it, also move setting 320 MHz beamformee to the right
> > > > > place in the code so it's not advertised when 320 MHz isn't supported.
> > > >
> > > > Is there not an advantage to allowing 320 MHz for longer-distance connections
> > > > where the signal is relatively weak, so the overall throughput would easily fit
> > > > within a slower PCIe bus?  Especially on the 6E band, where the US regdom allows
> > > > more overall power when using wider bandwidths?
> > > >
> > >
> > > I actually don't know. This surely isn't ideal, but it's the only way to
> > > really force the AP not to send more than the NIC can pass through, and
> > > the NIC gets unhappy if it can't.
> >
> > So this is to work around a hardware/firmware bug in the NIC?  If so, that should
> > be mentioned.
>
> I'm not sure that's really even a _bug_, the NIC just doesn't have a lot of
> buffer space inside it; as far as I know, given how the HW
> architecture works, the FW doesn't have a lot of options.
>
> > I have heard in the past that higher bandwidth works better than higher NSS
> > in a lot of cases, so if the HW/FW can be made to deal with floods in the unlikely
> > case that the RF is perfect enough to saturate the PCIe bus, then I think you
> > should allow 320 MHz even on slower PCIe bus configurations.
>
> Right. I don't think it's likely that the firmware will do that, but hey, I
> can let them know :)
>

Could we revert this one?
In my noisy environment, after disarming this check (see the hypothetical
tweak sketched after the test logs below), I still get better results for
320 MHz than for 160 MHz:

janusz@e850:~/github/wifi_tests/tests/remote$ ./run-tests.py -c
cfg-janusz.py -d bpi-r4 -r be200 -t connect_6g_tp
DUT: bpi-r4
REF: be200
RUN check_devices
PASS
START - connect_6g_tp (1/1)
PASS (
       (6g/1/EHT320): ping pkt loss ap->sta: 0%, sta->ap: 0%
       (6g/1/EHT320): ap->sta: 1453 Mbits/sec, sta->ap: 2097 Mbits/sec
) - 69.055398s

janusz@e850:~/github/wifi_tests/tests/remote$ ./run-tests.py -c
cfg-janusz.py -d bpi-r4 -r be200 -t connect_6g_tp
DUT: bpi-r4
REF: be200
RUN check_devices
PASS
START - connect_6g_tp (1/1)
PASS (
       (6g/1/EHT160): ping pkt loss ap->sta: 0%, sta->ap: 0%
       (6g/1/EHT160): ap->sta: 1266 Mbits/sec, sta->ap: 1706 Mbits/sec
) - 69.324334s
janusz@e850:~/github/wifi_tests/tests/remote$
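
For reference, the "disarming" above corresponds to a hypothetical test-only
tweak in iwl_nvm_fixup_sband_iftd() in iwl-nvm-parse.c (this exact change is
an assumption, not part of any submitted patch):

	no_320 = !trans->trans_cfg->integrated &&
		 trans->pcie_link_speed < PCI_EXP_LNKSTA_CLS_8_0GB;
	/* Test-only override: ignore the link speed, keep advertising 320 MHz. */
	no_320 = false;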

BR
Janusz

Patch

diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
index 39f13518ca5b..8c23f57f5c89 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
@@ -680,8 +680,7 @@  static const struct ieee80211_sband_iftype_data iwl_he_eht_capa[] = {
 					IEEE80211_EHT_PHY_CAP0_BEAMFORMEE_SS_80MHZ_MASK,
 				.phy_cap_info[1] =
 					IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_80MHZ_MASK  |
-					IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK |
-					IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_320MHZ_MASK,
+					IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK,
 				.phy_cap_info[3] =
 					IEEE80211_EHT_PHY_CAP3_NG_16_SU_FEEDBACK |
 					IEEE80211_EHT_PHY_CAP3_NG_16_MU_FEEDBACK |
@@ -890,6 +889,10 @@  iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
 			 const struct iwl_fw *fw)
 {
 	bool is_ap = iftype_data->types_mask & BIT(NL80211_IFTYPE_AP);
+	bool no_320;
+
+	no_320 = !trans->trans_cfg->integrated &&
+		 trans->pcie_link_speed < PCI_EXP_LNKSTA_CLS_8_0GB;
 
 	if (!data->sku_cap_11be_enable || iwlwifi_mod_params.disable_11be)
 		iftype_data->eht_cap.has_eht = false;
@@ -916,8 +919,12 @@  iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
 				       IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_MASK);
 		break;
 	case NL80211_BAND_6GHZ:
-		iftype_data->eht_cap.eht_cap_elem.phy_cap_info[0] |=
-			IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ;
+		if (!no_320) {
+			iftype_data->eht_cap.eht_cap_elem.phy_cap_info[0] |=
+				IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ;
+			iftype_data->eht_cap.eht_cap_elem.phy_cap_info[1] |=
+				IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_320MHZ_MASK;
+		}
 		fallthrough;
 	case NL80211_BAND_5GHZ:
 		iftype_data->he_cap.he_cap_elem.phy_cap_info[0] |=
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index 1fa035decc03..d02943d0ea62 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -1067,6 +1067,8 @@  struct iwl_trans_txqs {
  * @iwl_trans_txqs: transport tx queues data.
  * @mbx_addr_0_step: step address data 0
  * @mbx_addr_1_step: step address data 1
+ * @pcie_link_speed: current PCIe link speed (%PCI_EXP_LNKSTA_CLS_*),
+ *	only valid for discrete (not integrated) NICs
  */
 struct iwl_trans {
 	bool csme_own;
@@ -1129,6 +1131,8 @@  struct iwl_trans {
 	u32 mbx_addr_0_step;
 	u32 mbx_addr_1_step;
 
+	u8 pcie_link_speed;
+
 	/* pointer to trans specific struct */
 	/*Ensure that this pointer will always be aligned to sizeof pointer */
 	char trans_specific[] __aligned(sizeof(void *));
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
index fcc0f3319bcd..18550c03f870 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
@@ -1760,6 +1760,15 @@  static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		trans_pcie->num_rx_bufs = RX_QUEUE_SIZE;
 	}
 
+	if (!iwl_trans->trans_cfg->integrated) {
+		u16 link_status;
+
+		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &link_status);
+
+		iwl_trans->pcie_link_speed =
+			u16_get_bits(link_status, PCI_EXP_LNKSTA_CLS);
+	}
+
 	ret = iwl_trans_init(iwl_trans);
 	if (ret)
 		goto out_free_trans;