
[v3,08/10] PCI/bwctrl: Add "controller" part into PCIe bwctrl

Message ID 20230929115723.7864-9-ilpo.jarvinen@linux.intel.com (mailing list archive)
State Handled Elsewhere, archived
Series Add PCIe Bandwidth Controller

Commit Message

Ilpo Järvinen Sept. 29, 2023, 11:57 a.m. UTC
Add "controller" parts into PCIe bwctrl for limiting PCIe Link Speed
(due to thermal reasons).

PCIe bandwidth controller introduces an in-kernel API to set PCIe Link
Speed. This new API is intended to be used in an upcoming commit that
adds a thermal cooling device to throttle PCIe bandwidth when thermal
thresholds are reached. No users are introduced in this commit yet.

The PCIe bandwidth control procedure is as follows. The requested speed
is validated against Link Speeds supported by the port and downstream
device. Then the bandwidth controller sets the Target Link Speed in the
Link Control 2 Register and retrains the PCIe Link.
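
In rough outline, the flow implemented below is (a condensed sketch only;
locking and error handling omitted):

	/* Clamp the request to what both ends of the Link support */
	ret = bwctrl_select_speed(srv, &speed);
	if (ret < 0)
		return ret;

	/* Write the Target Link Speed field and retrain the Link */
	pcie_capability_clear_and_set_word(port, PCI_EXP_LNKCTL2,
					   PCI_EXP_LNKCTL2_TLS,
					   speed2lnkctl2(speed));
	pcie_retrain_link(port, true);

	/* Re-read the Link Status Register in case no LBMS interrupt arrives */
	pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
	pcie_update_link_speed(port->subordinate, link_status);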

Bandwidth notifications enable cur_bus_speed in struct pci_bus to keep
track of PCIe Link Speed changes. This keeps the link speed seen through
sysfs correct (both for the PCI device and the thermal cooling device).
While bandwidth notifications should also be generated when the bandwidth
controller alters the PCIe Link Speed, a few platforms do not deliver the
LBMS interrupt after Link Training as expected. Thus, after changing the
Link Speed, the bandwidth controller performs an additional read of the
Link Status Register to ensure cur_bus_speed is consistent with the new
PCIe Link Speed.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
---
 MAINTAINERS                |   7 ++
 drivers/pci/pcie/Kconfig   |   9 +-
 drivers/pci/pcie/bwctrl.c  | 177 +++++++++++++++++++++++++++++++++++--
 include/linux/pci-bwctrl.h |  17 ++++
 4 files changed, 201 insertions(+), 9 deletions(-)
 create mode 100644 include/linux/pci-bwctrl.h

Comments

Lukas Wunner Dec. 30, 2023, 6:49 p.m. UTC | #1
On Fri, Sep 29, 2023 at 02:57:21PM +0300, Ilpo Järvinen wrote:
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -16569,6 +16569,13 @@ F:	include/linux/pci*
>  F:	include/uapi/linux/pci*
>  F:	lib/pci*
>  
> +PCIE BANDWIDTH CONTROLLER
> +M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
> +L:	linux-pci@vger.kernel.org
> +S:	Supported
> +F:	drivers/pci/pcie/bwctrl.c
> +F:	include/linux/pci-bwctrl.h
> +

Maybe add this already in the preceding patch.


> --- a/drivers/pci/pcie/Kconfig
> +++ b/drivers/pci/pcie/Kconfig
> @@ -138,12 +138,13 @@ config PCIE_PTM
>  	  is safe to enable even if you don't.
>  
>  config PCIE_BW
> -	bool "PCI Express Bandwidth Change Notification"
> +	bool "PCI Express Bandwidth Controller"

I'd fold this change into the preceding patch.
Deleting a line introduced by the preceding patch isn't great.

Never mind that the patch isn't a clean revert that way.
Your approach of making changes to the original version
of the bandwidth monitoring driver and calling out those
changes in the commit message seems fine to me.


>  	depends on PCIEPORTBUS
>  	help
> -	  This enables PCI Express Bandwidth Change Notification.  If
> -	  you know link width or rate changes occur to correct unreliable
> -	  links, you may answer Y.
> +	  This enables the PCI Express Bandwidth Controller. The Bandwidth
> +	  Controller allows controlling PCIe link speed and listens for Link
> +	  Speed Change Notifications. If you know link width or rate changes
> +	  occur to correct unreliable links, you may answer Y.

It would be neater if you could avoid deleting lines introduced by
the preceding patch.  Ideally you'd add a new paragraph, thus only
add new lines but not delete any existing ones.


> --- a/drivers/pci/pcie/bwctrl.c
> +++ b/drivers/pci/pcie/bwctrl.c
> @@ -1,14 +1,16 @@
>  // SPDX-License-Identifier: GPL-2.0+
>  /*
> - * PCI Express Link Bandwidth Notification services driver
> + * PCIe bandwidth controller
> + *

Again, fold this change into the preceding patch please.

>   * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com>
>   *
>   * Copyright (C) 2019, Dell Inc
> + * Copyright (C) 2023 Intel Corporation.

For extra neatness, drop the comma in the preceding patch and
the period in this patch.


> - * The PCIe Link Bandwidth Notification provides a way to notify the
> - * operating system when the link width or data rate changes.  This
> - * capability is required for all root ports and downstream ports
> - * supporting links wider than x1 and/or multiple link speeds.
> + * The PCIe Bandwidth Controller provides a way to alter PCIe link speeds
> + * and notify the operating system when the link width or data rate changes.
> + * The notification capability is required for all Root Ports and Downstream
> + * Ports supporting links wider than x1 and/or multiple link speeds.

Again, adding a new paragraph and not deleting lines introduced by
the preceding patch would be much nicer.


> +#include <linux/bitops.h>
> +#include <linux/errno.h>
> +#include <linux/mutex.h>
> +#include <linux/pci.h>
> +#include <linux/pci-bwctrl.h>
> +#include <linux/types.h>
> +
> +#include <asm/rwonce.h>
> +

Which of these are necessary for functionality introduced by this
patch and which are necessary anyway and should be moved to the
preceding patch?


> +/**
> + * struct bwctrl_service_data - PCIe Port Bandwidth Controller

Code comment at the top of this file says "PCIe bandwidth controller",
use here as well for neatness.


> +static u16 speed2lnkctl2(enum pci_bus_speed speed)
> +{
> +	static const u16 speed_conv[] = {
> +		[PCIE_SPEED_2_5GT] = PCI_EXP_LNKCTL2_TLS_2_5GT,
> +		[PCIE_SPEED_5_0GT] = PCI_EXP_LNKCTL2_TLS_5_0GT,
> +		[PCIE_SPEED_8_0GT] = PCI_EXP_LNKCTL2_TLS_8_0GT,
> +		[PCIE_SPEED_16_0GT] = PCI_EXP_LNKCTL2_TLS_16_0GT,
> +		[PCIE_SPEED_32_0GT] = PCI_EXP_LNKCTL2_TLS_32_0GT,
> +		[PCIE_SPEED_64_0GT] = PCI_EXP_LNKCTL2_TLS_64_0GT,
> +	};

Looks like this could be a u8 array to save a little bit of memory.

Also, this seems to duplicate pcie_link_speed[] defined in probe.c.


> +static int bwctrl_select_speed(struct pcie_device *srv, enum pci_bus_speed *speed)
> +{
> +	struct pci_bus *bus = srv->port->subordinate;
> +	u8 speeds, dev_speeds;
> +	int i;
> +
> +	if (*speed > PCIE_LNKCAP2_SLS2SPEED(bus->pcie_bus_speeds))
> +		return -EINVAL;
> +
> +	dev_speeds = READ_ONCE(bus->pcie_dev_speeds);

Hm, why is the compiler barrier needed?


> +	/* Only the lowest speed can be set when there are no devices */
> +	if (!dev_speeds)
> +		dev_speeds = PCI_EXP_LNKCAP2_SLS_2_5GB;

Maybe move this to patch [06/10], i.e. set dev->bus->pcie_dev_speeds to
PCI_EXP_LNKCAP2_SLS_2_5GB on removal (instead of 0).


> +	speeds = bus->pcie_bus_speeds & dev_speeds;
> +	i = BIT(fls(speeds));
> +	while (i >= PCI_EXP_LNKCAP2_SLS_2_5GB) {
> +		enum pci_bus_speed candidate;
> +
> +		if (speeds & i) {
> +			candidate = PCIE_LNKCAP2_SLS2SPEED(i);
> +			if (candidate <= *speed) {
> +				*speed = candidate;
> +				return 0;
> +			}
> +		}
> +		i >>= 1;
> +	}

Can't we just do something like

	supported_speeds = bus->pcie_bus_speeds & dev_speeds;
	desired_speeds = GENMASK(pcie_link_speed[*speed], 1);
	*speed = BIT(fls(supported_speeds & desired_speeds));

and thus avoid the loop altogether?
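
Spelled out a bit more, an untested sketch (assuming the PCIE_SPEED_* enum
values are contiguous starting at PCIE_SPEED_2_5GT and that
PCI_EXP_LNKCAP2_SLS_2_5GB is bit 1):

	u8 supported = bus->pcie_bus_speeds & dev_speeds;
	/* Bits of all supported speeds at or below the requested one */
	u8 eligible = supported & GENMASK(*speed - PCIE_SPEED_2_5GT + 1, 1);

	if (!eligible)
		return -EINVAL;

	/* The highest remaining bit is the best achievable speed */
	*speed = PCIE_LNKCAP2_SLS2SPEED(BIT(fls(eligible) - 1));
	return 0;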


> +/**
> + * bwctrl_set_current_speed - Set downstream link speed for PCIe port
> + * @srv: PCIe port
> + * @speed: PCIe bus speed to set
> + *
> + * Attempts to set PCIe port link speed to @speed. As long as @speed is less
> + * than the maximum of what is supported by @srv, the speed is adjusted
> + * downwards to the best speed supported by both the port and device
> + * underneath it.
> + *
> + * Return:
> + * * 0 - on success
> + * * -EINVAL - @speed is higher than the maximum @srv supports
> + * * -ETIMEDOUT - changing link speed took too long
> + * * -EAGAIN - link speed was changed but @speed was not achieved
> + */

So passing a higher speed than what's supported by the Endpoint is fine
but passing a higher speed than what's supported by the Downstream Port
is not?  Why the distinction?  Maybe it would be more logical to just
pick the maximum of what either supports and use that in lieu of a
too-high speed?


> +int bwctrl_set_current_speed(struct pcie_device *srv, enum pci_bus_speed speed)
> +{
> +	struct bwctrl_service_data *data = get_service_data(srv);
> +	struct pci_dev *port = srv->port;
> +	u16 link_status;
> +	int ret;
> +
> +	if (WARN_ON_ONCE(!bwctrl_valid_pcie_speed(speed)))
> +		return -EINVAL;
> +
> +	ret = bwctrl_select_speed(srv, &speed);
> +	if (ret < 0)
> +		return ret;

How about checking here if the selected speed is equivalent to what's
being used right now and bailing out if so?  No need to retrain if
we're already at the desired speed.
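
E.g. something along these lines before taking the mutex (untested, reusing
the fields from the patch below):

	/* Nothing to do if the link already runs at the selected speed */
	if (port->subordinate->cur_bus_speed == speed)
		return 0;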


> @@ -91,16 +242,32 @@ static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
>  	if (ret)
>  		return ret;
>  
> +	data = kzalloc(sizeof(*data), GFP_KERNEL);
> +	if (!data) {
> +		ret = -ENOMEM;
> +		goto free_irq;
> +	}
> +	mutex_init(&data->set_speed_mutex);
> +	set_service_data(srv, data);
> +

I think you should move that further up so that you allocate and populate
the data struct before requesting the IRQ.  Otherwise if BIOS has already
enabled link bandwidth notification for some reason, the IRQ handler might
be invoked without the data struct being allocated.
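
I.e. roughly the following (sketch only; the request_irq() arguments are
placeholders for whatever the probe actually passes):

	data = kzalloc(sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;
	mutex_init(&data->set_speed_mutex);
	set_service_data(srv, data);

	/* Request the IRQ only now, so the handler always finds valid data */
	ret = request_irq(srv->irq, pcie_bw_notification_irq, IRQF_SHARED,
			  "PCIe bwctrl", srv);
	if (ret)
		goto free_data;

	pcie_enable_link_bandwidth_notification(port);
	return 0;

free_data:
	mutex_destroy(&data->set_speed_mutex);
	kfree(data);
	return ret;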


> --- /dev/null
> +++ b/include/linux/pci-bwctrl.h
> @@ -0,0 +1,17 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * PCIe bandwidth controller
> + *
> + * Copyright (C) 2023 Intel Corporation.

Another trailing period I'd remove.

Thanks,

Lukas
Ilpo Järvinen Jan. 1, 2024, 6:12 p.m. UTC | #2
On Sat, 30 Dec 2023, Lukas Wunner wrote:

> On Fri, Sep 29, 2023 at 02:57:21PM +0300, Ilpo Järvinen wrote:
> 
> 
> > --- a/drivers/pci/pcie/bwctrl.c

> > +static u16 speed2lnkctl2(enum pci_bus_speed speed)
> > +{
> > +	static const u16 speed_conv[] = {
> > +		[PCIE_SPEED_2_5GT] = PCI_EXP_LNKCTL2_TLS_2_5GT,
> > +		[PCIE_SPEED_5_0GT] = PCI_EXP_LNKCTL2_TLS_5_0GT,
> > +		[PCIE_SPEED_8_0GT] = PCI_EXP_LNKCTL2_TLS_8_0GT,
> > +		[PCIE_SPEED_16_0GT] = PCI_EXP_LNKCTL2_TLS_16_0GT,
> > +		[PCIE_SPEED_32_0GT] = PCI_EXP_LNKCTL2_TLS_32_0GT,
> > +		[PCIE_SPEED_64_0GT] = PCI_EXP_LNKCTL2_TLS_64_0GT,
> > +	};
> 
> Looks like this could be a u8 array to save a little bit of memory.
> 
> Also, this seems to duplicate pcie_link_speed[] defined in probe.c.

It's not the same. pcie_link_speed[] is indexed by a different thing
unless you're also suggesting I should offset the index by some number?
There is plenty of arithmetic possible when converting between the types,
but the existing conversion functions don't seem to take advantage of
those properties, so I've been a bit hesitant to add such assumptions
myself.

I suppose I used u16 because the reg is u16 but you're of course correct 
that the values don't take up more than u8.

> > +static int bwctrl_select_speed(struct pcie_device *srv, enum pci_bus_speed *speed)
> > +{
> > +	struct pci_bus *bus = srv->port->subordinate;
> > +	u8 speeds, dev_speeds;
> > +	int i;
> > +
> > +	if (*speed > PCIE_LNKCAP2_SLS2SPEED(bus->pcie_bus_speeds))
> > +		return -EINVAL;
> > +
> > +	dev_speeds = READ_ONCE(bus->pcie_dev_speeds);
> 
> Hm, why is the compiler barrier needed?

It's probably overkill, but there could be a checker which finds that this
read is not protected by anything while the value could get updated
concurrently (there probably already is such a checker, as I've seen patches
for data races found with some tool). I suppose the alternative would be to
mark the data race as harmless (because if endpoint removal clears
pcie_dev_speeds, bwctrl will be pretty moot). I just chose to use
READ_ONCE(), which prevents rereading the value later in this function,
making the function behave consistently regardless of what happens to the
endpoints in parallel.

If the value selected cannot be set because the endpoint is no longer
there, the other parts of the code will detect that.

So should I just add a comment to this line explaining why the data race is
there and keep it as is?
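
E.g. something like this (the comment wording is only a suggestion):

	/*
	 * pcie_dev_speeds may be cleared concurrently on endpoint removal;
	 * take one consistent snapshot for the whole selection.  A stale
	 * value is harmless, the retrain later on fails if the endpoint is
	 * gone.
	 */
	dev_speeds = READ_ONCE(bus->pcie_dev_speeds);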

> > +	/* Only the lowest speed can be set when there are no devices */
> > +	if (!dev_speeds)
> > +		dev_speeds = PCI_EXP_LNKCAP2_SLS_2_5GB;
> 
> Maybe move this to patch [06/10], i.e. set dev->bus->pcie_dev_speeds to
> PCI_EXP_LNKCAP2_SLS_2_5GB on removal (instead of 0).

Okay, I'll set it to 2_5GB on init and removal.

> > +	speeds = bus->pcie_bus_speeds & dev_speeds;
> > +	i = BIT(fls(speeds));
> > +	while (i >= PCI_EXP_LNKCAP2_SLS_2_5GB) {
> > +		enum pci_bus_speed candidate;
> > +
> > +		if (speeds & i) {
> > +			candidate = PCIE_LNKCAP2_SLS2SPEED(i);
> > +			if (candidate <= *speed) {
> > +				*speed = candidate;
> > +				return 0;
> > +			}
> > +		}
> > +		i >>= 1;
> > +	}
> 
> Can't we just do something like
> 
> 	supported_speeds = bus->pcie_bus_speeds & dev_speeds;
> 	desired_speeds = GENMASK(pcie_link_speed[*speed], 1);
> 	*speed = BIT(fls(supported_speeds & desired_speeds));
> 
> and thus avoid the loop altogether?

Yes, I can change to a loopless version.

I'll try to create functions for the speed format conversions, though,
rather than open-coding them into the logic.

> > @@ -91,16 +242,32 @@ static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
> >  	if (ret)
> >  		return ret;
> >  
> > +	data = kzalloc(sizeof(*data), GFP_KERNEL);
> > +	if (!data) {
> > +		ret = -ENOMEM;
> > +		goto free_irq;
> > +	}
> > +	mutex_init(&data->set_speed_mutex);
> > +	set_service_data(srv, data);
> > +
> 
> I think you should move that further up so that you allocate and populate
> the data struct before requesting the IRQ.  Otherwise if BIOS has already
> enabled link bandwith notification for some reason, the IRQ handler might
> be invoked without the data struct being allocated.

Sure, I don't know why I missed that possibility.
Lukas Wunner Jan. 3, 2024, 4:40 p.m. UTC | #3
On Mon, Jan 01, 2024 at 08:12:43PM +0200, Ilpo Järvinen wrote:
> On Sat, 30 Dec 2023, Lukas Wunner wrote:
> > On Fri, Sep 29, 2023 at 02:57:21PM +0300, Ilpo Järvinen wrote:
> > > --- a/drivers/pci/pcie/bwctrl.c
> > > +static u16 speed2lnkctl2(enum pci_bus_speed speed)
> > > +{
> > > +	static const u16 speed_conv[] = {
> > > +		[PCIE_SPEED_2_5GT] = PCI_EXP_LNKCTL2_TLS_2_5GT,
> > > +		[PCIE_SPEED_5_0GT] = PCI_EXP_LNKCTL2_TLS_5_0GT,
> > > +		[PCIE_SPEED_8_0GT] = PCI_EXP_LNKCTL2_TLS_8_0GT,
> > > +		[PCIE_SPEED_16_0GT] = PCI_EXP_LNKCTL2_TLS_16_0GT,
> > > +		[PCIE_SPEED_32_0GT] = PCI_EXP_LNKCTL2_TLS_32_0GT,
> > > +		[PCIE_SPEED_64_0GT] = PCI_EXP_LNKCTL2_TLS_64_0GT,
> > > +	};
> > 
> > Looks like this could be a u8 array to save a little bit of memory.
> > 
> > Also, this seems to duplicate pcie_link_speed[] defined in probe.c.
> 
> It's not the same. pcie_link_speed[] is indexed by a different thing

Ah, missed that.  Those various conversion functions are confusing.


> > > +	dev_speeds = READ_ONCE(bus->pcie_dev_speeds);
> > 
> > Hm, why is the compiler barrier needed?
> 
> It's probably overkill, but there could be a checker which finds that this
> read is not protected by anything while the value could get updated
> concurrently

Why should it be updated?  It's a static value cached on enumeration
and never changed AFAICS.


> If the value selected cannot be set because the endpoint is no longer
> there, the other parts of the code will detect that.

If the endpoint is hot-unplugged, the link is down, so retraining
the link will fail.

If the device is replaced concurrently with retraining, then you may
try to retrain to a speed which is too fast or too slow for the new
device.  Maybe it's necessary to cope with that?  Basically find the
pci_dev on the bus with the device/function id and check if the pci_dev
pointer still points to the same location.  Another idea would be to
hook up bwctrl with pciehp so that pciehp tells bwctrl the device is
gone and any speed changes should be aborted.  We've hooked up DPC
with pciehp in a similar way.
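
The pointer check could look roughly like this (hypothetical sketch;
"original_child" would have to be captured before the speed change starts,
and the endpoint is assumed to sit at devfn 00.0):

	struct pci_dev *child;

	child = pci_get_slot(port->subordinate, PCI_DEVFN(0, 0));
	if (child != original_child)
		ret = -ENODEV;	/* device was swapped or removed */
	pci_dev_put(child);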


> So should I just add a comment to this line explaining why the data race is
> there and keep it as is?

I'd just drop the READ_ONCE() here if there's not a good reason for it.

Thanks,

Lukas

Patch

diff --git a/MAINTAINERS b/MAINTAINERS
index 90f13281d297..2e4ad074eaf3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16569,6 +16569,13 @@  F:	include/linux/pci*
 F:	include/uapi/linux/pci*
 F:	lib/pci*
 
+PCIE BANDWIDTH CONTROLLER
+M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
+L:	linux-pci@vger.kernel.org
+S:	Supported
+F:	drivers/pci/pcie/bwctrl.c
+F:	include/linux/pci-bwctrl.h
+
 PCIE DRIVER FOR AMAZON ANNAPURNA LABS
 M:	Jonathan Chocron <jonnyc@amazon.com>
 L:	linux-pci@vger.kernel.org
diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig
index 1ef8073fa89a..1c6509cf169a 100644
--- a/drivers/pci/pcie/Kconfig
+++ b/drivers/pci/pcie/Kconfig
@@ -138,12 +138,13 @@  config PCIE_PTM
 	  is safe to enable even if you don't.
 
 config PCIE_BW
-	bool "PCI Express Bandwidth Change Notification"
+	bool "PCI Express Bandwidth Controller"
 	depends on PCIEPORTBUS
 	help
-	  This enables PCI Express Bandwidth Change Notification.  If
-	  you know link width or rate changes occur to correct unreliable
-	  links, you may answer Y.
+	  This enables the PCI Express Bandwidth Controller. The Bandwidth
+	  Controller allows controlling PCIe link speed and listens for Link
+	  Speed Change Notifications. If you know link width or rate changes
+	  occur to correct unreliable links, you may answer Y.
 
 config PCIE_EDR
 	bool "PCI Express Error Disconnect Recover support"
diff --git a/drivers/pci/pcie/bwctrl.c b/drivers/pci/pcie/bwctrl.c
index 4fc6718fc0e5..e3172d69476f 100644
--- a/drivers/pci/pcie/bwctrl.c
+++ b/drivers/pci/pcie/bwctrl.c
@@ -1,14 +1,16 @@ 
 // SPDX-License-Identifier: GPL-2.0+
 /*
- * PCI Express Link Bandwidth Notification services driver
+ * PCIe bandwidth controller
+ *
  * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com>
  *
  * Copyright (C) 2019, Dell Inc
+ * Copyright (C) 2023 Intel Corporation.
  *
- * The PCIe Link Bandwidth Notification provides a way to notify the
- * operating system when the link width or data rate changes.  This
- * capability is required for all root ports and downstream ports
- * supporting links wider than x1 and/or multiple link speeds.
+ * The PCIe Bandwidth Controller provides a way to alter PCIe link speeds
+ * and notify the operating system when the link width or data rate changes.
+ * The notification capability is required for all Root Ports and Downstream
+ * Ports supporting links wider than x1 and/or multiple link speeds.
  *
  * This service port driver hooks into the bandwidth notification interrupt
  * watching for link speed changes or links becoming degraded in operation
@@ -17,9 +19,48 @@ 
 
 #define dev_fmt(fmt) "bwctrl: " fmt
 
+#include <linux/bitops.h>
+#include <linux/errno.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/pci-bwctrl.h>
+#include <linux/types.h>
+
+#include <asm/rwonce.h>
+
 #include "../pci.h"
 #include "portdrv.h"
 
+/**
+ * struct bwctrl_service_data - PCIe Port Bandwidth Controller
+ * @set_speed_mutex: serializes link speed changes
+ */
+struct bwctrl_service_data {
+	struct mutex set_speed_mutex;
+};
+
+static bool bwctrl_valid_pcie_speed(enum pci_bus_speed speed)
+{
+	return (speed >= PCIE_SPEED_2_5GT) && (speed <= PCIE_SPEED_64_0GT);
+}
+
+static u16 speed2lnkctl2(enum pci_bus_speed speed)
+{
+	static const u16 speed_conv[] = {
+		[PCIE_SPEED_2_5GT] = PCI_EXP_LNKCTL2_TLS_2_5GT,
+		[PCIE_SPEED_5_0GT] = PCI_EXP_LNKCTL2_TLS_5_0GT,
+		[PCIE_SPEED_8_0GT] = PCI_EXP_LNKCTL2_TLS_8_0GT,
+		[PCIE_SPEED_16_0GT] = PCI_EXP_LNKCTL2_TLS_16_0GT,
+		[PCIE_SPEED_32_0GT] = PCI_EXP_LNKCTL2_TLS_32_0GT,
+		[PCIE_SPEED_64_0GT] = PCI_EXP_LNKCTL2_TLS_64_0GT,
+	};
+
+	if (WARN_ON_ONCE(!bwctrl_valid_pcie_speed(speed)))
+		return 0;
+
+	return speed_conv[speed];
+}
+
 static bool pcie_link_bandwidth_notification_supported(struct pci_dev *dev)
 {
 	int ret;
@@ -77,8 +118,118 @@  static irqreturn_t pcie_bw_notification_irq(int irq, void *context)
 	return IRQ_HANDLED;
 }
 
+/* Configure the Target Link Speed to the requested speed */
+static int bwctrl_set_speed(struct pci_dev *port, u16 lnkctl2_speed)
+{
+	int ret;
+
+	ret = pcie_capability_clear_and_set_word(port, PCI_EXP_LNKCTL2,
+						 PCI_EXP_LNKCTL2_TLS, lnkctl2_speed);
+	if (ret != PCIBIOS_SUCCESSFUL)
+		return pcibios_err_to_errno(ret);
+
+	return 0;
+}
+
+static int bwctrl_select_speed(struct pcie_device *srv, enum pci_bus_speed *speed)
+{
+	struct pci_bus *bus = srv->port->subordinate;
+	u8 speeds, dev_speeds;
+	int i;
+
+	if (*speed > PCIE_LNKCAP2_SLS2SPEED(bus->pcie_bus_speeds))
+		return -EINVAL;
+
+	dev_speeds = READ_ONCE(bus->pcie_dev_speeds);
+	/* Only the lowest speed can be set when there are no devices */
+	if (!dev_speeds)
+		dev_speeds = PCI_EXP_LNKCAP2_SLS_2_5GB;
+
+	/*
+	 * Implementation Note in PCIe r6.0.1 sec 7.5.3.18 recommends OS to
+	 * utilize Supported Link Speeds vector for determining which link
+	 * speeds are supported.
+	 *
+	 * Take into account Supported Link Speeds both from the Root Port
+	 * and the device.
+	 */
+	speeds = bus->pcie_bus_speeds & dev_speeds;
+	i = BIT(fls(speeds));
+	while (i >= PCI_EXP_LNKCAP2_SLS_2_5GB) {
+		enum pci_bus_speed candidate;
+
+		if (speeds & i) {
+			candidate = PCIE_LNKCAP2_SLS2SPEED(i);
+			if (candidate <= *speed) {
+				*speed = candidate;
+				return 0;
+			}
+		}
+		i >>= 1;
+	}
+
+	return -EINVAL;
+}
+
+/**
+ * bwctrl_set_current_speed - Set downstream link speed for PCIe port
+ * @srv: PCIe port
+ * @speed: PCIe bus speed to set
+ *
+ * Attempts to set PCIe port link speed to @speed. As long as @speed is less
+ * than the maximum of what is supported by @srv, the speed is adjusted
+ * downwards to the best speed supported by both the port and device
+ * underneath it.
+ *
+ * Return:
+ * * 0 - on success
+ * * -EINVAL - @speed is higher than the maximum @srv supports
+ * * -ETIMEDOUT - changing link speed took too long
+ * * -EAGAIN - link speed was changed but @speed was not achieved
+ */
+int bwctrl_set_current_speed(struct pcie_device *srv, enum pci_bus_speed speed)
+{
+	struct bwctrl_service_data *data = get_service_data(srv);
+	struct pci_dev *port = srv->port;
+	u16 link_status;
+	int ret;
+
+	if (WARN_ON_ONCE(!bwctrl_valid_pcie_speed(speed)))
+		return -EINVAL;
+
+	ret = bwctrl_select_speed(srv, &speed);
+	if (ret < 0)
+		return ret;
+
+	mutex_lock(&data->set_speed_mutex);
+	ret = bwctrl_set_speed(port, speed2lnkctl2(speed));
+	if (ret < 0)
+		goto unlock;
+
+	ret = pcie_retrain_link(port, true);
+	if (ret < 0)
+		goto unlock;
+
+	/*
+	 * Ensure link speed updates also with platforms that have problems
+	 * with notifications
+	 */
+	ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
+	if (ret == PCIBIOS_SUCCESSFUL)
+		pcie_update_link_speed(port->subordinate, link_status);
+
+	if (port->subordinate->cur_bus_speed != speed)
+		ret = -EAGAIN;
+
+unlock:
+	mutex_unlock(&data->set_speed_mutex);
+
+	return ret;
+}
+
 static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
 {
+	struct bwctrl_service_data *data;
 	struct pci_dev *port = srv->port;
 	int ret;
 
@@ -91,16 +242,32 @@  static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
 	if (ret)
 		return ret;
 
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data) {
+		ret = -ENOMEM;
+		goto free_irq;
+	}
+	mutex_init(&data->set_speed_mutex);
+	set_service_data(srv, data);
+
 	pcie_enable_link_bandwidth_notification(port);
 	pci_info(port, "enabled with IRQ %d\n", srv->irq);
 
 	return 0;
+
+free_irq:
+	free_irq(srv->irq, srv);
+	return ret;
 }
 
 static void pcie_bandwidth_notification_remove(struct pcie_device *srv)
 {
+	struct bwctrl_service_data *data = get_service_data(srv);
+
 	pcie_disable_link_bandwidth_notification(srv->port);
 	free_irq(srv->irq, srv);
+	mutex_destroy(&data->set_speed_mutex);
+	kfree(data);
 }
 
 static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
diff --git a/include/linux/pci-bwctrl.h b/include/linux/pci-bwctrl.h
new file mode 100644
index 000000000000..8eae09bd03b5
--- /dev/null
+++ b/include/linux/pci-bwctrl.h
@@ -0,0 +1,17 @@ 
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * PCIe bandwidth controller
+ *
+ * Copyright (C) 2023 Intel Corporation.
+ */
+
+#ifndef LINUX_PCI_BWCTRL_H
+#define LINUX_PCI_BWCTRL_H
+
+#include <linux/pci.h>
+
+struct pcie_device;
+
+int bwctrl_set_current_speed(struct pcie_device *srv, enum pci_bus_speed speed);
+
+#endif