
[v2,2/4] PCI/DPC/AER: Address Concurrency between AER and DPC

Message ID 1514532259-19383-3-git-send-email-poza@codeaurora.org (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas

Commit Message

Oza Pawandeep Dec. 29, 2017, 7:24 a.m. UTC
This patch addresses the race condition between AER and DPC for recovery.

The current DPC driver does not perform recovery, e.g. invoking the
endpoint driver's error callbacks, which sanitize the device.

Implement the link_reset callback in the DPC driver and call
pci_do_recovery from it.

Signed-off-by: Oza Pawandeep <poza@codeaurora.org>

Comments

Keith Busch Dec. 29, 2017, 5:23 p.m. UTC | #1
On Fri, Dec 29, 2017 at 12:54:17PM +0530, Oza Pawandeep wrote:
> This patch addresses the race condition between AER and DPC for recovery.
> 
> Current DPC driver does not do recovery, e.g. calling end-point's driver's
> callbacks, which sanitize the device.
> DPC driver implements link_reset callback, and calls pci_do_recovery.

I'm not sure I see why any of this is necessary for two reasons:

1. A downstream port containment event disables the link. How can a driver
sanitize an end device when all the end devices below the containment are
physically inaccessible? Any attempt to access such devices will just
end with either CA or UR (depending on DPC control settings). Since we
already know the failed outcome from attempting to access such devices,
why do you want the drivers to do anything?

2. A DPC event suppresses the error message required for the Linux
AER driver to run. How can AER and DPC run concurrently?
Oza Pawandeep Dec. 29, 2017, 6 p.m. UTC | #2
On 2017-12-29 22:53, Keith Busch wrote:
> On Fri, Dec 29, 2017 at 12:54:17PM +0530, Oza Pawandeep wrote:
>> This patch addresses the race condition between AER and DPC for 
>> recovery.
>> 
>> Current DPC driver does not do recovery, e.g. calling end-point's 
>> driver's
>> callbacks, which sanitize the device.
>> DPC driver implements link_reset callback, and calls pci_do_recovery.
> 
> I'm not sure I see why any of this is necessary for two reasons:
> 
> 1. A downstream port containment event disables the link. How can a 
> driver
> sanitize an end device when all the end devices below the containment 
> are
> physically inaccessible? Any attempt to access such devices will just
> end with either CA or UR (depending on DPC control settings). Since we
> already know the failed outcome from attempting to access such devices,
> why do you want the drivers to do anything?
> 
OK, I think my statement was misleading: the point is not sanitizing the
device itself, but letting the device driver sanitize its own software
state. For example, look at e1000_io_error_detected, which is called in
the case of an AER ERR_FATAL message; it cleans up the software stack and
interrupt handling (synchronize_irq), deletes timers, and so on.

Yes, DPC will have put the link into the disabled state, and the hardware
will have reset its internal logic and quiesced traffic, so any
transaction will indeed end with CA or UR. But the device driver still
has to handle the rest of the possible cleanup, as I mentioned (the error
callbacks).
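
To make that concrete, here is a minimal sketch of the kind of cleanup such
an error callback does (loosely modeled on e1000_io_error_detected; the
function name and the exact teardown steps are placeholders, not the actual
e1000e code):

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/pci.h>

/* Sketch only, loosely modeled on e1000_io_error_detected(); the exact
 * teardown steps are driver specific and only illustrative here.
 */
static pci_ers_result_t my_io_error_detected(struct pci_dev *pdev,
                                             enum pci_channel_state state)
{
        struct net_device *netdev = pci_get_drvdata(pdev);

        netif_device_detach(netdev);    /* stop the stack from issuing new I/O */

        if (state == pci_channel_io_perm_failure)
                return PCI_ERS_RESULT_DISCONNECT;

        if (netif_running(netdev)) {
                synchronize_irq(pdev->irq);     /* wait out in-flight IRQ handlers */
                /* driver-specific teardown: del_timer_sync(), napi_disable(),
                 * free rings, etc. -- the "SW sanitize" referred to above
                 */
        }
        pci_disable_device(pdev);

        /* ask the core for a reset; state is rebuilt in slot_reset()/resume() */
        return PCI_ERS_RESULT_NEED_RESET;
}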

> 2. A DPC event suppresses the error message required for the Linux
> AER driver to run. How can AER and DPC run concurrently?

I'm afraid I could not completely grasp the first line.

But the way AER and DPC end up triggering concurrently on our platform is
that the same MSI-X vector is registered for both AER and DPC, and Linux
calls the shared handlers for both triggers anyway.

Besides, if an ERR_FATAL message occurs, the Root Port should trigger both
AER and DPC (assuming both are enabled, and there is no firmware-first
handling for AER/DPC).

The problem with the current AER and DPC framework in Linux is that both
try to act independently, while we know that a single event (e.g. an
ERR_FATAL message) is responsible for triggering both AER and DPC,
depending on the configuration (currently DPC is configured for both
FATAL and NONFATAL in Linux anyway).

It does not make sense for AER to go ahead and attempt to sanitize through
the device driver's callbacks, as I mentioned, while DPC, being unaware,
asynchronously disables the link (although that part is all HW). The DPC
service driver should adopt the same kind of error handling and error
resume flow that AER has adopted.

Hence the whole set of design changes proposed here with respect to error
handling.

Let me give you another problem statement along the same lines: when DPC
is active, AER does not need to act at all, because it does not make sense
for AER to act independently without knowing what the DPC service driver
is up to. This is handled in one of the patches.

The point I am trying to make is: DPC should not rely on AER to call the
error callbacks, and AER should not be calling them without knowing that
DPC is active and taking its own course of action (be it in HW or SW).

Regards,
Oza.
Keith Busch Dec. 29, 2017, 6:13 p.m. UTC | #3
On Fri, Dec 29, 2017 at 11:30:02PM +0530, poza@codeaurora.org wrote:
> On 2017-12-29 22:53, Keith Busch wrote:
> 
> > 2. A DPC event suppresses the error message required for the Linux
> > AER driver to run. How can AER and DPC run concurrently?
> 
> I afraid I could not grasp the first line completely.

A DPC capable and enabled port discards error messages; the ERR_FATAL
or ERR_NONFATAL message required to trigger AER handling won't exist in
such a setup.

This behavior is defined in section 6.2.10 of the specification,
Downstream Port Containment:

  When DPC is triggered due to receipt of an uncorrectable error Message,
  the Requester ID from the Message is recorded in the DPC Error
  Source ID register and that Message is discarded and not forwarded
  Upstream. When DPC is triggered by an unmasked uncorrectable error,
  that error will not be signaled with an uncorrectable error Message,
  even if otherwise enabled.
Oza Pawandeep Dec. 30, 2017, 3:57 a.m. UTC | #4
On 2017-12-29 23:43, Keith Busch wrote:
> On Fri, Dec 29, 2017 at 11:30:02PM +0530, poza@codeaurora.org wrote:
>> On 2017-12-29 22:53, Keith Busch wrote:
>> 
>> > 2. A DPC event suppresses the error message required for the Linux
>> > AER driver to run. How can AER and DPC run concurrently?
>> 
>> I afraid I could not grasp the first line completely.
> 
> A DPC capable and enabled port discards error messages; the ERR_FATAL
> or ERR_NONFATAL message required to trigger AER handling won't exist in
> such a setup.
> 
> This behavior is defined in the specification 6.2.10 for Downstream
> Port Containment:
> 
>   When DPC is triggered due to receipt of an uncorrectable error 
> Message,
>   the Requester ID from the Message is recorded in the DPC Error
>   Source ID register and that Message is discarded and not forwarded
>   Upstream. When DPC is triggered by an unmasked uncorrectable error,
>   that error will not be signaled with an uncorrectable error Message,
>   even if otherwise enabled.

In my understanding, this talks about a DPC-enabled switch; that case is
taken care of as well. If you look at patchset-3, when AER is triggered,
AER's pci_dev is the endpoint's, and the code traverses all the way up
until it finds the port with the associated DPC service enabled:

pdev = pcie_port_upstream_bridge(dev);
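
In other words, something along these lines (a sketch of the walk-up idea
only, not the exact code in the series; it uses the in-tree
pci_upstream_bridge() helper rather than pcie_port_upstream_bridge(), and
pci_find_dpc_service() is the helper added by this patch):

/* Sketch of the traversal idea only. */
static struct pci_dev *find_dpc_port(struct pci_dev *dev)
{
        struct pci_dev *pdev = pci_upstream_bridge(dev);

        while (pdev) {
                if (pci_find_dpc_service(pdev))
                        return pdev;            /* port with DPC service bound */
                pdev = pci_upstream_bridge(pdev);
        }

        return NULL;                            /* no DPC-capable port above */
}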

If AER is not triggered, then DPC at the switch level will take care of
(suppress) the message, and the software path does not come into the
picture at all.

But specifically, these patches attempt to bring in some coordination and
understanding between AER and DPC, as I mentioned in my previous mail.

Regards,
Oza.
Sinan Kaya Jan. 2, 2018, 1:25 p.m. UTC | #5
Hi Keith,

On 12/29/2017 12:23 PM, Keith Busch wrote:
> On Fri, Dec 29, 2017 at 12:54:17PM +0530, Oza Pawandeep wrote:
>> This patch addresses the race condition between AER and DPC for recovery.
>>
>> Current DPC driver does not do recovery, e.g. calling end-point's driver's
>> callbacks, which sanitize the device.
>> DPC driver implements link_reset callback, and calls pci_do_recovery.
> 
> I'm not sure I see why any of this is necessary for two reasons:
> 
> 1. A downstream port containment event disables the link. How can a driver
> sanitize an end device when all the end devices below the containment are
> physically inaccessible? Any attempt to access such devices will just
> end with either CA or UR (depending on DPC control settings). Since we
> already know the failed outcome from attempting to access such devices,
> why do you want the drivers to do anything?

The error callback to the endpoint driver carries a state field indicating
whether I/O is frozen or not. If I/O is not frozen, an endpoint driver can
potentially recover from the error by reissuing the failed request.

If I/O is frozen, then the endpoint driver needs to clean up outstanding
resources. It is not safe to simply shut down the driver while there are
transactions in flight. This is the reason for the state field and for
giving the driver a chance to clean up its state machines and resources.

Also note that the error callback has a result return value: the endpoint
driver indicates whether it was successful in recovering or not.
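
For reference, a minimal, generic sketch of the hooks and the state/result
values being described (placeholder names, not a specific driver):

#include <linux/pci.h>

/* Generic illustration of the hooks discussed above; not a real driver. */
static pci_ers_result_t my_error_detected(struct pci_dev *pdev,
                                          enum pci_channel_state state)
{
        if (state == pci_channel_io_frozen) {
                /* I/O is frozen: clean up outstanding requests, IRQs and
                 * timers; it is not safe to leave transactions in flight.
                 */
                return PCI_ERS_RESULT_NEED_RESET;
        }

        /* I/O not frozen: the driver may simply reissue the failed request */
        return PCI_ERS_RESULT_CAN_RECOVER;
}

static pci_ers_result_t my_slot_reset(struct pci_dev *pdev)
{
        /* Re-initialize the device; the return value tells the core whether
         * recovery succeeded.
         */
        return pci_enable_device(pdev) ? PCI_ERS_RESULT_DISCONNECT
                                       : PCI_ERS_RESULT_RECOVERED;
}

static const struct pci_error_handlers my_err_handlers = {
        .error_detected = my_error_detected,
        .slot_reset     = my_slot_reset,
        /* .resume would restart normal I/O once recovery completes */
};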


> 
> 2. A DPC event suppresses the error message required for the Linux
> AER driver to run. How can AER and DPC run concurrently?
> 

As we briefly discussed in previous email exchanges, I think you are
looking at a use case with a switch that supports DPC functionality.

Oza and I are looking at root port functionality with the DPC feature.

As you already know, AER errors are logged in the AER capability
registers independent of DPC driver presence.

A root port is also allowed to share the MSI interrupts across DPC and
AER.

Therefore, when a DPC interrupt fires, both the AER driver and the DPC
driver start recovery work. This is the issue we are trying to deal with.

In the end, the driver needs to work for both root ports and switches.
I think you verified it against a switch. We are doing the same for a
root port and submitting the plumbing code.
Keith Busch Jan. 2, 2018, 5:12 p.m. UTC | #6
On Tue, Jan 02, 2018 at 08:25:08AM -0500, Sinan Kaya wrote:
> > 2. A DPC event suppresses the error message required for the Linux
> > AER driver to run. How can AER and DPC run concurrently?
> > 
> 
> As we briefly discussed in previous email exchanges, I think you are
> looking at a use case with a switch that supports DPC functionality. 

No, I'm interested in DPC in general.

> Oza and I are looking at a root port functionality with DPC feature. 
> 
> As you already know, AER errors are logged to AER capability register
> independent of the DPC driver presence.

The error is noted in the Uncorrectable Error Status Register if that's
what triggered the DPC event. This register has nothing to do with the
Root Error Status Register, which is required to have received an error
Message in order to have a status for the AER driver.
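
For reference, roughly what the AER service keys off is the Root Error
Status register in the root port's AER capability (illustrative sketch
only, not the aerdrv code):

#include <linux/pci.h>

/* Illustrative only: the Root Error Status register is only set when an
 * ERR_NONFATAL/ERR_FATAL message was actually received by the root port.
 */
static bool root_port_saw_uncorrectable_msg(struct pci_dev *rp)
{
        int aer = pci_find_ext_capability(rp, PCI_EXT_CAP_ID_ERR);
        u32 status;

        if (!aer)
                return false;

        pci_read_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, &status);

        return status & (PCI_ERR_ROOT_NONFATAL_RCV | PCI_ERR_ROOT_FATAL_RCV);
}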

> A root port is also allowed to share the MSI interrupts across DPC and
> AER. 
> 
> Therefore, when a DPC interrupt fires; both AER driver and DPC driver
> starts recovery work. This is the issue we are trying to deal with. 

If DPC is implemented correctly, the AER Root Status can't have an
uncorrectable status for the driver to deal with. The only thing the AER
driver could possibly see is a correctable error if DPC ERR_COR Enable
is set.

> In the end, the driver needs to work for both root port and switches.
> I think you verified it against a switch. We are doing the same for a
> root port and submitting the plumbing code. 

I think we need to consider the possibility that you are enabling a
platform that implemented DPC incorrectly. There's nothing in the
specification that says DPC-enabled root ports should not discard the
error message when it comes from downstream, or should not suppress
signalling the message for errors the root port itself detects.
Sinan Kaya Jan. 2, 2018, 6:34 p.m. UTC | #7
On 1/2/2018 12:12 PM, Keith Busch wrote:
> On Tue, Jan 02, 2018 at 08:25:08AM -0500, Sinan Kaya wrote:
>>> 2. A DPC event suppresses the error message required for the Linux
>>> AER driver to run. How can AER and DPC run concurrently?
>>>
>>
>> As we briefly discussed in previous email exchanges, I think you are
>> looking at a use case with a switch that supports DPC functionality. 
> 
> No, I'm interested in DPC in a general.
> 
>> Oza and I are looking at a root port functionality with DPC feature. 
>>
>> As you already know, AER errors are logged to AER capability register
>> independent of the DPC driver presence.
> 
> The error is noted in the Uncorrectable Error Status Register if that's
> what triggered the DPC event. This register has nothing to do with the
> Root Error Status Register, which is required to have received an error
> Message in order to have a status for the AER driver.
> 
>> A root port is also allowed to share the MSI interrupts across DPC and
>> AER. 
>>
>> Therefore, when a DPC interrupt fires; both AER driver and DPC driver
>> starts recovery work. This is the issue we are trying to deal with. 
> 
> If DPC is implemented correctly, the AER Root Status can't have an
> uncorrectable status for the driver to deal with. The only thing the AER
> driver could possibly see is a correctable error if DPC ERR_COR Enable
> is set.
> 
>> In the end, the driver needs to work for both root port and switches.
>> I think you verified it against a switch. We are doing the same for a
>> root port and submitting the plumbing code. 
> 
> I think we need to consider the possibility you are enabling a platform
> that implemented DPC incorrectly. There's nothing in the specification
> that says that DPC enabled root ports are not to discard the error message
> if it came from downstream, or skip signalling the message for root port
> detected errors.
> 

I'll circle back with the HW team on this.

The current code still doesn't handle outstanding transactions properly.
We can probably split the patch into two and deal with this aspect later.

Patch

diff --git a/drivers/pci/pcie/pcie-dpc.c b/drivers/pci/pcie/pcie-dpc.c
index 2d976a6..68296ec 100644
--- a/drivers/pci/pcie/pcie-dpc.c
+++ b/drivers/pci/pcie/pcie-dpc.c
@@ -15,6 +15,9 @@ 
 #include <linux/pci.h>
 #include <linux/pcieport_if.h>
 #include "../pci.h"
+#include "portdrv.h"
+
+static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
 
 struct rp_pio_header_log_regs {
 	u32 dw0;
@@ -67,6 +70,60 @@  struct dpc_dev {
 	"Memory Request Completion Timeout",		 /* Bit Position 18 */
 };
 
+static int find_dpc_dev_iter(struct device *device, void *data)
+{
+	struct pcie_port_service_driver *service_driver;
+	struct device **dev;
+
+	dev = (struct device **) data;
+
+	if (device->bus == &pcie_port_bus_type && device->driver) {
+		service_driver = to_service_driver(device->driver);
+		if (service_driver->service == PCIE_PORT_SERVICE_DPC) {
+			*dev = device;
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static struct device *pci_find_dpc_dev(struct pci_dev *pdev)
+{
+	struct device *dev = NULL;
+
+	device_for_each_child(&pdev->dev, &dev, find_dpc_dev_iter);
+
+	return dev;
+}
+
+static int find_dpc_service_iter(struct device *device, void *data)
+{
+	struct pcie_port_service_driver *service_driver, **drv;
+
+	drv = (struct pcie_port_service_driver **) data;
+
+	if (device->bus == &pcie_port_bus_type && device->driver) {
+		service_driver = to_service_driver(device->driver);
+		if (service_driver->service == PCIE_PORT_SERVICE_DPC) {
+			*drv = service_driver;
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+struct pcie_port_service_driver *pci_find_dpc_service(struct pci_dev *dev)
+{
+	struct pcie_port_service_driver *drv = NULL;
+
+	device_for_each_child(&dev->dev, &drv, find_dpc_service_iter);
+
+	return drv;
+}
+EXPORT_SYMBOL(pci_find_dpc_service);
+
 static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
 {
 	unsigned long timeout = jiffies + HZ;
@@ -104,11 +161,23 @@  static void dpc_wait_link_inactive(struct dpc_dev *dpc)
 		dev_warn(dev, "Link state not disabled for DPC event\n");
 }
 
-static void interrupt_event_handler(struct work_struct *work)
+/**
+ * dpc_reset_link - DPC link reset routine
+ * @pdev: pointer to Root Port's pci_dev data structure
+ *
+ * Invoked by Port Bus driver when performing link reset at Root Port.
+ */
+static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
 {
-	struct dpc_dev *dpc = container_of(work, struct dpc_dev, work);
-	struct pci_dev *dev, *temp, *pdev = dpc->dev->port;
 	struct pci_bus *parent = pdev->subordinate;
+	struct pci_dev *dev, *temp;
+	struct dpc_dev *dpc;
+	struct pcie_device *pciedev;
+	struct device *devdpc;
+
+	devdpc = pci_find_dpc_dev(pdev);
+	pciedev = to_pcie_device(devdpc);
+	dpc = get_service_data(pciedev);
 
 	pci_lock_rescan_remove();
 	list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
@@ -125,7 +194,7 @@  static void interrupt_event_handler(struct work_struct *work)
 
 	dpc_wait_link_inactive(dpc);
 	if (dpc->rp && dpc_wait_rp_inactive(dpc))
-		return;
+		return PCI_ERS_RESULT_DISCONNECT;
 	if (dpc->rp && dpc->rp_pio_status) {
 		pci_write_config_dword(pdev,
 				      dpc->cap_pos + PCI_EXP_DPC_RP_PIO_STATUS,
@@ -135,6 +204,17 @@  static void interrupt_event_handler(struct work_struct *work)
 
 	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS,
 		PCI_EXP_DPC_STATUS_TRIGGER | PCI_EXP_DPC_STATUS_INTERRUPT);
+
+	return PCI_ERS_RESULT_RECOVERED;
+}
+
+static void interrupt_event_handler(struct work_struct *work)
+{
+	struct dpc_dev *dpc = container_of(work, struct dpc_dev, work);
+	struct pci_dev *pdev = dpc->dev->port;
+
+	/* From DPC point of view error is always FATAL. */
+	pci_do_recovery(pdev, PCI_ERR_DPC_FATAL);
 }
 
 static void dpc_rp_pio_print_tlp_header(struct device *dev,
@@ -339,6 +419,7 @@  static void dpc_remove(struct pcie_device *dev)
 	.service	= PCIE_PORT_SERVICE_DPC,
 	.probe		= dpc_probe,
 	.remove		= dpc_remove,
+	.reset_link     = dpc_reset_link,
 };
 
 static int __init dpc_service_init(void)
diff --git a/drivers/pci/pcie/pcie-err.c b/drivers/pci/pcie/pcie-err.c
index a76a8bf..858c94c 100644
--- a/drivers/pci/pcie/pcie-err.c
+++ b/drivers/pci/pcie/pcie-err.c
@@ -176,7 +176,7 @@  static pci_ers_result_t pci_default_reset_link(struct pci_dev *dev)
 	return PCI_ERS_RESULT_RECOVERED;
 }
 
-pci_ers_result_t pci_reset_link(struct pci_dev *dev)
+pci_ers_result_t pci_reset_link(struct pci_dev *dev, int severity)
 {
 	struct pci_dev *udev;
 	pci_ers_result_t status;
@@ -190,9 +190,17 @@  pci_ers_result_t pci_reset_link(struct pci_dev *dev)
 		udev = dev->bus->self;
 	}
 
+
+	/* Use the service driver of the component firstly */
+#if IS_ENABLED(CONFIG_PCIEDPC)
+	if (severity == PCI_ERR_DPC_FATAL)
+		driver = pci_find_dpc_service(udev);
+#endif
 #if IS_ENABLED(CONFIG_PCIEAER)
-	/* Use the aer driver of the component firstly */
-	driver = pci_find_aer_service(udev);
+	if ((severity == PCI_ERR_AER_FATAL) ||
+	    (severity == PCI_ERR_AER_NONFATAL) ||
+	    (severity == PCI_ERR_AER_CORRECTABLE))
+		driver = pci_find_aer_service(udev);
 #endif
 
 	if (driver && driver->reset_link) {
@@ -282,7 +290,8 @@  void pci_do_recovery(struct pci_dev *dev, int severity)
 
 	mutex_lock(&pci_err_recovery_lock);
 
-	if (severity == PCI_ERR_AER_FATAL)
+	if ((severity == PCI_ERR_AER_FATAL) ||
+	    (severity == PCI_ERR_DPC_FATAL))
 		state = pci_channel_io_frozen;
 	else
 		state = pci_channel_io_normal;
@@ -292,8 +301,9 @@  void pci_do_recovery(struct pci_dev *dev, int severity)
 			"error_detected",
 			pci_report_error_detected);
 
-	if (severity == PCI_ERR_AER_FATAL) {
-		result = pci_reset_link(dev);
+	if ((severity == PCI_ERR_AER_FATAL) ||
+	    (severity == PCI_ERR_DPC_FATAL)) {
+		result = pci_reset_link(dev, severity);
 		if (result != PCI_ERS_RESULT_RECOVERED)
 			goto failed;
 	}
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
index 4f1992d..b013e24 100644
--- a/drivers/pci/pcie/portdrv.h
+++ b/drivers/pci/pcie/portdrv.h
@@ -80,4 +80,5 @@  static inline void pcie_port_platform_notify(struct pci_dev *port, int *mask){}
 #endif /* !CONFIG_ACPI */
 
 struct pcie_port_service_driver *pci_find_aer_service(struct pci_dev *dev);
+struct pcie_port_service_driver *pci_find_dpc_service(struct pci_dev *dev);
 #endif /* _PORTDRV_H_ */
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 083408e..123ee15 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -2005,6 +2005,7 @@  static inline resource_size_t pci_iov_resource_size(struct pci_dev *dev, int res
 #define PCI_ERR_AER_NONFATAL		0
 #define PCI_ERR_AER_FATAL		1
 #define PCI_ERR_AER_CORRECTABLE		2
+#define PCI_ERR_DPC_FATAL		4
 
 pci_ers_result_t pci_broadcast_error_message(struct pci_dev *dev,
 					enum pci_channel_state state,
@@ -2014,7 +2015,7 @@  pci_ers_result_t pci_broadcast_error_message(struct pci_dev *dev,
 int pci_report_slot_reset(struct pci_dev *dev, void *data);
 int pci_report_resume(struct pci_dev *dev, void *data);
 int pci_report_error_detected(struct pci_dev *dev, void *data);
-pci_ers_result_t pci_reset_link(struct pci_dev *dev);
+pci_ers_result_t pci_reset_link(struct pci_dev *dev, int severity);
 pci_ers_result_t pci_merge_result(enum pci_ers_result orig,
 				enum pci_ers_result new);
 void pci_do_recovery(struct pci_dev *dev, int severity);