
[v5,4/4] PCI/DPC: Enumerate the devices after DPC trigger event

Message ID 1516185438-31556-5-git-send-email-poza@codeaurora.org (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas

Commit Message

Oza Pawandeep Jan. 17, 2018, 10:37 a.m. UTC
Implement the error_resume callback in DPC so that, after a DPC trigger
event, the devices beneath the port are enumerated.

Signed-off-by: Oza Pawandeep <poza@codeaurora.org>

Comments

Sinan Kaya Jan. 17, 2018, 4:27 p.m. UTC | #1
On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
> +static bool dpc_wait_link_active(struct pci_dev *pdev)
> +{

I think you can also make this function common instead of making another copy here.
Of course, this would be another patch.
Keith Busch Jan. 18, 2018, 2:56 a.m. UTC | #2
On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
> On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
> > +static bool dpc_wait_link_active(struct pci_dev *pdev)
> > +{
> 
> I think you can also make this function common instead of making another copy here.
> Of course, this would be another patch.

It is actually very similar to __pcie_wait_link_active in pciehp_hpc.c,
so there's some opportunity to make even more common code.
Oza Pawandeep Jan. 18, 2018, 5:26 a.m. UTC | #3
On 2018-01-17 21:57, Sinan Kaya wrote:
> On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
>> +static bool dpc_wait_link_active(struct pci_dev *pdev)
>> +{
> 
> I think you can also make this function common instead of making
> another copy here.
> Of course, this would be another patch.

OK, I will make a separate patch taking one more parameter:
dpc_wait_link_active(struct pci_dev *, bool)

If not in this series, then in the next one.

Regards,
Oza.
Oza Pawandeep Jan. 18, 2018, 5:32 a.m. UTC | #4
On 2018-01-18 08:26, Keith Busch wrote:
> On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
>> On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
>> > +static bool dpc_wait_link_active(struct pci_dev *pdev)
>> > +{
>> 
>> I think you can also make this function common instead of making 
>> another copy here.
>> Of course, this would be another patch.
> 
> It is actually very similar to __pcie_wait_link_active in pciehp_hpc.c,
> so there's some opportunity to make even more common code.

In that case there has to be a generic function in
drivers/pci/pci.c

which addresses the following functions:

pcie-dpc.c:
dpc_wait_link_inactive
dpc_wait_link_active

drivers/pci/hotplug/pciehp_hpc.c:
pcie_wait_link_active

all of the above becoming one generic function, to be moved to drivers/pci/pci.c.

Please let me know if this is okay.

Regards,
Oza.
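For context, a combined helper along the lines proposed here could take the shape sketched below. This is only a sketch, not the actual follow-up patch: the name pcie_wait_for_link and the one-second timeout are illustrative, while pcie_capability_read_word(), msleep(), time_after(), and PCI_EXP_LNKSTA_DLLLA are the same existing kernel APIs the DPC code in this patch already uses.

```c
/* Sketch of a shared link-state poll helper for drivers/pci/pci.c.
 * Polls the Data Link Layer Link Active bit until it matches the
 * requested state, or until roughly one second has elapsed.
 */
static bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
{
	unsigned long timeout = jiffies + HZ;
	u16 lnk_status;

	for (;;) {
		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
		if (!!(lnk_status & PCI_EXP_LNKSTA_DLLLA) == active)
			return true;
		if (time_after(jiffies, timeout))
			break;
		msleep(10);
	}

	dev_warn(&pdev->dev, "Link did not become %s\n",
		 active ? "active" : "inactive");
	return false;
}
```

With such a helper, dpc_wait_link_active(pdev) would reduce to pcie_wait_for_link(pdev, true), and dpc_wait_link_inactive() / pcie_wait_link_active() to the corresponding calls.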
Sinan Kaya Jan. 18, 2018, 4:35 p.m. UTC | #5
On 1/18/2018 12:32 AM, poza@codeaurora.org wrote:
> On 2018-01-18 08:26, Keith Busch wrote:
>> On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
>>> On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
>>> > +static bool dpc_wait_link_active(struct pci_dev *pdev)
>>> > +{
>>>
>>> I think you can also make this function common instead of making another copy here.
>>> Of course, this would be another patch.
>>
>> It is actually very similar to __pcie_wait_link_active in pciehp_hpc.c,
>> so there's some opportunity to make even more common code.
> 
> In that case there has to be a generic function in
> drivers/pci/pci.c
> 
> which addresses the following functions:
> 
> pcie-dpc.c:
> dpc_wait_link_inactive
> dpc_wait_link_active
> 
> drivers/pci/hotplug/pciehp_hpc.c:
> pcie_wait_link_active
> 
> all of the above becoming one generic function, to be moved to drivers/pci/pci.c.
> 
> Please let me know if this is okay.

Works for me. Keith/Bjorn?

> 
> Regards,
> Oza.
> 
>
Keith Busch Jan. 19, 2018, 1:43 a.m. UTC | #6
On Thu, Jan 18, 2018 at 11:35:59AM -0500, Sinan Kaya wrote:
> On 1/18/2018 12:32 AM, poza@codeaurora.org wrote:
> > On 2018-01-18 08:26, Keith Busch wrote:
> >> On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
> >>> On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
> >>> > +static bool dpc_wait_link_active(struct pci_dev *pdev)
> >>> > +{
> >>>
> >>> I think you can also make this function common instead of making another copy here.
> >>> Of course, this would be another patch.
> >>
> >> It is actually very similar to __pcie_wait_link_active in pciehp_hpc.c,
> >> so there's some opportunity to make even more common code.
> > 
> > In that case there has to be a generic function in
> > drivers/pci/pci.c
> > 
> > which addresses the following functions:
> > 
> > pcie-dpc.c:
> > dpc_wait_link_inactive
> > dpc_wait_link_active
> > 
> > drivers/pci/hotplug/pciehp_hpc.c:
> > pcie_wait_link_active
> > 
> > all of the above becoming one generic function, to be moved to drivers/pci/pci.c.
> > 
> > Please let me know if this is okay.
> 
> Works for me. Keith/Bjorn?

Yep, I believe common solutions that reduce code are always encouraged
in the Linux kernel.
Oza Pawandeep Jan. 19, 2018, 4:21 a.m. UTC | #7
On 2018-01-19 07:13, Keith Busch wrote:
> On Thu, Jan 18, 2018 at 11:35:59AM -0500, Sinan Kaya wrote:
>> On 1/18/2018 12:32 AM, poza@codeaurora.org wrote:
>> > On 2018-01-18 08:26, Keith Busch wrote:
>> >> On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
>> >>> On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
>> >>> > +static bool dpc_wait_link_active(struct pci_dev *pdev)
>> >>> > +{
>> >>>
>> >>> I think you can also make this function common instead of making another copy here.
>> >>> Of course, this would be another patch.
>> >>
>> >> It is actually very similar to __pcie_wait_link_active in pciehp_hpc.c,
>> >> so there's some opportunity to make even more common code.
>> >
>> > In that case there has to be a generic function in
>> > drivers/pci/pci.c
>> >
>> > which addresses the following functions:
>> >
>> > pcie-dpc.c:
>> > dpc_wait_link_inactive
>> > dpc_wait_link_active
>> >
>> > drivers/pci/hotplug/pciehp_hpc.c:
>> > pcie_wait_link_active
>> >
>> > all of the above becoming one generic function, to be moved to drivers/pci/pci.c.
>> >
>> > Please let me know if this is okay.
>> 
>> Works for me. Keith/Bjorn?
> 
> Yep, I believe common solutions that reduce code are always encouraged
> in the Linux kernel.


Okay, I will work on this.

Regards,
Oza.

Patch

diff --git a/drivers/pci/pcie/pcie-dpc.c b/drivers/pci/pcie/pcie-dpc.c
index ed916765..f99adcae 100644
--- a/drivers/pci/pcie/pcie-dpc.c
+++ b/drivers/pci/pcie/pcie-dpc.c
@@ -162,6 +162,43 @@  static void dpc_wait_link_inactive(struct dpc_dev *dpc)
 		dev_warn(dev, "Link state not disabled for DPC event\n");
 }
 
+static bool dpc_wait_link_active(struct pci_dev *pdev)
+{
+	unsigned long timeout = jiffies + HZ;
+	u16 lnk_status;
+	bool ret = true;
+
+	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
+
+	while (!(lnk_status & PCI_EXP_LNKSTA_DLLLA) &&
+					!time_after(jiffies, timeout)) {
+		msleep(10);
+		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
+	}
+
+	if (!(lnk_status & PCI_EXP_LNKSTA_DLLLA)) {
+		dev_warn(&pdev->dev, "Link state not enabled after DPC event\n");
+		ret = false;
+	}
+
+	return ret;
+}
+
+/**
+ * dpc_error_resume - enumerate the devices beneath the port
+ * @pdev: pointer to Root Port's pci_dev data structure
+ *
+ * Invoked by the Port Bus driver during error recovery.
+ */
+static void dpc_error_resume(struct pci_dev *pdev)
+{
+	if (dpc_wait_link_active(pdev)) {
+		pci_lock_rescan_remove();
+		pci_rescan_bus(pdev->bus);
+		pci_unlock_rescan_remove();
+	}
+}
+
 /**
  * dpc_reset_link - reset link DPC  routine
  * @dev: pointer to Root Port's pci_dev data structure
@@ -420,6 +457,7 @@  static void dpc_remove(struct pcie_device *dev)
 	.service	= PCIE_PORT_SERVICE_DPC,
 	.probe		= dpc_probe,
 	.remove		= dpc_remove,
+	.error_resume	= dpc_error_resume,
 	.reset_link     = dpc_reset_link,
 };
 
diff --git a/drivers/pci/pcie/pcie-err.c b/drivers/pci/pcie/pcie-err.c
index 8ce1de1..de72b0a 100644
--- a/drivers/pci/pcie/pcie-err.c
+++ b/drivers/pci/pcie/pcie-err.c
@@ -236,6 +236,7 @@  static pci_ers_result_t reset_link(struct pci_dev *dev, int severity)
  * @state: error state
  * @error_mesg: message to print
  * @cb: callback to be broadcasted
+ * @severity: error severity
  *
  * Invoked during error recovery process. Once being invoked, the content
  * of error severity will be broadcasted to all downstream drivers in a
@@ -244,7 +245,8 @@  static pci_ers_result_t reset_link(struct pci_dev *dev, int severity)
 static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
 	enum pci_channel_state state,
 	char *error_mesg,
-	int (*cb)(struct pci_dev *, void *))
+	int (*cb)(struct pci_dev *, void *),
+	int severity)
 {
 	struct aer_broadcast_data result_data;
 
@@ -256,6 +258,15 @@  static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
 		result_data.result = PCI_ERS_RESULT_RECOVERED;
 
 	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
+		/* If DPC is triggered, call the resume error handler
+		 * because at this point we can safely assume that
+		 * link recovery has happened.
+		 */
+		if ((severity == DPC_FATAL) &&
+			(cb == report_resume)) {
+			cb(dev, NULL);
+			return PCI_ERS_RESULT_RECOVERED;
+		}
 		/*
 		 * If the error is reported by a bridge, we think this error
 		 * is related to the downstream link of the bridge, so we
@@ -305,7 +316,8 @@  void pci_do_recovery(struct pci_dev *dev, int severity)
 	status = broadcast_error_message(dev,
 			state,
 			"error_detected",
-			report_error_detected);
+			report_error_detected,
+			severity);
 
 	if ((severity == AER_FATAL) ||
 	    (severity == DPC_FATAL)) {
@@ -318,7 +330,8 @@  void pci_do_recovery(struct pci_dev *dev, int severity)
 		status = broadcast_error_message(dev,
 				state,
 				"mmio_enabled",
-				report_mmio_enabled);
+				report_mmio_enabled,
+				severity);
 
 	if (status == PCI_ERS_RESULT_NEED_RESET) {
 		/*
@@ -329,7 +342,8 @@  void pci_do_recovery(struct pci_dev *dev, int severity)
 		status = broadcast_error_message(dev,
 				state,
 				"slot_reset",
-				report_slot_reset);
+				report_slot_reset,
+				severity);
 	}
 
 	if (status != PCI_ERS_RESULT_RECOVERED)
@@ -338,7 +352,8 @@  void pci_do_recovery(struct pci_dev *dev, int severity)
 	broadcast_error_message(dev,
 				state,
 				"resume",
-				report_resume);
+				report_resume,
+				severity);
 
 	dev_info(&dev->dev, "Device recovery successful\n");
 	mutex_unlock(&pci_err_recovery_lock);