
[2/5,V4] PCI: pciehp: check and wait port status out of DPC before handling DLLSC and PDC

Message ID 20200927082736.14633-3-haifeng.zhao@intel.com (mailing list archive)
State Superseded, archived
Series: Fix DPC hotplug race and enhance error handling

Commit Message

Zhao, Haifeng Sept. 27, 2020, 8:27 a.m. UTC
When the root port has DPC capability and DPC is enabled, an error that
triggers DPC causes the DPC, DLLSC and PDC interrupts to be delivered
to the DPC driver and the pciehp driver at the same time.
That leads to the following results:

1. The link and device are recovered by hardware DPC and the software
   DPC driver; the device is not removed, but pciehp may treat it as if
   the device had been hot-removed.

2. A race condition occurs between pciehp_unconfigure_device(), called
   by pciehp_ist() in the pciehp driver, and pcie_do_recovery(), called
   by dpc_handler() in the DPC driver. Unfortunately, there is no lock
   that protects pci_stop_and_remove_bus_device() against
   pci_walk_bus(); they take different locks:
   pci_stop_and_remove_bus_device() holds the pci_rescan_remove_lock
   mutex, while pci_walk_bus() holds the pci_bus_sem semaphore.

This race condition is not just code analysis; it can be triggered by
the following command sequence:

  # setpci -s 64:02.0 0x196.w=000a // 64:02.0 rootport has DPC capability
  # setpci -s 65:00.0 0x04.w=0544  // 65:00.0 NVMe SSD populated in port
  # mount /dev/nvme0n1p1 nvme

One shot is enough to cause a system panic with a NULL pointer dereference
(tested on a stable 5.8 kernel on an ICS (Ice Lake SP) platform, see
https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server)):

   Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
   BUG: kernel NULL pointer dereference, address: 0000000000000050
   #PF: supervisor read access in kernel mode
   #PF: error_code(0x0000) - not-present page
   PGD 0
   Oops: 0000 [#1] SMP NOPTI
   CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
   RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
   Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
   65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
   50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
   RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
   RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
   RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
   RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
   R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
   R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
   FS:  0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
   GS:0000000000000000
   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
   CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
   DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
   DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
   PKRU: 55555554
   Call Trace:
   ? report_normal_detected+0x20/0x20
   report_frozen_detected+0x16/0x20
   pci_walk_bus+0x75/0x90
   ? dpc_irq+0x90/0x90
   pcie_do_recovery+0x157/0x201
   ? irq_finalize_oneshot.part.47+0xe0/0xe0
   dpc_handler+0x29/0x40
   irq_thread_fn+0x24/0x60
   irq_thread+0xea/0x170
   ? irq_forced_thread_fn+0x80/0x80
   ? irq_thread_check_affinity+0xf0/0xf0
   kthread+0x124/0x140
   ? kthread_park+0x90/0x90
   ret_from_fork+0x1f/0x30
   Modules linked in: nft_fib_inet.........
   CR2: 0000000000000050

With this patch, the handling of DPC containment and hotplug is partly
ordered and serialized: hardware DPC performs the controller reset and
other recovery actions first, then the DPC driver invokes the device
drivers' error callbacks and clears the DPC status, and finally pciehp
handles the DLLSC and PDC events.

Signed-off-by: Ethan Zhao <haifeng.zhao@intel.com>
Tested-by: Wen Jin <wen.jin@intel.com>
Tested-by: Shanshan Zhang <ShanshanX.Zhang@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
Changes:
 V2: revise doc according to Andy's suggestion.
 V3: no change.
 V4: no change.

 drivers/pci/hotplug/pciehp_hpc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
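
The pci_wait_port_outdpc() helper called by this patch is introduced in
patch 1/5 of the series and is not shown on this page. As a rough,
hypothetical sketch of what such a wait could look like (assuming the
pdev->dpc_cap field and the PCI_EXP_DPC_STATUS* register definitions; the
actual 1/5 implementation may differ), it polls the DPC Status Trigger bit
with a bounded timeout:

    #include <linux/delay.h>
    #include <linux/pci.h>

    /*
     * Hypothetical sketch only, not the code from patch 1/5: if DPC has
     * been triggered on this port, poll the DPC Status Trigger bit until
     * software DPC handling releases the port, bounded to roughly 1 s
     * (the worst-case delay discussed in the review below).
     */
    static bool pci_wait_port_outdpc(struct pci_dev *pdev)
    {
    	u16 cap = pdev->dpc_cap;	/* DPC extended capability offset */
    	u16 status;
    	int timeout = 1000;		/* milliseconds */

    	if (!cap)
    		return true;		/* no DPC on this port, nothing to wait for */

    	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
    	while ((status & PCI_EXP_DPC_STATUS_TRIGGER) && timeout > 0) {
    		msleep(10);
    		timeout -= 10;
    		pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
    	}

    	return !(status & PCI_EXP_DPC_STATUS_TRIGGER);
    }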

Comments

Andy Shevchenko Sept. 27, 2020, 9:10 a.m. UTC | #1
On Sun, Sep 27, 2020 at 11:31 AM Ethan Zhao <haifeng.zhao@intel.com> wrote:
>
> When the root port has DPC capability and DPC is enabled, an error that
> triggers DPC causes the DPC, DLLSC and PDC interrupts to be delivered
> to the DPC driver and the pciehp driver at the same time.
> That leads to the following results:
>
> 1. The link and device are recovered by hardware DPC and the software
>    DPC driver; the device is not removed, but pciehp may treat it as if
>    the device had been hot-removed.
>
> 2. A race condition occurs between pciehp_unconfigure_device(), called
>    by pciehp_ist() in the pciehp driver, and pcie_do_recovery(), called
>    by dpc_handler() in the DPC driver. Unfortunately, there is no lock
>    that protects pci_stop_and_remove_bus_device() against
>    pci_walk_bus(); they take different locks:
>    pci_stop_and_remove_bus_device() holds the pci_rescan_remove_lock
>    mutex, while pci_walk_bus() holds the pci_bus_sem semaphore.
>
> This race condition is not purely code analysis, it could be triggered by
> following command series:
>
>   # setpci -s 64:02.0 0x196.w=000a // 64:02.0 rootport has DPC capability
>   # setpci -s 65:00.0 0x04.w=0544  // 65:00.0 NVMe SSD populated in port
>   # mount /dev/nvme0n1p1 nvme
>
> One shot will cause system panic and NULL pointer reference happened.
> (tested on stable 5.8 & ICS(Ice Lake SP platform, see
> https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server))
>
>    Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
>    BUG: kernel NULL pointer dereference, address: 0000000000000050
>    #PF: supervisor read access in kernel mode
>    #PF: error_code(0x0000) - not-present page
>    PGD 0

Seems like you randomly did something about the series and would like
it to be applied?! That's a no-go!
Please read my comments on v1 one more time and carefully comment on or
address them.

Why do you still have these (some above, some below this comment)
irrelevant lines of the oops?

>    Oops: 0000 [#1] SMP NOPTI
>    CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
>    RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
>    Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
>    65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
>    50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
>    RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
>    RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
>    RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
>    RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
>    R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
>    R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
>    FS:  0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
>    GS:0000000000000000
>    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>    CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
>    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>    PKRU: 55555554
>    Call Trace:
>    ? report_normal_detected+0x20/0x20
>    report_frozen_detected+0x16/0x20
>    pci_walk_bus+0x75/0x90
>    ? dpc_irq+0x90/0x90
>    pcie_do_recovery+0x157/0x201
>    ? irq_finalize_oneshot.part.47+0xe0/0xe0
>    dpc_handler+0x29/0x40
>    irq_thread_fn+0x24/0x60
>    irq_thread+0xea/0x170
>    ? irq_forced_thread_fn+0x80/0x80
>    ? irq_thread_check_affinity+0xf0/0xf0
>    kthread+0x124/0x140
>    ? kthread_park+0x90/0x90
>    ret_from_fork+0x1f/0x30
>    Modules linked in: nft_fib_inet.........
>    CR2: 0000000000000050

> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

And no, this is not how the tags are being applied.
Zhao, Haifeng Sept. 28, 2020, 2:54 a.m. UTC | #2
Andy,
   May I ask which lines of the Oops count as "randomly did something" and should be removed?

Thanks,
Ethan

-----Original Message-----
From: Andy Shevchenko <andy.shevchenko@gmail.com> 
Sent: Sunday, September 27, 2020 5:10 PM
To: Zhao, Haifeng <haifeng.zhao@intel.com>; Hansen, Dave <dave.hansen@intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>; Oliver <oohall@gmail.com>; ruscur@russell.cc; Lukas Wunner <lukas@wunner.de>; Andy Shevchenko <andriy.shevchenko@linux.intel.com>; Stuart Hayes <stuart.w.hayes@gmail.com>; Alexandru Gagniuc <mr.nuke.me@gmail.com>; Mika Westerberg <mika.westerberg@linux.intel.com>; linux-pci <linux-pci@vger.kernel.org>; Linux Kernel Mailing List <linux-kernel@vger.kernel.org>; Jia, Pei P <pei.p.jia@intel.com>; ashok.raj@linux.intel.com; Kuppuswamy, Sathyanarayanan <sathyanarayanan.kuppuswamy@intel.com>; Christoph Hellwig <hch@infradead.org>; Joe Perches <joe@perches.com>
Subject: Re: [PATCH 2/5 V4] PCI: pciehp: check and wait port status out of DPC before handling DLLSC and PDC

On Sun, Sep 27, 2020 at 11:31 AM Ethan Zhao <haifeng.zhao@intel.com> wrote:
>
> When the root port has DPC capability and DPC is enabled, an error that
> triggers DPC causes the DPC, DLLSC and PDC interrupts to be delivered
> to the DPC driver and the pciehp driver at the same time.
> That leads to the following results:
>
> 1. The link and device are recovered by hardware DPC and the software
>    DPC driver; the device is not removed, but pciehp may treat it as if
>    the device had been hot-removed.
>
> 2. A race condition occurs between pciehp_unconfigure_device(), called
>    by pciehp_ist() in the pciehp driver, and pcie_do_recovery(), called
>    by dpc_handler() in the DPC driver. Unfortunately, there is no lock
>    that protects pci_stop_and_remove_bus_device() against
>    pci_walk_bus(); they take different locks:
>    pci_stop_and_remove_bus_device() holds the pci_rescan_remove_lock
>    mutex, while pci_walk_bus() holds the pci_bus_sem semaphore.
>
> This race condition is not purely code analysis, it could be triggered 
> by following command series:
>
>   # setpci -s 64:02.0 0x196.w=000a // 64:02.0 rootport has DPC capability
>   # setpci -s 65:00.0 0x04.w=0544  // 65:00.0 NVMe SSD populated in port
>   # mount /dev/nvme0n1p1 nvme
>
> One shot will cause system panic and NULL pointer reference happened.
> (tested on stable 5.8 & ICS(Ice Lake SP platform, see
> https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server
> ))
>
>    Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
>    BUG: kernel NULL pointer dereference, address: 0000000000000050
>    #PF: supervisor read access in kernel mode
>    #PF: error_code(0x0000) - not-present page
>    PGD 0

Seems like you randomly did something about the series and would like it to be applied?! That's a no-go!
Please read my comments on v1 one more time and carefully comment on or address them.

Why do you still have these (some above, some below this comment) irrelevant lines of the oops?

>    Oops: 0000 [#1] SMP NOPTI
>    CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
>    RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
>    Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
>    65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
>    50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
>    RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
>    RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
>    RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
>    RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
>    R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
>    R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
>    FS:  0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
>    GS:0000000000000000
>    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>    CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
>    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>    PKRU: 55555554
>    Call Trace:
>    ? report_normal_detected+0x20/0x20
>    report_frozen_detected+0x16/0x20
>    pci_walk_bus+0x75/0x90
>    ? dpc_irq+0x90/0x90
>    pcie_do_recovery+0x157/0x201
>    ? irq_finalize_oneshot.part.47+0xe0/0xe0
>    dpc_handler+0x29/0x40
>    irq_thread_fn+0x24/0x60
>    irq_thread+0xea/0x170
>    ? irq_forced_thread_fn+0x80/0x80
>    ? irq_thread_check_affinity+0xf0/0xf0
>    kthread+0x124/0x140
>    ? kthread_park+0x90/0x90
>    ret_from_fork+0x1f/0x30
>    Modules linked in: nft_fib_inet.........
>    CR2: 0000000000000050

> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

And no, this is not how the tags are being applied.


--
With Best Regards,
Andy Shevchenko
Zhao, Haifeng Sept. 28, 2020, 4:58 a.m. UTC | #3
Sathyanarayanan,

-----Original Message-----
From: Kuppuswamy, Sathyanarayanan <sathyanarayanan.kuppuswamy@intel.com> 
Sent: Monday, September 28, 2020 2:59 AM
To: Zhao, Haifeng <haifeng.zhao@intel.com>; bhelgaas@google.com; oohall@gmail.com; ruscur@russell.cc; lukas@wunner.de; andriy.shevchenko@linux.intel.com; stuart.w.hayes@gmail.com; mr.nuke.me@gmail.com; mika.westerberg@linux.intel.com
Cc: linux-pci@vger.kernel.org; linux-kernel@vger.kernel.org; Jia, Pei P <pei.p.jia@intel.com>; ashok.raj@linux.intel.com; hch@infradead.org; joe@perches.com
Subject: Re: [PATCH 2/5 V4] PCI: pciehp: check and wait port status out of DPC before handling DLLSC and PDC


On 9/27/20 1:27 AM, Ethan Zhao wrote:
> When the root port has DPC capability and DPC is enabled, an error that
> triggers DPC causes the DPC, DLLSC and PDC interrupts to be delivered
> to the DPC driver and the pciehp driver at the same time.
> That leads to the following results:
>
> 1. The link and device are recovered by hardware DPC and the software
>    DPC driver; the device is not removed, but pciehp may treat it as if
>    the device had been hot-removed.
>
> 2. A race condition occurs between pciehp_unconfigure_device(), called
>    by pciehp_ist() in the pciehp driver, and pcie_do_recovery(), called
>    by dpc_handler() in the DPC driver. Unfortunately, there is no lock
>    that protects pci_stop_and_remove_bus_device() against
>    pci_walk_bus(); they take different locks:
>    pci_stop_and_remove_bus_device() holds the pci_rescan_remove_lock
>    mutex, while pci_walk_bus() holds the pci_bus_sem semaphore.
Why not address the locking issue? Maybe a common lock?
>
> This race condition is not purely code analysis, it could be triggered 
> by following command series:
>
>    # setpci -s 64:02.0 0x196.w=000a // 64:02.0 rootport has DPC capability
>    # setpci -s 65:00.0 0x04.w=0544  // 65:00.0 NVMe SSD populated in port
>    # mount /dev/nvme0n1p1 nvme
>
> One shot will cause system panic and NULL pointer reference happened.
> (tested on stable 5.8 & ICS(Ice Lake SP platform, see
> https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server
> ))
>
>     Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
>     BUG: kernel NULL pointer dereference, address: 0000000000000050
>     #PF: supervisor read access in kernel mode
>     #PF: error_code(0x0000) - not-present page
>     PGD 0
>     Oops: 0000 [#1] SMP NOPTI
>     CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
>     RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
>     Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
>     65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
>     50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
>     RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
>     RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
>     RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
>     RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
>     R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
>     R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
>     FS:  0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
>     GS:0000000000000000
>     CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>     CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
>     DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>     DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>     PKRU: 55555554
>     Call Trace:
>     ? report_normal_detected+0x20/0x20
>     report_frozen_detected+0x16/0x20
>     pci_walk_bus+0x75/0x90
>     ? dpc_irq+0x90/0x90
>     pcie_do_recovery+0x157/0x201
>     ? irq_finalize_oneshot.part.47+0xe0/0xe0
>     dpc_handler+0x29/0x40
>     irq_thread_fn+0x24/0x60
>     irq_thread+0xea/0x170
>     ? irq_forced_thread_fn+0x80/0x80
>     ? irq_thread_check_affinity+0xf0/0xf0
>     kthread+0x124/0x140
>     ? kthread_park+0x90/0x90
>     ret_from_fork+0x1f/0x30
>     Modules linked in: nft_fib_inet.........
>     CR2: 0000000000000050
>
> With this patch, the handling flow of DPC containment and hotplug is 
> partly ordered and serialized,
If it's a partial fix, what scenario is not covered?


: See the 1/5 patch.

> let hardware DPC do the controller reset etc recovery action first, 
> then DPC driver handling the call-back from device drivers, clear the 
> DPC status, at the end, pciehp handle the DLLSC and PDC etc.
>
> Signed-off-by: Ethan Zhao <haifeng.zhao@intel.com>
> Tested-by: Wen Jin <wen.jin@intel.com>
> Tested-by: Shanshan Zhang <ShanshanX.Zhang@intel.com>
> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> ---
> Changes:
>   V2: revise doc according to Andy's suggestion.
>   V3: no change.
>   V4: no change.
>
>   drivers/pci/hotplug/pciehp_hpc.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/hotplug/pciehp_hpc.c 
> b/drivers/pci/hotplug/pciehp_hpc.c
> index 53433b37e181..6f271160f18d 100644
> --- a/drivers/pci/hotplug/pciehp_hpc.c
> +++ b/drivers/pci/hotplug/pciehp_hpc.c
> @@ -710,8 +710,10 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
>   	down_read(&ctrl->reset_lock);
>   	if (events & DISABLE_SLOT)
>   		pciehp_handle_disable_request(ctrl);
> -	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
> +	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
> +		pci_wait_port_outdpc(pdev);
This would add a worst-case 1 s delay to handling DLLSC events. It does not distinguish between a DLLSC event triggered by DPC and one triggered by hotplug. Also, the additional delay may violate timing requirements.
: It will wait only when DPC is enabled and has been triggered; otherwise it skips the wait.
I tested with different time intervals between hot-remove and hot-plug, and no spec
says this violates a timing requirement. It works.

Thanks,
Ethan

>   		pciehp_handle_presence_or_link_change(ctrl, events);
> +	}
>   	up_read(&ctrl->reset_lock);
>   
>   	ret = IRQ_HANDLED;

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer
Andy Shevchenko Sept. 28, 2020, 8:42 a.m. UTC | #4
On Mon, Sep 28, 2020 at 5:24 AM Zhao, Haifeng <haifeng.zhao@intel.com> wrote:
>
> Andy,
>     May I ask what you think is not relevant in the oops? Any rules?

First of all, please don't send private messages on OSS development
matters (partially returned Cc list).
Second, stop top-posting.

Now to the question: the rule is simple, it's called "common sense".
See the examples below, but you may cut even more. So don't pollute
commit messages with unrelated information.

> On Sun, Sep 27, 2020 at 11:31 AM Ethan Zhao <haifeng.zhao@intel.com> wrote:

...

> >    Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
> >    BUG: kernel NULL pointer dereference, address: 0000000000000050

The below...

> >    #PF: supervisor read access in kernel mode
> >    #PF: error_code(0x0000) - not-present page
> >    PGD 0
>
> Seems like you randomly did something about the series and would like it to be applied?! It's no go!
> Please, read my comments again v1 one more time and carefully comment or address.
>
> Why do you still have these (some above, some below this comment) non-relevant lines of oops?

..

> >    Oops: 0000 [#1] SMP NOPTI

...is anybody getting new information from the above? I don't think so.

> >    CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
> >    RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
> >    Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
> >    65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
> >    50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
> >    RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
> >    RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
> >    RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
> >    RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
> >    R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
> >    R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
> >    FS:  0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
> >    GS:0000000000000000
> >    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> >    CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
> >    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> >    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> >    PKRU: 55555554
> >    Call Trace:
> >    ? report_normal_detected+0x20/0x20
> >    report_frozen_detected+0x16/0x20
> >    pci_walk_bus+0x75/0x90
> >    ? dpc_irq+0x90/0x90
> >    pcie_do_recovery+0x157/0x201
> >    ? irq_finalize_oneshot.part.47+0xe0/0xe0
> >    dpc_handler+0x29/0x40
> >    irq_thread_fn+0x24/0x60

Does the below, for example, affect the meaning of the crash log in any
way? I don't think so; it can be cut.

> >    irq_thread+0xea/0x170
> >    ? irq_forced_thread_fn+0x80/0x80
> >    ? irq_thread_check_affinity+0xf0/0xf0
> >    kthread+0x124/0x140
> >    ? kthread_park+0x90/0x90
> >    ret_from_fork+0x1f/0x30
> >    Modules linked in: nft_fib_inet.........
> >    CR2: 0000000000000050


--
With Best Regards,
Andy Shevchenko

Patch

diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index 53433b37e181..6f271160f18d 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c
@@ -710,8 +710,10 @@  static irqreturn_t pciehp_ist(int irq, void *dev_id)
 	down_read(&ctrl->reset_lock);
 	if (events & DISABLE_SLOT)
 		pciehp_handle_disable_request(ctrl);
-	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
+	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
+		pci_wait_port_outdpc(pdev);
 		pciehp_handle_presence_or_link_change(ctrl, events);
+	}
 	up_read(&ctrl->reset_lock);
 
 	ret = IRQ_HANDLED;
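
For readability, this is how the hunk reads once applied (the surrounding
context is reconstructed from the diff above; the comment is added here
for explanation and is not part of the patch):

    	down_read(&ctrl->reset_lock);
    	if (events & DISABLE_SLOT)
    		pciehp_handle_disable_request(ctrl);
    	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
    		/*
    		 * Let DPC finish first: hardware containment and the DPC
    		 * driver's recovery run before pciehp interprets the
    		 * DLLSC/PDC events, so a recovered device is not torn
    		 * down as if it had been hot-removed.
    		 */
    		pci_wait_port_outdpc(pdev);
    		pciehp_handle_presence_or_link_change(ctrl, events);
    	}
    	up_read(&ctrl->reset_lock);

    	ret = IRQ_HANDLED;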