
PCI/hotplug: Replaced down_write_nested with hotplug_slot_rwsem if ctrl->depth > 0 when taking the ctrl->reset_lock.

Message ID: 20230113170131.5086-1-a.antonovitch@gmail.com
State: Not Applicable
Delegated to: Bjorn Helgaas
Series: PCI/hotplug: Replaced down_write_nested with hotplug_slot_rwsem if ctrl->depth > 0 when taking the ctrl->reset_lock.

Commit Message

Anatoli Antonovitch Jan. 13, 2023, 5:01 p.m. UTC
From: Anatoli Antonovitch <anatoli.antonovitch@amd.com>

This avoids potential issues when the system resumes from S3 while a
hot-unplug is happening at the same time.

It fixes the race between pciehp and AER reported in
https://bugzilla.kernel.org/show_bug.cgi?id=215590

INFO: task irq/26-aerdrv:95 blocked for more than 120 seconds.
Tainted: G        W          6.2.0-rc3-custom-norework-jan11+ #29
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:irq/26-aerdrv   state:D stack:0     pid:95    ppid:2      flags:0x00004000
Call Trace:
<TASK>
__schedule+0x3df/0xec0
? rcu_read_lock_held_common+0x12/0x60
schedule+0x6b/0x100
rwsem_down_write_slowpath+0x3b2/0x9c0
down_write_nested+0x16b/0x220
pciehp_reset_slot+0x63/0x160
pci_reset_hotplug_slot+0x44/0x80
pci_slot_reset+0x10d/0x1a0
pci_bus_error_reset+0xb2/0xe0
aer_root_reset+0x144/0x1a0
pcie_do_recovery+0x15a/0x280
? __pfx_aer_root_reset+0x20/0x20
aer_process_err_devices+0xfa/0x115
aer_isr.cold+0x52/0xa1
? __kmem_cache_free+0x36a/0x3c0
? irq_thread+0xb0/0x1e0
? irq_thread+0xb0/0x1e0
irq_thread_fn+0x28/0x80
irq_thread+0x106/0x1e0
? __pfx_irq_thread_fn+0x20/0x20
? __pfx_irq_thread_dtor+0x20/0x20
? __pfx_irq_thread+0x20/0x20
kthread+0x10a/0x140
? __pfx_kthread+0x20/0x20
ret_from_fork+0x35/0x60
</TASK>

INFO: task irq/26-pciehp:96 blocked for more than 120 seconds.
Tainted: G        W          6.2.0-rc3-custom-norework-jan11+ #29
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:irq/26-pciehp   state:D stack:0     pid:96    ppid:2      flags:0x00004000
Call Trace:
<TASK>
__schedule+0x3df/0xec0
? rcu_read_lock_sched_held+0x25/0x80
schedule+0x6b/0x100
schedule_preempt_disabled+0x18/0x40
__mutex_lock+0x685/0xf60
? rcu_read_lock_sched_held+0x25/0x80
? rcu_read_lock_held_common+0x12/0x60
? pci_dev_set_disconnected+0x1b/0x80
mutex_lock_nested+0x1b/0x40
? mutex_lock_nested+0x1b/0x40
pci_dev_set_disconnected+0x1b/0x80
? __pfx_pci_dev_set_disconnected+0x20/0x20
pci_walk_bus+0x48/0xa0
pciehp_unconfigure_device+0x129/0x140
pciehp_disable_slot+0x6e/0x120
pciehp_handle_presence_or_link_change+0xf1/0x320
pciehp_ist+0x1a0/0x1c0
? irq_thread+0xb0/0x1e0
irq_thread_fn+0x28/0x80
irq_thread+0x106/0x1e0
? __pfx_irq_thread_fn+0x20/0x20
? __pfx_irq_thread_dtor+0x20/0x20
? __pfx_irq_thread+0x20/0x20
kthread+0x10a/0x140
? __pfx_kthread+0x20/0x20
ret_from_fork+0x35/0x60
</TASK>
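
The two hung tasks above suggest an ABBA pattern: the AER thread presumably
holds the device locks taken in pci_slot_reset() (via pci_slot_lock()) and
then waits for ctrl->reset_lock in pciehp_reset_slot(), while the pciehp
thread already holds ctrl->reset_lock for reading in pciehp_ist() and waits
for a device lock in pci_dev_set_disconnected().  The userspace sketch below
(not kernel code; thread and lock names are illustrative stand-ins) only
reproduces that pattern with pthreads:

/*
 * Userspace analogy of the deadlock suggested by the traces above:
 * thread A models irq/26-aerdrv, thread B models irq/26-pciehp.
 * "reset_lock" stands in for ctrl->reset_lock, "dev_lock" for the
 * device lock taken in pci_dev_set_disconnected()/pci_slot_lock().
 * Compile with -pthread; the program hangs by design.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t reset_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

static void *aer_thread(void *arg)            /* ~ aer_isr -> pci_slot_reset */
{
	pthread_mutex_lock(&dev_lock);        /* ~ pci_slot_lock(): device locks */
	sleep(1);                             /* widen the race window */
	pthread_rwlock_wrlock(&reset_lock);   /* ~ pciehp_reset_slot(): blocks */
	printf("aer: got reset_lock\n");      /* never reached once deadlocked */
	pthread_rwlock_unlock(&reset_lock);
	pthread_mutex_unlock(&dev_lock);
	return NULL;
}

static void *pciehp_thread(void *arg)         /* ~ pciehp_ist */
{
	pthread_rwlock_rdlock(&reset_lock);   /* ~ down_read_nested(&ctrl->reset_lock) */
	sleep(1);
	pthread_mutex_lock(&dev_lock);        /* ~ pci_dev_set_disconnected(): blocks */
	printf("pciehp: got dev_lock\n");     /* never reached once deadlocked */
	pthread_mutex_unlock(&dev_lock);
	pthread_rwlock_unlock(&reset_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, aer_thread, NULL);
	pthread_create(&b, NULL, pciehp_thread, NULL);
	pthread_join(a, NULL);                /* hangs: A waits for reset_lock, B for dev_lock */
	pthread_join(b, NULL);
	return 0;
}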

Signed-off-by: Anatoli Antonovitch <anatoli.antonovitch@amd.com>
---
 drivers/pci/hotplug/pciehp_hpc.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

Comments

Lukas Wunner Jan. 20, 2023, 9:28 a.m. UTC | #1
Hi Anatoli,

On Fri, Jan 13, 2023 at 12:01:31PM -0500, Anatoli Antonovitch wrote:
> This avoids potential issues when the system resumes from S3 while a
> hot-unplug is happening at the same time.
> 
> It fixes the race between pciehp and AER reported in
> https://bugzilla.kernel.org/show_bug.cgi?id=215590

I've just submitted an alternative patch to fix this, could you give
it a spin and see if the issue goes away?

https://patchwork.kernel.org/project/linux-pci/patch/3dc88ea82bdc0e37d9000e413d5ebce481cbd629.1674205689.git.lukas@wunner.de/

That alternative approach is preferable IMO because it also solves the
problem that marking devices as permanently offline isn't possible
concurrently with driver bind/unbind at the moment.  Additionally,
the alternative patch simplifies locking and reduces code size.

Thanks and sorry for my belated response.

Lukas
Anatoli Antonovitch Jan. 20, 2023, 9:35 p.m. UTC | #2
Hi Lukas,

The patch has been tested on the same setup.
Unfortunately, this alternative approach with optimization and simplified
locking does not resolve the issue in
https://bugzilla.kernel.org/show_bug.cgi?id=215590

I have uploaded the log dmesg_6_2_rc4_hotadd_aer_fix_a6bd101b8f84.txt into
the bugzilla for your patch.

Thanks,

Anatoli


On 2023-01-20 04:28, Lukas Wunner wrote:
> I've just submitted an alternative patch to fix this, could you give
> it a spin and see if the issue goes away?
>
> https://patchwork.kernel.org/project/linux-pci/patch/3dc88ea82bdc0e37d9000e413d5ebce481cbd629.1674205689.git.lukas@wunner.de/
Lukas Wunner Jan. 21, 2023, 7:21 a.m. UTC | #3
On Fri, Jan 20, 2023 at 04:35:01PM -0500, Anatoli Antonovitch wrote:
> On 2023-01-20 04:28, Lukas Wunner wrote:
> > I've just submitted an alternative patch to fix this, could you give
> > it a spin and see if the issue goes away?
> > 
> > https://patchwork.kernel.org/project/linux-pci/patch/3dc88ea82bdc0e37d9000e413d5ebce481cbd629.1674205689.git.lukas@wunner.de/
> 
> The patch has been tested on the same setup.
> Unfortunately, this alternative approach with optimization and simplified
> locking does not resolve the issue in
> https://bugzilla.kernel.org/show_bug.cgi?id=215590
> 
> I have uploaded the log dmesg_6_2_rc4_hotadd_aer_fix_a6bd101b8f84.txt into
> the bugzilla for your patch.

You're now getting a different deadlock.  That one is addressed by this
old patch (it's already linked from the bugzilla):

https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/

If you apply that patch plus the new one, do you still see a deadlock?

Thanks,

Lukas
Anatoli Antonovitch Jan. 23, 2023, 7:30 p.m. UTC | #4
I do not see a deadlock when applying the following old patch:
https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/
on top of kernel 6.2.0-rc5, together with the alternative patch:
https://patchwork.kernel.org/project/linux-pci/patch/3dc88ea82bdc0e37d9000e413d5ebce481cbd629.1674205689.git.lukas@wunner.de/

I have uploaded the merged patch and the system log for the upstream kernel.

Anatoli


On 2023-01-21 02:21, Lukas Wunner wrote:
> You're now getting a different deadlock.  That one is addressed by this
> old patch (it's already linked from the bugzilla):
>
> https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/
>
> If you apply that patch plus the new one, do you still see a deadlock?
Anatoli Antonovitch Feb. 13, 2023, 2:59 p.m. UTC | #5
Hi Lukas,

Can we revisit the patches again to get a fix?
The issue still reproduces and is visible in kernel 6.2.0-rc8.

Thanks,
Anatoli

On 2023-01-23 14:30, Anatoli Antonovitch wrote:
> I do not see a deadlock, when applying the following old patch:
> https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/ 
>
> after merge for the kernel 6.2.0-rc5, and applied the alternative patch:
> https://patchwork.kernel.org/project/linux-pci/patch/3dc88ea82bdc0e37d9000e413d5ebce481cbd629.1674205689.git.lukas@wunner.de/ 
>
>
> I have uploaded the merged patch and the system log for the upstream 
> kernel.
>
> Anatoli
>
>
> On 2023-01-21 02:21, Lukas Wunner wrote:
>> You're now getting a different deadlock. That one is addressed by this
>> old patch (it's already linked from the bugzilla):
>>
>> https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/ 
>>
>>
>> If you apply that patch plus the new one, do you still see a deadlock?
Bjorn Helgaas Feb. 17, 2023, 4:03 p.m. UTC | #6
On Mon, Feb 13, 2023 at 09:59:52AM -0500, Anatoli Antonovitch wrote:
> Hi Lukas,
> 
> Can we revisit the patches again to get a fix?
> The issue still reproduces and is visible in kernel 6.2.0-rc8.

> On 2023-01-23 14:30, Anatoli Antonovitch wrote:
> > I do not see a deadlock, when applying the following old patch:
> > https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/

This old patch would need to be updated and reposted.  There was a
0-day bot issue and a question to be resolved.  Maybe this is all
already resolved, but it needs to be posted and tested with a current
kernel.

> > after merge for the kernel 6.2.0-rc5, and applied the alternative patch:
> > https://patchwork.kernel.org/project/linux-pci/patch/3dc88ea82bdc0e37d9000e413d5ebce481cbd629.1674205689.git.lukas@wunner.de/

This one is on track to appear in v6.3-rc1:
https://git.kernel.org/cgit/linux/kernel/git/pci/pci.git/commit/?id=74ff8864cc84

> > I have uploaded the merged patch and the system log for the upstream
> > kernel.
> > 
> > Anatoli
> > 
> > 
> > On 2023-01-21 02:21, Lukas Wunner wrote:
> > > You're now getting a different deadlock. That one is addressed by this
> > > old patch (it's already linked from the bugzilla):
> > > 
> > > https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/
> > > 
> > > 
> > > If you apply that patch plus the new one, do you still see a deadlock?
Alex Deucher Feb. 17, 2023, 6:37 p.m. UTC | #7
[Public]

> -----Original Message-----
> From: Bjorn Helgaas <helgaas@kernel.org>
> Sent: Friday, February 17, 2023 11:04 AM
> To: Antonovitch, Anatoli <Anatoli.Antonovitch@amd.com>
> Cc: Lukas Wunner <lukas@wunner.de>; Anatoli Antonovitch
> <a.antonovitch@gmail.com>; linux-pci@vger.kernel.org;
> bhelgaas@google.com; Deucher, Alexander
> <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>
> Subject: Re: [PATCH] PCI/hotplug: Replaced down_write_nested with
> hotplug_slot_rwsem if ctrl->depth > 0 when taking the ctrl->reset_lock.
> 
> On Mon, Feb 13, 2023 at 09:59:52AM -0500, Anatoli Antonovitch wrote:
> > Hi Lukas,
> >
> > Can we revisit the patches again to get a fix?
> > The issue still reproduces and is visible in kernel 6.2.0-rc8.
> 
> > On 2023-01-23 14:30, Anatoli Antonovitch wrote:
> > > I do not see a deadlock, when applying the following old patch:
> > > https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/
> 
> This old patch would need to be updated and reposted.  There was a 0-day
> bot issue and a question to be resolved.  Maybe this is all already resolved,
> but it needs to be posted and tested with a current kernel.

Lukas, can you resend that patch?  We can test it.

Alex

> 
> > > after merge for the kernel 6.2.0-rc5, and applied the alternative patch:
> > > https://patchwork.kernel.org/project/linux-pci/patch/3dc88ea82bdc0e37d9000e413d5ebce481cbd629.1674205689.git.lukas@wunner.de/
> 
> This one is on track to appear in v6.3-rc1:
> https://git.kernel.org/cgit/linux/kernel/git/pci/pci.git/commit/?id=74ff8864cc84
> 
> > > I have uploaded the merged patch and the system log for the upstream
> > > kernel.
> > >
> > > Anatoli
> > >
> > >
> > > On 2023-01-21 02:21, Lukas Wunner wrote:
> > > > You're now getting a different deadlock. That one is addressed by
> > > > this old patch (it's already linked from the bugzilla):
> > > >
> > > > https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/
> > > >
> > > >
> > > > If you apply that patch plus the new one, do you still see a deadlock?
Lukas Wunner Feb. 19, 2023, 8:21 p.m. UTC | #8
On Fri, Feb 17, 2023 at 06:37:54PM +0000, Deucher, Alexander wrote:
> > From: Bjorn Helgaas <helgaas@kernel.org>
> > On Mon, Feb 13, 2023 at 09:59:52AM -0500, Anatoli Antonovitch wrote:
> > > On 2023-01-23 14:30, Anatoli Antonovitch wrote:
> > > > I do not see a deadlock, when applying the following old patch:
> > > > https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/
> > > 
> > > Can we revisit the patches again to get a fix?
> > > The issue still reproduces and is visible in kernel 6.2.0-rc8.
> > 
> > This old patch would need to be updated and reposted.  There was a 0-day
> > bot issue and a question to be resolved.  Maybe this is all already resolved,
> > but it needs to be posted and tested with a current kernel.
> 
> Lukas, can you resend that patch?  We can test it.

I'm working on a patch which aims to solve these deadlocks differently,
by reducing the critical sections for which the reset_lock is held.
Please stand by.

Thanks,

Lukas
Anatoli Antonovitch April 10, 2023, 8:36 p.m. UTC | #9
Thanks Lukas.

The patch has been tested with current kernel 6.3.0-rc5 on the same setup.
The deadlock between reset_lock and device_lock has been fixed.

See details in the dmesg log: dmesg_6.3.0-rc5_fix.txt in bugzilla:

https://bugzilla.kernel.org/show_bug.cgi?id=215590

Thanks,
Anatoli

On 2023-02-19 15:21, Lukas Wunner wrote:
> On Fri, Feb 17, 2023 at 06:37:54PM +0000, Deucher, Alexander wrote:
>>> From: Bjorn Helgaas <helgaas@kernel.org>
>>> On Mon, Feb 13, 2023 at 09:59:52AM -0500, Anatoli Antonovitch wrote:
>>>> On 2023-01-23 14:30, Anatoli Antonovitch wrote:
>>>>> I do not see a deadlock, when applying the following old patch:
>>>>> https://lore.kernel.org/linux-pci/908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de/
>>>> Can we revisit the patches again to get a fix?
>>>> The issue still reproduces and is visible in kernel 6.2.0-rc8.
>>> This old patch would need to be updated and reposted.  There was a 0-day
>>> bot issue and a question to be resolved.  Maybe this is all already resolved,
>>> but it needs to be posted and tested with a current kernel.
>> Lukas, can you resend that patch?  We can test it.
> I'm working on a patch which aims to solve these deadlocks differently,
> by reducing the critical sections for which the reset_lock is held.
> Please stand by.
>
> Thanks,
>
> Lukas

Patch

diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index 10e9670eea0b..b1084e67f798 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c
@@ -27,6 +27,8 @@ 
 #include "../pci.h"
 #include "pciehp.h"
 
+static DECLARE_RWSEM(hotplug_slot_rwsem);
+
 static const struct dmi_system_id inband_presence_disabled_dmi_table[] = {
 	/*
 	 * Match all Dell systems, as some Dell systems have inband
@@ -911,7 +913,10 @@  int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
 	if (probe)
 		return 0;
 
-	down_write_nested(&ctrl->reset_lock, ctrl->depth);
+	if (ctrl->depth > 0)
+		down_write_nested(&ctrl->reset_lock, ctrl->depth);
+	else
+		down_write(&hotplug_slot_rwsem);
 
 	if (!ATTN_BUTTN(ctrl)) {
 		ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
@@ -931,7 +936,11 @@  int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
 
-	up_write(&ctrl->reset_lock);
+	if (ctrl->depth > 0)
+		up_write(&ctrl->reset_lock);
+	else
+		up_write(&hotplug_slot_rwsem);
+
 	return rc;
 }
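
For readability, the hunks above boil down to the following lock selection:
ports nested below another hotplug port (ctrl->depth > 0) keep taking their
per-controller reset_lock with down_write_nested(), while top-level ports
serialize the slot reset on the new global hotplug_slot_rwsem instead.  The
minimal userspace sketch below is not kernel code; reset_slot() and the
trimmed struct controller are illustrative stand-ins for the patched
pciehp_reset_slot() and the real struct controller:

/*
 * Userspace analogy of the locking choice in pciehp_reset_slot() after
 * this patch; the real function also masks/unmasks presence and
 * link-change events (PDCE/DLLSCE) while the lock is held.
 * Compile with -pthread.
 */
#include <pthread.h>

/* one global lock shared by all top-level hotplug ports */
static pthread_rwlock_t hotplug_slot_rwsem = PTHREAD_RWLOCK_INITIALIZER;

struct controller {                           /* trimmed stand-in */
	pthread_rwlock_t reset_lock;          /* per-controller, as in the kernel */
	int depth;                            /* nesting below other hotplug ports */
};

static void reset_slot(struct controller *ctrl)
{
	if (ctrl->depth > 0)
		pthread_rwlock_wrlock(&ctrl->reset_lock);   /* ~ down_write_nested() */
	else
		pthread_rwlock_wrlock(&hotplug_slot_rwsem); /* ~ down_write() */

	/* ... disable slot events, perform the reset, re-enable events ... */

	if (ctrl->depth > 0)
		pthread_rwlock_unlock(&ctrl->reset_lock);   /* ~ up_write() */
	else
		pthread_rwlock_unlock(&hotplug_slot_rwsem);
}

int main(void)
{
	struct controller ctrl = { .depth = 0 };

	pthread_rwlock_init(&ctrl.reset_lock, NULL);
	reset_slot(&ctrl);     /* depth == 0: serializes on hotplug_slot_rwsem */
	ctrl.depth = 1;
	reset_slot(&ctrl);     /* depth > 0: uses the per-controller reset_lock */
	pthread_rwlock_destroy(&ctrl.reset_lock);
	return 0;
}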