
[v3,27/31] scsi: pm8001: Cleanup pm8001_queue_command()

Message ID 20220214021747.4976-28-damien.lemoal@opensource.wdc.com (mailing list archive)
State Superseded
Series libsas and pm8001 fixes

Commit Message

Damien Le Moal Feb. 14, 2022, 2:17 a.m. UTC
Avoid repeatedly declaring "struct task_status_struct *ts" to handle
error cases by declaring this variable for the entire function scope.
This allows simplifying the error cases and, together with the addition
of blank lines, makes the code more readable.

Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
---
 drivers/scsi/pm8001/pm8001_sas.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)
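
For readers skimming the thread, the change boils down to the following
pattern (condensed from the hunk in the Patch section at the bottom of this
page): instead of re-declaring a local "struct task_status_struct *ts" (or
"tsm") in every error branch, the variable is declared once at function scope
and reused:

	struct sas_task *t = task;
	struct task_status_struct *ts = &t->task_status;
	...
	if (!dev->port) {
		ts->resp = SAS_TASK_UNDELIVERED;
		ts->stat = SAS_PHY_DOWN;
		if (dev->dev_type != SAS_SATA_DEV)
			t->task_done(t);
		return 0;
	}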

Comments

John Garry Feb. 15, 2022, 10:55 a.m. UTC | #1
On 14/02/2022 02:17, Damien Le Moal wrote:
> Avoid repeatedly declaring "struct task_status_struct *ts" to handle
> error cases by declaring this variable for the entire function scope.
> This allows simplifying the error cases, and together with the addition
> of blank lines make the code more readable.
> 
> Reviewed-by: John Garry<john.garry@huawei.com>
> Signed-off-by: Damien Le Moal<damien.lemoal@opensource.wdc.com>
> ---

I assume that this can now just be merged with 30/31

Thanks,
John
Damien Le Moal Feb. 16, 2022, 11:36 a.m. UTC | #2
On 2/15/22 19:55, John Garry wrote:
> On 14/02/2022 02:17, Damien Le Moal wrote:
>> Avoid repeatedly declaring "struct task_status_struct *ts" to handle
>> error cases by declaring this variable for the entire function scope.
>> This allows simplifying the error cases, and together with the addition
>> of blank lines make the code more readable.
>>
>> Reviewed-by: John Garry<john.garry@huawei.com>
>> Signed-off-by: Damien Le Moal<damien.lemoal@opensource.wdc.com>
>> ---
> 
> I assume that this can now just be merged with 30/31

Patch 30 cleans up pm8001_task_exec(). This patch is for
pm8001_queue_command(). I preferred to keep them separate to facilitate review.
But if you insist, I can merge these into a much bigger "code cleanup"
patch...

> 
> Thanks,
> John
John Garry Feb. 16, 2022, 11:38 a.m. UTC | #3
On 16/02/2022 11:36, Damien Le Moal wrote:
> On 2/15/22 19:55, John Garry wrote:
>> On 14/02/2022 02:17, Damien Le Moal wrote:
>>> Avoid repeatedly declaring "struct task_status_struct *ts" to handle
>>> error cases by declaring this variable for the entire function scope.
>>> This allows simplifying the error cases, and together with the addition
>>> of blank lines make the code more readable.
>>>
>>> Reviewed-by: John Garry<john.garry@huawei.com>
>>> Signed-off-by: Damien Le Moal<damien.lemoal@opensource.wdc.com>
>>> ---
>>
>> I assume that this can now just be merged with 30/31
> 

Hi Damien,

> patch 30 cleans up pm8001_task_exec(). This patch is for
> pm8001_queue_command(). I preferred to separate to facilitate review.
> But if you insist, I can merge these into a much bigger "code cleanup"
> patch...
> 
I don't mind really.

BTW, on a separate topic, IIRC you said that rmmod hangs for this driver 
- if so, did you investigate why?

Thanks,
John
Damien Le Moal Feb. 16, 2022, 11:42 a.m. UTC | #4
On 2/16/22 20:38, John Garry wrote:
> On 16/02/2022 11:36, Damien Le Moal wrote:
>> On 2/15/22 19:55, John Garry wrote:
>>> On 14/02/2022 02:17, Damien Le Moal wrote:
>>>> Avoid repeatedly declaring "struct task_status_struct *ts" to handle
>>>> error cases by declaring this variable for the entire function scope.
>>>> This allows simplifying the error cases, and together with the addition
>>>> of blank lines make the code more readable.
>>>>
>>>> Reviewed-by: John Garry<john.garry@huawei.com>
>>>> Signed-off-by: Damien Le Moal<damien.lemoal@opensource.wdc.com>
>>>> ---
>>>
>>> I assume that this can now just be merged with 30/31
>>
> 
> Hi Damien,
> 
>> patch 30 cleans up pm8001_task_exec(). This patch is for
>> pm8001_queue_command(). I preferred to separate to facilitate review.
>> But if you insist, I can merge these into a much bigger "code cleanup"
>> patch...
>>
> I don't mind really.
> 
> BTW, on a separate topic, IIRC you said that rmmod hangs for this driver 
> - if so, did you investigate why?

The problem is gone with the fixes. I suspect it was due to the buggy
non-data command handling (likely, the flush issued when stopping the
device on rmmod).

I have not tackled/tried the QD change failure again though.

Preparing v4 now. Will check the QD change.

> 
> Thanks,
> John
John Garry Feb. 16, 2022, 11:50 a.m. UTC | #5
On 16/02/2022 11:42, Damien Le Moal wrote:
>> Hi Damien,
>>
>>> patch 30 cleans up pm8001_task_exec(). This patch is for
>>> pm8001_queue_command(). I preferred to separate to facilitate review.
>>> But if you insist, I can merge these into a much bigger "code cleanup"
>>> patch...
>>>
>> I don't mind really.
>>
>> BTW, on a separate topic, IIRC you said that rmmod hangs for this driver
>> - if so, did you investigate why?
> The problem is gone with the fixes. I suspect it was due to the buggy
> non-data command handling (likely, the flush issued when stopping the
> device on rmmod).
> 
> I have not tackled/tried again the QD change failure though.
> 
> Preparing v4 now. Will check the QD change.
> 

ok, great.

JFYI, turning on DMA debug sometimes gives this even after fdisk -l:

[   45.080945] sas: sas_scsi_find_task: querying task 0x(____ptrval____)
[   45.087582] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
[   45.093681] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO 
Failure Drive:5000c50085ff5559
[   45.102641] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
[   45.108739] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO 
Failure Drive:5000c50085ff5559
[   45.117694] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
[   45.123792] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO 
Failure Drive:5000c50085ff5559
[   45.132652] pm80xx: rc= -5
[   45.135370] sas: sas_scsi_find_task: task 0x(____ptrval____) result 
code -5 not handled
[   45.143466] sas: task 0x(____ptrval____) is not at LU: I_T recover
[   45.149741] sas: I_T nexus reset for dev 5000c50085ff5559
[   47.183916] sas: I_T 5000c50085ff5559 recovered
[   47.189034] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 
tries: 1
[   47.204168] ------------[ cut here ]------------
[   47.208829] DMA-API: pm80xx 0000:04:00.0: cacheline tracking EEXIST, 
overlapping mappings aren't supported
[   47.218502] WARNING: CPU: 3 PID: 641 at kernel/dma/debug.c:570 
add_dma_entry+0x308/0x3f0
[   47.226607] Modules linked in:
[   47.229678] CPU: 3 PID: 641 Comm: kworker/3:1H Not tainted 
5.17.0-rc1-11918-gd9d909a8c666 #407
[   47.238298] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI 
RC0 - V1.16.01 03/15/2019
[   47.246829] Workqueue: kblockd blk_mq_run_work_fn
[   47.251552] pstate: 604000c9 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS 
BTYPE=--)
[   47.258522] pc : add_dma_entry+0x308/0x3f0
[   47.262626] lr : add_dma_entry+0x308/0x3f0
[   47.266730] sp : ffff80002e5c75f0
[   47.270049] x29: ffff80002e5c75f0 x28: 0000002880a908c0 x27: 
ffff80000cc95440
[   47.277216] x26: ffff80000cc94000 x25: ffff80000cc94e20 x24: 
ffff00208e4660c8
[   47.284382] x23: ffff800009d16b40 x22: ffff80000a5b8700 x21: 
1ffff00005cb8eca
[   47.291548] x20: ffff80000caf4c90 x19: ffff0a2009726100 x18: 
0000000000000000
[   47.298713] x17: 70616c7265766f20 x16: 2c54534958454520 x15: 
676e696b63617274
[   47.305879] x14: 1ffff00005cb8df4 x13: 0000000041b58ab3 x12: 
ffff700005cb8e27
[   47.313044] x11: 1ffff00005cb8e26 x10: ffff700005cb8e26 x9 : 
dfff800000000000
[   47.320210] x8 : ffff80002e5c7137 x7 : 0000000000000001 x6 : 
00008ffffa3471da
[   47.327375] x5 : ffff80002e5c7130 x4 : dfff800000000000 x3 : 
ffff8000083a1f48
[   47.334540] x2 : 0000000000000000 x1 : 0000000000000000 x0 : 
ffff00208f7ab200
[   47.341706] Call trace:
[   47.344157]  add_dma_entry+0x308/0x3f0
[   47.347914]  debug_dma_map_sg+0x3ac/0x500
[   47.351931]  __dma_map_sg_attrs+0xac/0x130
[   47.356037]  dma_map_sg_attrs+0x14/0x2c
[   47.359883]  pm8001_task_exec.constprop.0+0x5e0/0x800
[   47.364945]  pm8001_queue_command+0x1c/0x2c
[   47.369136]  sas_queuecommand+0x2c4/0x360
[   47.373153]  scsi_queue_rq+0x810/0x1334
[   47.377000]  blk_mq_dispatch_rq_list+0x340/0xda0
[   47.381625]  __blk_mq_sched_dispatch_requests+0x14c/0x22c
[   47.387034]  blk_mq_sched_dispatch_requests+0x60/0x9c
[   47.392095]  __blk_mq_run_hw_queue+0xc8/0x274
[   47.396460]  blk_mq_run_work_fn+0x30/0x40
[   47.400476]  process_one_work+0x494/0xbac
[   47.404494]  worker_thread+0xac/0x6d0
[   47.408164]  kthread+0x174/0x184
[   47.411401]  ret_from_fork+0x10/0x2

I'll have a look at it. And that is on mainline or mkp-scsi staging, and 
not your patchset.

Thanks,
John
Damien Le Moal Feb. 16, 2022, 12:05 p.m. UTC | #6
On 2/16/22 20:50, John Garry wrote:
> On 16/02/2022 11:42, Damien Le Moal wrote:
>>> Hi Damien,
>>>
>>>> patch 30 cleans up pm8001_task_exec(). This patch is for
>>>> pm8001_queue_command(). I preferred to separate to facilitate review.
>>>> But if you insist, I can merge these into a much bigger "code cleanup"
>>>> patch...
>>>>
>>> I don't mind really.
>>>
>>> BTW, on a separate topic, IIRC you said that rmmod hangs for this driver
>>> - if so, did you investigate why?
>> The problem is gone with the fixes. I suspect it was due to the buggy
>> non-data command handling (likely, the flush issued when stopping the
>> device on rmmod).
>>
>> I have not tackled/tried again the QD change failure though.
>>
>> Preparing v4 now. Will check the QD change.
>>
> 
> ok, great.
> 
> JFYI, turning on DMA debug sometimes gives this even after fdisk -l:
> 
> [   45.080945] sas: sas_scsi_find_task: querying task 0x(____ptrval____)
> [   45.087582] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b

What is status 0x3b ? Is this a driver thing or sas thing ? Have not
checked.

> [   45.093681] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO 
> Failure Drive:5000c50085ff5559
> [   45.102641] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
> [   45.108739] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO 
> Failure Drive:5000c50085ff5559
> [   45.117694] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
> [   45.123792] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO 
> Failure Drive:5000c50085ff5559
> [   45.132652] pm80xx: rc= -5

This comes from pm8001_query_task(), pm8001_abort_task() or
pm8001_chip_abort_task()...

> [   45.135370] sas: sas_scsi_find_task: task 0x(____ptrval____) result 
> code -5 not handled

Missing error handling ?

> [   45.143466] sas: task 0x(____ptrval____) is not at LU: I_T recover
> [   45.149741] sas: I_T nexus reset for dev 5000c50085ff5559
> [   47.183916] sas: I_T 5000c50085ff5559 recovered

Weird... Losing your drive ? Bad cable ?

> [   47.189034] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 
> tries: 1
> [   47.204168] ------------[ cut here ]------------
> [   47.208829] DMA-API: pm80xx 0000:04:00.0: cacheline tracking EEXIST, 
> overlapping mappings aren't supported
> [   47.218502] WARNING: CPU: 3 PID: 641 at kernel/dma/debug.c:570 
> add_dma_entry+0x308/0x3f0
> [   47.226607] Modules linked in:
> [   47.229678] CPU: 3 PID: 641 Comm: kworker/3:1H Not tainted 
> 5.17.0-rc1-11918-gd9d909a8c666 #407
> [   47.238298] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI 
> RC0 - V1.16.01 03/15/2019
> [   47.246829] Workqueue: kblockd blk_mq_run_work_fn
> [   47.251552] pstate: 604000c9 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS 
> BTYPE=--)
> [   47.258522] pc : add_dma_entry+0x308/0x3f0
> [   47.262626] lr : add_dma_entry+0x308/0x3f0
> [   47.266730] sp : ffff80002e5c75f0
> [   47.270049] x29: ffff80002e5c75f0 x28: 0000002880a908c0 x27: 
> ffff80000cc95440
> [   47.277216] x26: ffff80000cc94000 x25: ffff80000cc94e20 x24: 
> ffff00208e4660c8
> [   47.284382] x23: ffff800009d16b40 x22: ffff80000a5b8700 x21: 
> 1ffff00005cb8eca
> [   47.291548] x20: ffff80000caf4c90 x19: ffff0a2009726100 x18: 
> 0000000000000000
> [   47.298713] x17: 70616c7265766f20 x16: 2c54534958454520 x15: 
> 676e696b63617274
> [   47.305879] x14: 1ffff00005cb8df4 x13: 0000000041b58ab3 x12: 
> ffff700005cb8e27
> [   47.313044] x11: 1ffff00005cb8e26 x10: ffff700005cb8e26 x9 : 
> dfff800000000000
> [   47.320210] x8 : ffff80002e5c7137 x7 : 0000000000000001 x6 : 
> 00008ffffa3471da
> [   47.327375] x5 : ffff80002e5c7130 x4 : dfff800000000000 x3 : 
> ffff8000083a1f48
> [   47.334540] x2 : 0000000000000000 x1 : 0000000000000000 x0 : 
> ffff00208f7ab200
> [   47.341706] Call trace:
> [   47.344157]  add_dma_entry+0x308/0x3f0
> [   47.347914]  debug_dma_map_sg+0x3ac/0x500
> [   47.351931]  __dma_map_sg_attrs+0xac/0x130
> [   47.356037]  dma_map_sg_attrs+0x14/0x2c
> [   47.359883]  pm8001_task_exec.constprop.0+0x5e0/0x800
> [   47.364945]  pm8001_queue_command+0x1c/0x2c
> [   47.369136]  sas_queuecommand+0x2c4/0x360
> [   47.373153]  scsi_queue_rq+0x810/0x1334
> [   47.377000]  blk_mq_dispatch_rq_list+0x340/0xda0
> [   47.381625]  __blk_mq_sched_dispatch_requests+0x14c/0x22c
> [   47.387034]  blk_mq_sched_dispatch_requests+0x60/0x9c
> [   47.392095]  __blk_mq_run_hw_queue+0xc8/0x274
> [   47.396460]  blk_mq_run_work_fn+0x30/0x40
> [   47.400476]  process_one_work+0x494/0xbac
> [   47.404494]  worker_thread+0xac/0x6d0
> [   47.408164]  kthread+0x174/0x184
> [   47.411401]  ret_from_fork+0x10/0x2
> 
> I'll have a look at it. And that is on mainline or mkp-scsi staging, and 
> not your patchset.

Are you saying that my patches suppress the above ? This is the submission
path and the dma code seems to complain about alignment... So bad buffer
addresses ?

> 
> Thanks,
> John
John Garry Feb. 16, 2022, 12:21 p.m. UTC | #7
>> JFYI, turning on DMA debug sometimes gives this even after fdisk -l:
>>
>> [   45.080945] sas: sas_scsi_find_task: querying task 0x(____ptrval____)
>> [   45.087582] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
> 
> What is status 0x3b ? Is this a driver thing or sas thing ? Have not
> checked.

This is a driver thing. I'd need the manual to check.

> 
>> [   45.093681] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO
>> Failure Drive:5000c50085ff5559
>> [   45.102641] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
>> [   45.108739] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO
>> Failure Drive:5000c50085ff5559
>> [   45.117694] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
>> [   45.123792] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO
>> Failure Drive:5000c50085ff5559
>> [   45.132652] pm80xx: rc= -5
> 
> This comes from pm8001_query_task(), pm8001_abort_task() or
> pm8001_chip_abort_task()...
> 
>> [   45.135370] sas: sas_scsi_find_task: task 0x(____ptrval____) result
>> code -5 not handled
> 
> Missing error handling ?

This is something I added. So the driver does not return a valid TMF 
code - it returns -5, which I think comes from pm8001_query_task() -> 
pm8001_exec_internal_tmf_task(). And sas_scsi_find_task() does not 
recognise -5 and just assumes that the TMF failed, so ...

> 
>> [   45.143466] sas: task 0x(____ptrval____) is not at LU: I_T recover
>> [   45.149741] sas: I_T nexus reset for dev 5000c50085ff5559
>> [   47.183916] sas: I_T 5000c50085ff5559 recovered
> 
> Weird... Losing your drive ? Bad cable ?

.. we escalate the error handling and call sas_eh_handle_sas_errors() -> 
sas_recover_I_T(), which resets the PHY - see pm8001_I_T_nexus_reset().
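
As an illustration only, one way to keep the raw errno (the -5 above) from
leaking across the LLDD/libsas boundary would be a small clamp at the end of
pm8001_query_task() - the exact placement and variable name here are
assumptions for discussion, not current driver code:

	/*
	 * Never hand a raw errno (e.g. -EIO) back to libsas;
	 * it only understands TMF response codes.
	 */
	if (rc < 0)
		rc = TMF_RESP_FUNC_FAILED;
	return rc;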

> 
>> [   47.189034] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1
>> tries: 1
>> [   47.204168] ------------[ cut here ]------------
>> [   47.208829] DMA-API: pm80xx 0000:04:00.0: cacheline tracking EEXIST,
>> overlapping mappings aren't supported
>> [   47.218502] WARNING: CPU: 3 PID: 641 at kernel/dma/debug.c:570
>> add_dma_entry+0x308/0x3f0
>> [   47.226607] Modules linked in:
>> [   47.229678] CPU: 3 PID: 641 Comm: kworker/3:1H Not tainted
>> 5.17.0-rc1-11918-gd9d909a8c666 #407
>> [   47.238298] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI
>> RC0 - V1.16.01 03/15/2019
>> [   47.246829] Workqueue: kblockd blk_mq_run_work_fn
>> [   47.251552] pstate: 604000c9 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS
>> BTYPE=--)
>> [   47.258522] pc : add_dma_entry+0x308/0x3f0
>> [   47.262626] lr : add_dma_entry+0x308/0x3f0
>> [   47.266730] sp : ffff80002e5c75f0
>> [   47.270049] x29: ffff80002e5c75f0 x28: 0000002880a908c0 x27:
>> ffff80000cc95440
>> [   47.277216] x26: ffff80000cc94000 x25: ffff80000cc94e20 x24:
>> ffff00208e4660c8
>> [   47.284382] x23: ffff800009d16b40 x22: ffff80000a5b8700 x21:
>> 1ffff00005cb8eca
>> [   47.291548] x20: ffff80000caf4c90 x19: ffff0a2009726100 x18:
>> 0000000000000000
>> [   47.298713] x17: 70616c7265766f20 x16: 2c54534958454520 x15:
>> 676e696b63617274
>> [   47.305879] x14: 1ffff00005cb8df4 x13: 0000000041b58ab3 x12:
>> ffff700005cb8e27
>> [   47.313044] x11: 1ffff00005cb8e26 x10: ffff700005cb8e26 x9 :
>> dfff800000000000
>> [   47.320210] x8 : ffff80002e5c7137 x7 : 0000000000000001 x6 :
>> 00008ffffa3471da
>> [   47.327375] x5 : ffff80002e5c7130 x4 : dfff800000000000 x3 :
>> ffff8000083a1f48
>> [   47.334540] x2 : 0000000000000000 x1 : 0000000000000000 x0 :
>> ffff00208f7ab200
>> [   47.341706] Call trace:
>> [   47.344157]  add_dma_entry+0x308/0x3f0
>> [   47.347914]  debug_dma_map_sg+0x3ac/0x500
>> [   47.351931]  __dma_map_sg_attrs+0xac/0x130
>> [   47.356037]  dma_map_sg_attrs+0x14/0x2c
>> [   47.359883]  pm8001_task_exec.constprop.0+0x5e0/0x800
>> [   47.364945]  pm8001_queue_command+0x1c/0x2c
>> [   47.369136]  sas_queuecommand+0x2c4/0x360
>> [   47.373153]  scsi_queue_rq+0x810/0x1334
>> [   47.377000]  blk_mq_dispatch_rq_list+0x340/0xda0
>> [   47.381625]  __blk_mq_sched_dispatch_requests+0x14c/0x22c
>> [   47.387034]  blk_mq_sched_dispatch_requests+0x60/0x9c
>> [   47.392095]  __blk_mq_run_hw_queue+0xc8/0x274
>> [   47.396460]  blk_mq_run_work_fn+0x30/0x40
>> [   47.400476]  process_one_work+0x494/0xbac
>> [   47.404494]  worker_thread+0xac/0x6d0
>> [   47.408164]  kthread+0x174/0x184
>> [   47.411401]  ret_from_fork+0x10/0x2
>>
>> I'll have a look at it. And that is on mainline or mkp-scsi staging, and
>> not your patchset.
> 
> Are you saying that my patches suppresses the above ? This is submission
> path and the dma code seems to complain about alignment... So bad buffer
> addresses ?

Your series does not suppress it. It doesn't occur often, so I need to 
check more.

I think the issue is that we call dma_map_sg() twice, i.e. the ccb is never 
unmapped.

Thanks,
John
Damien Le Moal Feb. 17, 2022, 12:12 a.m. UTC | #8
On 2/16/22 21:21, John Garry wrote:
> 
>>> JFYI, turning on DMA debug sometimes gives this even after fdisk -l:
>>>
>>> [   45.080945] sas: sas_scsi_find_task: querying task 0x(____ptrval____)
>>> [   45.087582] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
>>
>> What is status 0x3b ? Is this a driver thing or sas thing ? Have not
>> checked.
> 
> This is a driver thing. I'd need the manual to check.
> 
>>
>>> [   45.093681] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO
>>> Failure Drive:5000c50085ff5559
>>> [   45.102641] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
>>> [   45.108739] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO
>>> Failure Drive:5000c50085ff5559
>>> [   45.117694] pm80xx0:: mpi_ssp_completion  1936:sas IO status 0x3b
>>> [   45.123792] pm80xx0:: mpi_ssp_completion  1947:SAS Address of IO
>>> Failure Drive:5000c50085ff5559
>>> [   45.132652] pm80xx: rc= -5
>>
>> This comes from pm8001_query_task(), pm8001_abort_task() or
>> pm8001_chip_abort_task()...
>>
>>> [   45.135370] sas: sas_scsi_find_task: task 0x(____ptrval____) result
>>> code -5 not handled
>>
>> Missing error handling ?
> 
> This is something I added. So the driver does not return a valid TMF 
> code - it returns -5, which I think comes from pm8001_query_task() -> 
> pm8001_exec_internal_tmf_task(). And sas_scsi_find_task() does not 
> recognise -5 and just assumes that the TMF failed, so ...
> 
>>
>>> [   45.143466] sas: task 0x(____ptrval____) is not at LU: I_T recover
>>> [   45.149741] sas: I_T nexus reset for dev 5000c50085ff5559
>>> [   47.183916] sas: I_T 5000c50085ff5559 recovered
>>
>> Weird... Losing your drive ? Bad cable ?
> 
> .. we escalate the error handling and call sas_eh_handle_sas_errors() -> 
> sas_recover_I_T(), which resets the PHY - see pm8001_I_T_nexus_reset().
> 
>>
>>> [   47.189034] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1
>>> tries: 1
>>> [   47.204168] ------------[ cut here ]------------
>>> [   47.208829] DMA-API: pm80xx 0000:04:00.0: cacheline tracking EEXIST,
>>> overlapping mappings aren't supported
>>> [   47.218502] WARNING: CPU: 3 PID: 641 at kernel/dma/debug.c:570
>>> add_dma_entry+0x308/0x3f0
>>> [   47.226607] Modules linked in:
>>> [   47.229678] CPU: 3 PID: 641 Comm: kworker/3:1H Not tainted
>>> 5.17.0-rc1-11918-gd9d909a8c666 #407
>>> [   47.238298] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI
>>> RC0 - V1.16.01 03/15/2019
>>> [   47.246829] Workqueue: kblockd blk_mq_run_work_fn
>>> [   47.251552] pstate: 604000c9 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS
>>> BTYPE=--)
>>> [   47.258522] pc : add_dma_entry+0x308/0x3f0
>>> [   47.262626] lr : add_dma_entry+0x308/0x3f0
>>> [   47.266730] sp : ffff80002e5c75f0
>>> [   47.270049] x29: ffff80002e5c75f0 x28: 0000002880a908c0 x27:
>>> ffff80000cc95440
>>> [   47.277216] x26: ffff80000cc94000 x25: ffff80000cc94e20 x24:
>>> ffff00208e4660c8
>>> [   47.284382] x23: ffff800009d16b40 x22: ffff80000a5b8700 x21:
>>> 1ffff00005cb8eca
>>> [   47.291548] x20: ffff80000caf4c90 x19: ffff0a2009726100 x18:
>>> 0000000000000000
>>> [   47.298713] x17: 70616c7265766f20 x16: 2c54534958454520 x15:
>>> 676e696b63617274
>>> [   47.305879] x14: 1ffff00005cb8df4 x13: 0000000041b58ab3 x12:
>>> ffff700005cb8e27
>>> [   47.313044] x11: 1ffff00005cb8e26 x10: ffff700005cb8e26 x9 :
>>> dfff800000000000
>>> [   47.320210] x8 : ffff80002e5c7137 x7 : 0000000000000001 x6 :
>>> 00008ffffa3471da
>>> [   47.327375] x5 : ffff80002e5c7130 x4 : dfff800000000000 x3 :
>>> ffff8000083a1f48
>>> [   47.334540] x2 : 0000000000000000 x1 : 0000000000000000 x0 :
>>> ffff00208f7ab200
>>> [   47.341706] Call trace:
>>> [   47.344157]  add_dma_entry+0x308/0x3f0
>>> [   47.347914]  debug_dma_map_sg+0x3ac/0x500
>>> [   47.351931]  __dma_map_sg_attrs+0xac/0x130
>>> [   47.356037]  dma_map_sg_attrs+0x14/0x2c
>>> [   47.359883]  pm8001_task_exec.constprop.0+0x5e0/0x800
>>> [   47.364945]  pm8001_queue_command+0x1c/0x2c
>>> [   47.369136]  sas_queuecommand+0x2c4/0x360
>>> [   47.373153]  scsi_queue_rq+0x810/0x1334
>>> [   47.377000]  blk_mq_dispatch_rq_list+0x340/0xda0
>>> [   47.381625]  __blk_mq_sched_dispatch_requests+0x14c/0x22c
>>> [   47.387034]  blk_mq_sched_dispatch_requests+0x60/0x9c
>>> [   47.392095]  __blk_mq_run_hw_queue+0xc8/0x274
>>> [   47.396460]  blk_mq_run_work_fn+0x30/0x40
>>> [   47.400476]  process_one_work+0x494/0xbac
>>> [   47.404494]  worker_thread+0xac/0x6d0
>>> [   47.408164]  kthread+0x174/0x184
>>> [   47.411401]  ret_from_fork+0x10/0x2
>>>
>>> I'll have a look at it. And that is on mainline or mkp-scsi staging, and
>>> not your patchset.
>>
>> Are you saying that my patches suppresses the above ? This is submission
>> path and the dma code seems to complain about alignment... So bad buffer
>> addresses ?
> 
> Your series does not suppress it. It doesn't occur often, so I need to 
> check more.
> 
> I think the issue is that we call dma_map_sg() twice, i.e. ccb never 
> unmapped.

That would be a big issue indeed. We could add a flag to CCBs to track
the buf_prd DMA mapping state and BUG_ON() when the ccb free function is
called with the buffer still mapped. That should allow catching this
infrequent problem ?
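
For illustration, a minimal sketch of that idea - the prd_dma_mapped flag
and the exact hook points are assumptions for discussion, not existing
driver code:

	/* hypothetical field added to struct pm8001_ccb_info */
	bool	prd_dma_mapped;

	/* submission path, right after dma_map_sg() succeeds */
	ccb->prd_dma_mapped = true;

	/* completion/abort path, after dma_unmap_sg() */
	ccb->prd_dma_mapped = false;

	/* ccb free helper: the buffer must no longer be mapped */
	BUG_ON(ccb->prd_dma_mapped);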

> 
> Thanks,
> John
>
John Garry Feb. 17, 2022, 9:23 a.m. UTC | #9
On 17/02/2022 00:12, Damien Le Moal wrote:
>>>> I'll have a look at it. And that is on mainline or mkp-scsi staging, and
>>>> not your patchset.
>>> Are you saying that my patches suppresses the above ? This is submission
>>> path and the dma code seems to complain about alignment... So bad buffer
>>> addresses ?
>> Your series does not suppress it. It doesn't occur often, so I need to
>> check more.
>>
>> I think the issue is that we call dma_map_sg() twice, i.e. ccb never
>> unmapped.
> That would be a big issue indeed. We could add a flag to CCBs to track
> the buf_prd DMA mapping state and BUG_ON() when ccb free function is
> called with the buffer still mapped. That should allow catching this
> infrequent problem ?
> 

I figured out what is happening here and it does not help solve the 
mystery of my hang.

Here's the steps:
a. scsi_cmnd times out
b. scsi error handling kicks in
c. libsas attempts to abort the task, which fails
d. libsas then tries IT nexus reset, which passes
  - libsas assumes the scsi_cmnd has completed with failure
e. error handling concludes
f. scsi midlayer then retries the same scsi_cmnd
g. since we did not "free" associated ccb earlier or dma unmap at d., 
the dma unmap on the same scsi_cmnd causes the warn

So the LLD should really free resources and dma unmap at the point the IT 
nexus reset completes, but it doesn't. I think in certain conditions dma map 
should not be done twice.

Anyway, that can be fixed, but I still have the hang :(

Thanks,
John
Damien Le Moal Feb. 17, 2022, 10:47 a.m. UTC | #10
On 2/17/22 18:23, John Garry wrote:
> On 17/02/2022 00:12, Damien Le Moal wrote:
>>>>> I'll have a look at it. And that is on mainline or mkp-scsi staging, and
>>>>> not your patchset.
>>>> Are you saying that my patches suppresses the above ? This is submission
>>>> path and the dma code seems to complain about alignment... So bad buffer
>>>> addresses ?
>>> Your series does not suppress it. It doesn't occur often, so I need to
>>> check more.
>>>
>>> I think the issue is that we call dma_map_sg() twice, i.e. ccb never
>>> unmapped.
>> That would be a big issue indeed. We could add a flag to CCBs to track
>> the buf_prd DMA mapping state and BUG_ON() when ccb free function is
>> called with the buffer still mapped. That should allow catching this
>> infrequent problem ?
>>
> 
> I figured out what is happening here and it does not help solve the 
> mystery of my hang.
> 
> Here's the steps:
> a. scsi_cmnd times out
> b. scsi error handling kicks in
> c. libsas attempts to abort the task, which fails
> d. libsas then tries IT nexus reset, which passes
>   - libsas assumes the scsi_cmnd has completed with failure
> e. error handling concludes
> f. scsi midlayer then retries the same scsi_cmnd
> g. since we did not "free" associated ccb earlier or dma unmap at d., 
> the dma unmap on the same scsi_cmnd causes the warn
> 
> So the LLD should really free resources and dma unmap at point IT nexus 
> reset completes, but it doesn't. I think in certain conditions dma map 
> should not be done twice.
> 
> Anyway, that can be fixed, but I still have the hang :(

I guess (a) (cmd timeout) is only the symptom of the hang ? That is, the
hang is causing the timeout ?
It may be good to turn on scsi trace to see if the command was only
partially done, or not at all, or if it is a non-data command.

And speaking of errors, I am currently testing v4 of my series and
noticed some weird things in the error handling. E.g., with one of the
tests executing a report zones command with an LBA out of range, I see this:

[23962.027105] pm80xx0:: mpi_sata_event  2788:SATA EVENT 0x23
[23962.036099] pm80xx0:: pm80xx_send_read_log  1863:Executing read log end

All good: this is IO_XFER_ERROR_ABORTED_NCQ_MODE. And the read log is to
get the drive queue out of error state.

[23962.046101] pm80xx0:: mpi_sata_event  2788:SATA EVENT 0x26

This is IO_XFER_ERROR_UNEXPECTED_PHASE. No clue what this is doing.

[23962.054947] pm80xx0:: mpi_sata_event  2805:task or dev null

Why ?

[23962.063865] pm80xx0:: pm80xx_send_abort_all  1796:Executing abort
task end

All queued commands are aborted when the read log completes. Again, per
ATA NCQ specs, all good. After that, normal (some useless) messages.

[23962.063964] pm80xx0:: mpi_sata_completion  2395:IO failed device_id
16388 status 0x1 tag 1
[23962.084587] pm80xx0:: mpi_sata_completion  2430:SAS Address of IO
Failure Drive:50010860002f5657
[23962.095526] sas: sas_ata_task_done: SAS error 0x8d
[23962.139470] sas: Enter sas_scsi_recover_host busy: 1 failed: 1
[23962.147897] ata24.00: exception Emask 0x0 SAct 0x800 SErr 0x0 action 0x0
[23962.156253] ata24.00: failed command: RECEIVE FPDMA QUEUED
[23962.163307] ata24.00: cmd 65/01:00:00:00:0c/00:02:23:01:00/40 tag 11
ncq dma 512 in
[23962.163307]          res 43/10:00:00:00:00/00:00:00:00:00/00 Emask
0x480 (invalid argument) <F>

Good here, correct error...

[23962.182879] ata24.00: status: { DRDY SENSE ERR }
[23962.189100] ata24.00: error: { IDNF }

... but I need to look at this in libata. Getting the same with AHCI.

[23962.215456] ata24.00: configured for UDMA/133
[23962.221469] ata24: EH complete
[23962.226056] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1
tries: 1

That event 0x26 and the "task or dev null" are obscure. No clue, but
they look wrong.

Overall, this driver is by default way too verbose I think. I would
prefer to reduce the above to something like:

[23962.095526] sas: sas_ata_task_done: SAS error 0x8d
[23962.139470] sas: Enter sas_scsi_recover_host busy: 1 failed: 1
[23962.147897] ata24.00: exception Emask 0x0 SAct 0x800 SErr 0x0 action 0x0
[23962.156253] ata24.00: failed command: RECEIVE FPDMA QUEUED
[23962.163307] ata24.00: cmd 65/01:00:00:00:0c/00:02:23:01:00/40 tag 11
ncq dma 512 in
[23962.163307]          res 43/10:00:00:00:00/00:00:00:00:00/00 Emask
0x480 (invalid argument) <F>
[23962.182879] ata24.00: status: { DRDY SENSE ERR }
[23962.189100] ata24.00: error: { IDNF }
[23962.215456] ata24.00: configured for UDMA/133
[23962.221469] ata24: EH complete
[23962.226056] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1
tries: 1

That is, by default, remove most FAIL messages, changing them to DEV
level. The same test with a SAS drive is totally silent, as it should
be, since the command error is not a fatal one.
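
For illustration, that would mostly be one-line changes of this kind (the
message is the one from the trace above; whether it is literally a
FAIL-level pm8001_dbg() call today is an assumption on my side):

-	pm8001_dbg(pm8001_ha, FAIL, "sas IO status 0x%x\n", status);
+	pm8001_dbg(pm8001_ha, DEV, "sas IO status 0x%x\n", status);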

But not touching this for now. I first want the series to be queued.
Damien Le Moal Feb. 17, 2022, 11:47 a.m. UTC | #11
On 2/17/22 18:23, John Garry wrote:
> On 17/02/2022 00:12, Damien Le Moal wrote:
>>>>> I'll have a look at it. And that is on mainline or mkp-scsi staging, and
>>>>> not your patchset.
>>>> Are you saying that my patches suppresses the above ? This is submission
>>>> path and the dma code seems to complain about alignment... So bad buffer
>>>> addresses ?
>>> Your series does not suppress it. It doesn't occur often, so I need to
>>> check more.
>>>
>>> I think the issue is that we call dma_map_sg() twice, i.e. ccb never
>>> unmapped.
>> That would be a big issue indeed. We could add a flag to CCBs to track
>> the buf_prd DMA mapping state and BUG_ON() when ccb free function is
>> called with the buffer still mapped. That should allow catching this
>> infrequent problem ?
>>
> 
> I figured out what is happening here and it does not help solve the 
> mystery of my hang.
> 
> Here's the steps:
> a. scsi_cmnd times out
> b. scsi error handling kicks in
> c. libsas attempts to abort the task, which fails
> d. libsas then tries IT nexus reset, which passes
>   - libsas assumes the scsi_cmnd has completed with failure
> e. error handling concludes
> f. scsi midlayer then retries the same scsi_cmnd
> g. since we did not "free" associated ccb earlier or dma unmap at d., 
> the dma unmap on the same scsi_cmnd causes the warn
> 
> So the LLD should really free resources and dma unmap at point IT nexus 
> reset completes, but it doesn't. I think in certain conditions dma map 
> should not be done twice.
> 
> Anyway, that can be fixed, but I still have the hang :(

One thought: could it be a bug with the DMA engine of your platform ?
What if you simply run an fio workload on the disk directly (no FS),
does the hang happen too ?

For the bugs I fixed with my series, it was the reverse: fio worked
great but everything broke down when I ran libzbc tests...

> 
> Thanks,
> John
John Garry Feb. 17, 2022, 12:49 p.m. UTC | #12
>>>
>>
>> I figured out what is happening here and it does not help solve the
>> mystery of my hang.
>>
>> Here's the steps:
>> a. scsi_cmnd times out
>> b. scsi error handling kicks in
>> c. libsas attempts to abort the task, which fails
>> d. libsas then tries IT nexus reset, which passes
>>    - libsas assumes the scsi_cmnd has completed with failure
>> e. error handling concludes
>> f. scsi midlayer then retries the same scsi_cmnd
>> g. since we did not "free" associated ccb earlier or dma unmap at d.,
>> the dma unmap on the same scsi_cmnd causes the warn
>>
>> So the LLD should really free resources and dma unmap at point IT nexus
>> reset completes, but it doesn't. I think in certain conditions dma map
>> should not be done twice.
>>
>> Anyway, that can be fixed, but I still have the hang :(
> 
> I guess (a) (cmd timeout) is only the symptom of the hang ? That is, the
> hang is causing the timeout ?

Right

> It may be good to turn on scsi trace to see if the command was only
> partially done, or not at all, or if it is a non-data command.
> 

I could do that. But I think that the command just does not complete. Or 
maybe it is missed.

> And speaking of errors, I am currently testing v4 of my series and
> noticed some weird things in the error handling. E.g., with one of the
> test executing a report zones command with an LBA out of range, I see this:
> 
> [23962.027105] pm80xx0:: mpi_sata_event  2788:SATA EVENT 0x23
> [23962.036099] pm80xx0:: pm80xx_send_read_log  1863:Executing read log end
> 

I don't know why the driver even does this, but the implementation of 
pm80xx_send_read_log() is questionable. It would be nice to not see ATA 
code in the driver like this.

> All good: this is IO_XFER_ERROR_ABORTED_NCQ_MODE. And the read log is to
> get the drive queue out of error state.
> 
> [23962.046101] pm80xx0:: mpi_sata_event  2788:SATA EVENT 0x26
> 
> This is IO_XFER_ERROR_UNEXPECTED_PHASE. No clue what this is doing.
> 
> [23962.054947] pm80xx0:: mpi_sata_event  2805:task or dev null
> 
> Why ?
> 
> [23962.063865] pm80xx0:: pm80xx_send_abort_all  1796:Executing abort
> task end
> 
> All queued commands are aborted when the read log completes. Again, per
> ATA NCQ specs, all good. After that, normal (some useless) messages.
> 
> [23962.063964] pm80xx0:: mpi_sata_completion  2395:IO failed device_id
> 16388 status 0x1 tag 1
> [23962.084587] pm80xx0:: mpi_sata_completion  2430:SAS Address of IO
> Failure Drive:50010860002f5657
> [23962.095526] sas: sas_ata_task_done: SAS error 0x8d
> [23962.139470] sas: Enter sas_scsi_recover_host busy: 1 failed: 1
> [23962.147897] ata24.00: exception Emask 0x0 SAct 0x800 SErr 0x0 action 0x0
> [23962.156253] ata24.00: failed command: RECEIVE FPDMA QUEUED
> [23962.163307] ata24.00: cmd 65/01:00:00:00:0c/00:02:23:01:00/40 tag 11
> ncq dma 512 in
> [23962.163307]          res 43/10:00:00:00:00/00:00:00:00:00/00 Emask
> 0x480 (invalid argument) <F>
> 
> Good here, correct error...
> 
> [23962.182879] ata24.00: status: { DRDY SENSE ERR }
> [23962.189100] ata24.00: error: { IDNF }
> 
> ... but I need to look at this in libata. Getting the same with AHCI.

ok, in a way that is a relief.

> 
> [23962.215456] ata24.00: configured for UDMA/133
> [23962.221469] ata24: EH complete
> [23962.226056] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1
> tries: 1
> 
> That event 0x26 and the "task or dev null" are obscure. No clue, but
> they look wrong.

As above, I doubt the implementation of that code. It allocates a domain 
device itself, which is not how things should be done.

> 
> Overall, this driver is by default way too verbose I think. I would
> prefer to reduce the above to something like:
> 
> [23962.095526] sas: sas_ata_task_done: SAS error 0x8d
> [23962.139470] sas: Enter sas_scsi_recover_host busy: 1 failed: 1
> [23962.147897] ata24.00: exception Emask 0x0 SAct 0x800 SErr 0x0 action 0x0
> [23962.156253] ata24.00: failed command: RECEIVE FPDMA QUEUED
> [23962.163307] ata24.00: cmd 65/01:00:00:00:0c/00:02:23:01:00/40 tag 11
> ncq dma 512 in
> [23962.163307]          res 43/10:00:00:00:00/00:00:00:00:00/00 Emask
> 0x480 (invalid argument) <F>
> [23962.182879] ata24.00: status: { DRDY SENSE ERR }
> [23962.189100] ata24.00: error: { IDNF }
> [23962.215456] ata24.00: configured for UDMA/133
> [23962.221469] ata24: EH complete
> [23962.226056] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1
> tries: 1
> 
> That is, by default, remove most FAIL messages, changing them to DEV
> level. The same test with a SAS drive is totally silent, as it should
> be, since the command error is not a fatal one.
> 
> But not touching this for now. I first want the series to be queued.
> 

In response to later message:

On 17/02/2022 11:47, Damien Le Moal wrote:
 >> Anyway, that can be fixed, but I still have the hang:(
 > One thought: could it be bug with the DMA engine of your platform ?
 > What if you simply run an fio workload on the disk directly (no FS),
 > hang happens too ?

I did try fio a while ago and it worked ok. Strange.

 > For the bugs I fixed with my series, it was the reverse: fio worked
 > great but everything broke down when I ran libzbc tests...
 >

I got a few more things to try but need to make progress on my libsas 
series now ...

Thanks,
John
Damien Le Moal Feb. 18, 2022, 3:12 a.m. UTC | #13
On 2/17/22 21:49, John Garry wrote:
>>>>
>>>
>>> I figured out what is happening here and it does not help solve the
>>> mystery of my hang.
>>>
>>> Here's the steps:
>>> a. scsi_cmnd times out
>>> b. scsi error handling kicks in
>>> c. libsas attempts to abort the task, which fails
>>> d. libsas then tries IT nexus reset, which passes
>>>    - libsas assumes the scsi_cmnd has completed with failure
>>> e. error handling concludes
>>> f. scsi midlayer then retries the same scsi_cmnd
>>> g. since we did not "free" associated ccb earlier or dma unmap at d.,
>>> the dma unmap on the same scsi_cmnd causes the warn
>>>
>>> So the LLD should really free resources and dma unmap at point IT nexus
>>> reset completes, but it doesn't. I think in certain conditions dma map
>>> should not be done twice.
>>>
>>> Anyway, that can be fixed, but I still have the hang :(
>>
>> I guess (a) (cmd timeout) is only the symptom of the hang ? That is, the
>> hang is causing the timeout ?
> 
> Right
> 
>> It may be good to turn on scsi trace to see if the command was only
>> partially done, or not at all, or if it is a non-data command.
>>
> 
> I could do that. But I think that the command just does not complete. Or 
> maybe it is missed.
> 
>> And speaking of errors, I am currently testing v4 of my series and
>> noticed some weird things in the error handling. E.g., with one of the
>> test executing a report zones command with an LBA out of range, I see this:
>>
>> [23962.027105] pm80xx0:: mpi_sata_event  2788:SATA EVENT 0x23
>> [23962.036099] pm80xx0:: pm80xx_send_read_log  1863:Executing read log end
>>
> 
> I don't know why the driver even does this, but the implementation of 
> pm80xx_send_read_log() is questionable. It would be nice to not see ATA 
> code in the driver like this.

I have been thinking about this one. We should be able to avoid this
read log and rely on libata-eh to do it. All we should need to do is an
internal abort all without completing the commands. libata will do the
read log and retry the failed command (if appropriate) as well as resend
all the other aborted NCQ commands.

Need to look at how other libsas drivers are handling this. But the
above should work, I think.

Not adding this to the current series though :) That will be for another
patch series.
John Garry Feb. 18, 2022, 11:21 a.m. UTC | #14
On 18/02/2022 03:12, Damien Le Moal wrote:
>> I don't know why the driver even does this, but the implementation of
>> pm80xx_send_read_log() is questionable. It would be nice to not see ATA
>> code in the driver like this.
> I have been thinking about this one. We should be able to avoid this
> read log and rely on libata-eh to do it. All we should need to do is an
> internal abort all without completing the commands. libata will do the
> read log and resend the retry for the failed command (if appropriate)
> and all the other aborted NCQ commands.
> 
> Need to look at how other libsas drivers are handling this. But the
> above should work, I think.

I just assumed that this was something specific to that HW as no other 
libsas driver does this manually.

> 
> Not adding this to the current series though:)  That will be for another
> patch series.
Please note that the Hisi SAS driver also issues an ATA softreset as 
part of the disk reset procedure - it would be nice to have that sort of 
code in libata also; I see softreset code in libahci which maybe could 
be refactored. I can check further when I get a chance.

Thanks,
John

Patch

diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
index cbcb56b56a2f..d08cbf3ef879 100644
--- a/drivers/scsi/pm8001/pm8001_sas.c
+++ b/drivers/scsi/pm8001/pm8001_sas.c
@@ -381,54 +381,53 @@  static int pm8001_task_exec(struct sas_task *task,
 	struct pm8001_device *pm8001_dev;
 	struct pm8001_port *port = NULL;
 	struct sas_task *t = task;
+	struct task_status_struct *ts = &t->task_status;
 	struct pm8001_ccb_info *ccb;
 	u32 tag = 0xdeadbeef, rc = 0, n_elem = 0;
 	unsigned long flags = 0;
 	enum sas_protocol task_proto = t->task_proto;
 
 	if (!dev->port) {
-		struct task_status_struct *tsm = &t->task_status;
-		tsm->resp = SAS_TASK_UNDELIVERED;
-		tsm->stat = SAS_PHY_DOWN;
+		ts->resp = SAS_TASK_UNDELIVERED;
+		ts->stat = SAS_PHY_DOWN;
 		if (dev->dev_type != SAS_SATA_DEV)
 			t->task_done(t);
 		return 0;
 	}
+
 	pm8001_ha = pm8001_find_ha_by_dev(task->dev);
 	if (pm8001_ha->controller_fatal_error) {
-		struct task_status_struct *ts = &t->task_status;
-
 		ts->resp = SAS_TASK_UNDELIVERED;
 		t->task_done(t);
 		return 0;
 	}
+
 	pm8001_dbg(pm8001_ha, IO, "pm8001_task_exec device\n");
+
 	spin_lock_irqsave(&pm8001_ha->lock, flags);
+
 	do {
 		dev = t->dev;
 		pm8001_dev = dev->lldd_dev;
 		port = &pm8001_ha->port[sas_find_local_port_id(dev)];
+
 		if (DEV_IS_GONE(pm8001_dev) || !port->port_attached) {
+			ts->resp = SAS_TASK_UNDELIVERED;
+			ts->stat = SAS_PHY_DOWN;
 			if (sas_protocol_ata(task_proto)) {
-				struct task_status_struct *ts = &t->task_status;
-				ts->resp = SAS_TASK_UNDELIVERED;
-				ts->stat = SAS_PHY_DOWN;
-
 				spin_unlock_irqrestore(&pm8001_ha->lock, flags);
 				t->task_done(t);
 				spin_lock_irqsave(&pm8001_ha->lock, flags);
-				continue;
 			} else {
-				struct task_status_struct *ts = &t->task_status;
-				ts->resp = SAS_TASK_UNDELIVERED;
-				ts->stat = SAS_PHY_DOWN;
 				t->task_done(t);
-				continue;
 			}
+			continue;
 		}
+
 		rc = pm8001_tag_alloc(pm8001_ha, &tag);
 		if (rc)
 			goto err_out;
+
 		ccb = &pm8001_ha->ccb_info[tag];
 
 		if (!sas_protocol_ata(task_proto)) {