[0/2] iscsit/isert deadlock prevention under heavy I/O

Message ID 20220311175713.2344960-1-djeffery@redhat.com (mailing list archive)

Message

David Jeffery March 11, 2022, 5:57 p.m. UTC
With fast InfiniBand networks and RDMA through isert, the isert version of
an iSCSI target can get itself into a deadlock caused by the gap between
when max_cmd_sn updates are pushed to the initiator and when commands are
fully released after RDMA completes.

iscsit preallocates a limited number of iscsi_cmd structs used for any
commands from the initiator. While the iSCSI command window would normally
be expected to limit the number used by normal SCSI commands, isert can
exceed this limit with commands awaiting final completion. max_cmd_sn gets
incremented and pushed to the initiator when sending the target's final
response, but the iscsi_cmd won't be freed for reuse until all RDMA
is acknowledged as complete.

This allows more new commands to come in even as older commands are not yet
released. With enough commands on the initiator wanting to be sent, this can
result in all iscsi_cmd structs being allocated and used for SCSI commands.

Once all are allocated, isert can deadlock when another new command is
received: its receive processing waits for an iscsi_cmd to become
available, but that wait also stalls processing of the completions which
would release an iscsi_cmd.

This small patch series prevents this issue by altering when and how
max_cmd_sn changes are reported to the initiator for isert. The update is
delayed until iscsi_cmd release instead of being sent with the final
response.

To prevent failure or long delays in informing the initiator of changes
to max_cmd_sn, a NOPIN is used to inform the initiator should the
difference between the internal max_cmd_sn and the value last passed to
the initiator grow too large.

David Jeffery (2):
  isert: support for unsolicited NOPIN with no response.
  iscsit: increment max_cmd_sn for isert on command release

 drivers/infiniband/ulp/isert/ib_isert.c    | 11 ++++++-
 drivers/target/iscsi/iscsi_target.c        | 18 +++++------
 drivers/target/iscsi/iscsi_target_device.c | 35 +++++++++++++++++++++-
 drivers/target/iscsi/iscsi_target_login.c  |  1 +
 drivers/target/iscsi/iscsi_target_util.c   |  5 +++-
 drivers/target/iscsi/iscsi_target_util.h   |  1 +
 include/target/iscsi/iscsi_target_core.h   |  8 +++++
 include/target/iscsi/iscsi_transport.h     |  1 +
 8 files changed, 68 insertions(+), 12 deletions(-)

Comments

Laurence Oberman March 11, 2022, 7:08 p.m. UTC | #1
On Fri, 2022-03-11 at 12:57 -0500, David Jeffery wrote:
> With fast infiniband networks and rdma through isert, the isert
> version of
> an iSCSI target can get itself into a deadlock condition from when
> max_cmd_sn updates are pushed to the client versus when commands are
> fully
> released after rdma completes.
> 
> iscsit preallocates a limited number of iscsi_cmd structs used for
> any
> commands from the initiator. While the iscsi window would normally be
> expected to limit the number used by normal SCSI commands, isert can
> exceed
> this limit with commands waiting finally completion. max_cmd_sn gets
> incremented and pushed to the client on sending the target's final
> response, but the iscsi_cmd won't be freed for reuse until after all
> rdma
> is acknowledged as complete.
> 
> This allows more new commands to come in even as older commands are
> not yet
> released. With enough commands on the initiator wanting to be sent,
> this can
> result in all iscsi_cmd structs being allocated and used for SCSI
> commands.
> 
> And once all are allocated, isert can deadlock when another new
> command is
> received. Its receive processing waits for an iscsi_cmd to become
> available.
> But this also stalls processing of the completions which would result
> in
> releasing an iscsi_cmd, resulting in a deadlock.
> 
> This small patch series prevents this issue by altering when and how
> max_cmd_sn changes are reported to the initiator for isert. It gets
> delayed
> until iscsi_cmd release instead of when sending a final response.
> 
> To prevent failure or large delays for informing the initiator of
> changes to
> max_cmd_sn, NOPIN is used as a method to inform the initiator should
> the
> difference between internal max_cmd_sn and what has been passed to
> the
> initiator grow too large.
> 
> David Jeffery (2):
>   isert: support for unsolicited NOPIN with no response.
>   iscsit: increment max_cmd_sn for isert on command release
> 
>  drivers/infiniband/ulp/isert/ib_isert.c    | 11 ++++++-
>  drivers/target/iscsi/iscsi_target.c        | 18 +++++------
>  drivers/target/iscsi/iscsi_target_device.c | 35
> +++++++++++++++++++++-
>  drivers/target/iscsi/iscsi_target_login.c  |  1 +
>  drivers/target/iscsi/iscsi_target_util.c   |  5 +++-
>  drivers/target/iscsi/iscsi_target_util.h   |  1 +
>  include/target/iscsi/iscsi_target_core.h   |  8 +++++
>  include/target/iscsi/iscsi_transport.h     |  1 +
>  8 files changed, 68 insertions(+), 12 deletions(-)
> 

This patch has had exhaustive testing in our lab and finally at a
customer site. With 40Gb FDR we could not reproduce this issue; when we
moved to 100Gb EDR it showed up. It has been tested extensively for many
days on two separate installations.

The patch corrected all the stalls and problems seen. Thanks, David, for
sending this.

Regards
Laurence Oberman
Max Gurtovoy March 13, 2022, 9:59 a.m. UTC | #2
Hi David,

Thanks for the report.

On 3/11/2022 7:57 PM, David Jeffery wrote:
> With fast infiniband networks and rdma through isert, the isert version of
> an iSCSI target can get itself into a deadlock condition from when
> max_cmd_sn updates are pushed to the client versus when commands are fully
> released after rdma completes.
>
> iscsit preallocates a limited number of iscsi_cmd structs used for any
> commands from the initiator. While the iscsi window would normally be
> expected to limit the number used by normal SCSI commands, isert can exceed
> this limit with commands waiting finally completion. max_cmd_sn gets
> incremented and pushed to the client on sending the target's final
> response, but the iscsi_cmd won't be freed for reuse until after all rdma
> is acknowledged as complete.

Please check how we fixed that in NVMf in Sagi's commit:

nvmet-rdma: fix possible bogus dereference under heavy load (commit: 
8407879c4e0d77)

Maybe this can be done in isert and would solve this problem in a simpler
way.

Is it necessary to change max_cmd_sn?


>
> This allows more new commands to come in even as older commands are not yet
> released. With enough commands on the initiator wanting to be sent, this can
> result in all iscsi_cmd structs being allocated and used for SCSI commands.
>
> And once all are allocated, isert can deadlock when another new command is
> received. Its receive processing waits for an iscsi_cmd to become available.
> But this also stalls processing of the completions which would result in
> releasing an iscsi_cmd, resulting in a deadlock.
>
> This small patch series prevents this issue by altering when and how
> max_cmd_sn changes are reported to the initiator for isert. It gets delayed
> until iscsi_cmd release instead of when sending a final response.
>
> To prevent failure or large delays for informing the initiator of changes to
> max_cmd_sn, NOPIN is used as a method to inform the initiator should the
> difference between internal max_cmd_sn and what has been passed to the
> initiator grow too large.
>
> David Jeffery (2):
>    isert: support for unsolicited NOPIN with no response.
>    iscsit: increment max_cmd_sn for isert on command release
>
>   drivers/infiniband/ulp/isert/ib_isert.c    | 11 ++++++-
>   drivers/target/iscsi/iscsi_target.c        | 18 +++++------
>   drivers/target/iscsi/iscsi_target_device.c | 35 +++++++++++++++++++++-
>   drivers/target/iscsi/iscsi_target_login.c  |  1 +
>   drivers/target/iscsi/iscsi_target_util.c   |  5 +++-
>   drivers/target/iscsi/iscsi_target_util.h   |  1 +
>   include/target/iscsi/iscsi_target_core.h   |  8 +++++
>   include/target/iscsi/iscsi_transport.h     |  1 +
>   8 files changed, 68 insertions(+), 12 deletions(-)
>
David Jeffery March 14, 2022, 1:57 p.m. UTC | #3
On Sun, Mar 13, 2022 at 5:59 AM Max Gurtovoy <mgurtovoy@nvidia.com> wrote:
>
> Hi David,
>
> thanks for the report.
>
> Please check how we fixed that in NVMf in Sagi's commit:
>
> nvmet-rdma: fix possible bogus dereference under heavy load (commit:
> 8407879c4e0d77)
>
> Maybe this can be done in isert and will solve this problem in a simpler
> way.
>
> is it necessary to change max_cmd_sn ?
>
>

Hello,

Sure, there are alternative methods which could fix this immediate
issue; e.g., we could make the command structs for SCSI commands come
from a mempool. Is there a particular reason you don't want to modify
the max_cmd_sn behavior?

I didn't do something like this as it seems to me to go against the
intent of the design. It makes the iscsi window mostly meaningless in
some conditions and complicates any allocation path since it now must
gracefully and sanely handle an iscsi_cmd/isert_cmd not existing. I
assume special commands like task-management, logouts, and pings would
need a separate allocation source to keep from being dropped under
memory load.

David Jeffery
Max Gurtovoy March 14, 2022, 2:52 p.m. UTC | #4
On 3/14/2022 3:57 PM, David Jeffery wrote:
> On Sun, Mar 13, 2022 at 5:59 AM Max Gurtovoy <mgurtovoy@nvidia.com> wrote:
>> Hi David,
>>
>> thanks for the report.
>>
>> Please check how we fixed that in NVMf in Sagi's commit:
>>
>> nvmet-rdma: fix possible bogus dereference under heavy load (commit:
>> 8407879c4e0d77)
>>
>> Maybe this can be done in isert and will solve this problem in a simpler
>> way.
>>
>> is it necessary to change max_cmd_sn ?
>>
>>
> Hello,
>
> Sure, there are alternative methods which could fix this immediate
> issue. e.g. We could make the command structs for scsi commands get
> allocated from a mempool. Is there a particular reason you don't want
> to do anything to modify max_cmd_sn behavior?

According to the description, the command was parsed successfully and its
response sent to the initiator.

Why do we need to change the window? It's just a race of putting the
context back into the pool.

And this race is rare.


>
> I didn't do something like this as it seems to me to go against the
> intent of the design. It makes the iscsi window mostly meaningless in
> some conditions and complicates any allocation path since it now must
> gracefully and sanely handle an iscsi_cmd/isert_cmd not existing. I
> assume special commands like task-management, logouts, and pings would
> need a separate allocation source to keep from being dropped under
> memory load.

It won't be dropped. It would be allocated dynamically and freed
(instead of being put back into the pool).


> David Jeffery
>
David Jeffery March 14, 2022, 3:55 p.m. UTC | #5
On Mon, Mar 14, 2022 at 10:52 AM Max Gurtovoy <mgurtovoy@nvidia.com> wrote:
>
>
> On 3/14/2022 3:57 PM, David Jeffery wrote:
> > On Sun, Mar 13, 2022 at 5:59 AM Max Gurtovoy <mgurtovoy@nvidia.com> wrote:
> >> Hi David,
> >>
> >> thanks for the report.
> >>
> >> Please check how we fixed that in NVMf in Sagi's commit:
> >>
> >> nvmet-rdma: fix possible bogus dereference under heavy load (commit:
> >> 8407879c4e0d77)
> >>
> >> Maybe this can be done in isert and will solve this problem in a simpler
> >> way.
> >>
> >> is it necessary to change max_cmd_sn ?
> >>
> >>
> > Hello,
> >
> > Sure, there are alternative methods which could fix this immediate
> > issue. e.g. We could make the command structs for scsi commands get
> > allocated from a mempool. Is there a particular reason you don't want
> > to do anything to modify max_cmd_sn behavior?
>
> according to the description the command was parsed successful and sent
> to the initiator.
>

Yes.

> Why do we need to change the window ? it's just a race of putting the
> context back to the pool.
>
> And this race is rare.
>

Sure, it's going to be rare. Systems using isert targets with
infiniband are going to be naturally rare. It's part of why I left the
max_cmd_sn behavior untouched for non-isert iscsit since they seem to
be fine as is. But it's easily and regularly triggered by some systems
which use isert, so worth fixing.

> >
> > I didn't do something like this as it seems to me to go against the
> > intent of the design. It makes the iscsi window mostly meaningless in
> > some conditions and complicates any allocation path since it now must
> > gracefully and sanely handle an iscsi_cmd/isert_cmd not existing. I
> > assume special commands like task-management, logouts, and pings would
> > need a separate allocation source to keep from being dropped under
> > memory load.
>
> it won't be dropped. It would be allocated dynamically and freed
> (instead of putting it back to the pool).
>

If it waits indefinitely for an allocation it ends up with a variation
of the original problem under memory pressure. If it waits for
allocation on isert receive, then receive stalls under memory pressure
and won't process the completions which would have released the other
iscsi_cmd structs just needing final acknowledgement.

David Jeffery
Laurence Oberman March 14, 2022, 5:40 p.m. UTC | #6
On Mon, 2022-03-14 at 11:55 -0400, David Jeffery wrote:
> On Mon, Mar 14, 2022 at 10:52 AM Max Gurtovoy <mgurtovoy@nvidia.com>
> wrote:
> > 
> > 
> > On 3/14/2022 3:57 PM, David Jeffery wrote:
> > > On Sun, Mar 13, 2022 at 5:59 AM Max Gurtovoy <
> > > mgurtovoy@nvidia.com> wrote:
> > > > Hi David,
> > > > 
> > > > thanks for the report.
> > > > 
> > > > Please check how we fixed that in NVMf in Sagi's commit:
> > > > 
> > > > nvmet-rdma: fix possible bogus dereference under heavy load
> > > > (commit:
> > > > 8407879c4e0d77)
> > > > 
> > > > Maybe this can be done in isert and will solve this problem in
> > > > a simpler
> > > > way.
> > > > 
> > > > is it necessary to change max_cmd_sn ?
> > > > 
> > > > 
> > > 
> > > Hello,
> > > 
> > > Sure, there are alternative methods which could fix this
> > > immediate
> > > issue. e.g. We could make the command structs for scsi commands
> > > get
> > > allocated from a mempool. Is there a particular reason you don't
> > > want
> > > to do anything to modify max_cmd_sn behavior?
> > 
> > according to the description the command was parsed successful and
> > sent
> > to the initiator.
> > 
> 
> Yes.
> 
> > Why do we need to change the window ? it's just a race of putting
> > the
> > context back to the pool.
> > 
> > And this race is rare.
> > 
> 
> Sure, it's going to be rare. Systems using isert targets with
> infiniband are going to be naturally rare. It's part of why I left
> the
> max_cmd_sn behavior untouched for non-isert iscsit since they seem to
> be fine as is. But it's easily and regularly triggered by some
> systems
> which use isert, so worth fixing.
> 
> > > 
> > > I didn't do something like this as it seems to me to go against
> > > the
> > > intent of the design. It makes the iscsi window mostly
> > > meaningless in
> > > some conditions and complicates any allocation path since it now
> > > must
> > > gracefully and sanely handle an iscsi_cmd/isert_cmd not existing.
> > > I
> > > assume special commands like task-management, logouts, and pings
> > > would
> > > need a separate allocation source to keep from being dropped
> > > under
> > > memory load.
> > 
> > it won't be dropped. It would be allocated dynamically and freed
> > (instead of putting it back to the pool).
> > 
> 
> If it waits indefinitely for an allocation it ends up with a
> variation
> of the original problem under memory pressure. If it waits for
> allocation on isert receive, then receive stalls under memory
> pressure
> and won't process the completions which would have released the other
> iscsi_cmd structs just needing final acknowledgement.
> 
> David Jeffery
> 

Folks, this is a pending issue stopping a customer from making progress.
They run Oracle and very high workloads on EDR 100, so David fixed this
focusing on the needs of the isert target changes.

Are you able to give us technical reasons why David's patch is not
suitable and why he would have to start from scratch?

We literally spent weeks on this and built another special lab for
fully testing EDR 100.
This issue was pending in a BZ for some time and Mellanox had eyes on it
then, but this latest suggestion was never put forward in that BZ to us.

Sincerely
Laurence
Max Gurtovoy March 16, 2022, 10:38 a.m. UTC | #7
On 3/14/2022 7:40 PM, Laurence Oberman wrote:
> On Mon, 2022-03-14 at 11:55 -0400, David Jeffery wrote:
>> On Mon, Mar 14, 2022 at 10:52 AM Max Gurtovoy <mgurtovoy@nvidia.com>
>> wrote:
>>>
>>> On 3/14/2022 3:57 PM, David Jeffery wrote:
>>>> On Sun, Mar 13, 2022 at 5:59 AM Max Gurtovoy <
>>>> mgurtovoy@nvidia.com> wrote:
>>>>> Hi David,
>>>>>
>>>>> thanks for the report.
>>>>>
>>>>> Please check how we fixed that in NVMf in Sagi's commit:
>>>>>
>>>>> nvmet-rdma: fix possible bogus dereference under heavy load
>>>>> (commit:
>>>>> 8407879c4e0d77)
>>>>>
>>>>> Maybe this can be done in isert and will solve this problem in
>>>>> a simpler
>>>>> way.
>>>>>
>>>>> is it necessary to change max_cmd_sn ?
>>>>>
>>>>>
>>>> Hello,
>>>>
>>>> Sure, there are alternative methods which could fix this
>>>> immediate
>>>> issue. e.g. We could make the command structs for scsi commands
>>>> get
>>>> allocated from a mempool. Is there a particular reason you don't
>>>> want
>>>> to do anything to modify max_cmd_sn behavior?
>>> according to the description the command was parsed successful and
>>> sent
>>> to the initiator.
>>>
>> Yes.
>>
>>> Why do we need to change the window ? it's just a race of putting
>>> the
>>> context back to the pool.
>>>
>>> And this race is rare.
>>>
>> Sure, it's going to be rare. Systems using isert targets with
>> infiniband are going to be naturally rare. It's part of why I left
>> the
>> max_cmd_sn behavior untouched for non-isert iscsit since they seem to
>> be fine as is. But it's easily and regularly triggered by some
>> systems
>> which use isert, so worth fixing.
>>
>>>> I didn't do something like this as it seems to me to go against
>>>> the
>>>> intent of the design. It makes the iscsi window mostly
>>>> meaningless in
>>>> some conditions and complicates any allocation path since it now
>>>> must
>>>> gracefully and sanely handle an iscsi_cmd/isert_cmd not existing.
>>>> I
>>>> assume special commands like task-management, logouts, and pings
>>>> would
>>>> need a separate allocation source to keep from being dropped
>>>> under
>>>> memory load.
>>> it won't be dropped. It would be allocated dynamically and freed
>>> (instead of putting it back to the pool).
>>>
>> If it waits indefinitely for an allocation it ends up with a
>> variation
>> of the original problem under memory pressure. If it waits for
>> allocation on isert receive, then receive stalls under memory
>> pressure
>> and won't process the completions which would have released the other
>> iscsi_cmd structs just needing final acknowledgement.

If your system is under such memory pressure that you can't allocate a few
bytes for an isert response, the silent drop of the command is your
smallest problem. You need to keep the system from crashing, and my
suggestion does that.

>>
>> David Jeffery
>>
> Folks this is a pending issue stopping a customer from making progress.
> They run Oracle and very high workloads on EDR 100 so David fixed this
> fosusing on the needs of the isert target changes etc.
>
> Are you able to give us technical reasons why David's patch is not
> suitable and why we he would have to start from scratch.

You shouldn't start from scratch. You did all the investigation and the 
debugging already.

Coding a solution is the small part after you understand the root cause.

>
> We literally spent weeks on this and built another special lab for
> fully testing EDR 100.
> This issue was pending in a BZ for some time and Mellnox had eyes on it
> then but this latest suggestion was never put forward in that BZ to us.

Mellanox maintainers saw this issue a few days before you sent it
upstream. I suggested sending it upstream and having a discussion here,
since it has nothing to do with Mellanox adapters or the Mellanox SW stack
MLNX_OFED.

Our job as maintainers and reviewers in the community is to see the big
picture and suggest solutions that are not always the same as those posted
to the mailing list.

>
> Sincerely
> Laurence
>
Laurence Oberman March 16, 2022, 1:07 p.m. UTC | #8
On Wed, 2022-03-16 at 12:38 +0200, Max Gurtovoy wrote:
> On 3/14/2022 7:40 PM, Laurence Oberman wrote:
> > On Mon, 2022-03-14 at 11:55 -0400, David Jeffery wrote:
> > > On Mon, Mar 14, 2022 at 10:52 AM Max Gurtovoy <
> > > mgurtovoy@nvidia.com>
> > > wrote:
> > > > 
> > > > On 3/14/2022 3:57 PM, David Jeffery wrote:
> > > > > On Sun, Mar 13, 2022 at 5:59 AM Max Gurtovoy <
> > > > > mgurtovoy@nvidia.com> wrote:
> > > > > > Hi David,
> > > > > > 
> > > > > > thanks for the report.
> > > > > > 
> > > > > > Please check how we fixed that in NVMf in Sagi's commit:
> > > > > > 
> > > > > > nvmet-rdma: fix possible bogus dereference under heavy load
> > > > > > (commit:
> > > > > > 8407879c4e0d77)
> > > > > > 
> > > > > > Maybe this can be done in isert and will solve this problem
> > > > > > in
> > > > > > a simpler
> > > > > > way.
> > > > > > 
> > > > > > is it necessary to change max_cmd_sn ?
> > > > > > 
> > > > > > 
> > > > > 
> > > > > Hello,
> > > > > 
> > > > > Sure, there are alternative methods which could fix this
> > > > > immediate
> > > > > issue. e.g. We could make the command structs for scsi
> > > > > commands
> > > > > get
> > > > > allocated from a mempool. Is there a particular reason you
> > > > > don't
> > > > > want
> > > > > to do anything to modify max_cmd_sn behavior?
> > > > 
> > > > according to the description the command was parsed successful
> > > > and
> > > > sent
> > > > to the initiator.
> > > > 
> > > 
> > > Yes.
> > > 
> > > > Why do we need to change the window ? it's just a race of
> > > > putting
> > > > the
> > > > context back to the pool.
> > > > 
> > > > And this race is rare.
> > > > 
> > > 
> > > Sure, it's going to be rare. Systems using isert targets with
> > > infiniband are going to be naturally rare. It's part of why I
> > > left
> > > the
> > > max_cmd_sn behavior untouched for non-isert iscsit since they
> > > seem to
> > > be fine as is. But it's easily and regularly triggered by some
> > > systems
> > > which use isert, so worth fixing.
> > > 
> > > > > I didn't do something like this as it seems to me to go
> > > > > against
> > > > > the
> > > > > intent of the design. It makes the iscsi window mostly
> > > > > meaningless in
> > > > > some conditions and complicates any allocation path since it
> > > > > now
> > > > > must
> > > > > gracefully and sanely handle an iscsi_cmd/isert_cmd not
> > > > > existing.
> > > > > I
> > > > > assume special commands like task-management, logouts, and
> > > > > pings
> > > > > would
> > > > > need a separate allocation source to keep from being dropped
> > > > > under
> > > > > memory load.
> > > > 
> > > > it won't be dropped. It would be allocated dynamically and
> > > > freed
> > > > (instead of putting it back to the pool).
> > > > 
> > > 
> > > If it waits indefinitely for an allocation it ends up with a
> > > variation
> > > of the original problem under memory pressure. If it waits for
> > > allocation on isert receive, then receive stalls under memory
> > > pressure
> > > and won't process the completions which would have released the
> > > other
> > > iscsi_cmd structs just needing final acknowledgement.
> 
> If your system is under such memory pressure can you can't allocate
> few 
> bytes for isert response, the silent drop
> 
> of the command is your smallest problem. You need to keep the system 
> from crashing. And we do that in my suggestion.
> 
> > > 
> > > David Jeffery
> > > 
> > 
> > Folks this is a pending issue stopping a customer from making
> > progress.
> > They run Oracle and very high workloads on EDR 100 so David fixed
> > this
> > fosusing on the needs of the isert target changes etc.
> > 
> > Are you able to give us technical reasons why David's patch is not
> > suitable and why we he would have to start from scratch.
> 
> You shouldn't start from scratch. You did all the investigation and
> the 
> debugging already.
> 
> Coding a solution is the small part after you understand the root
> cause.
> 
> > 
> > We literally spent weeks on this and built another special lab for
> > fully testing EDR 100.
> > This issue was pending in a BZ for some time and Mellnox had eyes
> > on it
> > then but this latest suggestion was never put forward in that BZ to
> > us.
> 
> Mellanox maintainers saw this issue few days before you sent it 
> upstream. I suggested sending it upstream and have a discussion here 
> since it has nothing to do with Mellanox adapters and Mellanox SW
> stack 
> MLNX_OFED.
> 
> Our job as maintainers and reviewers in the community is to see the
> big 
> picture and suggest solutions that not always same as posted in the 
> mailing list.
> 
> > 
> > Sincerely
> > Laurence
> > 
> 
> 

Hi Max,

The issue was reported with the OFED stack at the customer, which is why
we opened the BZ to get the Mellanox partner engineers engaged.
We then had them check whether it also existed with the inbox stack, which
it did.
Sergey worked a little on the issue but did not have the same suggestion
you provided, and asked David for help.

We will be happy to take a fix done your way. May I suggest that the
engineers who worked on this the most and understand the code best propose
a fix your way?

I will take on the responsibility of doing all the kernel building and
testing.

Regards
Laurence
Sagi Grimberg March 16, 2022, 2:39 p.m. UTC | #9
>>>>>>> Hi David,
>>>>>>>
>>>>>>> thanks for the report.
>>>>>>>
>>>>>>> Please check how we fixed that in NVMf in Sagi's commit:
>>>>>>>
>>>>>>> nvmet-rdma: fix possible bogus dereference under heavy load
>>>>>>> (commit:
>>>>>>> 8407879c4e0d77)
>>>>>>>
>>>>>>> Maybe this can be done in isert and will solve this problem
>>>>>>> in
>>>>>>> a simpler
>>>>>>> way.
>>>>>>>
>>>>>>> is it necessary to change max_cmd_sn ?
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> Sure, there are alternative methods which could fix this
>>>>>> immediate
>>>>>> issue. e.g. We could make the command structs for scsi
>>>>>> commands
>>>>>> get
>>>>>> allocated from a mempool. Is there a particular reason you
>>>>>> don't
>>>>>> want
>>>>>> to do anything to modify max_cmd_sn behavior?
>>>>>
>>>>> according to the description the command was parsed successful
>>>>> and
>>>>> sent
>>>>> to the initiator.
>>>>>
>>>>
>>>> Yes.
>>>>
>>>>> Why do we need to change the window ? it's just a race of
>>>>> putting
>>>>> the
>>>>> context back to the pool.
>>>>>
>>>>> And this race is rare.
>>>>>
>>>>
>>>> Sure, it's going to be rare. Systems using isert targets with
>>>> infiniband are going to be naturally rare. It's part of why I
>>>> left
>>>> the
>>>> max_cmd_sn behavior untouched for non-isert iscsit since they
>>>> seem to
>>>> be fine as is. But it's easily and regularly triggered by some
>>>> systems
>>>> which use isert, so worth fixing.
>>>>
>>>>>> I didn't do something like this as it seems to me to go
>>>>>> against
>>>>>> the
>>>>>> intent of the design. It makes the iscsi window mostly
>>>>>> meaningless in
>>>>>> some conditions and complicates any allocation path since it
>>>>>> now
>>>>>> must
>>>>>> gracefully and sanely handle an iscsi_cmd/isert_cmd not
>>>>>> existing.
>>>>>> I
>>>>>> assume special commands like task-management, logouts, and
>>>>>> pings
>>>>>> would
>>>>>> need a separate allocation source to keep from being dropped
>>>>>> under
>>>>>> memory load.
>>>>>
>>>>> it won't be dropped. It would be allocated dynamically and
>>>>> freed
>>>>> (instead of putting it back to the pool).
>>>>>
>>>>
>>>> If it waits indefinitely for an allocation it ends up with a
>>>> variation
>>>> of the original problem under memory pressure. If it waits for
>>>> allocation on isert receive, then receive stalls under memory
>>>> pressure
>>>> and won't process the completions which would have released the
>>>> other
>>>> iscsi_cmd structs just needing final acknowledgement.
>>
>> If your system is under such memory pressure can you can't allocate
>> few
>> bytes for isert response, the silent drop
>>
>> of the command is your smallest problem. You need to keep the system
>> from crashing. And we do that in my suggestion.
>>
>>>>
>>>> David Jeffery
>>>>
>>>
>>> Folks this is a pending issue stopping a customer from making
>>> progress.
>>> They run Oracle and very high workloads on EDR 100 so David fixed
>>> this
>>> fosusing on the needs of the isert target changes etc.
>>>
>>> Are you able to give us technical reasons why David's patch is not
>>> suitable and why we he would have to start from scratch.
>>
>> You shouldn't start from scratch. You did all the investigation and
>> the
>> debugging already.
>>
>> Coding a solution is the small part after you understand the root
>> cause.
>>
>>>
>>> We literally spent weeks on this and built another special lab for
>>> fully testing EDR 100.
>>> This issue was pending in a BZ for some time and Mellnox had eyes
>>> on it
>>> then but this latest suggestion was never put forward in that BZ to
>>> us.
>>
>> Mellanox maintainers saw this issue few days before you sent it
>> upstream. I suggested sending it upstream and have a discussion here
>> since it has nothing to do with Mellanox adapters and Mellanox SW
>> stack
>> MLNX_OFED.
>>
>> Our job as maintainers and reviewers in the community is to see the
>> big
>> picture and suggest solutions that not always same as posted in the
>> mailing list.
>>
>>>
>>> Sincerely
>>> Laurence
>>>
>>
>>
> 
> Hi Max

Hey,

> The issue was reported with the OFED stack at the customer, so its why
> we opened the BZ to get the Mallnox partners engineers engaged.
> We had them then see if it also existed with the inbox stack which it
> did.
> Sergey worked a little bit on the issue but did not have the same
> suggestion you provivided and asked David for help.

I think you can move the corporate discussions offline.

> We will be happy to take the fix you propose doing it your way. May I
> that the engineewrs to work on this the most and understand the code
> best propose a fix your way.

I tend to agree with Max. I looked into the patch, and I can't say that
we know for a fact that incrementing the cmdsn after releasing the
iscsi cmd will not introduce anything else (although it looks fine at
a high level).

Is there any measurable performance implication?

Max, doing dynamic allocation is also a valid fix.
Laurence Oberman March 16, 2022, 3:26 p.m. UTC | #10
On Wed, 2022-03-16 at 16:39 +0200, Sagi Grimberg wrote:
> Is there any measure-able performance implication?

From our testing and at customers it prevented the deadlock but did not
seem to incur any additional latency. I was reaching similar IOPS/sec
and GB/sec prior to the deadlock.
The benefit seems to be simply no more stalls and hung tasks.

Thanks for the replies

Kind Regards
Laurence