[for-next,0/2] RDMA/erdma: Introduce custom implementation of drain_sq and drain_rq

Message ID: 20220824094251.23190-1-chengyou@linux.alibaba.com

Message

Cheng Xu Aug. 24, 2022, 9:42 a.m. UTC
Hi,

This series introduces erdma's implementation of drain_sq and drain_rq.
Our hardware stops processing any new WRs once the QP state is error,
so the default __ib_drain_sq and __ib_drain_rq in the core code cannot
work for erdma. For this reason, we implement the drain_sq and drain_rq
interfaces.

When draining the SQ or RQ, we post both a drain send WR and a drain
recv WR, and then modify_qp to error. Finally, we wait for the
corresponding completion in the separate drain interface.

The first patch introduces internal post_send/post_recv for qp drain, and
the second patch implements the drain_sq and drain_rq of erdma.
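
A rough sketch of the intended ordering (not the actual patch code; the
completion handling is simplified and the function name is made up for
illustration):

#include <rdma/ib_verbs.h>

/*
 * Sketch only. The key difference from the core __ib_drain_sq() /
 * __ib_drain_rq() flow is that the drain WRs are posted *before* the QP
 * moves to error, since our hardware will not accept them afterwards.
 * In the real patch these posts go through internal variants (patch 1);
 * the public calls are shown for brevity.
 */
static void erdma_drain_flow_sketch(struct ib_qp *ibqp)
{
        struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
        struct ib_rdma_wr swr = {};
        struct ib_recv_wr rwr = {};
        const struct ib_send_wr *bad_swr;
        const struct ib_recv_wr *bad_rwr;

        swr.wr.opcode = IB_WR_RDMA_WRITE;       /* zero-length drain marker */
        ib_post_send(ibqp, &swr.wr, &bad_swr);
        ib_post_recv(ibqp, &rwr, &bad_rwr);

        /* then stop the QP; the two drain WRs flush back as error CQEs */
        ib_modify_qp(ibqp, &attr, IB_QP_STATE);

        /* drain_sq()/drain_rq() wait for the respective drain completion */
}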

Thanks,
Cheng Xu

Cheng Xu (2):
  RDMA/erdma: Introduce internal post_send/post_recv for qp drain
  RDMA/erdma: Add drain_sq and drain_rq support

 drivers/infiniband/hw/erdma/erdma_main.c  |   4 +-
 drivers/infiniband/hw/erdma/erdma_qp.c    | 116 +++++++++++++++++++++-
 drivers/infiniband/hw/erdma/erdma_verbs.h |  27 ++++-
 3 files changed, 136 insertions(+), 11 deletions(-)

Comments

Tom Talpey Aug. 24, 2022, 2:08 p.m. UTC | #1
On 8/24/2022 5:42 AM, Cheng Xu wrote:
> Hi,
> 
> This series introduces erdma's implementation of drain_sq and drain_rq.
> Our hardware will stop processing any new WRs if QP state is error.

Doesn't this violate the IB specification? Failing newly posted WRs
before older WRs have flushed to the CQ means that ordering is not
preserved. Many upper layers depend on this.

Tom.

> So the default __ib_drain_sq and __ib_drain_rq in core code can not work
> for erdma. For this reason, we implement the drain_sq and drain_rq
> interfaces.
> 
> In SQ draining or RQ draining, we post both drain send wr and drain
> recv wr, and then modify_qp to error. At last, we wait the corresponding
> completion in the separated interface.
> 
> The first patch introduces internal post_send/post_recv for qp drain, and
> the second patch implements the drain_sq and drain_rq of erdma.
> 
> Thanks,
> Cheng Xu
> 
> Cheng Xu (2):
>    RDMA/erdma: Introduce internal post_send/post_recv for qp drain
>    RDMA/erdma: Add drain_sq and drain_rq support
> 
>   drivers/infiniband/hw/erdma/erdma_main.c  |   4 +-
>   drivers/infiniband/hw/erdma/erdma_qp.c    | 116 +++++++++++++++++++++-
>   drivers/infiniband/hw/erdma/erdma_verbs.h |  27 ++++-
>   3 files changed, 136 insertions(+), 11 deletions(-)
>
Bernard Metzler Aug. 24, 2022, 2:56 p.m. UTC | #2
> -----Original Message-----
> From: Tom Talpey <tom@talpey.com>
> Sent: Wednesday, 24 August 2022 16:09
> To: Cheng Xu <chengyou@linux.alibaba.com>; jgg@ziepe.ca; leon@kernel.org
> Cc: linux-rdma@vger.kernel.org; KaiShen@linux.alibaba.com
> Subject: [EXTERNAL] Re: [PATCH for-next 0/2] RDMA/erdma: Introduce
> custom implementation of drain_sq and drain_rq
> 
> On 8/24/2022 5:42 AM, Cheng Xu wrote:
> > Hi,
> >
> > This series introduces erdma's implementation of drain_sq and drain_rq.
> > Our hardware will stop processing any new WRs if QP state is error.
> 
> Doesn't this violate the IB specification? Failing newly posted WRs
> before older WRs have flushed to the CQ means that ordering is not
> preserved. Many upper layers depend on this.
> 

It would be ok to synchronously fail the post_send()/post_recv() call
if the QP is in error, or the WR is malformed. In that case, 
the WR does not translate into a WQE and will not produce a
work completion.
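
In kernel verbs terms, a minimal sketch of that contract (qp and wr are
assumed to be set up elsewhere):

#include <rdma/ib_verbs.h>

static int post_or_fail_sketch(struct ib_qp *qp, struct ib_send_wr *wr)
{
        const struct ib_send_wr *bad_wr;
        int ret = ib_post_send(qp, wr, &bad_wr);

        if (ret) {
                /*
                 * Synchronous failure: the WR at *bad_wr never became a
                 * WQE, so no flush-error CQE will ever arrive for it and
                 * the caller owns its buffers again immediately.
                 */
        }
        return ret;
}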

Bernard.


> Tom.
> 
> > So the default __ib_drain_sq and __ib_drain_rq in core code can not work
> > for erdma. For this reason, we implement the drain_sq and drain_rq
> > interfaces.
> >
> > In SQ draining or RQ draining, we post both drain send wr and drain
> > recv wr, and then modify_qp to error. At last, we wait the corresponding
> > completion in the separated interface.
> >
> > The first patch introduces internal post_send/post_recv for qp drain, and
> > the second patch implements the drain_sq and drain_rq of erdma.
> >
> > Thanks,
> > Cheng Xu
> >
> > Cheng Xu (2):
> >    RDMA/erdma: Introduce internal post_send/post_recv for qp drain
> >    RDMA/erdma: Add drain_sq and drain_rq support
> >
> >   drivers/infiniband/hw/erdma/erdma_main.c  |   4 +-
> >   drivers/infiniband/hw/erdma/erdma_qp.c    | 116 +++++++++++++++++++++-
> >   drivers/infiniband/hw/erdma/erdma_verbs.h |  27 ++++-
> >   3 files changed, 136 insertions(+), 11 deletions(-)
> >
Cheng Xu Aug. 25, 2022, 1:54 a.m. UTC | #3
On 8/24/22 10:08 PM, Tom Talpey wrote:
> On 8/24/2022 5:42 AM, Cheng Xu wrote:
>> Hi,
>>
>> This series introduces erdma's implementation of drain_sq and drain_rq.
>> Our hardware will stop processing any new WRs if QP state is error.
> 
> Doesn't this violate the IB specification? Failing newly posted WRs
> before older WRs have flushed to the CQ means that ordering is not
> preserved.

I agree with Bernard's point.

I'm not very familiar with the IB specification. But for RNIC/iWarp [1],
posting a WR in the Error state allows one of two actions: "Post WQE, and
then Flush it" or "Return an Immediate Error" (shown in Figure 10). So, I
think failing newly posted WRs is reasonable.

> Many upper layers depend on this.

For kernel verbs, erdma currently supports NoF (NVMe over Fabrics). We
tested the NoF cases and found that new WRs were never posted after the
QP changed to error; drain_qp is the final step of the I/O process.

Also, in userspace, I find that many providers prevent new WRs if the QP
state is not right.

So, it seems ok in practice.

Thanks,
Cheng Xu


[1] http://www.rdmaconsortium.org/home/draft-hilland-iwarp-verbs-v1.0-RDMAC.pdf
Tom Talpey Aug. 25, 2022, 4:37 p.m. UTC | #4
On 8/24/2022 9:54 PM, Cheng Xu wrote:
> 
> 
> On 8/24/22 10:08 PM, Tom Talpey wrote:
>> On 8/24/2022 5:42 AM, Cheng Xu wrote:
>>> Hi,
>>>
>>> This series introduces erdma's implementation of drain_sq and drain_rq.
>>> Our hardware will stop processing any new WRs if QP state is error.
>>
>> Doesn't this violate the IB specification? Failing newly posted WRs
>> before older WRs have flushed to the CQ means that ordering is not
>> preserved.
> 
> I agree with Bernard's point.
> 
> I'm not very familiar with with IB specification. But for RNIC/iWarp [1],
> post WR in Error state has two optional actions: "Post WQE, and then Flush it"
> or "Return an Immediate Error" (Showed in Figure 10). So, I think failing
> newly posted WRs is reasonable.

Both IB and iWARP allow new post-WR calls to fail synchronously if
the QP is in the ERROR state. But the QP can only enter ERROR once the
SQ and RQ are fully drained to the CQ(s). Until that happens, the
WRs need to flush through.

Your code seems to start failing WRs when the TX_STOPPED or RX_STOPPED
bits are set. But those bits are set when the drain *begins*, not when
everything has flushed through. That seems wrong, to me.

>> Many upper layers depend on this.
> 
> For kernel verbs, erdma currently supports NoF. we tested the NoF cases,
> and found that it's never happened that newly WRs were posted after QP
> changed to error, and the drain_qp should be the terminal of IO process.
> 
> Also, in userspace, I find that many providers prevents new WRs if QP state is
> not right.

Sure, but your new STOPPED bits don't propagate up to userspace, right?
The post calls will fail unexpectedly, because the QP is not yet in
ERROR state.

I'm also concerned about how consumers who post their own "drain" WQEs
will behave. This is a common approach that many already take, but now
they will see different behavior when posting them...

Tom.


> 
> So, it seems ok in practice.
> 
> Thanks,
> Cheng Xu
> 
> 
> [1] http://www.rdmaconsortium.org/home/draft-hilland-iwarp-verbs-v1.0-RDMAC.pdf
> 
>
Cheng Xu Aug. 26, 2022, 3:21 a.m. UTC | #5
On 8/26/22 12:37 AM, Tom Talpey wrote:
> On 8/24/2022 9:54 PM, Cheng Xu wrote:
>>
>>
>> On 8/24/22 10:08 PM, Tom Talpey wrote:
>>> On 8/24/2022 5:42 AM, Cheng Xu wrote:
>>>> Hi,
>>>>
>>>> This series introduces erdma's implementation of drain_sq and drain_rq.
>>>> Our hardware will stop processing any new WRs if QP state is error.
>>>
>>> Doesn't this violate the IB specification? Failing newly posted WRs
>>> before older WRs have flushed to the CQ means that ordering is not
>>> preserved.
>>
>> I agree with Bernard's point.
>>
>> I'm not very familiar with with IB specification. But for RNIC/iWarp [1],
>> post WR in Error state has two optional actions: "Post WQE, and then Flush it"
>> or "Return an Immediate Error" (Showed in Figure 10). So, I think failing
>> newly posted WRs is reasonable.
> 
> <...> But the QP can only enter ERROR once the
> SQ and RQ are fully drained to the CQ(s). Until that happens, the
> WRs need to flush through.
> 

Emm, let's put erdma aside first; it seems the specification does not
require this. According to "6.2.4 Error State" in the document [1]:

 The following is done on entry into the Error state:
 * The RI MUST flush any incomplete WRs on the SQ or RQ. 
   .....
 * At some point in the execution of the flushing operation, the RI
   MUST begin to return an Immediate Error for any attempt to post
   a WR to a Work Queue;
   ....

As the second point says, the flushing operation and the behavior of
returning an Immediate Error are asynchronous, so what you mentioned is
not guaranteed. Failing post_send/post_recv may happen at any time while
modifying the QP to error.

[1] http://www.rdmaconsortium.org/home/draft-hilland-iwarp-verbs-v1.0-RDMAC.pdf

> Your code seems to start failing WR's when the TX_STOPPED or RX_STOPPED
> bits are set. But that bit is being set when the drain *begins*, not
> when it flushes through. That seems wrong, to me.
> 

Back to erdma's scenario: as I explained above, I think failing
immediately when flushing begins does not violate the specification.


>>> Many upper layers depend on this.
>>
>> For kernel verbs, erdma currently supports NoF. we tested the NoF cases,
>> and found that it's never happened that newly WRs were posted after QP
>> changed to error, and the drain_qp should be the terminal of IO process.
>>
>> Also, in userspace, I find that many providers prevents new WRs if QP state is
>> not right.
> 
> Sure, but your new STOPPED bits don't propagate up to userspace, right?

Yes, they are only used for kernel QPs. The bits mark a precise point at
which to start rejecting new WRs when modifying the QP to error.

> The post calls will fail unexpectedly, because the QP is not yet in
> ERROR state.

Do you mean this happens in userspace or in the kernel? The new bits do
not affect userspace, where erdma behaves the same as other providers.

For the kernel, this is only used in the drain_qp scenario. Posting WRs
and draining the QP concurrently leads to uncertain results. This is also
mentioned in the comment on ib_drain_qp:

 * ensure that there are no other contexts that are posting WRs concurrently.
 * Otherwise the drain is not guaranteed.

> I'm also concerned how consumers who are posting their own "drain" WQEs
> will behave. This is a common approach that many already take. But now
> they will see different behavior when posting them...
> 

For userspace, erdma is not special.
For the kernel, I think the standard way to drain WRs is ib_drain_qp, not
a custom implementation. I'm not sure whether any in-kernel ULPs have
their own drain flow.

Thanks,
Cheng Xu

> Tom.
> 
> 
>>
>> So, it seems ok in practice.
>>
>> Thanks,
>> Cheng Xu
>>
>>
>> [1] http://www.rdmaconsortium.org/home/draft-hilland-iwarp-verbs-v1.0-RDMAC.pdf
>>
>>
Tom Talpey Aug. 26, 2022, 1:11 p.m. UTC | #6
On 8/25/2022 11:21 PM, Cheng Xu wrote:
> On 8/26/22 12:37 AM, Tom Talpey wrote:
>> On 8/24/2022 9:54 PM, Cheng Xu wrote:
>>>
>>>
>>> On 8/24/22 10:08 PM, Tom Talpey wrote:
>>>> On 8/24/2022 5:42 AM, Cheng Xu wrote:
>>>>> Hi,
>>>>>
>>>>> This series introduces erdma's implementation of drain_sq and drain_rq.
>>>>> Our hardware will stop processing any new WRs if QP state is error.
>>>>
>>>> Doesn't this violate the IB specification? Failing newly posted WRs
>>>> before older WRs have flushed to the CQ means that ordering is not
>>>> preserved.
>>>
>>> I agree with Bernard's point.
>>>
>>> I'm not very familiar with with IB specification. But for RNIC/iWarp [1],
>>> post WR in Error state has two optional actions: "Post WQE, and then Flush it"
>>> or "Return an Immediate Error" (Showed in Figure 10). So, I think failing
>>> newly posted WRs is reasonable.
>>
>> <...> But the QP can only enter ERROR once the
>> SQ and RQ are fully drained to the CQ(s). Until that happens, the
>> WRs need to flush through.
>>
> 
> Emm, let's put erdma aside first, it seems that specification does not require
> this. According to "6.2.4 Error State" in the document [1]:
> 
>   The following is done on entry into the Error state:
>   * The RI MUST flush any incomplete WRs on the SQ or RQ.
>     .....
>   * At some point in the execution of the flushing operation, the RI
>     MUST begin to return an Immediate Error for any attempt to post
>     a WR to a Work Queue;
>     ....
> 
> As the second point says, The flushing operation and the behavior of returning
> Immediate Error are asynchronous. what you mentioned is not guaranteed. Failing
> the post_send/post_recv may happens at any time during modify_qp to error.
> 
> [1] http://www.rdmaconsortium.org/home/draft-hilland-iwarp-verbs-v1.0-RDMAC.pdf

Well, that language is very imprecise, "at some point" is not exactly
definitive. I'll explain one scenario that makes it problematic.

>> Your code seems to start failing WR's when the TX_STOPPED or RX_STOPPED
>> bits are set. But that bit is being set when the drain *begins*, not
>> when it flushes through. That seems wrong, to me.
>>
> 
> Back to erdma's scenario, As I explains above, I think failing immediately when
> flushing begins does not violate the specification.

Consider a consumer which posts with a mix of IB_SEND_SIGNALED and
also unsignaled WRs, for example, fast-memory registration followed
by a send, a very typical storage consumer operation.

- post_wr(memreg, !signaled) => post success
-      => operation success, no completion generated
- ...  <= provider detects error here
- post_wr(send, signaled) => post fail (new in your patch)
- ...  <= provider notifies async error, etc.

The consumer now knows there's an error, and needs to tear down.
It must remove the DMA mapping before proceeding, but the hardware
may still be using it. How does it determine the status of that
first post_wr, so it may proceed?
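
To make the sequence concrete, here is a minimal sketch of that storage
pattern (mr, sge, and qp are assumed to be set up elsewhere; error
handling is omitted):

#include <rdma/ib_verbs.h>

static void storage_io_sketch(struct ib_qp *qp, struct ib_mr *mr,
                              struct ib_sge *sge)
{
        struct ib_reg_wr reg_wr = {};
        struct ib_send_wr send_wr = {};
        const struct ib_send_wr *bad_wr;

        reg_wr.wr.opcode = IB_WR_REG_MR;
        reg_wr.wr.send_flags = 0;               /* unsignaled: success is silent */
        reg_wr.mr = mr;
        reg_wr.key = mr->lkey;
        reg_wr.access = IB_ACCESS_LOCAL_WRITE;

        send_wr.opcode = IB_WR_SEND;
        send_wr.send_flags = IB_SEND_SIGNALED;  /* the only WR that completes */
        send_wr.sg_list = sge;
        send_wr.num_sge = 1;

        ib_post_send(qp, &reg_wr.wr, &bad_wr);  /* post success */
        /* ... provider detects the error somewhere in here ... */
        ib_post_send(qp, &send_wr, &bad_wr);    /* post fail, new in the patch */
}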

The IB spec explicitly states that the post verb can only return
the immediate error after the QP has exited the ERROR state, which
includes all pending WRs having been flushed and made visible on
the CQ. Here is an excerpt from the Post Send Request section
11.4.1.1 specifying its output modifiers:

-> Invalid QP state.
-> Note: This error is returned only when the QP is in the Reset,
-> Init, or RTR states. It is not returned when the QP is in the Error
-> or Send Queue Error states due to race conditions that could
-> result in indeterminate behavior. Work Requests posted to the
-> Send Queue while the QP is in the Error or Send Queue Error
-> states are completed with a flush error.

So, the consumer will post a new, signaled, work request, and wait
for it to "flush through" by polling the CQ. Because WR's always
complete in-order, this final completion must appear after *all*
prior WR's, and this gives the consumer the green light to proceed.
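
This is exactly how the generic drain helper works; a simplified
paraphrase of __ib_drain_sq() from drivers/infiniband/core/verbs.c
(error handling and direct-poll CQs omitted, private types renamed):

#include <linux/completion.h>
#include <rdma/ib_verbs.h>

struct drain_cqe {                      /* stand-in for the private ib_drain_cqe */
        struct ib_cqe cqe;
        struct completion done;
};

static void drain_done(struct ib_cq *cq, struct ib_wc *wc)
{
        struct drain_cqe *d = container_of(wc->wr_cqe, struct drain_cqe, cqe);

        complete(&d->done);
}

static void drain_sq_sketch(struct ib_qp *qp)
{
        struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
        struct ib_rdma_wr swr = {};
        struct drain_cqe sdrain;

        ib_modify_qp(qp, &attr, IB_QP_STATE);   /* move the QP to error first */

        sdrain.cqe.done = drain_done;
        init_completion(&sdrain.done);
        swr.wr.opcode = IB_WR_RDMA_WRITE;       /* zero-length marker WR */
        swr.wr.wr_cqe = &sdrain.cqe;

        /* relies on WRs posted in the error state flushing through, in order */
        ib_post_send(qp, &swr.wr, NULL);
        wait_for_completion(&sdrain.done);      /* all prior SQ WRs have flushed */
}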

With your change, ERDMA will pre-emptively fail such a newly posted
request, and generate no new completion. The consumer is left in limbo
on the status of its prior requests. Providers must not override this.

Tom.

>>>> Many upper layers depend on this.
>>>
>>> For kernel verbs, erdma currently supports NoF. we tested the NoF cases,
>>> and found that it's never happened that newly WRs were posted after QP
>>> changed to error, and the drain_qp should be the terminal of IO process.
>>>
>>> Also, in userspace, I find that many providers prevents new WRs if QP state is
>>> not right.
>>
>> Sure, but your new STOPPED bits don't propagate up to userspace, right?
> 
> Yes. they are only used for kernel QPs. The bits are used for setting an accurate
> time point to prevent newly WRs when modify qp to error.
> 
>> The post calls will fail unexpectedly, because the QP is not yet in
>> ERROR state.
> 
> Do you mean this happens in userspace or kernel? The new bits do not influence
> userspace, and erdma will have the same behavior with other providers in userspace.
> 
> For kernel, this is only used in drain_qp scenario. posting WRs and drain qp
> concurrently will lead uncertain results. This is also mentioned in the comment
> of ib_drain_qp:
> 
>   * ensure that there are no other contexts that are posting WRs concurrently.
>   * Otherwise the drain is not guaranteed.
> 
>> I'm also concerned how consumers who are posting their own "drain" WQEs
>> will behave. This is a common approach that many already take. But now
>> they will see different behavior when posting them...
>>
> 
> For userspace, erdma is not special.
> For kernel, I think the standard way to drain WR is using ib_drain_qp, not custom
> implementation. I'm not sure that there is some ULPs in kernel has their own drain
> flow.
> 
> Thanks,
> Cheng Xu
> 
>> Tom.
>>
>>
>>>
>>> So, it seems ok in practice.
>>>
>>> Thanks,
>>> Cheng Xu
>>>
>>>
>>> [1] http://www.rdmaconsortium.org/home/draft-hilland-iwarp-verbs-v1.0-RDMAC.pdf
>>>
>>>
>
Jason Gunthorpe Aug. 26, 2022, 1:57 p.m. UTC | #7
On Fri, Aug 26, 2022 at 09:11:25AM -0400, Tom Talpey wrote:

> With your change, ERDMA will pre-emptively fail such a newly posted
> request, and generate no new completion. The consumer is left in limbo
> on the status of its prior requests. Providers must not override this.

Yeah, I tend to agree with Tom.

And I also want to point out that Linux RDMA verbs does not follow the
SW specifications of either IBTA or the iWarp group. We have our own
expectation for how these APIs work that our own ULPs rely on.

So pedantically debating what a software spec we don't follow says is
not relavent. The utility is to understand the intention and use cases
and ensure we cover the same. Usually this means we follow the spec :)

Jason
Cheng Xu Aug. 29, 2022, 3:37 a.m. UTC | #8
On 8/26/22 9:11 PM, Tom Talpey wrote:
> On 8/25/2022 11:21 PM, Cheng Xu wrote:
>> On 8/26/22 12:37 AM, Tom Talpey wrote:
>>> On 8/24/2022 9:54 PM, Cheng Xu wrote:
>>>>
>>>>
>>>> On 8/24/22 10:08 PM, Tom Talpey wrote:
>>>>> On 8/24/2022 5:42 AM, Cheng Xu wrote:
>>>>>> Hi,
>>>>>>
>>>>>> This series introduces erdma's implementation of drain_sq and drain_rq.
>>>>>> Our hardware will stop processing any new WRs if QP state is error.
>>>>>
>>>>> Doesn't this violate the IB specification? Failing newly posted WRs
>>>>> before older WRs have flushed to the CQ means that ordering is not
>>>>> preserved.
>>>>
>>>> I agree with Bernard's point.
>>>>
>>>> I'm not very familiar with with IB specification. But for RNIC/iWarp [1],
>>>> post WR in Error state has two optional actions: "Post WQE, and then Flush it"
>>>> or "Return an Immediate Error" (Showed in Figure 10). So, I think failing
>>>> newly posted WRs is reasonable.
>>>
>>> <...> But the QP can only enter ERROR once the
>>> SQ and RQ are fully drained to the CQ(s). Until that happens, the
>>> WRs need to flush through.
>>>
>>
>> Emm, let's put erdma aside first, it seems that specification does not require
>> this. According to "6.2.4 Error State" in the document [1]:
>>
>>   The following is done on entry into the Error state:
>>   * The RI MUST flush any incomplete WRs on the SQ or RQ.
>>     .....
>>   * At some point in the execution of the flushing operation, the RI
>>     MUST begin to return an Immediate Error for any attempt to post
>>     a WR to a Work Queue;
>>     ....
>>
>> As the second point says, The flushing operation and the behavior of returning
>> Immediate Error are asynchronous. what you mentioned is not guaranteed. Failing
>> the post_send/post_recv may happens at any time during modify_qp to error.
>>
>> [1] http://www.rdmaconsortium.org/home/draft-hilland-iwarp-verbs-v1.0-RDMAC.pdf
> 
> Well, that language is very imprecise, "at some point" is not exactly
> definitive. I'll explain one scenario that makes it problematic.
> 
>>> Your code seems to start failing WR's when the TX_STOPPED or RX_STOPPED
>>> bits are set. But that bit is being set when the drain *begins*, not
>>> when it flushes through. That seems wrong, to me.
>>>
>>
>> Back to erdma's scenario, As I explains above, I think failing immediately when
>> flushing begins does not violate the specification.
> 
> Consider a consumer which posts with a mix of IB_SEND_SIGNALED and
> also unsignaled WRs, for example, fast-memory registration followed
> by a send, a very typical storage consumer operation.
> 
> - post_wr(memreg, !signaled) => post success
> -      => operation success, no completion generated
> - ...  <= provider detects error here
> - post_wr(send, signaled) => post fail (new in your patch)
> - ...  <= provider notifies async error, etc.
> 
> The consumer now knows there's an error, and needs to tear down.
> It must remove the DMA mapping before proceeding, but the hardware
> may still be using it. How does it determine the status of that
> first post_wr, so it may proceed?
> 
> The IB spec explicitly states that the post verb can only return
> the immediate error after the QP has exited the ERROR state, which
> includes all pending WRs having been flushed and made visible on
> the CQ. Here is an excerpt from the Post Send Request section
> 11.4.1.1 specifying its output modifiers:
> 
> -> Invalid QP state.
> -> Note: This error is returned only when the QP is in the Reset,
> -> Init, or RTR states. It is not returned when the QP is in the Error
> -> or Send Queue Error states due to race conditions that could
> -> result in indeterminate behavior. Work Requests posted to the
> -> Send Queue while the QP is in the Error or Send Queue Error
> -> states are completed with a flush error.
> 

Got it, thanks. The IB spec is clearer here.

> So, the consumer will post a new, signaled, work request, and wait
> for it to "flush through" by polling the CQ. Because WR's always
> complete in-order, this final completion must appear after *all*
> prior WR's, and this gives the consumer the green light to proceed.
> 

Yeah, this is right, and the default ib_drain_qp does it this way.

> With your change, ERDMA will pre-emptively fail such a newly posted
> request, and generate no new completion. The consumer is left in limbo
> on the status of its prior requests. Providers must not override this.

For ULPs that do not use the ib_drain_qp interface, we will have a problem.

But currently it seems that almost all in-kernel ULPs call ib_drain_qp to
finish the drain flow, and ib_drain_qp allows vendors to provide custom
drain implementations that are invisible to ULPs.
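
For reference, a simplified paraphrase of the dispatch in ib_drain_sq()
from drivers/infiniband/core/verbs.c:

void ib_drain_sq(struct ib_qp *qp)
{
        if (qp->device->ops.drain_sq)           /* vendor hook, as in this series */
                qp->device->ops.drain_sq(qp);
        else
                __ib_drain_sq(qp);              /* generic modify-to-error flow */
}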

Thanks,
Cheng Xu


> Tom.
>
Cheng Xu Aug. 29, 2022, 4:01 a.m. UTC | #9
On 8/26/22 9:57 PM, Jason Gunthorpe wrote:
> On Fri, Aug 26, 2022 at 09:11:25AM -0400, Tom Talpey wrote:
> 
>> With your change, ERDMA will pre-emptively fail such a newly posted
>> request, and generate no new completion. The consumer is left in limbo
>> on the status of its prior requests. Providers must not override this.
> 
> Yeah, I tend to agree with Tom.
> 
> And I also want to point out that Linux RDMA verbs does not follow the
> SW specifications of either IBTA or the iWarp group. We have our own
> expectation for how these APIs work that our own ULPs rely on.
> 
> So pedantically debating what a software spec we don't follow says is
> not relavent. The utility is to understand the intention and use cases
> and ensure we cover the same. Usually this means we follow the spec :)
> 

Yeah, I totally agree with this.

Actually, I thought ULPs were not concerned with the details of how the
flushing and modify_qp are performed in the drivers. For ULPs, the drain
flow is handled by a single ib_drain_qp call, and while the ib_drain_qp
API allows a vendor-custom implementation, that is invisible to ULPs.

ULPs which implement their own drain flow instead of using ib_drain_qp
(I think that is rare in the kernel) will fail on erdma.

Anyway, since our implementation is disputed, we'd like to keep the same
behavior as other vendors. Maybe a firmware update without driver changes,
or software flushing in the driver, will fix this.

Thanks,
Cheng Xu
Tom Talpey Aug. 30, 2022, 6:45 p.m. UTC | #10
On 8/29/2022 12:01 AM, Cheng Xu wrote:
> 
> 
> On 8/26/22 9:57 PM, Jason Gunthorpe wrote:
>> On Fri, Aug 26, 2022 at 09:11:25AM -0400, Tom Talpey wrote:
>>
>>> With your change, ERDMA will pre-emptively fail such a newly posted
>>> request, and generate no new completion. The consumer is left in limbo
>>> on the status of its prior requests. Providers must not override this.
>>
>> Yeah, I tend to agree with Tom.
>>
>> And I also want to point out that Linux RDMA verbs does not follow the
>> SW specifications of either IBTA or the iWarp group. We have our own
>> expectation for how these APIs work that our own ULPs rely on.
>>
>> So pedantically debating what a software spec we don't follow says is
>> not relavent. The utility is to understand the intention and use cases
>> and ensure we cover the same. Usually this means we follow the spec :)
>>
> 
> Yeah, I totally agree with this.
> 
> Actually, I thought that ULPs do not concern about the details of how the
> flushing and modify_qp being performed in the drivers. The drain flow is
> handled by a single ib_drain_qp call for ULPs. While ib_drain_qp API allows
> vendor-custom implementation, this is invisible to ULPs.
> 
> For the ULPs which implement their own drain flow instead of using
> ib_drain_qp  (I think it is rare in kernel), they will fail in erdma.
> 
> Anyway, since our implementation is disputed, We'd like to keep the same
> behavior with other vendors. Maybe firmware updating w/o driver changes or
> software flushing in driver will fix this.

To be clear, my concern is about the ordering of CQE flushes with
respect to the WR posting fails. Draining the CQs in whatever way
you choose to optimize for your device is not the issue, although
it seems odd to me that you need such a thing.

The problem is that your patch started failing the new requests
_before_ the drain could be used to clean up. This introduced
two new provider behaviors that consumers would not expect:

- first error detected in a post call (on the fast path!)
- inability to determine if prior requests were complete

I'd really suggest getting a copy of the full IB spec and examining
the difference between QP "Error" and "SQ Error" states. They are
subtle but important.

Tom.
Cheng Xu Aug. 31, 2022, 2:08 a.m. UTC | #11
On 8/31/22 2:45 AM, Tom Talpey wrote:
> On 8/29/2022 12:01 AM, Cheng Xu wrote:
>>
>>
>> On 8/26/22 9:57 PM, Jason Gunthorpe wrote:
>>> On Fri, Aug 26, 2022 at 09:11:25AM -0400, Tom Talpey wrote:
>>>
>>>> With your change, ERDMA will pre-emptively fail such a newly posted
>>>> request, and generate no new completion. The consumer is left in limbo
>>>> on the status of its prior requests. Providers must not override this.
>>>
>>> Yeah, I tend to agree with Tom.
>>>
>>> And I also want to point out that Linux RDMA verbs does not follow the
>>> SW specifications of either IBTA or the iWarp group. We have our own
>>> expectation for how these APIs work that our own ULPs rely on.
>>>
>>> So pedantically debating what a software spec we don't follow says is
>>> not relavent. The utility is to understand the intention and use cases
>>> and ensure we cover the same. Usually this means we follow the spec :)
>>>
>>
>> Yeah, I totally agree with this.
>>
>> Actually, I thought that ULPs do not concern about the details of how the
>> flushing and modify_qp being performed in the drivers. The drain flow is
>> handled by a single ib_drain_qp call for ULPs. While ib_drain_qp API allows
>> vendor-custom implementation, this is invisible to ULPs.
>>
>> For the ULPs which implement their own drain flow instead of using
>> ib_drain_qp  (I think it is rare in kernel), they will fail in erdma.
>>
>> Anyway, since our implementation is disputed, We'd like to keep the same
>> behavior with other vendors. Maybe firmware updating w/o driver changes or
>> software flushing in driver will fix this.
> 
> To be clear, my concern is about the ordering of CQE flushes with
> respect to the WR posting fails. Draining the CQs in whatever way
> you choose to optimize for your device is not the issue, although
> it seems odd to me that you need such a thing.
> 

Yeah, I understand your concern. I'm sorry that there may have been
ambiguity in my last reply.

After discussing internally, we would like to drop this patch (i.e.,
failing WRs before the drain, or failing WRs in the QP Error state),
because it indeed has problems in the cases you mentioned. We are looking
for new solutions. The new solutions will not fail WRs in the drain cases,
so erdma will have the same behavior as other vendors.

Moreover, the reason we introduced this patch is that our hardware
currently does not flush newly posted WRs in the QP Error state. So the
new solutions could be:
 - Let our hardware flush newly posted WRs, or
 - Flush WRs in our driver if the hardware does not (sketched below).
Either of them eliminates the odd logic in this patch. For now, we lean
toward the first option.
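
For the second option, a purely hypothetical sketch of driver-side
flushing; erdma_flush_sq_sketch() and every helper it calls are invented
for illustration and do not exist in the actual driver:

static void erdma_flush_sq_sketch(struct erdma_qp *qp)
{
        struct ib_wc wc = {};

        /* synthesize flush-error CQEs for WQEs the HW will not process */
        while (sq_has_pending_wqe(qp)) {        /* invented helper */
                wc.wr_id = sq_pop_wr_id(qp);    /* invented helper */
                wc.status = IB_WC_WR_FLUSH_ERR;
                wc.qp = &qp->ibqp;
                deliver_sw_cqe(qp, &wc);        /* invented helper */
        }
}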

Thanks,
Cheng Xu


> The problem is that your patch started failing the new requests
> _before_ the drain could be used to clean up. This introduced
> two new provider behaviors that consumers would not expect:
> 
> - first error detected in a post call (on the fast path!)
> - inability to determine if prior requests were complete
> 
> I'd really suggest getting a copy of the full IB spec and examining
> the difference between QP "Error" and "SQ Error" states. They are
> subtle but important.
> 
> Tom.
Cheng Xu Aug. 31, 2022, 2:52 a.m. UTC | #12
On 8/31/22 2:45 AM, Tom Talpey wrote:
> On 8/29/2022 12:01 AM, Cheng Xu wrote:
>>
>>
>> On 8/26/22 9:57 PM, Jason Gunthorpe wrote:
>>> On Fri, Aug 26, 2022 at 09:11:25AM -0400, Tom Talpey wrote:
>>>
>>>> With your change, ERDMA will pre-emptively fail such a newly posted
>>>> request, and generate no new completion. The consumer is left in limbo
>>>> on the status of its prior requests. Providers must not override this.
>>>
>>> Yeah, I tend to agree with Tom.
>>>
>>> And I also want to point out that Linux RDMA verbs does not follow the
>>> SW specifications of either IBTA or the iWarp group. We have our own
>>> expectation for how these APIs work that our own ULPs rely on.
>>>
>>> So pedantically debating what a software spec we don't follow says is
>>> not relavent. The utility is to understand the intention and use cases
>>> and ensure we cover the same. Usually this means we follow the spec :)
>>>
>>
>> Yeah, I totally agree with this.
>>
>> Actually, I thought that ULPs do not concern about the details of how the
>> flushing and modify_qp being performed in the drivers. The drain flow is
>> handled by a single ib_drain_qp call for ULPs. While ib_drain_qp API allows
>> vendor-custom implementation, this is invisible to ULPs.
>>
>> For the ULPs which implement their own drain flow instead of using
>> ib_drain_qp  (I think it is rare in kernel), they will fail in erdma.
>>
>> Anyway, since our implementation is disputed, We'd like to keep the same
>> behavior with other vendors. Maybe firmware updating w/o driver changes or
>> software flushing in driver will fix this.
> 
> To be clear, my concern is about the ordering of CQE flushes with
> respect to the WR posting fails. Draining the CQs in whatever way
> you choose to optimize for your device is not the issue, although
> it seems odd to me that you need such a thing.
> 
> The problem is that your patch started failing the new requests
> _before_ the drain could be used to clean up. This introduced
> two new provider behaviors that consumers would not expect:
> 
> - first error detected in a post call (on the fast path!)
> - inability to determine if prior requests were complete
> 
Yes, you are right. As I replied, we will drop this patch and follow
the common behavior of the other providers.

> I'd really suggest getting a copy of the full IB spec and examining
> the difference between QP "Error" and "SQ Error" states. They are
> subtle but important.

Yeah, I'm already doing this. Thanks very much.

Cheng Xu


> Tom.