
[v3,1/3] nvme-rdma: correctly check for target keyed sgl support

Message ID 1d549a4d-c5ea-0bd2-1acd-66a627812eda@mellanox.com (mailing list archive)
State Not Applicable

Commit Message

Max Gurtovoy June 4, 2018, 2:29 p.m. UTC
On 6/4/2018 5:21 PM, Steve Wise wrote:
> 
> 
>> -----Original Message-----
>> From: Max Gurtovoy <maxg@mellanox.com>
>> Sent: Monday, June 4, 2018 8:52 AM
>> To: Christoph Hellwig <hch@lst.de>; Sagi Grimberg <sagi@grimberg.me>
>> Cc: Steve Wise <swise@opengridcomputing.com>; 'Ruhl, Michael J'
>> <michael.j.ruhl@intel.com>; axboe@kernel.dk; 'Busch, Keith'
>> <keith.busch@intel.com>; linux-nvme@lists.infradead.org;
>> parav@mellanox.com; linux-rdma@vger.kernel.org
>> Subject: Re: [PATCH v3 1/3] nvme-rdma: correctly check for target keyed sgl
>> support
>>
>>
>>
>> On 6/4/2018 3:11 PM, Christoph Hellwig wrote:
>>> On Mon, Jun 04, 2018 at 03:01:43PM +0300, Sagi Grimberg wrote:
>>>>
>>>>> He's referring to patch 1 and 2, which are the host side.  No page
>> allocations.
>>>>
>>>> I'm good with 1 & 2,
>>>>
>>>> Christoph, you can add my
>>>>
>>>> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
>>>
>>> We've missed the merge window now, so we can just wait for a proper
>>> resend from Steve I think.
>>>
>>
>> There are still issues that I'm trying to help Steve debug, so
>> let's wait with the merge until we figure them out.
> 
> I would like review on my new nvmet-rdma changes to avoid > 0 order page
> allocations though.  Perhaps I'll resend the series and add the RFC tag (or
> WIP?) with verbiage saying don't merge yet.
> 
> 

you should add to your new code:

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 2b6dc19..5828bf2 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -964,7 +965,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
         } else {
                 /* +1 for drain */
                 qp_attr.cap.max_recv_wr = 1 + queue->recv_queue_size;
-               qp_attr.cap.max_recv_sge = 2;
+               qp_attr.cap.max_recv_sge = 1 + ndev->inline_page_count;
         }

         ret = rdma_create_qp(queue->cm_id, ndev->pd, &qp_attr);


I currently see some timeouts in the initiator with 4k inline as well, but it
works well with the old initiator.

-Max.
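
For context, the reason the receive queue needs 1 + inline_page_count SGEs is
that each posted receive has to land the NVMe command capsule plus up to
inline_page_count page-sized inline data buffers, each in its own
scatter/gather entry. Below is a minimal sketch of that receive-side layout,
written against the kernel verbs API of this era and assuming the
page-per-SGE inline buffer scheme discussed in the thread; example_post_recv,
EX_MAX_INLINE_PAGES and the parameter names are illustrative, not the actual
nvmet-rdma code.

#include <linux/errno.h>
#include <rdma/ib_verbs.h>

#define EX_MAX_INLINE_PAGES 4	/* hypothetical cap, e.g. 16KB of inline data with 4KB pages */

/*
 * Illustrative sketch: one SGE for the command capsule plus one SGE per
 * inline data page, which is why the QP has to be created with
 * cap.max_recv_sge = 1 + inline_page_count.
 */
static int example_post_recv(struct ib_qp *qp, struct ib_cqe *cqe,
                             struct ib_sge *cmd_sge,
                             struct ib_sge *inline_sges, int inline_page_count)
{
        struct ib_sge sgl[1 + EX_MAX_INLINE_PAGES];
        struct ib_recv_wr wr = {}, *bad_wr;
        int i;

        if (inline_page_count > EX_MAX_INLINE_PAGES)
                return -EINVAL;

        sgl[0] = *cmd_sge;                      /* NVMe command capsule buffer */
        for (i = 0; i < inline_page_count; i++)
                sgl[1 + i] = inline_sges[i];    /* one page-sized inline buffer each */

        wr.wr_cqe = cqe;
        wr.sg_list = sgl;
        wr.num_sge = 1 + inline_page_count;     /* must fit within cap.max_recv_sge */

        return ib_post_recv(qp, &wr, &bad_wr);
}

If num_sge here exceeds the max_recv_sge the QP was created with, the post
fails, which is exactly the mismatch the hunk above avoids.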

Comments

Steve Wise June 4, 2018, 2:31 p.m. UTC | #1
> 
> 
> 
> On 6/4/2018 5:21 PM, Steve Wise wrote:
> >
> >
> >> -----Original Message-----
> >> From: Max Gurtovoy <maxg@mellanox.com>
> >> Sent: Monday, June 4, 2018 8:52 AM
> >> To: Christoph Hellwig <hch@lst.de>; Sagi Grimberg <sagi@grimberg.me>
> >> Cc: Steve Wise <swise@opengridcomputing.com>; 'Ruhl, Michael J'
> >> <michael.j.ruhl@intel.com>; axboe@kernel.dk; 'Busch, Keith'
> >> <keith.busch@intel.com>; linux-nvme@lists.infradead.org;
> >> parav@mellanox.com; linux-rdma@vger.kernel.org
> >> Subject: Re: [PATCH v3 1/3] nvme-rdma: correctly check for target keyed sgl
> >> support
> >>
> >>
> >>
> >> On 6/4/2018 3:11 PM, Christoph Hellwig wrote:
> >>> On Mon, Jun 04, 2018 at 03:01:43PM +0300, Sagi Grimberg wrote:
> >>>>
> >>>>> He's referring to patch 1 and 2, which are the host side.  No page
> >> allocations.
> >>>>
> >>>> I'm good with 1 & 2,
> >>>>
> >>>> Christoph, you can add my
> >>>>
> >>>> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> >>>
> >>> We've missed the merge window now, so we can just wait for a proper
> >>> resend from Steve I think.
> >>>
> >>
> >> There are still issues that I'm trying to help Steve debug, so
> >> let's wait with the merge until we figure them out.
> >
> > I would like review on my new nvmet-rdma changes to avoid > 0 order page
> > allocations though.  Perhaps I'll resend the series and add the RFC tag (or
> > WIP?) with verbiage saying don't merge yet.
> >
> >
> 
> you should add to your new code:
> 
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 2b6dc19..5828bf2 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -964,7 +965,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
>          } else {
>                  /* +1 for drain */
>                  qp_attr.cap.max_recv_wr = 1 + queue->recv_queue_size;
> -               qp_attr.cap.max_recv_sge = 2;
> +               qp_attr.cap.max_recv_sge = 1 + ndev->inline_page_count;
>          }
> 
>          ret = rdma_create_qp(queue->cm_id, ndev->pd, &qp_attr);
>

Yes.  Good catch.
 
> 
> 
> I currently see some timeouts in the initiator with 4k inline as well, but it
> works well with the old initiator.
> 

This is with my github repo?

Steve



Max Gurtovoy June 4, 2018, 2:37 p.m. UTC | #2
On 6/4/2018 5:31 PM, Steve Wise wrote:
>>
>>
>>
>> On 6/4/2018 5:21 PM, Steve Wise wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Max Gurtovoy <maxg@mellanox.com>
>>>> Sent: Monday, June 4, 2018 8:52 AM
>>>> To: Christoph Hellwig <hch@lst.de>; Sagi Grimberg <sagi@grimberg.me>
>>>> Cc: Steve Wise <swise@opengridcomputing.com>; 'Ruhl, Michael J'
>>>> <michael.j.ruhl@intel.com>; axboe@kernel.dk; 'Busch, Keith'
>>>> <keith.busch@intel.com>; linux-nvme@lists.infradead.org;
>>>> parav@mellanox.com; linux-rdma@vger.kernel.org
>>>> Subject: Re: [PATCH v3 1/3] nvme-rdma: correctly check for target keyed sgl
>>>> support
>>>>
>>>>
>>>>
>>>> On 6/4/2018 3:11 PM, Christoph Hellwig wrote:
>>>>> On Mon, Jun 04, 2018 at 03:01:43PM +0300, Sagi Grimberg wrote:
>>>>>>
>>>>>>> He's referring to patch 1 and 2, which are the host side.  No page
>>>> allocations.
>>>>>>
>>>>>> I'm good with 1 & 2,
>>>>>>
>>>>>> Christoph, you can add my
>>>>>>
>>>>>> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
>>>>>
>>>>> We've missed the merge window now, so we can just wait for a proper
>>>>> resend from Steve I think.
>>>>>
>>>>
>>>> There are still issues that I'm trying to help Steve debug, so
>>>> let's wait with the merge until we figure them out.
>>>
>>> I would like review on my new nvmet-rdma changes to avoid > 0 order page
>>> allocations though.  Perhaps I'll resend the series and add the RFC tag (or
>>> WIP?) with verbiage saying don't merge yet.
>>>
>>>
>>
>> you should add to your new code:
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index 2b6dc19..5828bf2 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -964,7 +965,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
>>           } else {
>>                   /* +1 for drain */
>>                   qp_attr.cap.max_recv_wr = 1 + queue->recv_queue_size;
>> -               qp_attr.cap.max_recv_sge = 2;
>> +               qp_attr.cap.max_recv_sge = 1 + ndev->inline_page_count;
>>           }
>>
>>           ret = rdma_create_qp(queue->cm_id, ndev->pd, &qp_attr);
>>
> 
> Yes.  Good catch.
>   
>>
>>
>> I currently see some timeouts in the initiator with 4k inline as well, but it
>> works well with the old initiator.
>>
> 
> This is with my github repo?

The initiator is from V3 here.
Is there a difference in the initiator code?

> 
> Steve
> 
> 
> 
Steve Wise June 4, 2018, 2:45 p.m. UTC | #3
> >> you should add to your new code:
> >>
> >> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> >> index 2b6dc19..5828bf2 100644
> >> --- a/drivers/nvme/target/rdma.c
> >> +++ b/drivers/nvme/target/rdma.c
> >> @@ -964,7 +965,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
> >>           } else {
> >>                   /* +1 for drain */
> >>                   qp_attr.cap.max_recv_wr = 1 + queue->recv_queue_size;
> >> -               qp_attr.cap.max_recv_sge = 2;
> >> +               qp_attr.cap.max_recv_sge = 1 + ndev->inline_page_count;
> >>           }
> >>
> >>           ret = rdma_create_qp(queue->cm_id, ndev->pd, &qp_attr);
> >>
> >
> > Yes.  Good catch.
> >
> >>
> >>
> >> I currently see some timeouts in the initiator with 4k inline as well,
> >> but it works well with the old initiator.
> >>
> >
> > This is with my github repo?
> 
> The initiator is from V3 here.
> Is there a difference in the initiator code ?

Just removing a pr_debug().



Patch

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 2b6dc19..5828bf2 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -964,7 +965,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
         } else {
                 /* +1 for drain */
                 qp_attr.cap.max_recv_wr = 1 + queue->recv_queue_size;
-               qp_attr.cap.max_recv_sge = 2;
+               qp_attr.cap.max_recv_sge = 1 + ndev->inline_page_count;
         }

         ret = rdma_create_qp(queue->cm_id, ndev->pd, &qp_attr);
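
Putting the hunk in context, here is a sketch of how the receive-side QP
attributes end up being sized. Only the lines visible in the hunk are taken
from the patch; the function name, parameters, CQ hookup and send-side values
below are assumptions for illustration, not a copy of the tree.

#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>

/*
 * Sketch only: send-side sizing and CQ setup here are assumed, not taken
 * from nvmet-rdma.
 */
static int example_create_queue_qp(struct rdma_cm_id *cm_id, struct ib_pd *pd,
                                   struct ib_cq *cq, int recv_queue_size,
                                   int send_queue_size, int inline_page_count)
{
        struct ib_qp_init_attr qp_attr = {};

        qp_attr.qp_type = IB_QPT_RC;
        qp_attr.send_cq = cq;
        qp_attr.recv_cq = cq;
        qp_attr.sq_sig_type = IB_SIGNAL_REQ_WR;

        /* assumed send-side sizing, not part of the hunk */
        qp_attr.cap.max_send_wr = send_queue_size;
        qp_attr.cap.max_send_sge = 2;

        /* +1 for drain, as in the hunk */
        qp_attr.cap.max_recv_wr = 1 + recv_queue_size;
        /* one SGE for the command capsule + one per inline data page */
        qp_attr.cap.max_recv_sge = 1 + inline_page_count;

        return rdma_create_qp(cm_id, pd, &qp_attr);
}

The requested max_recv_sge still has to fit within what the device advertises;
if it does not, rdma_create_qp() fails, so inline_page_count ultimately needs
to be bounded by the device's per-QP SGE limit as well.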
         }