[rfc,06/10] IB/cq: Don't force IB_POLL_DIRECT poll context for ib_process_cq_direct

Message ID 1489065402-14757-7-git-send-email-sagi@grimberg.me (mailing list archive)
State RFC

Commit Message

Sagi Grimberg March 9, 2017, 1:16 p.m. UTC
Polling the completion queue directly does not interfere
with the existing polling logic, hence drop the requirement that
the CQ's poll context is IB_POLL_DIRECT.

This can be used for polling mode ULPs.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/infiniband/core/cq.c | 2 --
 1 file changed, 2 deletions(-)
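
As a minimal sketch of how a polling-mode ULP could use this after the change
(my_queue, my_done(), my_queue_init() and the queue depth are illustrative
names and values, not from this series):

#include <rdma/ib_verbs.h>

struct my_queue {
	struct ib_cq	*cq;
	struct ib_cqe	cqe;		/* set as wr_cqe on posted work requests */
	bool		polling;	/* ULP-selected polling mode */
};

/* Completion handler, invoked from __ib_process_cq() via wc->wr_cqe->done. */
static void my_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct my_queue *q = container_of(wc->wr_cqe, struct my_queue, cqe);

	pr_debug("completion on %p: status %d\n", q, wc->status);
}

static int my_queue_init(struct my_queue *q, struct ib_device *dev)
{
	q->cqe.done = my_done;
	/* CQ deliberately allocated with a non-direct poll context */
	q->cq = ib_alloc_cq(dev, q, 128, 0, IB_POLL_SOFTIRQ);
	if (IS_ERR(q->cq))
		return PTR_ERR(q->cq);
	q->polling = true;
	return 0;
}

/*
 * Busy-poll path: reap up to @budget completions in the caller's context.
 * With the WARN_ON_ONCE() removed this no longer triggers a warning even
 * though the CQ uses IB_POLL_SOFTIRQ; see the concurrency discussion in
 * the comments below.
 */
static int my_queue_poll(struct my_queue *q, int budget)
{
	if (!q->polling)
		return 0;

	return ib_process_cq_direct(q->cq, budget);
}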

Comments

Bart Van Assche March 9, 2017, 4:30 p.m. UTC | #1
On Thu, 2017-03-09 at 15:16 +0200, Sagi Grimberg wrote:
> polling the completion queue directly does not interfere
> with the existing polling logic, hence drop the requirement.
> 
> This can be used for polling mode ULPs.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/infiniband/core/cq.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
> index 21d1a38af489..7f6ae0ecb0c5 100644
> --- a/drivers/infiniband/core/cq.c
> +++ b/drivers/infiniband/core/cq.c
> @@ -64,8 +64,6 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
>   */
>  int ib_process_cq_direct(struct ib_cq *cq, int budget)
>  {
> -	WARN_ON_ONCE(cq->poll_ctx != IB_POLL_DIRECT);
> -
>  	return __ib_process_cq(cq, budget);
>  }
>  EXPORT_SYMBOL(ib_process_cq_direct);

Using ib_process_cq_direct() for CQs that have a poll context other than
IB_POLL_DIRECT may trigger concurrent CQ processing. Before this patch the
completions of each CQ were processed sequentially. That's a big change,
so I think it should be mentioned clearly in the kernel-doc header above
ib_process_cq_direct().

Bart.
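
One way the comment header above ib_process_cq_direct() could spell this out
(an illustrative sketch of possible wording, not part of the posted patch):

/**
 * ib_process_cq_direct - process the CQ in the caller's context
 * @cq:		CQ to process
 * @budget:	number of CQEs to poll for
 *
 * Poll completions directly, without offloading the work to a softirq or
 * workqueue and without arming completion interrupts.
 *
 * Note: calling this on a CQ whose poll context is not IB_POLL_DIRECT may
 * process completions concurrently with that CQ's IB_POLL_SOFTIRQ or
 * IB_POLL_WORKQUEUE path, so the caller is responsible for any
 * serialization it needs.
 */
int ib_process_cq_direct(struct ib_cq *cq, int budget);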
Sagi Grimberg March 13, 2017, 8:24 a.m. UTC | #2
>> polling the completion queue directly does not interfere
>> with the existing polling logic, hence drop the requirement.
>>
>> This can be used for polling mode ULPs.
>>
>> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
>> ---
>>  drivers/infiniband/core/cq.c | 2 --
>>  1 file changed, 2 deletions(-)
>>
>> diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
>> index 21d1a38af489..7f6ae0ecb0c5 100644
>> --- a/drivers/infiniband/core/cq.c
>> +++ b/drivers/infiniband/core/cq.c
>> @@ -64,8 +64,6 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
>>   */
>>  int ib_process_cq_direct(struct ib_cq *cq, int budget)
>>  {
>> -	WARN_ON_ONCE(cq->poll_ctx != IB_POLL_DIRECT);
>> -
>>  	return __ib_process_cq(cq, budget);
>>  }
>>  EXPORT_SYMBOL(ib_process_cq_direct);
>
> Using ib_process_cq_direct() for queues that have another type than
> IB_POLL_DIRECT may trigger concurrent CQ processing.

Correct.

> Before this patch
> the completions from each CQ were processed sequentially. That's a big
> change so I think this should be mentioned clearly in the header above
> ib_process_cq_direct().

Note that I now see that the cq lock is not sufficient for mutual
exclusion here, because we reference the cq->wc array outside of it.

I see three options here:
1. Allocate a separate wc array for polling mode, perhaps a smaller one?
2. Export __ib_process_cq() (in some form) with an option to pass in a
   caller-provided wc array.
3. Simply not support this kind of non-selective polling, but that seems
   like a shame...

Any thoughts?
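
For illustration, options 1 and 2 could combine into something like the
following sketch (IB_POLL_DIRECT_BATCH and the extra __ib_process_cq()
parameters are assumptions, nothing posted in this series): each direct-poll
caller gets its own small wc array, so cq->wc is never touched:

/* hypothetical, smaller batch size reserved for direct polling */
#define IB_POLL_DIRECT_BATCH	8

int ib_process_cq_direct(struct ib_cq *cq, int budget)
{
	struct ib_wc wcs[IB_POLL_DIRECT_BATCH];	/* private to this caller */

	/* assumes __ib_process_cq() gains wc-array and batch parameters */
	return __ib_process_cq(cq, budget, wcs, IB_POLL_DIRECT_BATCH);
}
EXPORT_SYMBOL(ib_process_cq_direct);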
Bart Van Assche March 14, 2017, 9:32 p.m. UTC | #3
On Mon, 2017-03-13 at 10:24 +0200, Sagi Grimberg wrote:
> > Before this patch
> > the completions from each CQ were processed sequentially. That's a big
> > change so I think this should be mentioned clearly in the header above
> > ib_process_cq_direct().
> 
> Note that I now see that the cq lock is not sufficient for mutual
> exclusion here because we're referencing cq->wc array outside of it.
> 
> There are three options I see here:
> 1. we'll need to allocate a different wc array for polling mode,
> perhaps a smaller one?
> 2. Export __ib_process_cq (in some form) with an option to pass in a wc
> array.
> 3. Simply not support non-selective polling but it seems like a shame...
> 
> Any thoughts?

I doubt it is possible to come up with an algorithm that recognizes whether
or not two different ib_process_cq() calls are serialized. So the
ib_process_cq() caller will have to provide that information. How about adding
an ib_wc array argument to ib_process_cq() and modifying __ib_process_cq()
such that it uses that array if specified and cq->wc otherwise?

Bart.
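
A rough sketch of that suggestion (parameter names are illustrative), with
__ib_process_cq() in drivers/infiniband/core/cq.c using a caller-supplied
array when one is given and falling back to cq->wc otherwise:

static int __ib_process_cq(struct ib_cq *cq, int budget,
			   struct ib_wc *wcs, int batch)
{
	int i, n, completed = 0;

	if (!wcs) {
		/* no caller-provided array: keep using the per-CQ one */
		wcs = cq->wc;
		batch = IB_POLL_BATCH;
	}

	while ((n = ib_poll_cq(cq, batch, wcs)) > 0) {
		for (i = 0; i < n; i++) {
			struct ib_wc *wc = &wcs[i];

			if (wc->wr_cqe)
				wc->wr_cqe->done(cq, wc);
			else
				WARN_ON_ONCE(wc->status == IB_WC_SUCCESS);
		}

		completed += n;
		if (n != batch || (budget != -1 && completed >= budget))
			break;
	}

	return completed;
}

/* The existing IB_POLL_SOFTIRQ / IB_POLL_WORKQUEUE paths would pass NULL
 * here to keep their current behaviour. */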

Patch

diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
index 21d1a38af489..7f6ae0ecb0c5 100644
--- a/drivers/infiniband/core/cq.c
+++ b/drivers/infiniband/core/cq.c
@@ -64,8 +64,6 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
  */
 int ib_process_cq_direct(struct ib_cq *cq, int budget)
 {
-	WARN_ON_ONCE(cq->poll_ctx != IB_POLL_DIRECT);
-
 	return __ib_process_cq(cq, budget);
 }
 EXPORT_SYMBOL(ib_process_cq_direct);