
[14/15] nvme-multipath: set QUEUE_FLAG_NOWAIT

Message ID 20210427161619.1294399-15-hch@lst.de (mailing list archive)
State New
Series [01/15] direct-io: remove blk_poll support

Commit Message

Christoph Hellwig April 27, 2021, 4:16 p.m. UTC
The nvme multipathing code just dispatches bios to one of the blk-mq
based paths and never blocks on its own, so set QUEUE_FLAG_NOWAIT
to support REQ_NOWAIT bios.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/multipath.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Ming Lei April 28, 2021, 2:32 a.m. UTC | #1
On Tue, Apr 27, 2021 at 06:16:18PM +0200, Christoph Hellwig wrote:
> The nvme multipathing code just dispatches bios to one of the blk-mq
> based paths and never blocks on its own, so set QUEUE_FLAG_NOWAIT
> to support REQ_NOWAIT bios.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/nvme/host/multipath.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 4e2c3a6787e9..1d17b2387884 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -442,6 +442,8 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
>  	if (!q)
>  		goto out;
>  	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> +	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);

BLK_MQ_F_BLOCKING is set for nvme-tcp, and the blocking may be done inside
nvme_ns_head_submit_bio(), is that one problem?


Thanks,
Ming
Christoph Hellwig April 29, 2021, 7:27 a.m. UTC | #2
On Wed, Apr 28, 2021 at 10:32:38AM +0800, Ming Lei wrote:
> > diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> > index 4e2c3a6787e9..1d17b2387884 100644
> > --- a/drivers/nvme/host/multipath.c
> > +++ b/drivers/nvme/host/multipath.c
> > @@ -442,6 +442,8 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
> >  	if (!q)
> >  		goto out;
> >  	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> > +	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
> 
> BLK_MQ_F_BLOCKING is set for nvme-tcp, and the blocking may be done inside
> nvme_ns_head_submit_bio(), is that one problem?

The point of BLK_MQ_F_BLOCKING is that ->queue_rq can block, and
because of that it is not called from the submitting context but in
a work queue.  nvme-tcp also sets QUEUE_FLAG_NOWAIT, just like all blk-mq
drivers.
Ming Lei April 29, 2021, 7:37 a.m. UTC | #3
On Thu, Apr 29, 2021 at 09:27:37AM +0200, Christoph Hellwig wrote:
> On Wed, Apr 28, 2021 at 10:32:38AM +0800, Ming Lei wrote:
> > > diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> > > index 4e2c3a6787e9..1d17b2387884 100644
> > > --- a/drivers/nvme/host/multipath.c
> > > +++ b/drivers/nvme/host/multipath.c
> > > @@ -442,6 +442,8 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
> > >  	if (!q)
> > >  		goto out;
> > >  	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> > > +	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
> > 
> > BLK_MQ_F_BLOCKING is set for nvme-tcp, and the blocking may be done inside
> > nvme_ns_head_submit_bio(), is that one problem?
> 
> The point of BLK_MQ_F_BLOCKING is that ->queue_rq can block, and
> because of that it is not called from the submitting context but in
> a work queue.  nvme-tcp also sets QUEUE_FLAG_NOWAIT, just like all blk-mq
> drivers.

BLK_MQ_F_BLOCKING can be called from submitting context, see
blk_mq_try_issue_directly().



Thanks,
Ming
Christoph Hellwig April 29, 2021, 9:46 a.m. UTC | #4
On Thu, Apr 29, 2021 at 03:37:47PM +0800, Ming Lei wrote:
> > > BLK_MQ_F_BLOCKING is set for nvme-tcp, and the blocking may be done inside
> > > nvme_ns_head_submit_bio(), is that one problem?
> > 
> > The point of BLK_MQ_F_BLOCKING is that ->queue_rq can block, and
> > because of that it is not called from the submitting context but in
> > a work queue.  nvme-tcp also sets QUEUE_FLAG_NOWAIT, just like all blk-mq
> > drivers.
> 
> BLK_MQ_F_BLOCKING can be called from submitting context, see
> blk_mq_try_issue_directly().

Then all drivers setting it have a problem, as the blk-mq core sets
QUEUE_FLAG_NOWAIT for all of them.  The right fix would be to not make
it block for REQ_NOWAIT I/O.
Sagi Grimberg May 6, 2021, 9:23 p.m. UTC | #5
>>>> BLK_MQ_F_BLOCKING is set for nvme-tcp, and the blocking may be done inside
>>>> nvme_ns_head_submit_bio(), is that one problem?
>>>
>>> The point of BLK_MQ_F_BLOCKING is that ->queue_rq can block, and
>>> because of that it is not called from the submitting context but in
>>> a work queue.  nvme-tcp also sets QUEUE_FLAG_NOWAIT, just like all blk-mq
>>> drivers.
>>
>> BLK_MQ_F_BLOCKING can be called from submitting context, see
>> blk_mq_try_issue_directly().
> 
> The all drivers setting it have a problem, as the blk-mq core sets
> BLK_MQ_F_BLOCKING for all of them.  The right fix would be to not make
> it block for REQ_NOWAIT I/O..

Hey Christoph and Ming,

I think that we can remove the BLK_MQ_F_BLOCKING setting from nvme-tcp.
The reason it was set is that lock_sock, taken in sendpage, acquires
the sk_lock mutex; however, given that nvme_tcp_try_send is protected
by the queue send_mutex (which is taken with mutex_trylock), we can
probably convert it to sendpage_locked...

Let me run it through some tests and get back to you...

Patch

diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 4e2c3a6787e9..1d17b2387884 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -442,6 +442,8 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	if (!q)
 		goto out;
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
+	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
+
 	/* set to a default value for 512 until disk is validated */
 	blk_queue_logical_block_size(q, 512);
 	blk_set_stacking_limits(&q->limits);