
block: recalculate segment count for multi-segment discard requests correctly

Message ID 20210201164850.391332-1-djeffery@redhat.com (mailing list archive)
State New, archived
Series block: recalculate segment count for multi-segment discard requests correctly

Commit Message

David Jeffery Feb. 1, 2021, 4:48 p.m. UTC
When a stacked block device inserts a request into another block device
using blk_insert_cloned_request, the request's nr_phys_segments field gets
recalculated by a call to blk_recalc_rq_segments in
blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
handle multi-segment discards. For disk types which can handle
multi-segment discards like nvme, this results in discard requests which
claim a single segment when they should report several, triggering a warning
in nvme and causing nvme to fail the discard due to the invalid state.

 WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
 ...
 nvme_setup_cmd+0x217/0x270 [nvme_core]
 nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
 __blk_mq_try_issue_directly+0xe7/0x1b0
 blk_mq_request_issue_directly+0x41/0x70
 ? blk_account_io_start+0x40/0x50
 dm_mq_queue_rq+0x200/0x3e0
 blk_mq_dispatch_rq_list+0x10a/0x7d0
 ? __sbitmap_queue_get+0x25/0x90
 ? elv_rb_del+0x1f/0x30
 ? deadline_remove_request+0x55/0xb0
 ? dd_dispatch_request+0x181/0x210
 __blk_mq_do_dispatch_sched+0x144/0x290
 ? bio_attempt_discard_merge+0x134/0x1f0
 __blk_mq_sched_dispatch_requests+0x129/0x180
 blk_mq_sched_dispatch_requests+0x30/0x60
 __blk_mq_run_hw_queue+0x47/0xe0
 __blk_mq_delay_run_hw_queue+0x15b/0x170
 blk_mq_sched_insert_requests+0x68/0xe0
 blk_mq_flush_plug_list+0xf0/0x170
 blk_finish_plug+0x36/0x50
 xlog_cil_committed+0x19f/0x290 [xfs]
 xlog_cil_process_committed+0x57/0x80 [xfs]
 xlog_state_do_callback+0x1e0/0x2a0 [xfs]
 xlog_ioend_work+0x2f/0x80 [xfs]
 process_one_work+0x1b6/0x350
 worker_thread+0x53/0x3e0
 ? process_one_work+0x350/0x350
 kthread+0x11b/0x140
 ? __kthread_bind_mask+0x60/0x60
 ret_from_fork+0x22/0x30

This patch fixes blk_recalc_rq_segments to be aware of devices which can
have multi-segment discards. It calculates the correct discard segment
count by counting the number of bios, as each discard bio is considered its
own segment.

Signed-off-by: David Jeffery <djeffery@redhat.com>
Tested-by: Laurence Oberman <loberman@redhat.com>
---
 block/blk-merge.c | 7 +++++++
 1 file changed, 7 insertions(+)
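
For context on the warning above: nvme_setup_discard() builds one DSM range per bio in the request and cross-checks that count against blk_rq_nr_discard_segments(). A paraphrased sketch of that check (simplified from drivers/nvme/host/core.c of this era; the allocation and command setup details are elided, so treat them as approximate):

	/* Paraphrased sketch of the nvme discard setup check, not verbatim source. */
	static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
					       struct nvme_command *cmnd)
	{
		unsigned short segments = blk_rq_nr_discard_segments(req), n = 0;
		struct nvme_dsm_range *range;
		struct bio *bio;

		/* The real driver may use a preallocated page instead of kmalloc. */
		range = kmalloc_array(segments, sizeof(*range), GFP_ATOMIC);
		if (!range)
			return BLK_STS_RESOURCE;

		/* One DSM range is filled in per bio chained into the request. */
		__rq_for_each_bio(bio, req) {
			u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
			u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;

			if (n < segments) {
				range[n].cattr = cpu_to_le32(0);
				range[n].nlb = cpu_to_le32(nlb);
				range[n].slba = cpu_to_le64(slba);
			}
			n++;
		}

		/*
		 * When blk_recalc_rq_segments() miscounts a merged discard as a
		 * single segment, n != segments here and the WARN above fires.
		 */
		if (WARN_ON_ONCE(n != segments)) {
			kfree(range);
			return BLK_STS_IOERR;
		}

		/* ... DSM command fields and SGL setup elided ... */
		return BLK_STS_OK;
	}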

Comments

Ming Lei Feb. 2, 2021, 3:33 a.m. UTC | #1
On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> [ snipped: quoted patch description, stack trace, and tags ]
> 
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 808768f6b174..fe7358bd5d09 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
>  
>  	switch (bio_op(rq->bio)) {
>  	case REQ_OP_DISCARD:
> +		if (queue_max_discard_segments(rq->q) > 1) {
> +			struct bio *bio = rq->bio;
> +			for_each_bio(bio)
> +				nr_phys_segs++;
> +			return nr_phys_segs;
> +		}
> +		/* fall through */
>  	case REQ_OP_SECURE_ERASE:

REQ_OP_SECURE_ERASE needs to be covered since the block layer treats
the two in a very similar way from the discard viewpoint.

Also, single-range discard should be fixed too, since the block layer
thinks a single-range discard request's segment count is 1. Otherwise, the
warning in virtblk_setup_discard_write_zeroes() may still be triggered, at least.
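
For reference, the helper Ming mentions clamps the recorded count to at least one; roughly, from include/linux/blkdev.h of this era (paraphrased):

	/*
	 * Each discard bio merged into a request counts as one segment, and a
	 * request whose nr_phys_segments is 0 or 1 is reported as one segment
	 * either way. This is why a single-range discard still looks correct
	 * to drivers even when blk_recalc_rq_segments() records 0 or 1.
	 */
	static inline unsigned short blk_rq_nr_discard_segments(struct request *rq)
	{
		return max_t(unsigned short, rq->nr_phys_segments, 1);
	}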
David Jeffery Feb. 2, 2021, 8:43 p.m. UTC | #2
On Tue, Feb 02, 2021 at 11:33:43AM +0800, Ming Lei wrote:
> 
> On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > [ snipped: quoted patch description, stack trace, and diff ]
> 
> REQ_OP_SECURE_ERASE needs to be covered since the block layer treats
> the two in a very similar way from the discard viewpoint.
> 
> Also, single-range discard should be fixed too, since the block layer
> thinks a single-range discard request's segment count is 1. Otherwise, the
> warning in virtblk_setup_discard_write_zeroes() may still be triggered, at least.
> 
> 
> -- 
> Ming
>

The return 0 does seem to be an old relic that does not make sense anymore.
Moving REQ_OP_SECURE_ERASE to be with discard and removing the old return 0,
is this what you had in mind?

 
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..68458aa01b05 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -383,8 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 	switch (bio_op(rq->bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
+		if (queue_max_discard_segments(rq->q) > 1) {
+			struct bio *bio = rq->bio;
+			for_each_bio(bio)
+				nr_phys_segs++;
+			return nr_phys_segs;
+		}
+		/* fall through */
 	case REQ_OP_WRITE_ZEROES:
-		return 0;
 	case REQ_OP_WRITE_SAME:
 		return 1;
 	}

--
David Jeffery
Ming Lei Feb. 3, 2021, 2:35 a.m. UTC | #3
On Tue, Feb 02, 2021 at 03:43:55PM -0500, David Jeffery wrote:
> On Tue, Feb 02, 2021 at 11:33:43AM +0800, Ming Lei wrote:
> > 
> > On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > > When a stacked block device inserts a request into another block device
> > > using blk_insert_cloned_request, the request's nr_phys_segments field gets
> > > recalculated by a call to blk_recalc_rq_segments in
> > > blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
> > > handle multi-segment discards. For disk types which can handle
> > > multi-segment discards like nvme, this results in discard requests which
> > > claim a single segment when it should report several, triggering a warning
> > > in nvme and causing nvme to fail the discard from the invalid state.
> > > 
> > >  WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
> > >  ...
> > >  nvme_setup_cmd+0x217/0x270 [nvme_core]
> > >  nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> > >  __blk_mq_try_issue_directly+0xe7/0x1b0
> > >  blk_mq_request_issue_directly+0x41/0x70
> > >  ? blk_account_io_start+0x40/0x50
> > >  dm_mq_queue_rq+0x200/0x3e0
> > >  blk_mq_dispatch_rq_list+0x10a/0x7d0
> > >  ? __sbitmap_queue_get+0x25/0x90
> > >  ? elv_rb_del+0x1f/0x30
> > >  ? deadline_remove_request+0x55/0xb0
> > >  ? dd_dispatch_request+0x181/0x210
> > >  __blk_mq_do_dispatch_sched+0x144/0x290
> > >  ? bio_attempt_discard_merge+0x134/0x1f0
> > >  __blk_mq_sched_dispatch_requests+0x129/0x180
> > >  blk_mq_sched_dispatch_requests+0x30/0x60
> > >  __blk_mq_run_hw_queue+0x47/0xe0
> > >  __blk_mq_delay_run_hw_queue+0x15b/0x170
> > >  blk_mq_sched_insert_requests+0x68/0xe0
> > >  blk_mq_flush_plug_list+0xf0/0x170
> > >  blk_finish_plug+0x36/0x50
> > >  xlog_cil_committed+0x19f/0x290 [xfs]
> > >  xlog_cil_process_committed+0x57/0x80 [xfs]
> > >  xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> > >  xlog_ioend_work+0x2f/0x80 [xfs]
> > >  process_one_work+0x1b6/0x350
> > >  worker_thread+0x53/0x3e0
> > >  ? process_one_work+0x350/0x350
> > >  kthread+0x11b/0x140
> > >  ? __kthread_bind_mask+0x60/0x60
> > >  ret_from_fork+0x22/0x30
> > > 
> > > This patch fixes blk_recalc_rq_segments to be aware of devices which can
> > > have multi-segment discards. It calculates the correct discard segment
> > > count by counting the number of bio as each discard bio is considered its
> > > own segment.
> > > 
> > > Signed-off-by: David Jeffery <djeffery@redhat.com>
> > > Tested-by: Laurence Oberman <loberman@redhat.com>
> > > ---
> > >  block/blk-merge.c | 7 +++++++
> > >  1 file changed, 7 insertions(+)
> > > 
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 808768f6b174..fe7358bd5d09 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> > >  
> > >  	switch (bio_op(rq->bio)) {
> > >  	case REQ_OP_DISCARD:
> > > +		if (queue_max_discard_segments(rq->q) > 1) {
> > > +			struct bio *bio = rq->bio;
> > > +			for_each_bio(bio)
> > > +				nr_phys_segs++;
> > > +			return nr_phys_segs;
> > > +		}
> > > +		/* fall through */
> > >  	case REQ_OP_SECURE_ERASE:
> > 
> > REQ_OP_SECURE_ERASE needs to be covered since block layer treats
> > the two in very similar way from discard viewpoint.
> > 
> > Also single range discard should be fixed too, since block layer
> > thinks single-range discard req segment is 1. Otherwise, the warning in
> > virtblk_setup_discard_write_zeroes() still may be triggered, at least.
> > 
> > 
> > -- 
> > Ming
> >
> 
> The return 0 does seem to be an old relic that does not make sense anymore.
> Moving REQ_OP_SECURE_ERASE to be with discard and removing the old return 0,
> is this what you had in mind?
> 
>  
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 808768f6b174..68458aa01b05 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -383,8 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
>  	switch (bio_op(rq->bio)) {
>  	case REQ_OP_DISCARD:
>  	case REQ_OP_SECURE_ERASE:
> +		if (queue_max_discard_segments(rq->q) > 1) {
> +			struct bio *bio = rq->bio;
> +			for_each_bio(bio)
> +				nr_phys_segs++;
> +			return nr_phys_segs;
> +		}
> +		/* fall through */
>  	case REQ_OP_WRITE_ZEROES:
> -		return 0;
>  	case REQ_OP_WRITE_SAME:
>  		return 1;

WRITE_SAME uses the same buffer, so nr_segments is still one; WRITE_ZEROES
doesn't need an extra payload, so nr_segments is zero; see
blk_bio_write_zeroes_split(), blk_bio_write_same_split(), attempt_merge()
and blk_rq_merge_ok().
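
The split helpers Ming cites record those counts when a bio is submitted; a condensed sketch (paraphrased from block/blk-merge.c of this era, not verbatim):

	/* Condensed sketch of the split helpers Ming refers to (paraphrased). */
	static struct bio *blk_bio_write_zeroes_split(struct request_queue *q,
			struct bio *bio, struct bio_set *bs, unsigned *nsegs)
	{
		*nsegs = 0;	/* WRITE_ZEROES carries no payload: zero segments */

		if (!q->limits.max_write_zeroes_sectors)
			return NULL;
		if (bio_sectors(bio) <= q->limits.max_write_zeroes_sectors)
			return NULL;
		return bio_split(bio, q->limits.max_write_zeroes_sectors, GFP_NOIO, bs);
	}

	static struct bio *blk_bio_write_same_split(struct request_queue *q,
			struct bio *bio, struct bio_set *bs, unsigned *nsegs)
	{
		*nsegs = 1;	/* WRITE_SAME repeats one buffer: always one segment */

		if (!q->limits.max_write_same_sectors)
			return NULL;
		if (bio_sectors(bio) <= q->limits.max_write_same_sectors)
			return NULL;
		return bio_split(bio, q->limits.max_write_same_sectors, GFP_NOIO, bs);
	}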
Chaitanya Kulkarni Feb. 3, 2021, 3:15 a.m. UTC | #4
On 2/2/21 18:39, Ming Lei wrote:
> +		/* fall through */
>  	case REQ_OP_WRITE_ZEROES:
> -		return 0;
I don't think returning 1 for write-zeroes is right.
Did you test this patch with a write-zeroes-enabled controller and
the right fs that triggers this behavior?
Chaitanya Kulkarni Feb. 3, 2021, 3:18 a.m. UTC | #5
On 2/2/21 18:39, Ming Lei wrote:
> +			struct bio *bio = rq->bio;
> +			for_each_bio(bio)
> +				nr_phys_segs++;
> +			return nr_phys_segs;
> +		}
Also, you need to add a blank line after the declaration of bio in the above
code block.
Laurence Oberman Feb. 3, 2021, 1:50 p.m. UTC | #6
On Wed, 2021-02-03 at 03:15 +0000, Chaitanya Kulkarni wrote:
> On 2/2/21 18:39, Ming Lei wrote:
> > +		/* fall through */
> >  	case REQ_OP_WRITE_ZEROES:
> > -		return 0;
> 
> I don't think returning 1 for write-zeroes is right,
> did you test this patch with write-zeores enabled controller with
> the right fs that triggers this behavior ?
> 
> 
I tested the first iteration of the patch fully, mounting an XFS file
system with -o discard and creating and deleting files.
That was the specific RHEL8 failure we were handling here with David's
first submission.

I can test his most recent version; I have not done that yet.
Again, please follow up with exactly what you want based on
David's patch and I can test that.

Regards
Laurence
Laurence Oberman Feb. 3, 2021, 3:08 p.m. UTC | #7
On Wed, 2021-02-03 at 08:50 -0500, Laurence Oberman wrote:
> [ snipped: quoted exchange ]

So if I understand what you were wanting, I will test this now.

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..a9bd958c07c4 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -383,6 +383,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
        switch (bio_op(rq->bio)) {
        case REQ_OP_DISCARD:
        case REQ_OP_SECURE_ERASE:
+                if (queue_max_discard_segments(rq->q) > 1) {
+                        struct bio *bio = rq->bio;
+
+                        for_each_bio(bio)
+                                nr_phys_segs++;
+                        return nr_phys_segs;
+                }
+                /* fall through */
        case REQ_OP_WRITE_ZEROES:
                return 0;
        case REQ_OP_WRITE_SAME:
David Jeffery Feb. 3, 2021, 4:23 p.m. UTC | #8
On Wed, Feb 03, 2021 at 10:35:17AM +0800, Ming Lei wrote:
> 
> On Tue, Feb 02, 2021 at 03:43:55PM -0500, David Jeffery wrote:
> > [ snipped: quoted discussion and diff ]
> 
> WRITE_SAME uses the same buffer, so nr_segments is still one; WRITE_ZEROES
> doesn't need an extra payload, so nr_segments is zero; see
> blk_bio_write_zeroes_split(), blk_bio_write_same_split(), attempt_merge()
> and blk_rq_merge_ok().
> 

I thought you mentioned virtio-blk because of how some drivers handle
zeroing and discarding similarly and wanted to align the segment count with
discard behavior for WRITE_ZEROES too. (Though that would also need an update
to blk_bio_write_zeroes_split as you pointed out.) So you want me to leave
WRITE_ZEROES behavior alone and let blk_rq_nr_discard_segments() keep hiding
a 0 rq->nr_phys_segments as 1 segment in the WRITE_ZEROES-treated-as-a-discard
case?

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..756473295f19 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -383,6 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 	switch (bio_op(rq->bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
+		if (queue_max_discard_segments(rq->q) > 1) {
+			struct bio *bio = rq->bio;
+
+			for_each_bio(bio)
+				nr_phys_segs++;
+			return nr_phys_segs;
+		}
+		return 1;
 	case REQ_OP_WRITE_ZEROES:
 		return 0;
 	case REQ_OP_WRITE_SAME:

--
David Jeffery
Ming Lei Feb. 4, 2021, 2:18 a.m. UTC | #9
On Wed, Feb 03, 2021 at 11:23:37AM -0500, David Jeffery wrote:
> On Wed, Feb 03, 2021 at 10:35:17AM +0800, Ming Lei wrote:
> > 
> > On Tue, Feb 02, 2021 at 03:43:55PM -0500, David Jeffery wrote:
> > > The return 0 does seem to be an old relic that does not make sense anymore.
> > > Moving REQ_OP_SECURE_ERASE to be with discard and removing the old return 0,
> > > is this what you had in mind?
> > > 
> > >  
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 808768f6b174..68458aa01b05 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -383,8 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> > >  	switch (bio_op(rq->bio)) {
> > >  	case REQ_OP_DISCARD:
> > >  	case REQ_OP_SECURE_ERASE:
> > > +		if (queue_max_discard_segments(rq->q) > 1) {
> > > +			struct bio *bio = rq->bio;
> > > +			for_each_bio(bio)
> > > +				nr_phys_segs++;
> > > +			return nr_phys_segs;
> > > +		}
> > > +		/* fall through */
> > >  	case REQ_OP_WRITE_ZEROES:
> > > -		return 0;
> > >  	case REQ_OP_WRITE_SAME:
> > >  		return 1;
> > 
> > WRITE_SAME uses the same buffer, so nr_segments is still one; WRITE_ZEROES
> > doesn't need an extra payload, so nr_segments is zero; see
> > blk_bio_write_zeroes_split(), blk_bio_write_same_split(), attempt_merge()
> > and blk_rq_merge_ok().
> > 
> 
> I thought you mentioned virtio-blk because of how some drivers handle
> zeroing and discarding similarly and wanted to align the segment count with
> discard behavior for WRITE_ZEROES too. (Though that would also need an update

virtio-blk is just one example which supports both single and multiple
discard ranges; meanwhile, virtblk_setup_discard_write_zeroes()
simply maps write zeroes onto discard directly.

I just found that blk_rq_nr_discard_segments() always returns >=1 segments,
so it looks like your patch is enough to avoid the warning.

> to blk_bio_write_zeroes_split as you pointed out.) So you want me to leave
> WRITE_ZEROES behavior alone and let blk_rq_nr_discard_segments() keep hiding
> a 0 rq->nr_phys_segments as 1 segment in the WRITE_ZEROES-treated-as-a-discard
> case?
> 
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 808768f6b174..756473295f19 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -383,6 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
>  	switch (bio_op(rq->bio)) {
>  	case REQ_OP_DISCARD:
>  	case REQ_OP_SECURE_ERASE:
> +		if (queue_max_discard_segments(rq->q) > 1) {
> +			struct bio *bio = rq->bio;
> +
> +			for_each_bio(bio)
> +				nr_phys_segs++;
> +			return nr_phys_segs;
> +		}
> +		return 1;
>  	case REQ_OP_WRITE_ZEROES:
>  		return 0;
>  	case REQ_OP_WRITE_SAME:

This patch returns 1 for single-range discard explicitly. However, that
isn't necessary because of blk_rq_nr_discard_segments().

Maybe we can align to blk_bio_discard_split() in the future, but that can be
done as a cleanup.

Thanks,
Ming
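
For reference, blk_bio_discard_split() is where the submit-time count Ming mentions originates; a paraphrased sketch with the granularity-based splitting elided:

	static struct bio *blk_bio_discard_split(struct request_queue *q,
			struct bio *bio, struct bio_set *bs, unsigned *nsegs)
	{
		/*
		 * Each discard bio counts as one segment at submit time; merging
		 * discontiguous discard bios is what later raises a request's
		 * count, which blk_recalc_rq_segments() must re-derive from the
		 * bio chain.
		 */
		*nsegs = 1;

		/* ... splitting on max_discard_sectors and granularity elided ... */
		return NULL;
	}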
Ming Lei Feb. 4, 2021, 2:27 a.m. UTC | #10
On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> [ snipped: quoted patch description, stack trace, and tags ]
> 
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 808768f6b174..fe7358bd5d09 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
>  
>  	switch (bio_op(rq->bio)) {
>  	case REQ_OP_DISCARD:
> +		if (queue_max_discard_segments(rq->q) > 1) {
> +			struct bio *bio = rq->bio;
> +			for_each_bio(bio)
> +				nr_phys_segs++;
> +			return nr_phys_segs;
> +		}
> +		/* fall through */
>  	case REQ_OP_SECURE_ERASE:
>  	case REQ_OP_WRITE_ZEROES:
>  		return 0;

blk_rq_nr_discard_segments() always returns >=1 segments, so there is no
similar issue in the case of a single-range discard.

Reviewed-by: Ming Lei <ming.lei@redhat.com>

And it can be thought as:

Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard bios into a single request")
Laurence Oberman Feb. 4, 2021, 4:43 p.m. UTC | #11
On Thu, 2021-02-04 at 10:27 +0800, Ming Lei wrote:
> On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > [ snipped: quoted patch description, stack trace, and diff ]
> blk_rq_nr_discard_segments() always returns >=1 segments, so there is no
> similar issue in the case of a single-range discard.
> 
> Reviewed-by: Ming Lei <ming.lei@redhat.com>
> 
> And it can be thought as:
> 
> Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard
> bios into a single request")
> 
> 

Great, can we get enough acks and push this through? It's urgent for me.
Reviewed-by: Laurence Oberman <loberman@redhat.com>
Laurence Oberman Feb. 8, 2021, 6:53 p.m. UTC | #12
On Thu, 2021-02-04 at 11:43 -0500, Laurence Oberman wrote:
> On Thu, 2021-02-04 at 10:27 +0800, Ming Lei wrote:
> > On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > > [ snipped: quoted patch description, stack trace, and review ]
> 
> Great, can we get enough acks and push this through? It's urgent for me.
> Reviewed-by: Laurence Oberman <loberman@redhat.com>

Hate to ping again, but we can't take this into RHEL unless it's
upstream. Can we get enough acks to get this in?

Many Thanks
Laurence
John Pittman Feb. 8, 2021, 6:58 p.m. UTC | #13
Hi Jens, when you get a moment, could you take a quick look at this one for ack?

On Thu, Feb 4, 2021 at 11:49 AM Laurence Oberman <loberman@redhat.com> wrote:
>
> [ snipped: quoted patch description, stack trace, and review ]
>
> Great, can we get enough acks and push this through? It's urgent for me.
> Reviewed-by: Laurence Oberman <loberman@redhat.com>
>

Patch

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..fe7358bd5d09 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 
 	switch (bio_op(rq->bio)) {
 	case REQ_OP_DISCARD:
+		if (queue_max_discard_segments(rq->q) > 1) {
+			struct bio *bio = rq->bio;
+			for_each_bio(bio)
+				nr_phys_segs++;
+			return nr_phys_segs;
+		}
+		/* fall through */
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
 		return 0;