
[5/7] blk-mq: defer to the normal submission path for post-flush requests

Message ID 20230416200930.29542-6-hch@lst.de (mailing list archive)
State New, archived
Series: [1/7] blk-mq: factor out a blk_rq_init_flush helper

Commit Message

Christoph Hellwig April 16, 2023, 8:09 p.m. UTC
Requests with the FUA bit on hardware without FUA support need a post
flush before returning the caller, but they can still be sent using
the normal I/O path after initializing the flush-related fields and
end I/O handler.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-flush.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
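For context, the flush state machine first classifies each request into the REQ_FSEQ_* steps it needs, and blk_insert_flush() then decides how to dispatch it based on that classification. A rough, paraphrased sketch of the classification follows (modeled on the blk_flush_policy() logic in block/blk-flush.c; the helper name here is illustrative and details are simplified, so this is not the exact upstream code):

	/*
	 * Paraphrased sketch: decide which REQ_FSEQ_* steps a request needs.
	 */
	static unsigned int flush_policy_sketch(unsigned long fflags,
						struct request *rq)
	{
		unsigned int policy = 0;

		if (blk_rq_sectors(rq))
			policy |= REQ_FSEQ_DATA;

		if (fflags & (1UL << QUEUE_FLAG_WC)) {
			if (rq->cmd_flags & REQ_PREFLUSH)
				policy |= REQ_FSEQ_PREFLUSH;
			/* emulate FUA with a post flush if the device lacks it */
			if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
			    (rq->cmd_flags & REQ_FUA))
				policy |= REQ_FSEQ_POSTFLUSH;
		}
		return policy;
	}

A REQ_FUA write on a queue without native FUA support thus ends up as REQ_FSEQ_DATA | REQ_FSEQ_POSTFLUSH, which is the case this patch routes through the normal submission path.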

Comments

Damien Le Moal April 17, 2023, 6:36 a.m. UTC | #1
On 4/17/23 05:09, Christoph Hellwig wrote:
> Requests with the FUA bit on hardware without FUA support need a post
> flush before returning the caller, but they can still be sent using

s/returning/returning to

> the normal I/O path after initializing the flush-related fields and
> end I/O handler.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/blk-flush.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/block/blk-flush.c b/block/blk-flush.c
> index f62e74d9d56bc8..9eda6d46438dba 100644
> --- a/block/blk-flush.c
> +++ b/block/blk-flush.c
> @@ -435,6 +435,17 @@ bool blk_insert_flush(struct request *rq)
>  		 * Queue for normal execution.
>  		 */
>  		return false;
> +	case REQ_FSEQ_DATA | REQ_FSEQ_POSTFLUSH:
> +		/*
> +		 * Initialize the flush fields and completion handler to trigger
> +		 * the post flush, and then just pass the command on.
> +		 */
> +		blk_rq_init_flush(rq);
> +		rq->flush.seq |= REQ_FSEQ_PREFLUSH;

Shouldn't this be REQ_FSEQ_POSTFLUSH ?

> +		spin_lock_irq(&fq->mq_flush_lock);
> +		list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
> +		spin_unlock_irq(&fq->mq_flush_lock);
> +		return false;
>  	default:
>  		/*
>  		 * Mark the request as part of a flush sequence and submit it
Christoph Hellwig April 17, 2023, 6:39 a.m. UTC | #2
On Mon, Apr 17, 2023 at 03:36:54PM +0900, Damien Le Moal wrote:
> > +	case REQ_FSEQ_DATA | REQ_FSEQ_POSTFLUSH:
> > +		/*
> > +		 * Initialize the flush fields and completion handler to trigger
> > +		 * the post flush, and then just pass the command on.
> > +		 */
> > +		blk_rq_init_flush(rq);
> > +		rq->flush.seq |= REQ_FSEQ_PREFLUSH;
> 
> Shouldn't this be REQ_FSEQ_POSTFLUSH ?

Yes.  My fault for optimizing away the complicated assignment at the
last minute.
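For reference, the "complicated assignment" alluded to here is how the pre-existing default case seeds rq->flush.seq by marking the steps the request does not need as already done. Roughly, abridged from the surrounding blk_insert_flush() code at this point in the series (not quoted from the patch itself):

	default:
		/*
		 * Mark the request as part of a flush sequence and submit it
		 * for further processing to the flush state machine.
		 */
		blk_rq_init_flush(rq);
		spin_lock_irq(&fq->mq_flush_lock);
		blk_flush_complete_seq(rq, fq, REQ_FSEQ_ACTIONS & ~policy, 0);
		spin_unlock_irq(&fq->mq_flush_lock);
		return true;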

Patch

diff --git a/block/blk-flush.c b/block/blk-flush.c
index f62e74d9d56bc8..9eda6d46438dba 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -435,6 +435,17 @@ bool blk_insert_flush(struct request *rq)
 		 * Queue for normal execution.
 		 */
 		return false;
+	case REQ_FSEQ_DATA | REQ_FSEQ_POSTFLUSH:
+		/*
+		 * Initialize the flush fields and completion handler to trigger
+		 * the post flush, and then just pass the command on.
+		 */
+		blk_rq_init_flush(rq);
+		rq->flush.seq |= REQ_FSEQ_PREFLUSH;
+		spin_lock_irq(&fq->mq_flush_lock);
+		list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
+		spin_unlock_irq(&fq->mq_flush_lock);
+		return false;
 	default:
 		/*
 		 * Mark the request as part of a flush sequence and submit it
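What makes "just pass the command on" sufficient is the completion handler installed by blk_rq_init_flush(): when the data phase of the request finishes, that handler advances the flush sequence so the remaining flush is issued from the flush state machine. A condensed sketch of that handler follows (paraphrased from mq_flush_data_end_io() in block/blk-flush.c; the name here is illustrative, and details such as driver-tag handling and queue restart are omitted):

	static enum rq_end_io_ret flush_data_end_io_sketch(struct request *rq,
							   blk_status_t error)
	{
		struct blk_flush_queue *fq = blk_get_flush_queue(rq->q, rq->mq_ctx);
		unsigned long flags;

		/*
		 * The data step of the sequence is done; record it and let
		 * blk_flush_complete_seq() issue the flush that is still
		 * pending for this request, if any.
		 */
		spin_lock_irqsave(&fq->mq_flush_lock, flags);
		blk_flush_complete_seq(rq, fq, REQ_FSEQ_DATA, error);
		spin_unlock_irqrestore(&fq->mq_flush_lock, flags);

		return RQ_END_IO_NONE;
	}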