
[5/7] blk-mq: defer to the normal submission path for post-flush requests

Message ID 20230519044050.107790-6-hch@lst.de (mailing list archive)
State New, archived
Series [1/7] blk-mq: factor out a blk_rq_init_flush helper

Commit Message

Christoph Hellwig May 19, 2023, 4:40 a.m. UTC
Requests with the FUA bit set, issued to hardware without FUA support,
need a post-flush before completion is returned to the caller, but they
can still be sent using the normal I/O path after initializing the
flush-related fields and the end I/O handler.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-flush.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
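
[Editor's note: blk_rq_init_flush() is the helper factored out in patch
1/7 of this series.  A rough sketch of it, based on that patch and not
necessarily verbatim: it resets the flush sequencing state and reroutes
the request's completion into the flush state machine, which is what
lets the post-flush be triggered once the data part finishes.]

static void blk_rq_init_flush(struct request *rq)
{
	/* Start a fresh flush sequence for this request. */
	rq->flush.seq = 0;
	INIT_LIST_HEAD(&rq->flush.list);
	rq->rq_flags |= RQF_FLUSH_SEQ;
	/*
	 * Save the caller's completion handler and route completion
	 * through the flush state machine instead.
	 */
	rq->flush.saved_end_io = rq->end_io; /* Usually NULL */
	rq->end_io = mq_flush_data_end_io;
}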

Comments

Bart Van Assche May 19, 2023, 7:42 p.m. UTC | #1
On 5/18/23 21:40, Christoph Hellwig wrote:
> Requests with the FUA bit set, issued to hardware without FUA support,
> need a post-flush before completion is returned to the caller, but they
> can still be sent using the normal I/O path after initializing the
> flush-related fields and the end I/O handler.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
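
[Editor's note: for readers following the flow, the post-flush is driven
by mq_flush_data_end_io(), the completion handler installed by
blk_rq_init_flush().  When the data part of the request completes, it
advances the flush sequence; since REQ_FSEQ_POSTFLUSH is still pending,
the state machine issues the flush before the original end_io runs.  The
sketch below approximates that handler as it looks in block/blk-flush.c
around this series; details may differ slightly from the real code.]

static enum rq_end_io_ret mq_flush_data_end_io(struct request *rq,
					       blk_status_t error)
{
	struct request_queue *q = rq->q;
	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
	struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
	unsigned long flags;

	/*
	 * Mark the DATA step of the sequence complete.  Because
	 * REQ_FSEQ_POSTFLUSH is still pending for this request,
	 * blk_flush_complete_seq() kicks off the post-flush, and the
	 * saved completion only runs once that flush has finished.
	 */
	spin_lock_irqsave(&fq->mq_flush_lock, flags);
	blk_flush_complete_seq(rq, fq, REQ_FSEQ_DATA, error);
	spin_unlock_irqrestore(&fq->mq_flush_lock, flags);

	/* Rerun the hardware queue in case it stalled while waiting. */
	blk_mq_sched_restart(hctx);
	return RQ_END_IO_NONE;
}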

Patch

diff --git a/block/blk-flush.c b/block/blk-flush.c
index 6fb9cf2d38184b..7121f9ad0762f8 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -432,6 +432,17 @@ bool blk_insert_flush(struct request *rq)
 		 * Queue for normal execution.
 		 */
 		return false;
+	case REQ_FSEQ_DATA | REQ_FSEQ_POSTFLUSH:
+		/*
+		 * Initialize the flush fields and completion handler to trigger
+		 * the post flush, and then just pass the command on.
+		 */
+		blk_rq_init_flush(rq);
+		rq->flush.seq |= REQ_FSEQ_POSTFLUSH;
+		spin_lock_irq(&fq->mq_flush_lock);
+		list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
+		spin_unlock_irq(&fq->mq_flush_lock);
+		return false;
 	default:
 		/*
 		 * Mark the request as part of a flush sequence and submit it