From patchwork Fri May 19 04:40:44 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13247674
From: Christoph Hellwig
To: Jens Axboe
Cc: Bart Van Assche, Damien Le Moal, linux-block@vger.kernel.org
Subject: [PATCH 1/7] blk-mq: factor out a blk_rq_init_flush helper
Date: Fri, 19 May 2023 06:40:44 +0200
Message-Id: <20230519044050.107790-2-hch@lst.de>
In-Reply-To: <20230519044050.107790-1-hch@lst.de>
References: <20230519044050.107790-1-hch@lst.de>

Factor out a helper from blk_insert_flush that initializes the
flush-machinery fields in struct request. Don't bother with the full
memset: there are only a few fields to initialize, and all but one
already have explicit initializers.
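
For context (not part of the patch): the fields blk_rq_init_flush touches
live in the flush member of struct request, which at this point still
shares a union with the elevator data (patch 4 below separates them).
A sketch of the relevant part of include/linux/blk-mq.h:

    union {
            struct {
                    struct io_cq     *icq;
                    void             *priv[2];
            } elv;                           /* I/O scheduler state */

            struct {
                    unsigned int     seq;           /* REQ_FSEQ_* progress bits */
                    struct list_head list;          /* entry on the flush queue lists */
                    rq_end_io_fn     *saved_end_io; /* original ->end_io, usually NULL */
            } flush;                         /* flush state machine state */
    };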
Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
Reviewed-by: Bart Van Assche
---
 block/blk-flush.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index 04698ed9bcd4a9..ed37d272f787eb 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -376,6 +376,15 @@ static enum rq_end_io_ret mq_flush_data_end_io(struct request *rq,
         return RQ_END_IO_NONE;
 }
 
+static void blk_rq_init_flush(struct request *rq)
+{
+        rq->flush.seq = 0;
+        INIT_LIST_HEAD(&rq->flush.list);
+        rq->rq_flags |= RQF_FLUSH_SEQ;
+        rq->flush.saved_end_io = rq->end_io; /* Usually NULL */
+        rq->end_io = mq_flush_data_end_io;
+}
+
 /**
  * blk_insert_flush - insert a new PREFLUSH/FUA request
  * @rq: request to insert
@@ -437,13 +446,7 @@ void blk_insert_flush(struct request *rq)
          * @rq should go through flush machinery.  Mark it part of flush
          * sequence and submit for further processing.
          */
-        memset(&rq->flush, 0, sizeof(rq->flush));
-        INIT_LIST_HEAD(&rq->flush.list);
-        rq->rq_flags |= RQF_FLUSH_SEQ;
-        rq->flush.saved_end_io = rq->end_io; /* Usually NULL */
-
-        rq->end_io = mq_flush_data_end_io;
-
+        blk_rq_init_flush(rq);
         spin_lock_irq(&fq->mq_flush_lock);
         blk_flush_complete_seq(rq, fq, REQ_FSEQ_ACTIONS & ~policy, 0);
         spin_unlock_irq(&fq->mq_flush_lock);

From patchwork Fri May 19 04:40:45 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13247675
From: Christoph Hellwig
To: Jens Axboe
Cc: Bart Van Assche, Damien Le Moal, linux-block@vger.kernel.org
Subject: [PATCH 2/7] blk-mq: reflow blk_insert_flush
Date: Fri, 19 May 2023 06:40:45 +0200
Message-Id: <20230519044050.107790-3-hch@lst.de>
In-Reply-To: <20230519044050.107790-1-hch@lst.de>
References: <20230519044050.107790-1-hch@lst.de>

Use a switch statement to decide on the disposition of a flush request
instead of a series of if statements, one of which performs checks that
are more complex than necessary. Also warn on a malformed request early
on instead of relying on a BUG_ON later.
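
For context (not part of the patch, paraphrased from block/blk-flush.c as
it stood around this series): the policy value being switched on is built
by blk_flush_policy() from the request flags and the queue's
write-cache/FUA capabilities, roughly:

    enum {
            REQ_FSEQ_PREFLUSH  = (1 << 0), /* pre-flushing in progress */
            REQ_FSEQ_DATA      = (1 << 1), /* data write in progress */
            REQ_FSEQ_POSTFLUSH = (1 << 2), /* post-flushing in progress */
            REQ_FSEQ_DONE      = (1 << 3),

            REQ_FSEQ_ACTIONS   = REQ_FSEQ_PREFLUSH | REQ_FSEQ_DATA |
                                 REQ_FSEQ_POSTFLUSH,
    };

    static unsigned int blk_flush_policy(unsigned long fflags, struct request *rq)
    {
            unsigned int policy = 0;

            if (blk_rq_sectors(rq))
                    policy |= REQ_FSEQ_DATA;

            if (fflags & (1UL << QUEUE_FLAG_WC)) {
                    if (rq->cmd_flags & REQ_PREFLUSH)
                            policy |= REQ_FSEQ_PREFLUSH;
                    if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
                        (rq->cmd_flags & REQ_FUA))
                            policy |= REQ_FSEQ_POSTFLUSH;
            }
            return policy;
    }

So case 0 is an empty flush for a device without a volatile write cache,
case REQ_FSEQ_DATA is a data write that needs no flushing at all, and
every other combination has to go through the flush state machine.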
Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
Reviewed-by: Bart Van Assche
---
 block/blk-flush.c | 53 +++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index ed37d272f787eb..d8144f1f6fb12f 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -402,6 +402,9 @@ void blk_insert_flush(struct request *rq)
         struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
         struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
+        /* FLUSH/FUA request must never be merged */
+        WARN_ON_ONCE(rq->bio != rq->biotail);
+
         /*
          * @policy now records what operations need to be done.  Adjust
          * REQ_PREFLUSH and FUA for the driver.
@@ -417,39 +420,35 @@ void blk_insert_flush(struct request *rq)
          */
         rq->cmd_flags |= REQ_SYNC;
 
-        /*
-         * An empty flush handed down from a stacking driver may
-         * translate into nothing if the underlying device does not
-         * advertise a write-back cache.  In this case, simply
-         * complete the request.
-         */
-        if (!policy) {
+        switch (policy) {
+        case 0:
+                /*
+                 * An empty flush handed down from a stacking driver may
+                 * translate into nothing if the underlying device does not
+                 * advertise a write-back cache.  In this case, simply
+                 * complete the request.
+                 */
                 blk_mq_end_request(rq, 0);
                 return;
-        }
-
-        BUG_ON(rq->bio != rq->biotail); /*assumes zero or single bio rq */
-
-        /*
-         * If there's data but flush is not necessary, the request can be
-         * processed directly without going through flush machinery.  Queue
-         * for normal execution.
-         */
-        if ((policy & REQ_FSEQ_DATA) &&
-            !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
+        case REQ_FSEQ_DATA:
+                /*
+                 * If there's data, but no flush is necessary, the request can
+                 * be processed directly without going through flush machinery.
+                 * Queue for normal execution.
+                 */
                 blk_mq_request_bypass_insert(rq, 0);
                 blk_mq_run_hw_queue(hctx, false);
                 return;
+        default:
+                /*
+                 * Mark the request as part of a flush sequence and submit it
+                 * for further processing to the flush state machine.
+                 */
+                blk_rq_init_flush(rq);
+                spin_lock_irq(&fq->mq_flush_lock);
+                blk_flush_complete_seq(rq, fq, REQ_FSEQ_ACTIONS & ~policy, 0);
+                spin_unlock_irq(&fq->mq_flush_lock);
         }
-
-        /*
-         * @rq should go through flush machinery.  Mark it part of flush
-         * sequence and submit for further processing.
-         */
-        blk_rq_init_flush(rq);
-        spin_lock_irq(&fq->mq_flush_lock);
-        blk_flush_complete_seq(rq, fq, REQ_FSEQ_ACTIONS & ~policy, 0);
-        spin_unlock_irq(&fq->mq_flush_lock);
 }
 
 /**

From patchwork Fri May 19 04:40:46 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13247676
From: Christoph Hellwig
To: Jens Axboe
Cc: Bart Van Assche, Damien Le Moal, linux-block@vger.kernel.org
Subject: [PATCH 3/7] blk-mq: defer to the normal submission path for non-flush flush commands
Date: Fri, 19 May 2023 06:40:46 +0200
Message-Id: <20230519044050.107790-4-hch@lst.de>
In-Reply-To: <20230519044050.107790-1-hch@lst.de>
References: <20230519044050.107790-1-hch@lst.de>

If blk_insert_flush decides that a command does not need to use the
flush state machine, return false and let blk_mq_submit_bio handle it
the normal way (including using an I/O scheduler) instead of doing a
bypass insert.
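
For context (not part of the patch), the new calling convention,
condensed from the blk-mq.c hunk below:

    if (op_is_flush(bio->bi_opf) && blk_insert_flush(rq))
            return;         /* consumed by the flush state machine */
    /* otherwise: fall through to plug / elevator insertion as usual */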
Signed-off-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Reviewed-by: Damien Le Moal
---
 block/blk-flush.c | 22 ++++++++--------------
 block/blk-mq.c    |  8 ++++----
 block/blk-mq.h    |  4 ----
 block/blk.h       |  2 +-
 4 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index d8144f1f6fb12f..6fb9cf2d38184b 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -385,22 +385,17 @@ static void blk_rq_init_flush(struct request *rq)
         rq->end_io = mq_flush_data_end_io;
 }
 
-/**
- * blk_insert_flush - insert a new PREFLUSH/FUA request
- * @rq: request to insert
- *
- * To be called from __elv_add_request() for %ELEVATOR_INSERT_FLUSH insertions.
- * or __blk_mq_run_hw_queue() to dispatch request.
- * @rq is being submitted.  Analyze what needs to be done and put it on the
- * right queue.
+/*
+ * Insert a PREFLUSH/FUA request into the flush state machine.
+ * Returns true if the request has been consumed by the flush state machine,
+ * or false if the caller should continue to process it.
  */
-void blk_insert_flush(struct request *rq)
+bool blk_insert_flush(struct request *rq)
 {
         struct request_queue *q = rq->q;
         unsigned long fflags = q->queue_flags;        /* may change, cache */
         unsigned int policy = blk_flush_policy(fflags, rq);
         struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
-        struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
         /* FLUSH/FUA request must never be merged */
         WARN_ON_ONCE(rq->bio != rq->biotail);
@@ -429,16 +424,14 @@ void blk_insert_flush(struct request *rq)
                  * complete the request.
                  */
                 blk_mq_end_request(rq, 0);
-                return;
+                return true;
         case REQ_FSEQ_DATA:
                 /*
                  * If there's data, but no flush is necessary, the request can
                  * be processed directly without going through flush machinery.
                  * Queue for normal execution.
                  */
-                blk_mq_request_bypass_insert(rq, 0);
-                blk_mq_run_hw_queue(hctx, false);
-                return;
+                return false;
         default:
                 /*
                  * Mark the request as part of a flush sequence and submit it
@@ -448,6 +441,7 @@ void blk_insert_flush(struct request *rq)
                 spin_lock_irq(&fq->mq_flush_lock);
                 blk_flush_complete_seq(rq, fq, REQ_FSEQ_ACTIONS & ~policy, 0);
                 spin_unlock_irq(&fq->mq_flush_lock);
+                return true;
         }
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e021740154feae..c0b394096b6b6b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -45,6 +45,8 @@ static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
 
 static void blk_mq_insert_request(struct request *rq, blk_insert_t flags);
+static void blk_mq_request_bypass_insert(struct request *rq,
+                blk_insert_t flags);
 static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
                 struct list_head *list);
 
@@ -2430,7 +2432,7 @@ static void blk_mq_run_work_fn(struct work_struct *work)
  * Should only be used carefully, when the caller knows we want to
  * bypass a potential IO scheduler on the target device.
  */
-void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags)
+static void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags)
 {
         struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
@@ -2977,10 +2979,8 @@ void blk_mq_submit_bio(struct bio *bio)
                 return;
         }
 
-        if (op_is_flush(bio->bi_opf)) {
-                blk_insert_flush(rq);
+        if (op_is_flush(bio->bi_opf) && blk_insert_flush(rq))
                 return;
-        }
 
         if (plug) {
                 blk_add_rq_to_plug(plug, rq);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index d15981db34b958..ec7d2fb0b3c8ef 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -64,10 +64,6 @@ struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set,
 void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
                              struct blk_mq_tags *tags,
                              unsigned int hctx_idx);
-/*
- * Internal helpers for request insertion into sw queues
- */
-void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags);
 
 /*
  * CPU -> queue mappings
diff --git a/block/blk.h b/block/blk.h
index 45547bcf111938..9f171b8f1e3402 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -269,7 +269,7 @@ bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
  */
 #define ELV_ON_HASH(rq) ((rq)->rq_flags & RQF_HASHED)
 
-void blk_insert_flush(struct request *rq);
+bool blk_insert_flush(struct request *rq);
 
 int elevator_switch(struct request_queue *q, struct elevator_type *new_e);
 void elevator_disable(struct request_queue *q);

From patchwork Fri May 19 04:40:47 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13247677
From: Christoph Hellwig
To: Jens Axboe
Cc: Bart Van Assche, Damien Le Moal, linux-block@vger.kernel.org
Subject: [PATCH 4/7] blk-mq: use the I/O scheduler for writes from the flush state machine
Date: Fri, 19 May 2023 06:40:47 +0200
Message-Id: <20230519044050.107790-5-hch@lst.de>
In-Reply-To: <20230519044050.107790-1-hch@lst.de>
References: <20230519044050.107790-1-hch@lst.de>

From: Bart Van Assche

Send write requests issued by the flush state machine through the normal
I/O submission path, including the I/O scheduler (if present), so that
I/O scheduler policies are applied to writes with the FUA flag set.

Separate the I/O scheduler members from the flush members in struct
request, since a request may now pass through both an I/O scheduler and
the flush machinery.

Note that the actual flush requests, which have no bio attached, still
bypass the I/O schedulers.
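
For context (not part of the patch): op_is_flush() is true for any request
with REQ_PREFLUSH or REQ_FUA set, including data writes, whereas only
requests whose opcode is REQ_OP_FLUSH are the dedicated flush commands that
carry no bio. A sketch of the distinction the blk-mq.c hunk below relies on
(the helper name is made up for illustration):

    /* illustration only: which requests keep bypassing the I/O scheduler */
    static inline bool bypasses_io_scheduler(blk_opf_t cmd_flags)
    {
            return (cmd_flags & REQ_OP_MASK) == REQ_OP_FLUSH ||  /* bare flush, no bio */
                   blk_op_is_passthrough(cmd_flags);             /* passthrough */
    }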
Signed-off-by: Bart Van Assche
[hch: rebased]
Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
---
 block/blk-mq.c         |  4 ++--
 include/linux/blk-mq.h | 27 +++++++++++----------------
 2 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index c0b394096b6b6b..aac67bc3d3680c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -458,7 +458,7 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
          * Flush/passthrough requests are special and go directly to the
          * dispatch list.
          */
-        if (!op_is_flush(data->cmd_flags) &&
+        if ((data->cmd_flags & REQ_OP_MASK) != REQ_OP_FLUSH &&
             !blk_op_is_passthrough(data->cmd_flags)) {
                 struct elevator_mq_ops *ops = &q->elevator->type->ops;
 
@@ -2497,7 +2497,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
                  * dispatch it given we prioritize requests in hctx->dispatch.
                  */
                 blk_mq_request_bypass_insert(rq, flags);
-        } else if (rq->rq_flags & RQF_FLUSH_SEQ) {
+        } else if (req_op(rq) == REQ_OP_FLUSH) {
                 /*
                  * Firstly normal IO request is inserted to scheduler queue or
                  * sw queue, meantime we add flush request to dispatch queue(
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 49d14b1acfa5df..935201c8974371 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -169,25 +169,20 @@ struct request {
                 void *completion_data;
         };
 
-
         /*
          * Three pointers are available for the IO schedulers, if they need
-         * more they have to dynamically allocate it.  Flush requests are
-         * never put on the IO scheduler. So let the flush fields share
-         * space with the elevator data.
+         * more they have to dynamically allocate it.
          */
-        union {
-                struct {
-                        struct io_cq            *icq;
-                        void                    *priv[2];
-                } elv;
-
-                struct {
-                        unsigned int            seq;
-                        struct list_head        list;
-                        rq_end_io_fn            *saved_end_io;
-                } flush;
-        };
+        struct {
+                struct io_cq            *icq;
+                void                    *priv[2];
+        } elv;
+
+        struct {
+                unsigned int            seq;
+                struct list_head        list;
+                rq_end_io_fn            *saved_end_io;
+        } flush;
 
         union {
                 struct __call_single_data csd;

From patchwork Fri May 19 04:40:48 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13247678
From: Christoph Hellwig
To: Jens Axboe
Cc: Bart Van Assche, Damien Le Moal, linux-block@vger.kernel.org
Subject: [PATCH 5/7] blk-mq: defer to the normal submission path for post-flush requests
Date: Fri, 19 May 2023 06:40:48 +0200
Message-Id: <20230519044050.107790-6-hch@lst.de>
In-Reply-To: <20230519044050.107790-1-hch@lst.de>
References: <20230519044050.107790-1-hch@lst.de>

Requests with the FUA bit set on hardware without FUA support need a
post-flush before returning to the caller, but they can still be sent
through the normal I/O path after initializing the flush-related fields
and the end I/O handler.

Signed-off-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
---
 block/blk-flush.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index 6fb9cf2d38184b..7121f9ad0762f8 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -432,6 +432,17 @@ bool blk_insert_flush(struct request *rq)
                  * Queue for normal execution.
                  */
                 return false;
+        case REQ_FSEQ_DATA | REQ_FSEQ_POSTFLUSH:
+                /*
+                 * Initialize the flush fields and completion handler to trigger
+                 * the post flush, and then just pass the command on.
+                 */
+                blk_rq_init_flush(rq);
+                rq->flush.seq |= REQ_FSEQ_POSTFLUSH;
+                spin_lock_irq(&fq->mq_flush_lock);
+                list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
+                spin_unlock_irq(&fq->mq_flush_lock);
+                return false;
         default:
                 /*
                  * Mark the request as part of a flush sequence and submit it

From patchwork Fri May 19 04:40:49 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13247679
From: Christoph Hellwig
To: Jens Axboe
Cc: Bart Van Assche, Damien Le Moal, linux-block@vger.kernel.org
Subject: [PATCH 6/7] blk-mq: do not do head insertions post-pre-flush commands
Date: Fri, 19 May 2023 06:40:49 +0200
Message-Id: <20230519044050.107790-7-hch@lst.de>
In-Reply-To: <20230519044050.107790-1-hch@lst.de>
References: <20230519044050.107790-1-hch@lst.de>

blk_flush_complete_seq currently queues requests that write data after
a pre-flush from the flush state machine at the head of the queue.
This doesn't really make sense, as the original request bypassed all
queue lists by directly diverting to blk_insert_flush from
blk_mq_submit_bio.
Signed-off-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Reviewed-by: Damien Le Moal
---
 block/blk-flush.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index 7121f9ad0762f8..f407a59503173d 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -188,7 +188,7 @@ static void blk_flush_complete_seq(struct request *rq,
 
         case REQ_FSEQ_DATA:
                 list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
-                blk_mq_add_to_requeue_list(rq, BLK_MQ_INSERT_AT_HEAD);
+                blk_mq_add_to_requeue_list(rq, 0);
                 blk_mq_kick_requeue_list(q);
                 break;
 

From patchwork Fri May 19 04:40:50 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13247680
From: Christoph Hellwig
To: Jens Axboe
Cc: Bart Van Assche, Damien Le Moal, linux-block@vger.kernel.org
Subject: [PATCH 7/7] blk-mq: don't use the requeue list to queue flush commands
Date: Fri, 19 May 2023 06:40:50 +0200
Message-Id: <20230519044050.107790-8-hch@lst.de>
In-Reply-To: <20230519044050.107790-1-hch@lst.de>
References: <20230519044050.107790-1-hch@lst.de>

Currently both requeues of commands that were already sent to the driver
and flush commands submitted from the flush state machine share the same
requeue_list in struct request_queue, even though requeues use head
insertions and flushes do not. Switch to two separate lists instead.
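
For context (not part of the patch, condensed from the blkdev.h and
blk-mq.c hunks below): the queue ends up with two lists protected by the
same lock and drained by the same work item, but with different insertion
policies:

    struct request_queue {
            /* ... */
            struct blk_flush_queue  *fq;
            struct list_head        flush_list;   /* flush machinery submissions,
                                                    * re-inserted normally (tail) */
            struct list_head        requeue_list; /* driver requeues, re-inserted
                                                    * at the head (or bypass-inserted
                                                    * if already prepared) */
            spinlock_t              requeue_lock; /* protects both lists */
            /* ... */
    };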
Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
---
 block/blk-flush.c      |  9 +++++++--
 block/blk-mq-debugfs.c |  1 -
 block/blk-mq.c         | 42 +++++++++++++-----------------------------
 block/blk-mq.h         |  1 -
 include/linux/blk-mq.h |  4 +---
 include/linux/blkdev.h |  1 +
 6 files changed, 22 insertions(+), 36 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index f407a59503173d..dba392cf22bec6 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -188,7 +188,9 @@ static void blk_flush_complete_seq(struct request *rq,
 
         case REQ_FSEQ_DATA:
                 list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
-                blk_mq_add_to_requeue_list(rq, 0);
+                spin_lock(&q->requeue_lock);
+                list_add_tail(&rq->queuelist, &q->flush_list);
+                spin_unlock(&q->requeue_lock);
                 blk_mq_kick_requeue_list(q);
                 break;
 
@@ -346,7 +348,10 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
 
         smp_wmb();
         req_ref_set(flush_rq, 1);
-        blk_mq_add_to_requeue_list(flush_rq, 0);
+        spin_lock(&q->requeue_lock);
+        list_add_tail(&flush_rq->queuelist, &q->flush_list);
+        spin_unlock(&q->requeue_lock);
+
         blk_mq_kick_requeue_list(q);
 }
 
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 22e39b9a77ecf2..68165a50951b68 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -244,7 +244,6 @@ static const char *const cmd_flag_name[] = {
 #define RQF_NAME(name) [ilog2((__force u32)RQF_##name)] = #name
 static const char *const rqf_name[] = {
         RQF_NAME(STARTED),
-        RQF_NAME(SOFTBARRIER),
         RQF_NAME(FLUSH_SEQ),
         RQF_NAME(MIXED_MERGE),
         RQF_NAME(MQ_INFLIGHT),
diff --git a/block/blk-mq.c b/block/blk-mq.c
index aac67bc3d3680c..551e7760f45e20 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1416,13 +1416,16 @@ static void __blk_mq_requeue_request(struct request *rq)
 void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
 {
         struct request_queue *q = rq->q;
+        unsigned long flags;
 
         __blk_mq_requeue_request(rq);
 
         /* this request will be re-inserted to io scheduler queue */
         blk_mq_sched_requeue_request(rq);
 
-        blk_mq_add_to_requeue_list(rq, BLK_MQ_INSERT_AT_HEAD);
+        spin_lock_irqsave(&q->requeue_lock, flags);
+        list_add_tail(&rq->queuelist, &q->requeue_list);
+        spin_unlock_irqrestore(&q->requeue_lock, flags);
 
         if (kick_requeue_list)
                 blk_mq_kick_requeue_list(q);
@@ -1434,13 +1437,16 @@ static void blk_mq_requeue_work(struct work_struct *work)
         struct request_queue *q =
                 container_of(work, struct request_queue, requeue_work.work);
         LIST_HEAD(rq_list);
-        struct request *rq, *next;
+        LIST_HEAD(flush_list);
+        struct request *rq;
 
         spin_lock_irq(&q->requeue_lock);
         list_splice_init(&q->requeue_list, &rq_list);
+        list_splice_init(&q->flush_list, &flush_list);
         spin_unlock_irq(&q->requeue_lock);
 
-        list_for_each_entry_safe(rq, next, &rq_list, queuelist) {
+        while (!list_empty(&rq_list)) {
+                rq = list_entry(rq_list.next, struct request, queuelist);
                 /*
                  * If RQF_DONTPREP ist set, the request has been started by the
                  * driver already and might have driver-specific data allocated
@@ -1448,18 +1454,16 @@ static void blk_mq_requeue_work(struct work_struct *work)
                  * block layer merges for the request.
                  */
                 if (rq->rq_flags & RQF_DONTPREP) {
-                        rq->rq_flags &= ~RQF_SOFTBARRIER;
                         list_del_init(&rq->queuelist);
                         blk_mq_request_bypass_insert(rq, 0);
-                } else if (rq->rq_flags & RQF_SOFTBARRIER) {
-                        rq->rq_flags &= ~RQF_SOFTBARRIER;
+                } else {
                         list_del_init(&rq->queuelist);
                         blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
                 }
         }
 
-        while (!list_empty(&rq_list)) {
-                rq = list_entry(rq_list.next, struct request, queuelist);
+        while (!list_empty(&flush_list)) {
+                rq = list_entry(flush_list.next, struct request, queuelist);
                 list_del_init(&rq->queuelist);
                 blk_mq_insert_request(rq, 0);
         }
@@ -1467,27 +1471,6 @@ static void blk_mq_requeue_work(struct work_struct *work)
         blk_mq_run_hw_queues(q, false);
 }
 
-void blk_mq_add_to_requeue_list(struct request *rq, blk_insert_t insert_flags)
-{
-        struct request_queue *q = rq->q;
-        unsigned long flags;
-
-        /*
-         * We abuse this flag that is otherwise used by the I/O scheduler to
-         * request head insertion from the workqueue.
-         */
-        BUG_ON(rq->rq_flags & RQF_SOFTBARRIER);
-
-        spin_lock_irqsave(&q->requeue_lock, flags);
-        if (insert_flags & BLK_MQ_INSERT_AT_HEAD) {
-                rq->rq_flags |= RQF_SOFTBARRIER;
-                list_add(&rq->queuelist, &q->requeue_list);
-        } else {
-                list_add_tail(&rq->queuelist, &q->requeue_list);
-        }
-        spin_unlock_irqrestore(&q->requeue_lock, flags);
-}
-
 void blk_mq_kick_requeue_list(struct request_queue *q)
 {
         kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND, &q->requeue_work, 0);
@@ -4239,6 +4222,7 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
         blk_mq_update_poll_flag(q);
 
         INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);
+        INIT_LIST_HEAD(&q->flush_list);
         INIT_LIST_HEAD(&q->requeue_list);
         spin_lock_init(&q->requeue_lock);
 
diff --git a/block/blk-mq.h b/block/blk-mq.h
index ec7d2fb0b3c8ef..8c642e9f32f102 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -47,7 +47,6 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
 void blk_mq_wake_waiters(struct request_queue *q);
 bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *,
                              unsigned int);
-void blk_mq_add_to_requeue_list(struct request *rq, blk_insert_t insert_flags);
 void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
 struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
                                         struct blk_mq_ctx *start);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 935201c8974371..d778cb6b211233 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -28,8 +28,6 @@ typedef __u32 __bitwise req_flags_t;
 
 /* drive already may have started this one */
 #define RQF_STARTED             ((__force req_flags_t)(1 << 1))
-/* may not be passed by ioscheduler */
-#define RQF_SOFTBARRIER         ((__force req_flags_t)(1 << 3))
 /* request for flush sequence */
 #define RQF_FLUSH_SEQ           ((__force req_flags_t)(1 << 4))
 /* merge of different types, fail separately */
@@ -65,7 +63,7 @@ typedef __u32 __bitwise req_flags_t;
 
 /* flags that prevent us from merging requests: */
 #define RQF_NOMERGE_FLAGS \
-        (RQF_STARTED | RQF_SOFTBARRIER | RQF_FLUSH_SEQ | RQF_SPECIAL_PAYLOAD)
+        (RQF_STARTED | RQF_FLUSH_SEQ | RQF_SPECIAL_PAYLOAD)
 
 enum mq_rq_state {
         MQ_RQ_IDLE = 0,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 3952c52d6cd1b0..fe99948688dfda 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -487,6 +487,7 @@ struct request_queue {
          * for flush operations
          */
         struct blk_flush_queue  *fq;
+        struct list_head        flush_list;
 
         struct list_head        requeue_list;
         spinlock_t              requeue_lock;