diff mbox series

[V2] block: Disable write plugging for zoned block devices

Message ID 20190710155447.11112-1-damien.lemoal@wdc.com (mailing list archive)
State New, archived
Headers show
Series [V2] block: Disable write plugging for zoned block devices

Commit Message

Damien Le Moal July 10, 2019, 3:54 p.m. UTC
Simultaneously writing to a sequential zone of a zoned block device
from multiple contexts requires mutual exclusion for BIO issuing to
ensure that writes happen sequentially. However, even for a well
behaved user correctly implementing such synchronization, BIO plugging
may interfere and result in BIOs from the different contexts being
reordered if plugging is done outside of the mutual exclusion section,
e.g. the plug was started by a function higher in the call chain than
the function issuing BIOs.

         Context A                     Context B

   | blk_start_plug()
   | ...
   | seq_write_zone()
     | mutex_lock(zone)
     | bio-0->bi_iter.bi_sector = zone->wp
     | zone->wp += bio_sectors(bio-0)
     | submit_bio(bio-0)
     | bio-1->bi_iter.bi_sector = zone->wp
     | zone->wp += bio_sectors(bio-1)
     | submit_bio(bio-1)
     | mutex_unlock(zone)
     | return
   | -----------------------> | seq_write_zone()
                              | mutex_lock(zone)
                              | bio-2->bi_iter.bi_sector = zone->wp
                              | zone->wp += bio_sectors(bio-2)
                              | submit_bio(bio-2)
                              | mutex_unlock(zone)
   | <------------------------|
   | blk_finish_plug()

In the above example, despite the mutex synchronization ensuring the
correct BIO issuing order 0, 1, 2, context A BIOs 0 and 1 end up being
issued after BIO 2 of context B, when the plug is released with
blk_finish_plug().

While this problem can be addressed using the blk_flush_plug_list()
function (in the above example, the call must be inserted before the
zone mutex is unlocked), a simple generic solution in the block layer
avoids the need for this additional code in all zoned block device
user code. The generic solution implemented with this patch is to
introduce the internal helper function blk_mq_plug() to access the
current context plug on BIO submission. This helper returns the
current plug only if the target device is not a zoned block device or
if the BIO to be plugged is not a write operation. Otherwise, the
caller context plug is ignored and NULL is returned, resulting in
writes to zoned block devices never being plugged.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 block/blk-core.c |  2 +-
 block/blk-mq.c   |  2 +-
 block/blk-mq.h   | 10 ++++++++++
 3 files changed, 12 insertions(+), 2 deletions(-)

Comments

Christoph Hellwig July 10, 2019, 4:37 p.m. UTC | #1
On Wed, Jul 10, 2019 at 09:57:05AM -0600, Jens Axboe wrote:
> On 7/10/19 9:54 AM, Damien Le Moal wrote:
> > diff --git a/block/blk-mq.h b/block/blk-mq.h
> > index 633a5a77ee8b..c9195a2cd670 100644
> > --- a/block/blk-mq.h
> > +++ b/block/blk-mq.h
> > @@ -238,4 +238,14 @@ static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap)
> >   		qmap->mq_map[cpu] = 0;
> >   }
> >   
> > +static inline struct blk_plug *blk_mq_plug(struct request_queue *q,
> > +					   struct bio *bio)
> > +{
> > +	if (!blk_queue_is_zoned(q) || !op_is_write(bio_op(bio)))
> > +		return current->plug;
> > +
> > +	/* Zoned block device write case: do not plug the BIO */
> > +	return NULL;
> > +}
> > +
> >   #endif
> 
> Folks are going to look at that and be puzzled, I think that function
> deserves a comment.

Agreed.  Also I'd reformat the conditionals to make the default
case more obvious:

	if (blk_queue_is_zoned(q) && op_is_write(bio_op(bio)))
		return NULL;
	return current->plug;
Patch

diff --git a/block/blk-core.c b/block/blk-core.c
index 8340f69670d8..3957ea6811c3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -645,7 +645,7 @@  bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
 	struct request *rq;
 	struct list_head *plug_list;
 
-	plug = current->plug;
+	plug = blk_mq_plug(q, bio);
 	if (!plug)
 		return false;
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ce0f5f4ede70..90be5bb6fa1b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1969,7 +1969,7 @@  static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 
 	cookie = request_to_qc_t(data.hctx, rq);
 
-	plug = current->plug;
+	plug = blk_mq_plug(q, bio);
 	if (unlikely(is_flush_fua)) {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 633a5a77ee8b..c9195a2cd670 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -238,4 +238,14 @@  static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap)
 		qmap->mq_map[cpu] = 0;
 }
 
+static inline struct blk_plug *blk_mq_plug(struct request_queue *q,
+					   struct bio *bio)
+{
+	if (!blk_queue_is_zoned(q) || !op_is_write(bio_op(bio)))
+		return current->plug;
+
+	/* Zoned block device write case: do not plug the BIO */
+	return NULL;
+}
+
 #endif