
[04/13] blkcg: introduce common blkg association logic

Message ID 20181126211946.77067-5-dennis@kernel.org (mailing list archive)
State New, archived
Series block: always associate blkg and refcount cleanup

Commit Message

Dennis Zhou Nov. 26, 2018, 9:19 p.m. UTC
There are 3 ways blkg association can happen: association with the
current css, with the page css (swap), or from the wbc css (writeback).

This patch handles how association is done for the first case, where we
associate based on the current css. If there is already a blkg
associated, the css will be reused and the association redone, as the
request_queue may have changed.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 block/bio.c           | 81 ++++++++++++++++++++++++++++++++++++++-----
 block/blk-iolatency.c | 10 ++----
 block/blk-throttle.c  |  6 ++--
 include/linux/bio.h   |  5 ++-
 4 files changed, 81 insertions(+), 21 deletions(-)
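The reassociation rule described in the commit message can be sketched with toy stand-ins for the kernel structures (a hedged illustration; the struct shapes, the root-blkg shortcut, and the plain refcount field are assumptions for the sketch, not the kernel API):

```c
#include <stddef.h>

/* Toy stand-ins -- shapes and names are illustrative, not kernel API. */
struct blkcg_gq { int refcount; };                 /* a "blkg" */
struct request_queue { struct blkcg_gq root_blkg; };
struct bio {
    struct request_queue *queue;                   /* set when a device is attached */
    struct blkcg_gq *bi_blkg;                      /* current association, if any */
};

/* Drop the association and put back the blkg reference, if any. */
static void bio_disassociate_blkg(struct bio *bio)
{
    if (bio->bi_blkg) {
        bio->bi_blkg->refcount--;                  /* models blkg_put() */
        bio->bi_blkg = NULL;
    }
}

/* Associate with the queue's blkg; redo the association if one exists. */
static void bio_associate_blkg(struct bio *bio)
{
    if (!bio->queue)
        return;                                    /* no queue, no blkg */
    bio_disassociate_blkg(bio);                    /* old queue may differ */
    bio->bi_blkg = &bio->queue->root_blkg;         /* lookup/create elided */
    bio->bi_blkg->refcount++;                      /* models taking a ref */
}
```

A second call after the bio moves to a different queue drops the old reference before taking the new one, which is the "association will be redone" behaviour the message describes.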

Comments

Josef Bacik Nov. 27, 2018, 9:04 p.m. UTC | #1
On Mon, Nov 26, 2018 at 04:19:37PM -0500, Dennis Zhou wrote:
> There are 3 ways blkg association can happen: association with the
> current css, with the page css (swap), or from the wbc css (writeback).
> 
> This patch handles how association is done for the first case where we
> are associating based on the current css. If there is already a blkg
> associated, the css will be reused and association will be redone as the
> request_queue may have changed.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef
Tejun Heo Nov. 29, 2018, 3:49 p.m. UTC | #2
On Mon, Nov 26, 2018 at 04:19:37PM -0500, Dennis Zhou wrote:
> There are 3 ways blkg association can happen: association with the
> current css, with the page css (swap), or from the wbc css (writeback).
> 
> This patch handles how association is done for the first case where we
> are associating based on the current css. If there is already a blkg
> associated, the css will be reused and association will be redone as the
> request_queue may have changed.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Acked-by: Tejun Heo <tj@kernel.org>

A minor nit below.

> +/**
> + * bio_associate_blkg - associate a bio with a blkg from q
> + * @bio: target bio
> + *
> + * Associate @bio with the blkg found from the bio's css and request_queue.
> + * If one is not found, bio_lookup_blkg() creates the blkg.  If a blkg is
> + * already associated, the css is reused and association redone as the
> + * request_queue may have changed.
> + */
> +void bio_associate_blkg(struct bio *bio)
> +{
> +	struct request_queue *q;
> +	struct blkcg *blkcg;
> +	struct blkcg_gq *blkg;
> +
> +	if (!bio_has_queue(bio))
> +		return;

So, this isn't actually necessary because we don't stack
request_queues, but it might be more consistent to call disassociate
before returning above, so that even if something like that happens
the function always guarantees that the blkg and bio agree.

Thanks.
Dennis Zhou Nov. 29, 2018, 7:48 p.m. UTC | #3
On Thu, Nov 29, 2018 at 07:49:17AM -0800, Tejun Heo wrote:
> On Mon, Nov 26, 2018 at 04:19:37PM -0500, Dennis Zhou wrote:
> > There are 3 ways blkg association can happen: association with the
> > current css, with the page css (swap), or from the wbc css (writeback).
> > 
> > This patch handles how association is done for the first case where we
> > are associating based on the current css. If there is already a blkg
> > associated, the css will be reused and association will be redone as the
> > request_queue may have changed.
> > 
> > Signed-off-by: Dennis Zhou <dennis@kernel.org>
> 
> Acked-by: Tejun Heo <tj@kernel.org>
> 
> A minor nit below.
> 
> > +/**
> > + * bio_associate_blkg - associate a bio with a blkg from q
> > + * @bio: target bio
> > + *
> > + * Associate @bio with the blkg found from the bio's css and request_queue.
> > + * If one is not found, bio_lookup_blkg() creates the blkg.  If a blkg is
> > + * already associated, the css is reused and association redone as the
> > + * request_queue may have changed.
> > + */
> > +void bio_associate_blkg(struct bio *bio)
> > +{
> > +	struct request_queue *q;
> > +	struct blkcg *blkcg;
> > +	struct blkcg_gq *blkg;
> > +
> > +	if (!bio_has_queue(bio))
> > +		return;
> 
> So, this isn't actually necessary because we don't stack
> request_queues, but it might be more consistent to call disassociate
> before returning above, so that even if something like that happens
> the function always guarantees that the blkg and bio agree.

Ah. That makes sense. I've changed bio_has_queue() to
bio_blkg_association_check(). This calls disassociate and should handle
the case where a bio transfers from a request_queue device to a
bio-driven device. It changes the later patches slightly, as we have to
remember the css before doing the check in case we want to reassociate
with the same css.

Thanks,
Dennis
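The behaviour Dennis describes can be sketched as follows (a hedged guess at the v5 semantics of bio_blkg_association_check(), using toy structures rather than the kernel's; the names model the discussion, not the final patch):

```c
#include <stddef.h>

/* Minimal stand-ins -- illustrative only, not the kernel API. */
struct blkcg_gq { int refcount; };
struct request_queue { struct blkcg_gq root_blkg; };
struct bio {
    struct request_queue *queue;
    struct blkcg_gq *bi_blkg;
};

static void bio_disassociate_blkg(struct bio *bio)
{
    if (bio->bi_blkg) {
        bio->bi_blkg->refcount--;              /* models blkg_put() */
        bio->bi_blkg = NULL;
    }
}

/*
 * Returns nonzero if association can proceed.  The key change from
 * bio_has_queue(): when the bio has no request_queue, disassociate
 * first so the bio never keeps a blkg from a queue it has left.
 */
static int bio_blkg_association_check(struct bio *bio)
{
    if (!bio->queue) {
        bio_disassociate_blkg(bio);
        return 0;
    }
    return 1;
}

static void bio_associate_blkg(struct bio *bio)
{
    if (!bio_blkg_association_check(bio))
        return;
    bio_disassociate_blkg(bio);                /* redo the association */
    bio->bi_blkg = &bio->queue->root_blkg;     /* lookup/create elided */
    bio->bi_blkg->refcount++;
}
```

With this shape, a bio moving from a request_queue device to a bio-driven device drops its stale blkg instead of keeping it, which is the guarantee Tejun asked for.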
Christoph Hellwig Nov. 30, 2018, 9:52 a.m. UTC | #4
>  EXPORT_SYMBOL_GPL(bio_associate_blkcg);
>  
>  /**
> - * bio_associate_blkg - associate a bio with the a blkg
> + * bio_has_queue - required check for blkg association
> + * @bio: target bio
> + *
> + * A blkg represents the relationship between a blkcg and a request_queue.
> + * If there is no request_queue, there is no blkg and therefore nothing to
> + * associate with.
> + */
> +static inline bool bio_has_queue(struct bio *bio)
> +{
> +	return bio->bi_disk && bio->bi_disk->queue;
> +}

How do you ever see a bio without a queue?  We can't even do I/O in
that case.
Dennis Zhou Dec. 3, 2018, 8:58 p.m. UTC | #5
Hi Christoph,

On Fri, Nov 30, 2018 at 01:52:09AM -0800, Christoph Hellwig wrote:
> >  EXPORT_SYMBOL_GPL(bio_associate_blkcg);
> >  
> >  /**
> > - * bio_associate_blkg - associate a bio with the a blkg
> > + * bio_has_queue - required check for blkg association
> > + * @bio: target bio
> > + *
> > + * A blkg represents the relationship between a blkcg and a request_queue.
> > + * If there is no request_queue, there is no blkg and therefore nothing to
> > + * associate with.
> > + */
> > +static inline bool bio_has_queue(struct bio *bio)
> > +{
> > +	return bio->bi_disk && bio->bi_disk->queue;
> > +}
> 
> How do you ever see a bio without a queue?  We can't even do I/O in
> that case.

The case I found was with the flush bio in dm, which is statically
allocated in alloc_dev(). The issue is that bio_set_dev() is called on
a bdev that isn't opened, so the bdev wasn't pointing to a genhd. I've
fixed the issue with the patch below, which will be added in v5.

I think I was being overly cautious with the bio_has_queue() check and
have taken it out in v5. This should be a one-off case, which the patch
below handles.

Thanks,
Dennis
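The fix follows the pattern sketched below (toy types; only the ordering of set-device / clone / disassociate matters here, and all names are stand-ins, not dm's actual code):

```c
#include <stddef.h>

/* Stand-ins for dm's statically allocated flush bio -- illustrative. */
struct device { int id; };
struct bio {
    struct device *dev;
    int has_blkg;                     /* models bio->bi_blkg != NULL */
};

/* Models the v5 bio_set_dev(): setting the device also associates a blkg. */
static void bio_set_dev(struct bio *bio, struct device *dev)
{
    bio->dev = dev;
    bio->has_blkg = 1;
}

static void bio_disassociate_blkg(struct bio *bio)
{
    bio->has_blkg = 0;
}

/* Per flush: associate on demand, clone, then drop the reference so the
 * static bio never holds a blkg between flushes. */
static void send_empty_flush(struct bio *flush_bio, struct device *dev)
{
    bio_set_dev(flush_bio, dev);
    /* ... __send_duplicate_bios() would clone flush_bio here ... */
    bio_disassociate_blkg(flush_bio);
}
```

This is why the patch moves bio_set_dev() out of alloc_dev() and into __send_empty_flush(): at allocation time the bdev isn't opened yet, but at flush time it is.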

---
From 3ee13402af369ee8618549b63593d68ffca574ca Mon Sep 17 00:00:00 2001
From: Dennis Zhou <dennis@kernel.org>
Date: Mon, 3 Dec 2018 10:56:34 -0800
Subject: [PATCH 05/14] dm: set flush bio device on demand

The next patch changes the macro bio_set_dev() to associate a bio with
a blkg based on the device set. However, dm creates a static bio to be
used as the basis for cloning empty flush bios on creation. This
association is with an unopened bdev, so bd_disk is %NULL. To easily
get around this, we set the device on the static bio every time and use
that to copy to the other bios.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 drivers/md/dm.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index a733e4c920af..b5e996c5c709 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1417,10 +1417,14 @@ static int __send_empty_flush(struct clone_info *ci)
 	unsigned target_nr = 0;
 	struct dm_target *ti;
 
+	bio_set_dev(ci->bio, ci->io->md->bdev);
+
 	BUG_ON(bio_has_data(ci->bio));
 	while ((ti = dm_table_get_target(ci->map, target_nr++)))
 		__send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
 
+	bio_disassociate_blkg(ci->bio);
+
 	return 0;
 }
 
@@ -1939,7 +1943,6 @@ static struct mapped_device *alloc_dev(int minor)
 		goto bad;
 
 	bio_init(&md->flush_bio, NULL, 0);
-	bio_set_dev(&md->flush_bio, md->bdev);
 	md->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;
 
 	dm_stats_init(&md->stats);

Patch

diff --git a/block/bio.c b/block/bio.c
index edc8a73b98d5..de0133329b71 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2009,7 +2009,34 @@  int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css)
 EXPORT_SYMBOL_GPL(bio_associate_blkcg);
 
 /**
- * bio_associate_blkg - associate a bio with the a blkg
+ * bio_has_queue - required check for blkg association
+ * @bio: target bio
+ *
+ * A blkg represents the relationship between a blkcg and a request_queue.
+ * If there is no request_queue, there is no blkg and therefore nothing to
+ * associate with.
+ */
+static inline bool bio_has_queue(struct bio *bio)
+{
+	return bio->bi_disk && bio->bi_disk->queue;
+}
+
+/**
+ * bio_disassociate_blkg - puts back the blkg reference if associated
+ * @bio: target bio
+ *
+ * Helper to disassociate the blkg from @bio if a blkg is associated.
+ */
+void bio_disassociate_blkg(struct bio *bio)
+{
+	if (bio->bi_blkg) {
+		blkg_put(bio->bi_blkg);
+		bio->bi_blkg = NULL;
+	}
+}
+
+/**
+ * __bio_associate_blkg - associate a bio with a blkg
  * @bio: target bio
  * @blkg: the blkg to associate
  *
@@ -2022,12 +2049,48 @@  EXPORT_SYMBOL_GPL(bio_associate_blkcg);
  * A reference will be taken on the @blkg and will be released when @bio is
  * freed.
  */
-int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
+static void __bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
 {
-	if (unlikely(bio->bi_blkg))
-		return -EBUSY;
+	if (bio->bi_blkg)
+		bio_disassociate_blkg(bio);
+
 	bio->bi_blkg = blkg_try_get_closest(blkg);
-	return 0;
+}
+
+/**
+ * bio_associate_blkg - associate a bio with a blkg from q
+ * @bio: target bio
+ *
+ * Associate @bio with the blkg found from the bio's css and request_queue.
+ * If one is not found, bio_lookup_blkg() creates the blkg.  If a blkg is
+ * already associated, the css is reused and association redone as the
+ * request_queue may have changed.
+ */
+void bio_associate_blkg(struct bio *bio)
+{
+	struct request_queue *q;
+	struct blkcg *blkcg;
+	struct blkcg_gq *blkg;
+
+	if (!bio_has_queue(bio))
+		return;
+
+	q = bio->bi_disk->queue;
+
+	rcu_read_lock();
+
+	bio_associate_blkcg(bio, NULL);
+	blkcg = bio_blkcg(bio);
+
+	if (!blkcg->css.parent) {
+		__bio_associate_blkg(bio, q->root_blkg);
+	} else {
+		blkg = blkg_lookup_create(blkcg, q);
+
+		__bio_associate_blkg(bio, blkg);
+	}
+
+	rcu_read_unlock();
 }
 
 /**
@@ -2040,10 +2103,7 @@  void bio_disassociate_task(struct bio *bio)
 		css_put(bio->bi_css);
 		bio->bi_css = NULL;
 	}
-	if (bio->bi_blkg) {
-		blkg_put(bio->bi_blkg);
-		bio->bi_blkg = NULL;
-	}
+	bio_disassociate_blkg(bio);
 }
 
 /**
@@ -2055,6 +2115,9 @@  void bio_clone_blkcg_association(struct bio *dst, struct bio *src)
 {
 	if (src->bi_css)
 		WARN_ON(bio_associate_blkcg(dst, src->bi_css));
+
+	if (src->bi_blkg)
+		__bio_associate_blkg(dst, src->bi_blkg);
 }
 EXPORT_SYMBOL_GPL(bio_clone_blkcg_association);
 #endif /* CONFIG_BLK_CGROUP */
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 46e86c34cf79..cdbd10564e66 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -472,21 +472,15 @@  static void check_scale_change(struct iolatency_grp *iolat)
 static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
 {
 	struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
-	struct blkcg *blkcg;
 	struct blkcg_gq *blkg;
-	struct request_queue *q = rqos->q;
 	bool issue_as_root = bio_issue_as_root_blkg(bio);
 
 	if (!blk_iolatency_enabled(blkiolat))
 		return;
 
-	rcu_read_lock();
-	bio_associate_blkcg(bio, NULL);
-	blkcg = bio_blkcg(bio);
-	blkg = blkg_lookup_create(blkcg, q);
+	bio_associate_blkg(bio);
+	blkg = bio->bi_blkg;
 	bio_issue_init(&bio->bi_issue, bio_sectors(bio));
-	bio_associate_blkg(bio, blkg);
-	rcu_read_unlock();
 
 	while (blkg && blkg->parent) {
 		struct iolatency_grp *iolat = blkg_to_lat(blkg);
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index d648d6720f46..228c3a007ebc 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2115,10 +2115,10 @@  static inline void throtl_update_latency_buckets(struct throtl_data *td)
 }
 #endif
 
-static void blk_throtl_assoc_bio(struct throtl_grp *tg, struct bio *bio)
+static void blk_throtl_assoc_bio(struct bio *bio)
 {
 #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
-	bio_associate_blkg(bio, tg_to_blkg(tg));
+	bio_associate_blkg(bio);
 	bio_issue_init(&bio->bi_issue, bio_sectors(bio));
 #endif
 }
@@ -2143,7 +2143,7 @@  bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
 
 	throtl_update_latency_buckets(td);
 
-	blk_throtl_assoc_bio(tg, bio);
+	blk_throtl_assoc_bio(bio);
 	blk_throtl_update_idletime(tg);
 
 	sq = &tg->service_queue;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 056fb627edb3..62715a5a4f32 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -511,12 +511,15 @@  static inline int bio_associate_blkcg_from_page(struct bio *bio,
 
 #ifdef CONFIG_BLK_CGROUP
 int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css);
-int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg);
+void bio_disassociate_blkg(struct bio *bio);
+void bio_associate_blkg(struct bio *bio);
 void bio_disassociate_task(struct bio *bio);
 void bio_clone_blkcg_association(struct bio *dst, struct bio *src);
 #else	/* CONFIG_BLK_CGROUP */
 static inline int bio_associate_blkcg(struct bio *bio,
 			struct cgroup_subsys_state *blkcg_css) { return 0; }
+static inline void bio_disassociate_blkg(struct bio *bio) { }
+static inline void bio_associate_blkg(struct bio *bio) { }
 static inline void bio_disassociate_task(struct bio *bio) { }
 static inline void bio_clone_blkcg_association(struct bio *dst,
 			struct bio *src) { }