
block: transfer source bio's cgroup tags to clone via bio_associate_blkcg()

Message ID 20160302211920.GH3476@redhat.com (mailing list archive)
State Rejected, archived
Delegated to: Mike Snitzer

Commit Message

Vivek Goyal March 2, 2016, 9:19 p.m. UTC
On Wed, Mar 02, 2016 at 04:04:05PM -0500, Vivek Goyal wrote:
> On Wed, Mar 02, 2016 at 02:34:50PM -0600, Chris Friesen wrote:
> > On 03/02/2016 02:10 PM, Vivek Goyal wrote:
> > >On Wed, Mar 02, 2016 at 09:59:13PM +0200, Nikolay Borisov wrote:
> > 
> > >We had similar issue with IO priority and it did not work reliably with
> > >CFQ on underlying device when dm devices were sitting on top.
> > >
> > >If we really want to give it a try, I guess we will have to put cgroup
> > >info of submitter early in bio at the time of bio creation even for all
> > >kind of IO. Not sure if it is worth the effort.
> > 
> > As it stands, imagine that you have a hypervisor node running many VMs (or
> > containers), each of which is assigned a separate logical volume (possibly
> > thin-provisioned) as its rootfs.
> > 
> > Ideally we want the disk accesses by those VMs to be "fair" relative to each
> > other, and we want to guarantee a certain amount of bandwidth for the host
> > as well.
> > 
> > Without this sort of feature, how can we accomplish that?
> 
> As of now, you can't. I will try adding bio_associate_current() and see
> if that along with Mike's patches gets you what you are looking for.
> 

Can you also try the following, along with Mike's patch that carries
cgroup info over the clones.

Mike, is this the right place in the dm layer to hook into? I think this
will take care of bio-based targets.

Even after this, I think there are still two issues:

- bio_associate_current() assumes that the submitter already has an io
  context and does nothing otherwise. So if the container/VM process
  does not have an io context, nothing will happen.

- We will also need a mechanism to carry io context information when we
  clone a bio. Otherwise we will get the cgroup of the original process
  but the io context of the dm thread (which is odd).


---
 drivers/md/dm.c |    1 +
 1 file changed, 1 insertion(+)

Thanks
Vivek

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

Patch

Index: rhvgoyal-linux/drivers/md/dm.c
===================================================================
--- rhvgoyal-linux.orig/drivers/md/dm.c	2016-03-02 19:19:12.301000000 +0000
+++ rhvgoyal-linux/drivers/md/dm.c	2016-03-02 21:11:01.357000000 +0000
@@ -1769,6 +1769,7 @@  static blk_qc_t dm_make_request(struct r
 
 	generic_start_io_acct(rw, bio_sectors(bio), &dm_disk(md)->part0);
 
+	bio_associate_current(bio);
 	/* if we're suspended, we have to queue this io for later */
 	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) {
 		dm_put_live_table(md, srcu_idx);