
[4/4] xen-blkback: support dynamic unbind/bind

Message ID: 20191205140123.3817-5-pdurrant@amazon.com (mailing list archive)
State: Superseded
Series: xen-blkback: support live update

Commit Message

Paul Durrant Dec. 5, 2019, 2:01 p.m. UTC
By simply re-attaching to shared rings during connect_ring(), rather than
assuming they are freshly allocated (i.e. assuming the counters are zero),
it is possible for vbd instances to be unbound from and re-bound to a
running guest.
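
To make the distinction concrete, the sketch below contrasts ring
initialisation with ring attach. It is illustrative only: BACK_RING_INIT()
is shown in its classic ring.h form, while BACK_RING_ATTACH()
(re-introduced by an earlier patch in this series) is sketched under the
assumption that it seeds the backend's private counters from the indices
already published in the shared ring; the exact starting index chosen by
the series may differ.

  /* Illustrative sketch only -- not necessarily the series' definitions. */

  /* INIT assumes a brand-new ring: private counters start at zero. */
  #define BACK_RING_INIT(_r, _s, __size) do {                          \
          (_r)->rsp_prod_pvt = 0;                                      \
          (_r)->req_cons     = 0;                                      \
          (_r)->nr_ents      = __RING_SIZE(_s, __size);                \
          (_r)->sring        = (_s);                                   \
  } while (0)

  /* ATTACH re-joins a live ring: counters are seeded from what the
   * shared ring already contains, so requests queued while no backend
   * was bound are neither lost nor double-counted. Seeding from
   * rsp_prod (an assumption here) means any request not yet responded
   * to is (re-)processed by the new backend instance. */
  #define BACK_RING_ATTACH(_r, _s, __size) do {                        \
          (_r)->rsp_prod_pvt = (_s)->rsp_prod;                         \
          (_r)->req_cons     = (_s)->rsp_prod;                         \
          (_r)->nr_ents      = __RING_SIZE(_s, __size);                \
          (_r)->sring        = (_s);                                   \
  } while (0)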

This has been tested by running:

while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done

in a PV guest whilst running:

while true;
  do echo vbd-$DOMID-$VBD >unbind;
  echo unbound;
  sleep 5;
  echo vbd-$DOMID-$VBD >bind;
  echo bound;
  sleep 3;
  done

in dom0 from /sys/bus/xen-backend/drivers/vbd to continuously unbind and
re-bind its system disk image.

This is a highly useful feature for a backend module, as it allows the
module to be unloaded and re-loaded (i.e. updated) without requiring domUs
to be halted.
This was also tested by running:

while true;
  do echo vbd-$DOMID-$VBD >unbind;
  echo unbound;
  sleep 5;
  rmmod xen-blkback;
  echo unloaded;
  sleep 1;
  modprobe xen-blkback;
  echo bound;
  cd $(pwd);
  sleep 3;
  done

in dom0 whilst running the same loop as above in the (single) PV guest.

Some (less stressful) testing has also been done using a Windows HVM guest
with the latest 9.0 PV drivers installed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/block/xen-blkback/xenbus.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

Comments

Roger Pau Monné Dec. 9, 2019, 12:17 p.m. UTC | #1
On Thu, Dec 05, 2019 at 02:01:23PM +0000, Paul Durrant wrote:
> By simply re-attaching to shared rings during connect_ring() rather than
> assuming they are freshly allocated (i.e assuming the counters are zero)
> it is possible for vbd instances to be unbound and re-bound from and to
> (respectively) a running guest.
> 
> This has been tested by running:
> 
> while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done
> 
> in a PV guest whilst running:
> 
> while true;
>   do echo vbd-$DOMID-$VBD >unbind;
>   echo unbound;
>   sleep 5;
>   echo vbd-$DOMID-$VBD >bind;
>   echo bound;
>   sleep 3;
>   done

So this does unbind blkback while leaving the PV interface
connected?

Thanks, Roger.
Paul Durrant Dec. 9, 2019, 12:24 p.m. UTC | #2
> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 09 December 2019 12:17
> To: Durrant, Paul <pdurrant@amazon.com>
> Cc: linux-kernel@vger.kernel.org; xen-devel@lists.xenproject.org; Konrad
> Rzeszutek Wilk <konrad.wilk@oracle.com>; Jens Axboe <axboe@kernel.dk>;
> Boris Ostrovsky <boris.ostrovsky@oracle.com>; Juergen Gross
> <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>
> Subject: Re: [PATCH 4/4] xen-blkback: support dynamic unbind/bind
> 
> On Thu, Dec 05, 2019 at 02:01:23PM +0000, Paul Durrant wrote:
> > By simply re-attaching to shared rings during connect_ring() rather than
> > assuming they are freshly allocated (i.e assuming the counters are zero)
> > it is possible for vbd instances to be unbound and re-bound from and to
> > (respectively) a running guest.
> >
> > This has been tested by running:
> >
> > while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done
> >
> > in a PV guest whilst running:
> >
> > while true;
> >   do echo vbd-$DOMID-$VBD >unbind;
> >   echo unbound;
> >   sleep 5;
> >   echo vbd-$DOMID-$VBD >bind;
> >   echo bound;
> >   sleep 3;
> >   done
> 
> So this does unbind blkback while leaving the PV interface as
> connected?
> 

Yes, everything is left in place in the frontend. The backend detaches from the ring, closes its end of the event channels, etc., but the guest can still send requests, which will get serviced when the new backend attaches.
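
To illustrate the point about queued requests, here is a tiny,
self-contained simulation of the index arithmetic involved. It is purely
hypothetical (not blkback code) and assumes the new backend's counters are
seeded from rsp_prod, one consistent attach policy; the series' exact
choice may differ.

  /* Hypothetical simulation, not blkback code. */
  #include <stdio.h>

  struct shared_idx { unsigned int req_prod, rsp_prod; };
  struct backend    { unsigned int req_cons, rsp_prod_pvt; };

  int main(void)
  {
          /* Old backend serviced 10 requests, then was unbound. */
          struct shared_idx s = { .req_prod = 10, .rsp_prod = 10 };
          struct backend b;

          /* Frontend stays connected and queues 3 more requests while
           * no backend is bound; only req_prod moves. */
          s.req_prod += 3;

          /* New backend binds: counters are seeded from the shared ring
           * (assumed policy) rather than zeroed. */
          b.req_cons     = s.rsp_prod;    /* 10 */
          b.rsp_prod_pvt = s.rsp_prod;    /* 10 */

          printf("requests waiting after re-bind: %u\n",
                 s.req_prod - b.req_cons); /* prints 3 */
          return 0;
  }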

  Paul
Jürgen Groß Dec. 9, 2019, 1:57 p.m. UTC | #3
On 05.12.19 15:01, Paul Durrant wrote:
> By simply re-attaching to shared rings during connect_ring() rather than
> assuming they are freshly allocated (i.e assuming the counters are zero)
> it is possible for vbd instances to be unbound and re-bound from and to
> (respectively) a running guest.
> 
> This has been tested by running:
> 
> while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done
> 
> in a PV guest whilst running:
> 
> while true;
>    do echo vbd-$DOMID-$VBD >unbind;
>    echo unbound;
>    sleep 5;
>    echo vbd-$DOMID-$VBD >bind;
>    echo bound;
>    sleep 3;
>    done
> 
> in dom0 from /sys/bus/xen-backend/drivers/vbd to continuously unbind and
> re-bind its system disk image.

Could you do the same test with mixed reads/writes and verification of
the read/written data, please? A write-only test is not _that_
convincing regarding correctness. It only proves the guest is not
crashing.

I'm fine with the general approach, though.


Juergen
Paul Durrant Dec. 9, 2019, 2:01 p.m. UTC | #4
> -----Original Message-----
> From: Jürgen Groß <jgross@suse.com>
> Sent: 09 December 2019 13:58
> To: Durrant, Paul <pdurrant@amazon.com>; linux-kernel@vger.kernel.org;
> xen-devel@lists.xenproject.org
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Jens Axboe <axboe@kernel.dk>; Boris Ostrovsky
> <boris.ostrovsky@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>
> Subject: Re: [PATCH 4/4] xen-blkback: support dynamic unbind/bind
> 
> On 05.12.19 15:01, Paul Durrant wrote:
> > By simply re-attaching to shared rings during connect_ring() rather than
> > assuming they are freshly allocated (i.e assuming the counters are zero)
> > it is possible for vbd instances to be unbound and re-bound from and to
> > (respectively) a running guest.
> >
> > This has been tested by running:
> >
> > while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done
> >
> > in a PV guest whilst running:
> >
> > while true;
> >    do echo vbd-$DOMID-$VBD >unbind;
> >    echo unbound;
> >    sleep 5;
> >    echo vbd-$DOMID-$VBD >bind;
> >    echo bound;
> >    sleep 3;
> >    done
> >
> > in dom0 from /sys/bus/xen-backend/drivers/vbd to continuously unbind and
> > re-bind its system disk image.
> 
> Could you do the same test with mixed reads/writes and verification of
> the read/written data, please? A write-only test is not _that_
> convincing regarding correctness. It only proves the guest is not
> crashing.

Sure. I'll find something that will verify content.

> 
> I'm fine with the general approach, though.
> 

Cool, thanks,

  Paul

> 
> Juergen

Patch

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index e8c5c54e1d26..0b82740c4a9d 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -196,24 +196,24 @@  static int xen_blkif_map(struct xen_blkif_ring *ring, grant_ref_t *gref,
 	{
 		struct blkif_sring *sring;
 		sring = (struct blkif_sring *)ring->blk_ring;
-		BACK_RING_INIT(&ring->blk_rings.native, sring,
-			       XEN_PAGE_SIZE * nr_grefs);
+		BACK_RING_ATTACH(&ring->blk_rings.native, sring,
+				 XEN_PAGE_SIZE * nr_grefs);
 		break;
 	}
 	case BLKIF_PROTOCOL_X86_32:
 	{
 		struct blkif_x86_32_sring *sring_x86_32;
 		sring_x86_32 = (struct blkif_x86_32_sring *)ring->blk_ring;
-		BACK_RING_INIT(&ring->blk_rings.x86_32, sring_x86_32,
-			       XEN_PAGE_SIZE * nr_grefs);
+		BACK_RING_ATTACH(&ring->blk_rings.x86_32, sring_x86_32,
+				 XEN_PAGE_SIZE * nr_grefs);
 		break;
 	}
 	case BLKIF_PROTOCOL_X86_64:
 	{
 		struct blkif_x86_64_sring *sring_x86_64;
 		sring_x86_64 = (struct blkif_x86_64_sring *)ring->blk_ring;
-		BACK_RING_INIT(&ring->blk_rings.x86_64, sring_x86_64,
-			       XEN_PAGE_SIZE * nr_grefs);
+		BACK_RING_ATTACH(&ring->blk_rings.x86_64, sring_x86_64,
+				 XEN_PAGE_SIZE * nr_grefs);
 		break;
 	}
 	default: