
migration/rdma: Use huge page register VM memory

Message ID 51819991cecb42f6a619768bc61d0bfd@kingsoft.com (mailing list archive)
State New, archived
Series: migration/rdma: Use huge page register VM memory

Commit Message

LIZHAOXIN1 [李照鑫] June 7, 2021, 1:57 p.m. UTC
When using libvirt for RDMA live migration, if the VM memory is large it
takes a long time to deregister the VM's memory on the source side,
resulting in a long downtime (for a 64G VM, deregistration takes about 400ms).

Although the VM's memory is backed by 2M huge pages, the MLNX driver still
pins and unpins that memory in 4K pages. So we register the memory with
ODP and huge-page flags to skip the pin/unpin work and reduce downtime.
   
The test environment:
kernel: linux-5.12
MLNX: ConnectX-4 LX
libvirt command:
virsh migrate --live --p2p --persistent --copy-storage-inc --listen-address \
0.0.0.0 --rdma-pin-all --migrateuri rdma://192.168.0.2 [VM] qemu+tcp://192.168.0.2/system
    
Signed-off-by: lizhaoxin <lizhaoxin1@kingsoft.com>
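
For context (not part of the patch): the new flags rely on the device
supporting on-demand paging (ODP). A minimal probe of that support might
look like the sketch below; the helper name is hypothetical, the libibverbs
calls (ibv_query_device_ex() and the ODP capability bits) are real, and note
that there is no separate capability bit for IBV_ACCESS_HUGETLB, so
huge-page support itself remains an assumption here.

#include <stdbool.h>
#include <infiniband/verbs.h>

/* Hypothetical helper: check that the device supports ODP for RC
 * RDMA writes before asking for IBV_ACCESS_ON_DEMAND registrations. */
static bool rdma_dev_supports_odp_write(struct ibv_context *ctx)
{
    struct ibv_device_attr_ex attr = {0};

    if (ibv_query_device_ex(ctx, NULL, &attr)) {
        return false;                       /* query failed */
    }
    if (!(attr.odp_caps.general_caps & IBV_ODP_SUPPORT)) {
        return false;                       /* no on-demand paging at all */
    }
    /* migration uses RC QPs and writes into the registered region */
    return attr.odp_caps.per_transport_caps.rc_odp_caps & IBV_ODP_SUPPORT_WRITE;
}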

Comments

Daniel P. Berrangé June 7, 2021, 2:17 p.m. UTC | #1
On Mon, Jun 07, 2021 at 01:57:02PM +0000, LIZHAOXIN1 [李照鑫] wrote:
> When using libvirt for RDMA live migration, if the VM memory is too large,
> it will take a lot of time to deregister the VM at the source side, resulting
> in a long downtime (VM 64G, deregister vm time is about 400ms).
>     
> Although the VM's memory uses 2M huge pages, the MLNX driver still uses 4K
> pages for pin memory, as well as for unpin. So we use huge pages to skip the
> process of pin memory and unpin memory to reduce downtime.
>    
> The test environment:
> kernel: linux-5.12
> MLNX: ConnectX-4 LX
> libvirt command:
> virsh migrate --live --p2p --persistent --copy-storage-inc --listen-address \
> 0.0.0.0 --rdma-pin-all --migrateuri rdma://192.168.0.2 [VM] qemu+tcp://192.168.0.2/system
>     
> Signed-off-by: lizhaoxin <lizhaoxin1@kingsoft.com>
> 
> diff --git a/migration/rdma.c b/migration/rdma.c
> index 1cdb4561f3..9823449297 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -1123,13 +1123,26 @@ static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma)
>      RDMALocalBlocks *local = &rdma->local_ram_blocks;
>  
>      for (i = 0; i < local->nb_blocks; i++) {
> -        local->block[i].mr =
> -            ibv_reg_mr(rdma->pd,
> -                    local->block[i].local_host_addr,
> -                    local->block[i].length,
> -                    IBV_ACCESS_LOCAL_WRITE |
> -                    IBV_ACCESS_REMOTE_WRITE
> -                    );
> +        if (strcmp(local->block[i].block_name,"pc.ram") == 0) {

'pc.ram' is an x86 architecture specific name, so this will still
leave a problem on other architectures I assume.

> +            local->block[i].mr =
> +                ibv_reg_mr(rdma->pd,
> +                        local->block[i].local_host_addr,
> +                        local->block[i].length,
> +                        IBV_ACCESS_LOCAL_WRITE |
> +                        IBV_ACCESS_REMOTE_WRITE |
> +                        IBV_ACCESS_ON_DEMAND |
> +                        IBV_ACCESS_HUGETLB
> +                        );
> +        } else {
> +            local->block[i].mr =
> +                ibv_reg_mr(rdma->pd,
> +                        local->block[i].local_host_addr,
> +                        local->block[i].length,
> +                        IBV_ACCESS_LOCAL_WRITE |
> +                        IBV_ACCESS_REMOTE_WRITE
> +                        );
> +        }
> +
>          if (!local->block[i].mr) {
>              perror("Failed to register local dest ram block!\n");
>              break;

Regards,
Daniel
Dr. David Alan Gilbert June 7, 2021, 3 p.m. UTC | #2
* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Mon, Jun 07, 2021 at 01:57:02PM +0000, LIZHAOXIN1 [李照鑫] wrote:
> > When using libvirt for RDMA live migration, if the VM memory is too large,
> > it will take a lot of time to deregister the VM at the source side, resulting
> > in a long downtime (VM 64G, deregister vm time is about 400ms).
> >     
> > Although the VM's memory uses 2M huge pages, the MLNX driver still uses 4K
> > pages for pin memory, as well as for unpin. So we use huge pages to skip the
> > process of pin memory and unpin memory to reduce downtime.
> >    
> > The test environment:
> > kernel: linux-5.12
> > MLNX: ConnectX-4 LX
> > libvirt command:
> > virsh migrate --live --p2p --persistent --copy-storage-inc --listen-address \
> > 0.0.0.0 --rdma-pin-all --migrateuri rdma://192.168.0.2 [VM] qemu+tcp://192.168.0.2/system
> >     
> > Signed-off-by: lizhaoxin <lizhaoxin1@kingsoft.com>
> > 
> > diff --git a/migration/rdma.c b/migration/rdma.c
> > index 1cdb4561f3..9823449297 100644
> > --- a/migration/rdma.c
> > +++ b/migration/rdma.c
> > @@ -1123,13 +1123,26 @@ static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma)
> >      RDMALocalBlocks *local = &rdma->local_ram_blocks;
> >  
> >      for (i = 0; i < local->nb_blocks; i++) {
> > -        local->block[i].mr =
> > -            ibv_reg_mr(rdma->pd,
> > -                    local->block[i].local_host_addr,
> > -                    local->block[i].length,
> > -                    IBV_ACCESS_LOCAL_WRITE |
> > -                    IBV_ACCESS_REMOTE_WRITE
> > -                    );
> > +        if (strcmp(local->block[i].block_name,"pc.ram") == 0) {
> 
> 'pc.ram' is an x86 architecture specific name, so this will still
> leave a problem on other architectures I assume.

Yes, and also break even on PC when using NUMA.
I think the thing to do here is to call qemu_ram_pagesize on the
RAMBlock; 

  if (qemu_ram_pagesize(RAMBlock....) != qemu_real_host_page_size)
     it's a huge page

I guess it's probably best to do that in qemu_rdma_init_one_block or
something?

I wonder how that all works when there's a mix of different huge page
sizes?

Dave
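
(A minimal sketch of the check Dave describes, assuming migration/rdma.c can
see the RAMBlock; qemu_ram_pagesize() and qemu_real_host_page_size are
existing QEMU symbols as of this thread, while the helper name is made up.)

/* Hypothetical helper: true when the RAMBlock is backed by huge pages
 * (2M/1G), i.e. its backing page size differs from the host's 4K pages. */
static bool ram_block_is_hugepage(RAMBlock *rb)
{
    return qemu_ram_pagesize(rb) != qemu_real_host_page_size;
}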

> > +            local->block[i].mr =
> > +                ibv_reg_mr(rdma->pd,
> > +                        local->block[i].local_host_addr,
> > +                        local->block[i].length,
> > +                        IBV_ACCESS_LOCAL_WRITE |
> > +                        IBV_ACCESS_REMOTE_WRITE |
> > +                        IBV_ACCESS_ON_DEMAND |
> > +                        IBV_ACCESS_HUGETLB
> > +                        );
> > +        } else {
> > +            local->block[i].mr =
> > +                ibv_reg_mr(rdma->pd,
> > +                        local->block[i].local_host_addr,
> > +                        local->block[i].length,
> > +                        IBV_ACCESS_LOCAL_WRITE |
> > +                        IBV_ACCESS_REMOTE_WRITE
> > +                        );
> > +        }
> > +
> >          if (!local->block[i].mr) {
> >              perror("Failed to register local dest ram block!\n");
> >              break;
> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
LIZHAOXIN1 [李照鑫] June 10, 2021, 3:33 p.m. UTC | #3
Yes, 'pc.ram' is the x86-specific name. I have read that
memory_region_allocate_system_memory() assigns different names
on other architectures.
Thanks for the reminder.

Regards,
lizhaoxin.

-----Original Message-----
From: Daniel P. Berrangé <berrange@redhat.com>
Sent: June 7, 2021, 22:18
To: LIZHAOXIN1 [李照鑫] <LIZHAOXIN1@kingsoft.com>
Cc: qemu-devel@nongnu.org; quintela@redhat.com; dgilbert@redhat.com; sunhao2 [孙昊] <sunhao2@kingsoft.com>; DENGLINWEN [邓林文] <DENGLINWEN@kingsoft.com>; YANGFENG1 [杨峰] <YANGFENG1@kingsoft.com>
Subject: Re: [PATCH] migration/rdma: Use huge page register VM memory

On Mon, Jun 07, 2021 at 01:57:02PM +0000, LIZHAOXIN1 [李照鑫] wrote:
> When using libvirt for RDMA live migration, if the VM memory is too 
> large, it will take a lot of time to deregister the VM at the source 
> side, resulting in a long downtime (VM 64G, deregister vm time is about 400ms).
>     
> Although the VM's memory uses 2M huge pages, the MLNX driver still 
> uses 4K pages for pin memory, as well as for unpin. So we use huge 
> pages to skip the process of pin memory and unpin memory to reduce downtime.
>    
> The test environment:
> kernel: linux-5.12
> MLNX: ConnectX-4 LX
> libvirt command:
> virsh migrate --live --p2p --persistent --copy-storage-inc 
> --listen-address \
> 0.0.0.0 --rdma-pin-all --migrateuri rdma://192.168.0.2 [VM] 
> qemu+tcp://192.168.0.2/system
>     
> Signed-off-by: lizhaoxin <lizhaoxin1@kingsoft.com>
> 
> diff --git a/migration/rdma.c b/migration/rdma.c index 
> 1cdb4561f3..9823449297 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -1123,13 +1123,26 @@ static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma)
>      RDMALocalBlocks *local = &rdma->local_ram_blocks;
>  
>      for (i = 0; i < local->nb_blocks; i++) {
> -        local->block[i].mr =
> -            ibv_reg_mr(rdma->pd,
> -                    local->block[i].local_host_addr,
> -                    local->block[i].length,
> -                    IBV_ACCESS_LOCAL_WRITE |
> -                    IBV_ACCESS_REMOTE_WRITE
> -                    );
> +        if (strcmp(local->block[i].block_name,"pc.ram") == 0) {

'pc.ram' is an x86 architecture specific name, so this will still leave a problem on other architectures I assume.

> +            local->block[i].mr =
> +                ibv_reg_mr(rdma->pd,
> +                        local->block[i].local_host_addr,
> +                        local->block[i].length,
> +                        IBV_ACCESS_LOCAL_WRITE |
> +                        IBV_ACCESS_REMOTE_WRITE |
> +                        IBV_ACCESS_ON_DEMAND |
> +                        IBV_ACCESS_HUGETLB
> +                        );
> +        } else {
> +            local->block[i].mr =
> +                ibv_reg_mr(rdma->pd,
> +                        local->block[i].local_host_addr,
> +                        local->block[i].length,
> +                        IBV_ACCESS_LOCAL_WRITE |
> +                        IBV_ACCESS_REMOTE_WRITE
> +                        );
> +        }
> +
>          if (!local->block[i].mr) {
>              perror("Failed to register local dest ram block!\n");
>              break;

Regards,
Daniel
LIZHAOXIN1 [李照鑫] June 10, 2021, 3:35 p.m. UTC | #4
Yes. When I configured two NUMA nodes for the VM, the memory blocks were
named 'ram-node*', and other architectures use different names again.
As you suggested, I now use qemu_ram_pagesize() and qemu_real_host_page_size
to determine which RAMBlocks are backed by huge pages.
I will send the second version of the patch later.

When there is a mix of different huge page sizes, the behaviour is the same
for all of them: registering (pinning) memory only serves to keep it from
being swapped out, and huge pages are never swapped out, so huge-page-backed
blocks do not need the expensive deregister/unpin step.
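
A rough sketch of what such a v2 loop in qemu_rdma_reg_whole_ram_blocks()
could look like (an assumption, not the actual v2 patch;
qemu_ram_block_by_name() is an existing QEMU helper, and caching the page
size in qemu_rdma_init_one_block(), as Dave suggested, would avoid the
by-name lookup here):

    for (i = 0; i < local->nb_blocks; i++) {
        RAMBlock *rb = qemu_ram_block_by_name(local->block[i].block_name);
        int access = IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE;

        /* Huge-page-backed RAM is never swapped out, so ODP registration
         * can skip the per-4K-page pin and the slow unpin at the end. */
        if (rb && qemu_ram_pagesize(rb) != qemu_real_host_page_size) {
            access |= IBV_ACCESS_ON_DEMAND | IBV_ACCESS_HUGETLB;
        }

        local->block[i].mr = ibv_reg_mr(rdma->pd,
                                        local->block[i].local_host_addr,
                                        local->block[i].length,
                                        access);
        if (!local->block[i].mr) {
            perror("Failed to register local dest ram block!\n");
            break;
        }
    }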

The libvirt xml of my VM is 
...
<memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='0'/>
      <page size='1048576' unit='KiB' nodeset='1'/>
    </hugepages>
  </memoryBacking>
...
<numa>
      <cell id='0' cpus='0-7' memory='31457280' unit='KiB' memAccess='shared'/>
      <cell id='1' cpus='8-15' memory='2097152' unit='KiB' memAccess='shared'/>
</numa>
...

After testing, RDMA live migration still works correctly and the downtime is significantly reduced.

-----Original Message-----
From: Dr. David Alan Gilbert <dgilbert@redhat.com>
Sent: June 7, 2021, 23:00
To: Daniel P. Berrangé <berrange@redhat.com>
Cc: LIZHAOXIN1 [李照鑫] <LIZHAOXIN1@kingsoft.com>; qemu-devel@nongnu.org; quintela@redhat.com; sunhao2 [孙昊] <sunhao2@kingsoft.com>; DENGLINWEN [邓林文] <DENGLINWEN@kingsoft.com>; YANGFENG1 [杨峰] <YANGFENG1@kingsoft.com>
Subject: Re: [PATCH] migration/rdma: Use huge page register VM memory

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Mon, Jun 07, 2021 at 01:57:02PM +0000, LIZHAOXIN1 [李照鑫] wrote:
> > When using libvirt for RDMA live migration, if the VM memory is too 
> > large, it will take a lot of time to deregister the VM at the source 
> > side, resulting in a long downtime (VM 64G, deregister vm time is about 400ms).
> >     
> > Although the VM's memory uses 2M huge pages, the MLNX driver still 
> > uses 4K pages for pin memory, as well as for unpin. So we use huge 
> > pages to skip the process of pin memory and unpin memory to reduce downtime.
> >    
> > The test environment:
> > kernel: linux-5.12
> > MLNX: ConnectX-4 LX
> > libvirt command:
> > virsh migrate --live --p2p --persistent --copy-storage-inc 
> > --listen-address \
> > 0.0.0.0 --rdma-pin-all --migrateuri rdma://192.168.0.2 [VM] 
> > qemu+tcp://192.168.0.2/system
> >     
> > Signed-off-by: lizhaoxin <lizhaoxin1@kingsoft.com>
> > 
> > diff --git a/migration/rdma.c b/migration/rdma.c index 
> > 1cdb4561f3..9823449297 100644
> > --- a/migration/rdma.c
> > +++ b/migration/rdma.c
> > @@ -1123,13 +1123,26 @@ static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma)
> >      RDMALocalBlocks *local = &rdma->local_ram_blocks;
> >  
> >      for (i = 0; i < local->nb_blocks; i++) {
> > -        local->block[i].mr =
> > -            ibv_reg_mr(rdma->pd,
> > -                    local->block[i].local_host_addr,
> > -                    local->block[i].length,
> > -                    IBV_ACCESS_LOCAL_WRITE |
> > -                    IBV_ACCESS_REMOTE_WRITE
> > -                    );
> > +        if (strcmp(local->block[i].block_name,"pc.ram") == 0) {
> 
> 'pc.ram' is an x86 architecture specific name, so this will still 
> leave a problem on other architectures I assume.

Yes, and also break even on PC when using NUMA.
I think the thing to do here is to call qemu_ram_pagesize on the RAMBlock; 

  if (qemu_ram_pagesize(RAMBlock....) != qemu_real_host_page_size)
     it's a huge page

I guess it's probably best to do that in qemu_rdma_init_one_block or something?

I wonder how that all works when there's a mix of different huge page sizes?

Dave

> > +            local->block[i].mr =
> > +                ibv_reg_mr(rdma->pd,
> > +                        local->block[i].local_host_addr,
> > +                        local->block[i].length,
> > +                        IBV_ACCESS_LOCAL_WRITE |
> > +                        IBV_ACCESS_REMOTE_WRITE |
> > +                        IBV_ACCESS_ON_DEMAND |
> > +                        IBV_ACCESS_HUGETLB
> > +                        );
> > +        } else {
> > +            local->block[i].mr =
> > +                ibv_reg_mr(rdma->pd,
> > +                        local->block[i].local_host_addr,
> > +                        local->block[i].length,
> > +                        IBV_ACCESS_LOCAL_WRITE |
> > +                        IBV_ACCESS_REMOTE_WRITE
> > +                        );
> > +        }
> > +
> >          if (!local->block[i].mr) {
> >              perror("Failed to register local dest ram block!\n");
> >              break;
> 
> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Patch

diff --git a/migration/rdma.c b/migration/rdma.c
index 1cdb4561f3..9823449297 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -1123,13 +1123,26 @@  static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma)
     RDMALocalBlocks *local = &rdma->local_ram_blocks;
 
     for (i = 0; i < local->nb_blocks; i++) {
-        local->block[i].mr =
-            ibv_reg_mr(rdma->pd,
-                    local->block[i].local_host_addr,
-                    local->block[i].length,
-                    IBV_ACCESS_LOCAL_WRITE |
-                    IBV_ACCESS_REMOTE_WRITE
-                    );
+        if (strcmp(local->block[i].block_name,"pc.ram") == 0) {
+            local->block[i].mr =
+                ibv_reg_mr(rdma->pd,
+                        local->block[i].local_host_addr,
+                        local->block[i].length,
+                        IBV_ACCESS_LOCAL_WRITE |
+                        IBV_ACCESS_REMOTE_WRITE |
+                        IBV_ACCESS_ON_DEMAND |
+                        IBV_ACCESS_HUGETLB
+                        );
+        } else {
+            local->block[i].mr =
+                ibv_reg_mr(rdma->pd,
+                        local->block[i].local_host_addr,
+                        local->block[i].length,
+                        IBV_ACCESS_LOCAL_WRITE |
+                        IBV_ACCESS_REMOTE_WRITE
+                        );
+        }
+
         if (!local->block[i].mr) {
             perror("Failed to register local dest ram block!\n");
             break;