
migration/block: Avoid involve into blk_drain too frequently

Message ID 1489478243-10366-1-git-send-email-jemmy858585@gmail.com (mailing list archive)
State New, archived

Commit Message

858585 jemmy March 14, 2017, 7:57 a.m. UTC
From: Lidong Chen <jemmy858585@gmail.com>

Increase bmds->cur_dirty after submit io, so reduce the frequency involve into blk_drain, and improve the performance obviously when block migration.

Signed-off-by: Lidong Chen <jemmy858585@gmail.com>
---
 migration/block.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Eric Blake March 14, 2017, 3:12 p.m. UTC | #1
On 03/14/2017 02:57 AM, jemmy858585@gmail.com wrote:
> From: Lidong Chen <jemmy858585@gmail.com>
> 
> Increase bmds->cur_dirty after submit io, so reduce the frequency involve into blk_drain, and improve the performance obviously when block migration.

Long line; please wrap your commit messages, preferably around 70 bytes
since 'git log' displays them indented, and it is still nice to read
them in an 80-column window.

Do you have benchmark numbers to prove the impact of this patch, or even
a formula for reproducing the benchmark testing?

> 
> Signed-off-by: Lidong Chen <jemmy858585@gmail.com>
> ---
>  migration/block.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/migration/block.c b/migration/block.c
> index 6741228..f059cca 100644
> --- a/migration/block.c
> +++ b/migration/block.c
> @@ -576,6 +576,8 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
>              }
>  
>              bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, sector, nr_sectors);
> +            sector += nr_sectors;
> +            bmds->cur_dirty = sector;
>              break;
>          }
>          sector += BDRV_SECTORS_PER_DIRTY_CHUNK;
>
Eric Blake March 14, 2017, 3:15 p.m. UTC | #2
On 03/14/2017 10:12 AM, Eric Blake wrote:
> On 03/14/2017 02:57 AM, jemmy858585@gmail.com wrote:
>> From: Lidong Chen <jemmy858585@gmail.com>
>>
>> Increase bmds->cur_dirty after submit io, so reduce the frequency involve into blk_drain, and improve the performance obviously when block migration.
> 
> Long line; please wrap your commit messages, preferably around 70 bytes
> since 'git log' displays them indented, and it is still nice to read
> them in an 80-column window.

Also, in the subject:

s/involve into/invoking/
858585 jemmy March 15, 2017, 2:28 a.m. UTC | #3
On Tue, Mar 14, 2017 at 11:12 PM, Eric Blake <eblake@redhat.com> wrote:
> On 03/14/2017 02:57 AM, jemmy858585@gmail.com wrote:
>> From: Lidong Chen <jemmy858585@gmail.com>
>>
>> Increase bmds->cur_dirty after submit io, so reduce the frequency involve into blk_drain, and improve the performance obviously when block migration.
>
> Long line; please wrap your commit messages, preferably around 70 bytes
> since 'git log' displays them indented, and it is still nice to read
> them in an 80-column window.
>
> Do you have benchmark numbers to prove the impact of this patch, or even
> a formula for reproducing the benchmark testing?
>

The test result is based on the current git master version.

The XML of the guest OS:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/instanceimage/ab3ba978-c7a3-463d-a1d0-48649fb7df00/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vda.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/domu/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vdb'/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>

I ran fio in the guest OS. The fio configuration is below:
[randwrite]
ioengine=libaio
iodepth=128
bs=512
filename=/dev/vdb
rw=randwrite
direct=1

When the VM is not migrating, the IOPS is about 10.7K.

Then I used these commands to start migrating the virtual machine:

virsh migrate-setspeed ab3ba978-c7a3-463d-a1d0-48649fb7df00 1000
virsh migrate --live ab3ba978-c7a3-463d-a1d0-48649fb7df00 --copy-storage-inc qemu+ssh://10.59.163.38/system

Before applying this patch, during the block dirty save phase, the
IOPS in the guest OS is only 4.0K and the migration speed is about
505856 rsec/s. After applying this patch, the IOPS in the guest OS
is 9.5K and the migration speed is about 855756 rsec/s.

With an older QEMU (1.2.0), which calls bdrv_drain_all to wait for
AIO completion, the result before this patch is even worse: the main
thread is blocked in bdrv_drain_all for a long time, so VNC becomes
very slow to respond. This problem is only obvious when the migration
speed is set to a large value.

>>
>> Signed-off-by: Lidong Chen <jemmy858585@gmail.com>
>> ---
>>  migration/block.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/migration/block.c b/migration/block.c
>> index 6741228..f059cca 100644
>> --- a/migration/block.c
>> +++ b/migration/block.c
>> @@ -576,6 +576,8 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
>>              }
>>
>>              bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, sector, nr_sectors);
>> +            sector += nr_sectors;
>> +            bmds->cur_dirty = sector;
>>              break;
>>          }
>>          sector += BDRV_SECTORS_PER_DIRTY_CHUNK;
>>
>
> --
> Eric Blake   eblake redhat com    +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
>
858585 jemmy March 15, 2017, 2:34 a.m. UTC | #4
On Tue, Mar 14, 2017 at 11:15 PM, Eric Blake <eblake@redhat.com> wrote:
>
> On 03/14/2017 10:12 AM, Eric Blake wrote:
> > On 03/14/2017 02:57 AM, jemmy858585@gmail.com wrote:
> >> From: Lidong Chen <jemmy858585@gmail.com>
> >>
> >> Increase bmds->cur_dirty after submit io, so reduce the frequency involve into blk_drain, and improve the performance obviously when block migration.
> >
> > Long line; please wrap your commit messages, preferably around 70 bytes
> > since 'git log' displays them indented, and it is still nice to read
> > them in an 80-column window.
>
> Also, in the subject:
>
> s/involve into/invoking/

Thanks for your suggestion.
This is the first time I have submitted a patch for QEMU; I will send a new version.

>
> --
> Eric Blake   eblake redhat com    +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
>
Fam Zheng March 15, 2017, 2:57 a.m. UTC | #5
On Wed, 03/15 10:28, 858585 jemmy wrote:
> On Tue, Mar 14, 2017 at 11:12 PM, Eric Blake <eblake@redhat.com> wrote:
> > On 03/14/2017 02:57 AM, jemmy858585@gmail.com wrote:
> >> From: Lidong Chen <jemmy858585@gmail.com>
> >>
> >> Increase bmds->cur_dirty after submit io, so reduce the frequency involve into blk_drain, and improve the performance obviously when block migration.
> >
> > Long line; please wrap your commit messages, preferably around 70 bytes
> > since 'git log' displays them indented, and it is still nice to read
> > them in an 80-column window.
> >
> > Do you have benchmark numbers to prove the impact of this patch, or even
> > a formula for reproducing the benchmark testing?
> >
> 
> the test result is base on current git master version.
> 
> the xml of guest os:
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='qcow2' cache='none'/>
>       <source file='/instanceimage/ab3ba978-c7a3-463d-a1d0-48649fb7df00/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vda.qcow2'/>
>       <target dev='vda' bus='virtio'/>
>       <alias name='virtio-disk0'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
> function='0x0'/>
>     </disk>
>     <disk type='block' device='disk'>
>       <driver name='qemu' type='raw' cache='none' io='native'/>
>       <source dev='/dev/domu/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vdb'/>
>       <target dev='vdb' bus='virtio'/>
>       <alias name='virtio-disk1'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
> function='0x0'/>
>     </disk>
> 
> i used fio running in guest os.  and the context of  fio configuration is below:
> [randwrite]
> ioengine=libaio
> iodepth=128
> bs=512
> filename=/dev/vdb
> rw=randwrite
> direct=1
> 
> when the vm is not durning migrate, the iops is about 10.7K.
> 
> then i used this command to start migrate virtual machine.
> 
> virsh migrate-setspeed ab3ba978-c7a3-463d-a1d0-48649fb7df00 1000
> virsh migrate --live ab3ba978-c7a3-463d-a1d0-48649fb7df00
> --copy-storage-inc qemu+ssh://10.59.163.38/system
> 
> before apply this patch, during the block dirty save phase, the iops
> in guest os is  only 4.0K, the migrate speed is about 505856 rsec/s.
> after apply this patch, during the block dirty save phase, the iops in
> guest os is is 9.5K. the migrate speed is about 855756 rsec/s.

Thanks, please include these numbers in the commit message too.

Fam
858585 jemmy March 15, 2017, 3:11 a.m. UTC | #6
On Wed, Mar 15, 2017 at 10:57 AM, Fam Zheng <famz@redhat.com> wrote:
> On Wed, 03/15 10:28, 858585 jemmy wrote:
>> On Tue, Mar 14, 2017 at 11:12 PM, Eric Blake <eblake@redhat.com> wrote:
>> > On 03/14/2017 02:57 AM, jemmy858585@gmail.com wrote:
>> >> From: Lidong Chen <jemmy858585@gmail.com>
>> >>
>> >> Increase bmds->cur_dirty after submit io, so reduce the frequency involve into blk_drain, and improve the performance obviously when block migration.
>> >
>> > Long line; please wrap your commit messages, preferably around 70 bytes
>> > since 'git log' displays them indented, and it is still nice to read
>> > them in an 80-column window.
>> >
>> > Do you have benchmark numbers to prove the impact of this patch, or even
>> > a formula for reproducing the benchmark testing?
>> >
>>
>> the test result is base on current git master version.
>>
>> the xml of guest os:
>>     <disk type='file' device='disk'>
>>       <driver name='qemu' type='qcow2' cache='none'/>
>>       <source file='/instanceimage/ab3ba978-c7a3-463d-a1d0-48649fb7df00/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vda.qcow2'/>
>>       <target dev='vda' bus='virtio'/>
>>       <alias name='virtio-disk0'/>
>>       <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
>> function='0x0'/>
>>     </disk>
>>     <disk type='block' device='disk'>
>>       <driver name='qemu' type='raw' cache='none' io='native'/>
>>       <source dev='/dev/domu/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vdb'/>
>>       <target dev='vdb' bus='virtio'/>
>>       <alias name='virtio-disk1'/>
>>       <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
>> function='0x0'/>
>>     </disk>
>>
>> i used fio running in guest os.  and the context of  fio configuration is below:
>> [randwrite]
>> ioengine=libaio
>> iodepth=128
>> bs=512
>> filename=/dev/vdb
>> rw=randwrite
>> direct=1
>>
>> when the vm is not durning migrate, the iops is about 10.7K.
>>
>> then i used this command to start migrate virtual machine.
>>
>> virsh migrate-setspeed ab3ba978-c7a3-463d-a1d0-48649fb7df00 1000
>> virsh migrate --live ab3ba978-c7a3-463d-a1d0-48649fb7df00
>> --copy-storage-inc qemu+ssh://10.59.163.38/system
>>
>> before apply this patch, during the block dirty save phase, the iops
>> in guest os is  only 4.0K, the migrate speed is about 505856 rsec/s.
>> after apply this patch, during the block dirty save phase, the iops in
>> guest os is is 9.5K. the migrate speed is about 855756 rsec/s.
>
> Thanks, please include these numbers in the commit message too.
OK, I will.
>
> Fam

Patch

diff --git a/migration/block.c b/migration/block.c
index 6741228..f059cca 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -576,6 +576,8 @@  static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
             }
 
             bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, sector, nr_sectors);
+            sector += nr_sectors;
+            bmds->cur_dirty = sector;
             break;
         }
         sector += BDRV_SECTORS_PER_DIRTY_CHUNK;