
[1/2] kvm: fix dirty bit tracking for slots with large pages

Message ID 1244634691-9765-2-git-send-email-ieidus@redhat.com (mailing list archive)
State New, archived

Commit Message

Izik Eidus June 10, 2009, 11:51 a.m. UTC
When a slot is already allocated and is asked to be tracked, we need to break
up its large pages.

This code flushes the MMU when someone asks a slot to start dirty bit tracking.

Signed-off-by: Izik Eidus <ieidus@redhat.com>
---
 virt/kvm/kvm_main.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

Comments

Avi Kivity June 10, 2009, 11:54 a.m. UTC | #1
Izik Eidus wrote:
> When a slot is already allocated and is asked to be tracked, we need to break
> up its large pages.
>
> This code flushes the MMU when someone asks a slot to start dirty bit tracking.
>
> Signed-off-by: Izik Eidus <ieidus@redhat.com>
> ---
>  virt/kvm/kvm_main.c |    2 ++
>  1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 669eb4a..4a60c72 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1160,6 +1160,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  			new.userspace_addr = mem->userspace_addr;
>  		else
>  			new.userspace_addr = 0;
> +
> +		kvm_arch_flush_shadow(kvm);
>  	}
>  	if (npages && !new.lpage_info) {
>  		largepages = 1 + (base_gfn + npages - 1) / KVM_PAGES_PER_HPAGE;
>   

Ryan, can you try this out with your large page migration failures?

Izik Eidus June 10, 2009, 12:06 p.m. UTC | #2
Avi Kivity wrote:
> Izik Eidus wrote:
>> When a slot is already allocated and is asked to be tracked, we need
>> to break up its large pages.
>>
>> This code flushes the MMU when someone asks a slot to start dirty bit
>> tracking.
>>
>> Signed-off-by: Izik Eidus <ieidus@redhat.com>
>> ---
>>  virt/kvm/kvm_main.c |    2 ++
>>  1 files changed, 2 insertions(+), 0 deletions(-)
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 669eb4a..4a60c72 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -1160,6 +1160,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
>>              new.userspace_addr = mem->userspace_addr;
>>          else
>>              new.userspace_addr = 0;
>> +
>> +        kvm_arch_flush_shadow(kvm);
>>      }
>>      if (npages && !new.lpage_info) {
>>          largepages = 1 + (base_gfn + npages - 1) / KVM_PAGES_PER_HPAGE;
>>   
>
> Ryan, can you try this out with your large page migration failures?
>
Wait, I think it is in the wrong place; I am sending a second series :(

Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 669eb4a..4a60c72 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1160,6 +1160,8 @@  int __kvm_set_memory_region(struct kvm *kvm,
 			new.userspace_addr = mem->userspace_addr;
 		else
 			new.userspace_addr = 0;
+
+		kvm_arch_flush_shadow(kvm);
 	}
 	if (npages && !new.lpage_info) {
 		largepages = 1 + (base_gfn + npages - 1) / KVM_PAGES_PER_HPAGE;