
KVM: dirty all pages in kvm_write_guest_cached()

Message ID 1428438897-22206-1-git-send-email-rkrcmar@redhat.com (mailing list archive)
State New, archived

Commit Message

Radim Krčmář April 7, 2015, 8:34 p.m. UTC
We dirtied only one page because writes originally couldn't span more than one.
While at it, use gpa_to_gfn() instead of the open-coded '>> PAGE_SHIFT'.

Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
---
 The function handles cross-memslot writes in a different path.

 I think we should dirty pages after partial writes too (r < len),
 but it probably won't happen and I already started refactoring :)

 virt/kvm/kvm_main.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
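
To illustrate the fix, a minimal standalone sketch of the page range the
new loop dirties (PAGE_SHIFT, gpa and len here are illustrative values,
not taken from the patch):

  #include <stdio.h>

  #define PAGE_SHIFT 12             /* 4 KiB pages, as on x86 */

  typedef unsigned long long gpa_t;
  typedef unsigned long long gfn_t;

  static gfn_t gpa_to_gfn(gpa_t gpa)
  {
          return gpa >> PAGE_SHIFT; /* the helper the patch switches to */
  }

  int main(void)
  {
          /* an 8-byte write starting 4 bytes before a page boundary */
          gpa_t gpa = 0x1ffc;
          unsigned long len = 8;
          gfn_t gfn;

          /* same bounds as the patched loop: first to last touched page */
          for (gfn = gpa_to_gfn(gpa); gfn <= gpa_to_gfn(gpa + len - 1); gfn++)
                  printf("dirty gfn %llu\n", gfn); /* prints 1, then 2 */

          return 0;
  }

The unpatched code would have marked only gfn 1 dirty for this write.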

Comments

Paolo Bonzini April 8, 2015, 8:49 a.m. UTC | #1
On 07/04/2015 22:34, Radim Krčmář wrote:
> We dirtied only one page because writes originally couldn't span more than one.
> While at it, use gpa_to_gfn() instead of the open-coded '>> PAGE_SHIFT'.
> 
> Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>

Cross-page reads and writes should never get here; they have
ghc->memslot set to NULL and go through the slow path in kvm_write_guest.

What am I missing?

Paolo

> ---
>  The function handles cross-memslot writes in a different path.
> 
>  I think we should dirty pages after partial writes too (r < len),
>  but it probably won't happen and I already started refactoring :)
> 
>  virt/kvm/kvm_main.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index aadef264bed1..863df9dcab6f 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1665,6 +1665,7 @@ int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
>  {
>  	struct kvm_memslots *slots = kvm_memslots(kvm);
>  	int r;
> +	gfn_t gfn;
>  
>  	BUG_ON(len > ghc->len);
>  
> @@ -1680,7 +1681,10 @@ int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
>  	r = __copy_to_user((void __user *)ghc->hva, data, len);
>  	if (r)
>  		return -EFAULT;
> -	mark_page_dirty_in_slot(kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
> +
> +	for (gfn = gpa_to_gfn(ghc->gpa);
> +	     gfn <= gpa_to_gfn(ghc->gpa + len - 1); gfn++)
> +		mark_page_dirty_in_slot(kvm, ghc->memslot, gfn);
>  
>  	return 0;
>  }
> 
Radim Krčmář April 8, 2015, 9:26 a.m. UTC | #2
2015-04-08 10:49+0200, Paolo Bonzini:
> On 07/04/2015 22:34, Radim Krčmář wrote:
> > We dirtied only one page because writes originally couldn't span more than one.
> > While at it, use gpa_to_gfn() instead of the open-coded '>> PAGE_SHIFT'.
> > 
> > Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
> > Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
> 
> Cross-page reads and writes should never get here; they have
> ghc->memslot set to NULL and go through the slow path in kvm_write_guest.

Only cross-memslot writes have NULL memslot.

> What am I missing?

kvm_gfn_to_hva_cache_init() queries how many pages remain in the
memslot and compares that with the number of pages needed.
If the write fits in the memslot, it is done without kvm_write_guest,
regardless of how many pages it spans.

The relevant code path in kvm_gfn_to_hva_cache_init():
  /* pages the cached range spans */
  gfn_t nr_pages_needed = end_gfn - start_gfn + 1;
  ghc->memslot = gfn_to_memslot(kvm, start_gfn);
  /* nr_pages_avail: pages left in the memslot from start_gfn on */
  ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail);
  /* the fast path is kept whenever the slot has room, however many
     pages the range spans */
  if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed)
    ghc->hva += offset;
  return 0;
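
As a worked example (numbers illustrative): a two-page cache sitting
well inside a 512-page memslot gives nr_pages_needed = 2 and
nr_pages_avail in the hundreds, so the fast path is kept and
ghc->memslot stays non-NULL even though the range crosses a page
boundary. Only a range that spills into the next memslot takes the
elided else branch, which leaves ghc->memslot NULL and falls back to
the kvm_write_guest() slow path.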
Paolo Bonzini April 8, 2015, 10:43 a.m. UTC | #3
On 08/04/2015 11:26, Radim Krčmář wrote:
> 2015-04-08 10:49+0200, Paolo Bonzini:
>> On 07/04/2015 22:34, Radim Kr?má? wrote:
>>> We dirtied only one page because writes originally couldn't span more than one.
>>> While at it, use gpa_to_gfn() instead of the open-coded '>> PAGE_SHIFT'.
>>>
>>> Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
>>> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
>>
>> Cross-page reads and writes should never get here; they have
>> ghc->memslot set to NULL and go through the slow path in kvm_write_guest.
> 
> Only cross-memslot writes have NULL memslot.

The power of wrong comments...

Considering how kvm_gfn_to_hva_cache_init is used (one 1-byte field, two
4-byte fields, one 28-byte struct that is 32-byte aligned, one 32-byte
field that is in practice cacheline-aligned), I wonder if we should just
use ghc->memslot = NULL for cross-page writes.  This would bypass the bug
you are fixing here, and avoid worries about partial writes.
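
A minimal sketch of that idea, assuming the check lives in
kvm_gfn_to_hva_cache_init() next to the code quoted above (names as in
that snippet; a sketch of the suggestion, not a committed fix):

  if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed &&
      nr_pages_needed == 1)
    ghc->hva += offset;
  else
    /* treat any multi-page range like a cross-memslot one: slow path */
    ghc->memslot = NULL;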

Paolo

Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index aadef264bed1..863df9dcab6f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1665,6 +1665,7 @@ int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	int r;
+	gfn_t gfn;
 
 	BUG_ON(len > ghc->len);
 
@@ -1680,7 +1681,10 @@ int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 	r = __copy_to_user((void __user *)ghc->hva, data, len);
 	if (r)
 		return -EFAULT;
-	mark_page_dirty_in_slot(kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
+
+	for (gfn = gpa_to_gfn(ghc->gpa);
+	     gfn <= gpa_to_gfn(ghc->gpa + len - 1); gfn++)
+		mark_page_dirty_in_slot(kvm, ghc->memslot, gfn);
 
 	return 0;
 }