
[2/5] KVM: PPC: Book3S HV: Align gfn to L1 page size when inserting nest-rmap entry

Message ID 20181221032843.13012-3-sjitindarsingh@gmail.com (mailing list archive)
State New, archived
Series KVM: PPC: Book3S HV: Fix dirty page logging for a nested guest

Commit Message

Suraj Jitindar Singh Dec. 21, 2018, 3:28 a.m. UTC
Nested rmap entries are used to store the translation from L1 gpa to L2
gpa when entries are inserted into the shadow (nested) page tables. This
rmap list is located by indexing the rmap array in the memslot by L1
gfn. When we come to search for these entries we only know the L1 page
size (which could be PAGE_SIZE, 2M or 1G) and so can only compute a gfn
aligned to that size. This means that when we insert an entry, we must
also align the gfn used to select the rmap list to the L1 page size, so
that the entry can be found again later.

Without this alignment we were missing nested rmap entries when
modifying L1 ptes for pages which had also been passed through to an L2
guest.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 arch/powerpc/kvm/book3s_hv_nested.c | 2 ++
 1 file changed, 2 insertions(+)
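
A rough stand-alone illustration of the gfn alignment described in the
commit message (a sketch only: PAGE_SHIFT, the example gpa and the
variable names are invented for illustration; just the masking
expression mirrors the line the patch adds):

/*
 * Sketch of the gfn alignment: round the L1 gpa down to the start of
 * the L1 page before converting it to a gfn.
 */
#include <stdio.h>

#define PAGE_SHIFT	12	/* 4K base pages (illustrative) */

int main(void)
{
	unsigned long gpa = 0x40212345UL;	/* example L1 guest physical address */
	unsigned int shift = 21;		/* L1 mapping uses a 2M page */

	/* Unaligned: picks the rmap list of the 4K frame containing gpa */
	unsigned long gfn_unaligned = gpa >> PAGE_SHIFT;

	/* Aligned: round gpa down to the start of the 2M L1 page first */
	unsigned long gfn_aligned = (gpa & ~((1UL << shift) - 1)) >> PAGE_SHIFT;

	printf("unaligned gfn 0x%lx, aligned gfn 0x%lx\n",
	       gfn_unaligned, gfn_aligned);
	/* prints 0x40212 vs 0x40200; the search side only computes 0x40200 */
	return 0;
}

With the gfn aligned like this, the insert path and a later search both
land on the same rmap list in the memslot's rmap array.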

Comments

David Gibson Jan. 2, 2019, 2:48 a.m. UTC | #1
On Fri, Dec 21, 2018 at 02:28:40PM +1100, Suraj Jitindar Singh wrote:
> Nested rmap entries are used to store the translation from L1 gpa to L2
> gpa when entries are inserted into the shadow (nested) page tables. This
> rmap list is located by indexing the rmap array in the memslot by L1
> gfn. When we come to search for these entries we only know the L1 page
> size (which could be PAGE_SIZE, 2M or 1G) and so can only compute a gfn
> aligned to that size. This means that when we insert an entry, we must
> also align the gfn used to select the rmap list to the L1 page size, so
> that the entry can be found again later.
> 
> Without this alignment we were missing nested rmap entries when
> modifying L1 ptes for pages which had also been passed through to an L2
> guest.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  arch/powerpc/kvm/book3s_hv_nested.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
> index 0dfbf093bde5..9dfb927ea14f 100644
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -1226,6 +1226,8 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
>  			return ret;
>  		shift = kvmppc_radix_level_to_shift(level);
>  	}
> +	/* Align gfn to the start of the page */
> +	gfn = (gpa & ~((1UL << shift) - 1)) >> PAGE_SHIFT;
>  
>  	/* 3. Compute the pte we need to insert for nest_gpa -> host r_addr */
>

Patch

diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 0dfbf093bde5..9dfb927ea14f 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -1226,6 +1226,8 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 			return ret;
 		shift = kvmppc_radix_level_to_shift(level);
 	}
+	/* Align gfn to the start of the page */
+	gfn = (gpa & ~((1UL << shift) - 1)) >> PAGE_SHIFT;
 
 	/* 3. Compute the pte we need to insert for nest_gpa -> host r_addr */