[03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots

Message ID 20210210230625.550939-4-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: selftests: Hugepage fixes and cleanups

Commit Message

Sean Christopherson Feb. 10, 2021, 11:06 p.m. UTC
Align the HVA for HugeTLB memslots, not just THP memslots.  Add an
assert so any future backing types are forced to assess whether or not
they need to be aligned.

Cc: Ben Gardon <bgardon@google.com>
Cc: Yanan Wang <wangyanan55@huawei.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Aaron Lewis <aaronlewis@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Ben Gardon Feb. 11, 2021, 12:52 a.m. UTC | #1
On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Align the HVA for HugeTLB memslots, not just THP memslots.  Add an
> assert so any future backing types are forced to assess whether or not
> they need to be aligned.
>
> Cc: Ben Gardon <bgardon@google.com>
> Cc: Yanan Wang <wangyanan55@huawei.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 584167c6dbc7..deaeb47b5a6d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>         alignment = 1;
>  #endif
>
> -       if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> +       if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
> +           src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
>                 alignment = max(huge_page_size, alignment);
> +       else
> +               ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
>
>         /* Add enough memory to align up if necessary */
>         if (alignment > 1)
> --
> 2.30.0.478.g8a0d178c01-goog
>
Yanan Wang Feb. 25, 2021, 7:40 a.m. UTC | #2
Hi Sean,

On 2021/2/11 7:06, Sean Christopherson wrote:
> Align the HVA for HugeTLB memslots, not just THP memslots.  Add an
> assert so any future backing types are forced to assess whether or not
> they need to be aligned.
>
> Cc: Ben Gardon <bgardon@google.com>
> Cc: Yanan Wang <wangyanan55@huawei.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 584167c6dbc7..deaeb47b5a6d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>   	alignment = 1;
>   #endif
>   
> -	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> +	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
> +	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
Sorry for the late reply, I just returned from vacation.
I am not sure HVA alignment is really necessary here for hugetlb pages.
Different from hugetlb pages, THP pages are dynamically allocated by the
later madvise(), so the HVA returned from mmap() is host page size
aligned but not THP page size aligned, and we indeed have to perform the
alignment. But hugetlb pages are pre-allocated on the system. The
following test results also indicate that, with the MAP_HUGETLB flag,
the HVA returned from mmap() is already aligned to the corresponding
hugetlb page size. So maybe the HVAs of hugetlb pages are aligned during
their allocation, or in mmap()? If so, I think we'd better not do this
again here, because the later *region->mmap_size += alignment* will
cause one more hugetlb page to be mapped but never used.

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_1gb
some outputs:
Host  virtual  test memory offset: 0xffff40000000
Host  virtual  test memory offset: 0xffff00000000
Host  virtual  test memory offset: 0x400000000000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_2mb
some outputs:
Host  virtual  test memory offset: 0xffff48000000
Host  virtual  test memory offset: 0xffff65400000
Host  virtual  test memory offset: 0xffff6ba00000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_32mb
some outputs:
Host  virtual  test memory offset: 0xffff70000000
Host  virtual  test memory offset: 0xffff4c000000
Host  virtual  test memory offset: 0xffff72000000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_64kb
some outputs:
Host  virtual  test memory offset: 0xffff58230000
Host  virtual  test memory offset: 0xffff6ef00000
Host  virtual  test memory offset: 0xffff7c150000

Thanks,
Yanan
>   		alignment = max(huge_page_size, alignment);
> +	else
> +		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
>   
>   	/* Add enough memory to align up if necessary */
>   	if (alignment > 1)
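
Yanan's observation is easy to check in isolation. The following
standalone sketch (not part of this series; it assumes a 2 MiB default
hugepage size and that hugepages have been reserved, e.g. via
/proc/sys/vm/nr_hugepages) maps a region with MAP_HUGETLB and reports
whether the returned HVA is hugepage aligned:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* Assumption: 2 MiB default hugepage size, pages reserved. */
            const size_t huge_page_size = 2 * 1024 * 1024;
            void *hva = mmap(NULL, huge_page_size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

            if (hva == MAP_FAILED) {
                    perror("mmap(MAP_HUGETLB)");
                    return 1;
            }

            /* Per the outputs above, this should always print "aligned". */
            printf("HVA %p is %s\n", hva,
                   ((uintptr_t)hva & (huge_page_size - 1)) ? "NOT aligned"
                                                           : "aligned");

            munmap(hva, huge_page_size);
            return 0;
    }
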
Sean Christopherson March 13, 2021, 12:17 a.m. UTC | #3
On Thu, Feb 25, 2021, wangyanan (Y) wrote:
> Hi Sean,
> 
> On 2021/2/11 7:06, Sean Christopherson wrote:
> > Align the HVA for HugeTLB memslots, not just THP memslots.  Add an
> > assert so any future backing types are forced to assess whether or not
> > they need to be aligned.
> > 
> > Cc: Ben Gardon <bgardon@google.com>
> > Cc: Yanan Wang <wangyanan55@huawei.com>
> > Cc: Andrew Jones <drjones@redhat.com>
> > Cc: Peter Xu <peterx@redhat.com>
> > Cc: Aaron Lewis <aaronlewis@google.com>
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> >   tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
> >   1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > index 584167c6dbc7..deaeb47b5a6d 100644
> > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > @@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
> >   	alignment = 1;
> >   #endif
> > -	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> > +	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
> > +	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
> Sorry for the late reply, I just returned from vacation.

At least you had an excuse :-)

> I am not sure HVA alignment is really necessary here for hugetlb pages.
> Different from hugetlb pages, THP pages are dynamically allocated by the
> later madvise(), so the HVA returned from mmap() is host page size aligned
> but not THP page size aligned, and we indeed have to perform the alignment.
> But hugetlb pages are pre-allocated on the system. The following test
> results also indicate that, with the MAP_HUGETLB flag, the HVA returned
> from mmap() is already aligned to the corresponding hugetlb page size. So
> maybe the HVAs of hugetlb pages are aligned during their allocation, or in mmap()?

Yep, I verified that the generic and arch versions of hugetlb_get_unmapped_area()
all align the address on the architectures KVM supports.  PARISC64 is the only
one that looks suspect, but it doesn't support KVM.

> If so, I think we'd better not do this again here, because the later
> *region->mmap_size += alignment* will cause one more hugetlb page to be
> mapped but never used.

Agreed.  I think I'll still add the assert, along with a comment calling out
that HugeTLB automatically handles alignment.

Nice catch, thanks!
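
A sketch of what the reworked hunk might look like, given the plan above
(illustrative only; the enum values match this file, but the exact assert
and comment wording in the final commit may differ):

    /*
     * mmap() with MAP_HUGETLB returns an address that is already
     * aligned to the HugeTLB page size, so only THP-backed memory
     * needs to be aligned manually here.
     */
    if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
            alignment = max(huge_page_size, alignment);
    else
            TEST_ASSERT(src_type == VM_MEM_SRC_ANONYMOUS ||
                        src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB,
                        "Unsupported backing src type: %d", src_type);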

Patch

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 584167c6dbc7..deaeb47b5a6d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	alignment = 1;
 #endif
 
-	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
+	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
+	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
 		alignment = max(huge_page_size, alignment);
+	else
+		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
 
 	/* Add enough memory to align up if necessary */
 	if (alignment > 1)
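
For reference, the padding set up by the "alignment > 1" branch pairs
with the usual power-of-two align-up idiom (kvm_util.c has its own
helper for this; the sketch below is a generic illustration):

    #include <stdint.h>

    /* Round addr up to the next multiple of a power-of-two alignment. */
    static inline uintptr_t align_up(uintptr_t addr, uintptr_t alignment)
    {
            return (addr + alignment - 1) & ~(alignment - 1);
    }

    /*
     * Usage: over-allocate the mmap() by 'alignment' bytes, then round
     * the returned address up, e.g. for a 2 MiB THP alignment:
     *
     *     uintptr_t hva = align_up((uintptr_t)mmap_start, 2UL << 20);
     */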