
[v2,2/6] LoongArch: KVM: Select huge page only if secondary mmu supports it

Message ID: 20240619080940.2690756-3-maobibo@loongson.cn
State: New, archived
Series: LoongArch: KVM: Fix some issues relative with mmu

Commit Message

Bibo Mao June 19, 2024, 8:09 a.m. UTC
Currently, page level selection for the secondary mmu depends only on the
memory slot and the page level of the host mmu. This is a problem if the
page level of the secondary mmu is already zero. So page level selection
should depend on the following three conditions:
 1. The memslot is aligned for huge pages and the VM is not migrating.
 2. The page level of the host mmu is a huge page as well.
 3. The page level of the secondary mmu is suitable for a huge page; it
cannot be a normal page, since merging normal pages into a huge page is
not supported yet.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
---
 arch/loongarch/include/asm/kvm_mmu.h |  2 +-
 arch/loongarch/kvm/mmu.c             | 16 +++++++++++++---
 2 files changed, 14 insertions(+), 4 deletions(-)
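
To make the intended behaviour concrete, here is a small standalone sketch of
the selection logic described above. The names kvm_pte_huge() and _PAGE_HUGE
mirror the patch, but the type, the bit value and the select_level() helper are
simplified stand-ins for illustration, not the kernel definitions.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel definitions. */
typedef unsigned long kvm_pte_t;
#define _PAGE_HUGE (1UL << 6)	/* placeholder bit, not the real layout */

static int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }

/*
 * Pick the mapping level for the secondary mmu (0 = normal page,
 * 1 = huge page), following the three conditions above: the memslot
 * must allow huge mappings, the host mmu must already map the range
 * as a huge page, and an existing normal-page mapping in the
 * secondary mmu forces level 0, since merging normal pages into a
 * huge page is not supported.
 */
static int select_level(bool slot_allows_huge, int host_level,
			const kvm_pte_t *secondary_pte)
{
	int level = 0;

	if (slot_allows_huge) {
		level = host_level;
		if (level == 1 && secondary_pte && !kvm_pte_huge(*secondary_pte))
			level = 0;
	}
	return level;
}

int main(void)
{
	kvm_pte_t normal = 0;			/* existing normal-page pte */
	kvm_pte_t huge = _PAGE_HUGE;		/* existing huge-page pte   */

	printf("%d\n", select_level(true, 1, &normal));	/* 0: must stay normal  */
	printf("%d\n", select_level(true, 1, &huge));	/* 1: huge page allowed */
	printf("%d\n", select_level(true, 1, NULL));	/* 1: no existing pte   */
	printf("%d\n", select_level(false, 1, NULL));	/* 0: memslot disallows */
	return 0;
}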

Comments

Huacai Chen June 23, 2024, 7:55 a.m. UTC | #1
Hi, Bibo,

On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@loongson.cn> wrote:
>
> Currently, page level selection for the secondary mmu depends only on the
> memory slot and the page level of the host mmu. This is a problem if the
> page level of the secondary mmu is already zero. So page level selection
> should depend on the following three conditions:
>  1. The memslot is aligned for huge pages and the VM is not migrating.
>  2. The page level of the host mmu is a huge page as well.
>  3. The page level of the secondary mmu is suitable for a huge page; it
> cannot be a normal page, since merging normal pages into a huge page is
> not supported yet.
>
> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
> ---
>  arch/loongarch/include/asm/kvm_mmu.h |  2 +-
>  arch/loongarch/kvm/mmu.c             | 16 +++++++++++++---
>  2 files changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/arch/loongarch/include/asm/kvm_mmu.h b/arch/loongarch/include/asm/kvm_mmu.h
> index 099bafc6f797..d06ae0e0dde5 100644
> --- a/arch/loongarch/include/asm/kvm_mmu.h
> +++ b/arch/loongarch/include/asm/kvm_mmu.h
> @@ -55,7 +55,7 @@ static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val)
>  static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; }
>  static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; }
>  static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; }
> -static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }
> +static inline int kvm_pte_huge(kvm_pte_t pte)  { return !!(pte & _PAGE_HUGE); }
Why do we need this change?

Huacai

>
>  static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte)
>  {
> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> index 9e39d28fec35..c6351d13ca1b 100644
> --- a/arch/loongarch/kvm/mmu.c
> +++ b/arch/loongarch/kvm/mmu.c
> @@ -858,10 +858,20 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>
>         /* Disable dirty logging on HugePages */
>         level = 0;
> -       if (!fault_supports_huge_mapping(memslot, hva, write)) {
> -               level = 0;
> -       } else {
> +       if (fault_supports_huge_mapping(memslot, hva, write)) {
> +               /* Check page level of host mmu */
>                 level = host_pfn_mapping_level(kvm, gfn, memslot);
> +               if (level == 1) {
> +                       /*
> +                        * Check page level of secondary mmu.
> +                        * Disable hugepage if it is already a normal
> +                        * page on secondary mmu.
> +                        */
> +                       ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
> +                       if (ptep && !kvm_pte_huge(*ptep))
> +                               level = 0;
> +               }
> +
>                 if (level == 1) {
>                         gfn = gfn & ~(PTRS_PER_PTE - 1);
>                         pfn = pfn & ~(PTRS_PER_PTE - 1);
> --
> 2.39.3
>
Bibo Mao June 24, 2024, 1:28 a.m. UTC | #2
On 2024/6/23 3:55 PM, Huacai Chen wrote:
> Hi, Bibo,
> 
> On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@loongson.cn> wrote:
>>
>> Currently, page level selection for the secondary mmu depends only on the
>> memory slot and the page level of the host mmu. This is a problem if the
>> page level of the secondary mmu is already zero. So page level selection
>> should depend on the following three conditions:
>>   1. The memslot is aligned for huge pages and the VM is not migrating.
>>   2. The page level of the host mmu is a huge page as well.
>>   3. The page level of the secondary mmu is suitable for a huge page; it
>> cannot be a normal page, since merging normal pages into a huge page is
>> not supported yet.
>>
>> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
>> ---
>>   arch/loongarch/include/asm/kvm_mmu.h |  2 +-
>>   arch/loongarch/kvm/mmu.c             | 16 +++++++++++++---
>>   2 files changed, 14 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/loongarch/include/asm/kvm_mmu.h b/arch/loongarch/include/asm/kvm_mmu.h
>> index 099bafc6f797..d06ae0e0dde5 100644
>> --- a/arch/loongarch/include/asm/kvm_mmu.h
>> +++ b/arch/loongarch/include/asm/kvm_mmu.h
>> @@ -55,7 +55,7 @@ static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val)
>>   static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; }
>>   static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; }
>>   static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; }
>> -static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }
>> +static inline int kvm_pte_huge(kvm_pte_t pte)  { return !!(pte & _PAGE_HUGE); }
> Why do we need this change?
Later there is a usage like !kvm_pte_huge(*ptep):
       if (ptep && !kvm_pte_huge(*ptep))

I had thought the return value should be 0/1 when !kvm_pte_huge() is used.
However, the original is fine in testing.

I will remove this modification.
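
A minimal standalone check of this point (the _PAGE_HUGE value below is a
placeholder bit for illustration, not the real LoongArch layout): the !
operator already normalizes any nonzero return value to 0 or 1, so
!kvm_pte_huge(*ptep) behaves the same with or without the !!.

#include <stdio.h>

typedef unsigned long kvm_pte_t;
#define _PAGE_HUGE (1UL << 6)	/* placeholder bit for illustration only */

/* Original form: returns the raw masked bit, not necessarily 0/1. */
static int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }

int main(void)
{
	kvm_pte_t pte = _PAGE_HUGE;

	/* Prints "raw=64 !huge=0 !!huge=1": the raw value is not 1,
	 * but ! and !! both reduce it to a clean boolean. */
	printf("raw=%d !huge=%d !!huge=%d\n",
	       kvm_pte_huge(pte), !kvm_pte_huge(pte), !!kvm_pte_huge(pte));
	return 0;
}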

Regards
Bibo Mao


> 
> Huacai
> 
>>
>>   static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte)
>>   {
>> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
>> index 9e39d28fec35..c6351d13ca1b 100644
>> --- a/arch/loongarch/kvm/mmu.c
>> +++ b/arch/loongarch/kvm/mmu.c
>> @@ -858,10 +858,20 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>>
>>          /* Disable dirty logging on HugePages */
>>          level = 0;
>> -       if (!fault_supports_huge_mapping(memslot, hva, write)) {
>> -               level = 0;
>> -       } else {
>> +       if (fault_supports_huge_mapping(memslot, hva, write)) {
>> +               /* Check page level of host mmu */
>>                  level = host_pfn_mapping_level(kvm, gfn, memslot);
>> +               if (level == 1) {
>> +                       /*
>> +                        * Check page level of secondary mmu.
>> +                        * Disable hugepage if it is already a normal
>> +                        * page on secondary mmu.
>> +                        */
>> +                       ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
>> +                       if (ptep && !kvm_pte_huge(*ptep))
>> +                               level = 0;
>> +               }
>> +
>>                  if (level == 1) {
>>                          gfn = gfn & ~(PTRS_PER_PTE - 1);
>>                          pfn = pfn & ~(PTRS_PER_PTE - 1);
>> --
>> 2.39.3
>>

Patch

diff --git a/arch/loongarch/include/asm/kvm_mmu.h b/arch/loongarch/include/asm/kvm_mmu.h
index 099bafc6f797..d06ae0e0dde5 100644
--- a/arch/loongarch/include/asm/kvm_mmu.h
+++ b/arch/loongarch/include/asm/kvm_mmu.h
@@ -55,7 +55,7 @@  static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val)
 static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; }
 static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; }
 static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; }
-static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }
+static inline int kvm_pte_huge(kvm_pte_t pte)  { return !!(pte & _PAGE_HUGE); }
 
 static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte)
 {
diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 9e39d28fec35..c6351d13ca1b 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -858,10 +858,20 @@  static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 
 	/* Disable dirty logging on HugePages */
 	level = 0;
-	if (!fault_supports_huge_mapping(memslot, hva, write)) {
-		level = 0;
-	} else {
+	if (fault_supports_huge_mapping(memslot, hva, write)) {
+		/* Check page level of host mmu */
 		level = host_pfn_mapping_level(kvm, gfn, memslot);
+		if (level == 1) {
+			/*
+			 * Check page level of secondary mmu.
+			 * Disable hugepage if it is already a normal
+			 * page on secondary mmu.
+			 */
+			ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
+			if (ptep && !kvm_pte_huge(*ptep))
+				level = 0;
+		}
+
 		if (level == 1) {
 			gfn = gfn & ~(PTRS_PER_PTE - 1);
 			pfn = pfn & ~(PTRS_PER_PTE - 1);
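
As a side note on the final hunk above: the masking with ~(PTRS_PER_PTE - 1)
rounds gfn and pfn down to the start of the huge mapping. A tiny standalone
illustration, assuming 2048 PTEs per table (16 KiB pages with 8-byte entries;
the real constant depends on the configured page size):

#include <stdio.h>

/* Assumption for illustration: 16 KiB pages and 8-byte entries give
 * 2048 PTEs per table; the real value depends on the kernel config. */
#define PTRS_PER_PTE 2048UL

int main(void)
{
	unsigned long gfn = 0x12345;

	/* Clearing the low bits aligns the frame number to the huge-page
	 * boundary, as done for both gfn and pfn in the hunk above.
	 * Prints: gfn 0x12345 -> huge-aligned 0x12000 */
	unsigned long huge_gfn = gfn & ~(PTRS_PER_PTE - 1);

	printf("gfn 0x%lx -> huge-aligned 0x%lx\n", gfn, huge_gfn);
	return 0;
}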