
[v2,6/6] LoongArch: KVM: Mark page accessed and dirty with page ref added

Message ID 20240619080940.2690756-7-maobibo@loongson.cn (mailing list archive)
State New, archived
Series LoongArch: KVM: Fix some issues relative with mmu

Commit Message

maobibo June 19, 2024, 8:09 a.m. UTC
Function kvm_map_page_fast() is the fast path of the secondary mmu page
fault flow: the pfn is parsed from the secondary mmu page table walker.
However, no reference is taken on the corresponding page, so it is
dangerous to access the page outside of mmu_lock.

Here a page reference is added inside mmu_lock, and kvm_set_pfn_accessed()
and kvm_set_pfn_dirty() are called with that reference held, so that the
page cannot be freed by others in the meantime.

Also, the explicit kvm_set_pfn_accessed() in kvm_map_page() is removed,
since kvm_release_pfn_clean(), which is called immediately afterwards,
already marks the page accessed.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
---
 arch/loongarch/kvm/mmu.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)
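
In outline, the fix follows the usual take-a-reference-under-the-lock
pattern. Below is a minimal sketch of that pattern, condensed from the
kvm_map_page_fast() hunk in this patch; the function name
fast_path_ref_pattern() is invented for illustration, and error handling
plus the actual PTE-update logic are elided:

	static void fast_path_ref_pattern(struct kvm *kvm, kvm_pfn_t pfn,
					  gfn_t gfn, kvm_pte_t changed)
	{
		struct page *page = NULL;

		spin_lock(&kvm->mmu_lock);
		/*
		 * While mmu_lock is held the mapping cannot be torn down,
		 * so this is the only safe place to pin the page. Not
		 * every pfn is backed by a refcounted struct page (e.g.
		 * MMIO), hence the kvm_pfn_to_refcounted_page() check.
		 */
		page = kvm_pfn_to_refcounted_page(pfn);
		if (page)
			get_page(page);
		spin_unlock(&kvm->mmu_lock);

		/* Safe outside the lock: the reference above pins the page. */
		if (kvm_pte_young(changed))
			kvm_set_pfn_accessed(pfn);
		if (kvm_pte_dirty(changed)) {
			mark_page_dirty(kvm, gfn);
			kvm_set_pfn_dirty(pfn);
		}
		if (page)
			put_page(page);
	}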

Comments

kernel test robot June 20, 2024, 3:05 a.m. UTC | #1
Hi Bibo,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 92e5605a199efbaee59fb19e15d6cc2103a04ec2]

url:    https://github.com/intel-lab-lkp/linux/commits/Bibo-Mao/LoongArch-KVM-Delay-secondary-mmu-tlb-flush-until-guest-entry/20240619-161831
base:   92e5605a199efbaee59fb19e15d6cc2103a04ec2
patch link:    https://lore.kernel.org/r/20240619080940.2690756-7-maobibo%40loongson.cn
patch subject: [PATCH v2 6/6] LoongArch: KVM: Mark page accessed and dirty with page ref added
config: loongarch-defconfig (https://download.01.org/0day-ci/archive/20240620/202406201000.BjivosoH-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240620/202406201000.BjivosoH-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202406201000.BjivosoH-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> arch/loongarch/kvm/mmu.o: warning: objtool: __jump_table+0x0: special: can't find orig instruction


objdump-func vmlinux.o __jump_table:
Huacai Chen June 22, 2024, 5:21 a.m. UTC | #2
Hi, Bibo,

What is the relationship between this patch and the below one?
https://lore.kernel.org/loongarch/20240611034609.3442344-1-maobibo@loongson.cn/T/#u


Huacai

On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@loongson.cn> wrote:
>
> Function kvm_map_page_fast() is the fast path of the secondary mmu page
> fault flow: the pfn is parsed from the secondary mmu page table walker.
> However, no reference is taken on the corresponding page, so it is
> dangerous to access the page outside of mmu_lock.
>
> Here a page reference is added inside mmu_lock, and kvm_set_pfn_accessed()
> and kvm_set_pfn_dirty() are called with that reference held, so that the
> page cannot be freed by others in the meantime.
>
> Also, the explicit kvm_set_pfn_accessed() in kvm_map_page() is removed,
> since kvm_release_pfn_clean(), which is called immediately afterwards,
> already marks the page accessed.
>
> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
> ---
>  arch/loongarch/kvm/mmu.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> index 3b862f3a72cb..5a820a81fd97 100644
> --- a/arch/loongarch/kvm/mmu.c
> +++ b/arch/loongarch/kvm/mmu.c
> @@ -557,6 +557,7 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
>         gfn_t gfn = gpa >> PAGE_SHIFT;
>         struct kvm *kvm = vcpu->kvm;
>         struct kvm_memory_slot *slot;
> +       struct page *page;
>
>         spin_lock(&kvm->mmu_lock);
>
> @@ -599,19 +600,22 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
>         if (changed) {
>                 kvm_set_pte(ptep, new);
>                 pfn = kvm_pte_pfn(new);
> +               page = kvm_pfn_to_refcounted_page(pfn);
> +               if (page)
> +                       get_page(page);
>         }
>         spin_unlock(&kvm->mmu_lock);
>
> -       /*
> -        * Fixme: pfn may be freed after mmu_lock
> -        * kvm_try_get_pfn(pfn)/kvm_release_pfn pair to prevent this?
> -        */
> -       if (kvm_pte_young(changed))
> -               kvm_set_pfn_accessed(pfn);
> +       if (changed) {
> +               if (kvm_pte_young(changed))
> +                       kvm_set_pfn_accessed(pfn);
>
> -       if (kvm_pte_dirty(changed)) {
> -               mark_page_dirty(kvm, gfn);
> -               kvm_set_pfn_dirty(pfn);
> +               if (kvm_pte_dirty(changed)) {
> +                       mark_page_dirty(kvm, gfn);
> +                       kvm_set_pfn_dirty(pfn);
> +               }
> +               if (page)
> +                       put_page(page);
>         }
>         return ret;
>  out:
> @@ -920,7 +924,6 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>                 kvm_set_pfn_dirty(pfn);
>         }
>
> -       kvm_set_pfn_accessed(pfn);
>         kvm_release_pfn_clean(pfn);
>  out:
>         srcu_read_unlock(&kvm->srcu, srcu_idx);
> --
> 2.39.3
>
maobibo June 24, 2024, 1:12 a.m. UTC | #3
On 2024/6/22 1:21 PM, Huacai Chen wrote:
> Hi, Bibo,
> 
> What is the relationship between this patch and the below one?
> https://lore.kernel.org/loongarch/20240611034609.3442344-1-maobibo@loongson.cn/T/#u

It is an updated version of the patch listed at that link. I put all the
migration-related patches into one patch set, to prevent them from getting
lost among so many mail threads :)

Regards
Bibo Mao
> 
> 
> Huacai
> 
> On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@loongson.cn> wrote:
>>
>> Function kvm_map_page_fast() is the fast path of the secondary mmu page
>> fault flow: the pfn is parsed from the secondary mmu page table walker.
>> However, no reference is taken on the corresponding page, so it is
>> dangerous to access the page outside of mmu_lock.
>>
>> Here a page reference is added inside mmu_lock, and kvm_set_pfn_accessed()
>> and kvm_set_pfn_dirty() are called with that reference held, so that the
>> page cannot be freed by others in the meantime.
>>
>> Also, the explicit kvm_set_pfn_accessed() in kvm_map_page() is removed,
>> since kvm_release_pfn_clean(), which is called immediately afterwards,
>> already marks the page accessed.
>>
>> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
>> ---
>>   arch/loongarch/kvm/mmu.c | 23 +++++++++++++----------
>>   1 file changed, 13 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
>> index 3b862f3a72cb..5a820a81fd97 100644
>> --- a/arch/loongarch/kvm/mmu.c
>> +++ b/arch/loongarch/kvm/mmu.c
>> @@ -557,6 +557,7 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
>>          gfn_t gfn = gpa >> PAGE_SHIFT;
>>          struct kvm *kvm = vcpu->kvm;
>>          struct kvm_memory_slot *slot;
>> +       struct page *page;
>>
>>          spin_lock(&kvm->mmu_lock);
>>
>> @@ -599,19 +600,22 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
>>          if (changed) {
>>                  kvm_set_pte(ptep, new);
>>                  pfn = kvm_pte_pfn(new);
>> +               page = kvm_pfn_to_refcounted_page(pfn);
>> +               if (page)
>> +                       get_page(page);
>>          }
>>          spin_unlock(&kvm->mmu_lock);
>>
>> -       /*
>> -        * Fixme: pfn may be freed after mmu_lock
>> -        * kvm_try_get_pfn(pfn)/kvm_release_pfn pair to prevent this?
>> -        */
>> -       if (kvm_pte_young(changed))
>> -               kvm_set_pfn_accessed(pfn);
>> +       if (changed) {
>> +               if (kvm_pte_young(changed))
>> +                       kvm_set_pfn_accessed(pfn);
>>
>> -       if (kvm_pte_dirty(changed)) {
>> -               mark_page_dirty(kvm, gfn);
>> -               kvm_set_pfn_dirty(pfn);
>> +               if (kvm_pte_dirty(changed)) {
>> +                       mark_page_dirty(kvm, gfn);
>> +                       kvm_set_pfn_dirty(pfn);
>> +               }
>> +               if (page)
>> +                       put_page(page);
>>          }
>>          return ret;
>>   out:
>> @@ -920,7 +924,6 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>>                  kvm_set_pfn_dirty(pfn);
>>          }
>>
>> -       kvm_set_pfn_accessed(pfn);
>>          kvm_release_pfn_clean(pfn);
>>   out:
>>          srcu_read_unlock(&kvm->srcu, srcu_idx);
>> --
>> 2.39.3
>>

Patch

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 3b862f3a72cb..5a820a81fd97 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -557,6 +557,7 @@  static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_memory_slot *slot;
+	struct page *page;
 
 	spin_lock(&kvm->mmu_lock);
 
@@ -599,19 +600,22 @@  static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 	if (changed) {
 		kvm_set_pte(ptep, new);
 		pfn = kvm_pte_pfn(new);
+		page = kvm_pfn_to_refcounted_page(pfn);
+		if (page)
+			get_page(page);
 	}
 	spin_unlock(&kvm->mmu_lock);
 
-	/*
-	 * Fixme: pfn may be freed after mmu_lock
-	 * kvm_try_get_pfn(pfn)/kvm_release_pfn pair to prevent this?
-	 */
-	if (kvm_pte_young(changed))
-		kvm_set_pfn_accessed(pfn);
+	if (changed) {
+		if (kvm_pte_young(changed))
+			kvm_set_pfn_accessed(pfn);
 
-	if (kvm_pte_dirty(changed)) {
-		mark_page_dirty(kvm, gfn);
-		kvm_set_pfn_dirty(pfn);
+		if (kvm_pte_dirty(changed)) {
+			mark_page_dirty(kvm, gfn);
+			kvm_set_pfn_dirty(pfn);
+		}
+		if (page)
+			put_page(page);
 	}
 	return ret;
 out:
@@ -920,7 +924,6 @@  static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 		kvm_set_pfn_dirty(pfn);
 	}
 
-	kvm_set_pfn_accessed(pfn);
 	kvm_release_pfn_clean(pfn);
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
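
For context on the second hunk: kvm_release_pfn_clean() already marks the
page accessed on its release path, which is why the explicit
kvm_set_pfn_accessed() call in kvm_map_page() can go. In kernels around
this series the generic helpers look roughly like this (simplified from
virt/kvm/kvm_main.c; a sketch from memory, not the verbatim source):

	void kvm_release_page_clean(struct page *page)
	{
		if (!page)
			return;

		kvm_set_page_accessed(page);	/* accessed bit set here */
		put_page(page);
	}

	void kvm_release_pfn_clean(kvm_pfn_t pfn)
	{
		struct page *page;

		if (is_error_noslot_pfn(pfn))
			return;

		/* Only refcounted pages have an accessed bit to set. */
		page = kvm_pfn_to_refcounted_page(pfn);
		if (!page)
			return;

		kvm_release_page_clean(page);
	}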